Towards a (per)sonal topography of
grand piano and electronics
How can I develop a grand piano with live electronics through iterated development loops in the cognitive technological environment of instrument, music, performance and my poetics?
The instrument I am developing, a grand piano with electronic augmentations, is adapted to cater to my poetics. This adaptation of the instrument will change the way I compose. The change in composition will change the music. The change in music will change my performances. The change in performative needs will change the instrument, because it needs to do different things. This change in the instrument will show me other poetics and change my ideas. The change of ideas demands another music and another instrument, because the instrument should cater to my poetics. And so it goes… These are the development loops I am talking about.
I have made an augmented grand piano using various music technologies. I call the instrument the HyPer(sonal) Piano, a name derived from the suspected interagency between the extended instrument (HyPer), the personal (my poetics) and the sonal result (music and sound). I use old analogue guitar pedals and my own computer programming side by side, processing the original piano sound. I also take control signals from the piano keys to drive different sound processes. The sound output of the instrument decides the colors, patterns and density on a 1x3 meter LED light carpet attached to the grand piano. I sing, yet the sound of my voice is heavily processed, with the processing determined by what I am playing on the keys. All sound sources and control signal sources are interconnected, allowing for complex and sometimes incomprehensible situations in the instrument's mechanisms.
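The routing described above, where key events simultaneously drive the voice processing and the LED carpet, can be illustrated with a minimal sketch. This is not the actual HyPer(sonal) Piano software; it assumes a MIDI-style key interface, and the function name and mapping choices are hypothetical, shown only to make the idea of one input steering several outputs concrete.

```python
# Hypothetical sketch: one possible mapping from a piano key event
# (MIDI-style note 0-127, velocity 0-127) to a voice-effect depth
# and an RGB color for the LED carpet. Not the author's actual code.

def key_to_controls(note, velocity):
    """Map a single key event to two simultaneous control outputs."""
    effect_depth = velocity / 127.0   # louder playing -> heavier voice processing
    hue = note / 127.0                # pitch position decides the color
    # crude hue ramp: low notes red, mid notes green, high notes blue
    r = max(0.0, 1.0 - 2.0 * hue)
    g = 1.0 - abs(2.0 * hue - 1.0)
    b = max(0.0, 2.0 * hue - 1.0)
    return effect_depth, (r, g, b)
```

The point of the sketch is the interconnection: a single gesture at the keys fans out into several processes at once, which is what makes the instrument's behaviour hard to predict in performance.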
First supervisor: Henrik Hellstenius
Second Supervisors: Øyvind Brandtsegg and Eivind Buene
Cover photo by Jørn Stenersen, www.anamorphiclofi.com
All other photos, audio and video recording/editing by Morten Qvenild, unless otherwise stated.
With Goodbye Intuition we seek to challenge our roles and artistic preferences as improvising musicians by improvising with "creative" machines. In our project the machines can both take the role of a performer for us to play with and be extensions of our own instruments. They can become our duet partners, and they can be additions, expansions or augmentations of our sound. Playing is at the core of our investigation, and it is on the basis of this experience that we try to articulate thoughts and answers to the following questions:
- How do we improvise with "creative" machines, how do we listen, how do we play?
- How will improvising within an interactive human-machine domain challenge our roles as improvisers?
- What music emerges from the human-machine improvisatory dialogue?
The project's artists are Andrea Neumann (GER), Morten Qvenild (NOR) and Ivar Grydeland (NOR). The artist Sidsel Endresen (NOR) is our observer, commentator, critic and discussion partner. The Norwegian Center for Technology in Music and the Arts (NOTAM) is our technological collaborator. Additionally, the musician, composer and researcher Henrik Frisk (SWE), the writer, musician and composer David Toop (UK), and the director and writer Annie Dorsen (US) contribute to the project.