The main limitation I found with this way of working was that I consistently failed to merge the aesthetic and structural results of my sonification work with writing for traditional instruments. My transition towards understanding how traditional instruments could be brought closer to the sound world I was creating with sonification procedures began when I focused on the performative aspect of the material I was working with. This entailed analysing and stretching myself as if I were a traditional instrumentalist, while still being able to produce sounds similar to those generated electronically by sonification procedures.
The status quo of performance in live electronic music
In order to apply sonification procedures in a live performance context, I first had to review the demands placed on the electronic performer in the existing repertoire for ensemble and live electronics.
I started by investigating repertoire in which an electronic performer was considered part of an ensemble. I realised that, in most cases I encountered as a performer, the contribution demanded from the electronic practitioner was limited to live-triggering pre-recorded sound files, acting as a human score-follower who cues the starting and stopping of the live processing of traditional instruments, and serving as a mixing technician who controls the balance between the instrumental and electronic sound sources. I am pointing to works by, for example, Kaija Saariaho (Six Japanese Gardens, 1994; Vent Nocturne, 2006), Philippe Manoury (Partita I, 2006), Cort Lippe (Music for Cajon and Computer, 2011) and Richard Karpen (Strand Lines, 2007), to name a few.
At the beginning of my research, these limited tasks allowed me to work towards the development of a performance practice, mainly by reinforcing the interactive nature of the actions demanded in relation to the other performers in a concert situation: executing these actions onstage and observing how my presence affected the other performers’ and the audience’s perception of the musical event.
I tried to understand what kinds of actions such a rudimentary computer performer needed to be concerned with. Focusing on the existing repertoire up to the beginning of the twenty-first century, I could list the demands on the computer performer as follows (a minimal sketch of these tasks appears after the list):
- Triggering a synthetic layer.
- Dealing with onsets and offsets of the different sections in a piece (involving many physical actions with no direct sonic consequence).
- Controlling and moving sound events in the physical space (also demanding physical actions with no one-to-one sonic result).
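To make these three tasks concrete, the following sketch models the electronic performer as an operator stepping through a fixed cue list, firing the actions attached to each section onset. It is a minimal illustration in Python; the cue labels and action names are hypothetical and not drawn from any particular piece.

```python
import time

# Hypothetical cue list: each entry pairs a section label with the
# actions the electronic performer must execute at its onset.
CUES = [
    ("section_A", ["start synthetic layer", "enable live processing"]),
    ("section_B", ["stop synthetic layer", "pan processed sound left-to-right"]),
    ("coda",      ["fade out all electronics"]),
]

def trigger(cue_index: int) -> int:
    """Fire the actions attached to the current cue and advance the index."""
    label, actions = CUES[cue_index]
    for action in actions:
        print(f"[{time.strftime('%H:%M:%S')}] {label}: {action}")
    return cue_index + 1

if __name__ == "__main__":
    idx = 0
    while idx < len(CUES):
        input("press <enter> at the next score cue... ")
        idx = trigger(idx)
```

Note how little of this is sound-producing in itself: most of the performer’s physical actions merely schedule events, which is precisely the limitation discussed below.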
In this threefold scenario, it is the traditional instrumentalist who provides the control information for the synthetic layer, through the harmonic and amplitude content of the instrumental signal. The same signal also feeds the Digital Signal Processing (DSP) component of a piece.
This is the way most repertoire for traditional instruments and live electronics still operates; however, performing these pieces confronts one with certain technical issues and limitations. First, there is the logistical difficulty of signal extraction in a live performance situation. It is normally addressed with a pickup or a microphone, where very small differences in placement significantly influence the overall behaviour of the instrument-computer system; yet such differences are to be expected in a real-life scenario, such as a festival, where the setup needs to be changed rapidly between performances.
Second, there is the confusion of signal analysis (i.e., the extraction of pitch and amplitude information in real time from a performer) with musical analysis (which would suggest the ability of the system to understand and judge the musical nuances produced by the decision-making of a performer). This misconception effectively demotes the traditional performer, however virtuosic, to the role of generating pitch and amplitude events over time.
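To show how narrow this signal analysis really is, the sketch below reduces an audio frame to exactly two numbers: an autocorrelation-based pitch estimate and an RMS amplitude. It is a minimal sketch using only NumPy, with a synthetic 220 Hz tone standing in for a live instrument signal; real systems use more robust trackers, but the kind of data they deliver is the same.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def analyse_frame(frame: np.ndarray) -> tuple[float, float]:
    """Estimate fundamental frequency and RMS amplitude of one frame.

    These two numbers are all that 'signal analysis' yields; any musical
    judgement must be built on top of such values accumulating over time.
    """
    rms = float(np.sqrt(np.mean(frame ** 2)))
    # Autocorrelation-based pitch estimate: find the strongest lag
    # in the range corresponding to 50-1000 Hz.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = SR // 1000, SR // 50
    lag = lo + int(np.argmax(ac[lo:hi]))
    return SR / lag, rms

if __name__ == "__main__":
    # A synthetic 220 Hz tone stands in for the live instrument signal.
    t = np.arange(2048) / SR
    frame = 0.5 * np.sin(2 * np.pi * 220 * t)
    print(analyse_frame(frame))  # roughly (220.5, 0.354)
```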
This kind of musical setup risks reducing the traditional performer’s potential for expression, while also underusing the potential of another musical decision-maker, the electronic performer. Music technologist Miller Puckette and composer Cort Lippe were already concerned that, for these pieces to be genuinely responsive and to be perceived by audiences as generated by the decision-making of the musicians involved, the instrumentalist’s input had to be reduced to pitch and dynamics alone: “We can now provide an instrumentalist with a high degree of timing control, and a certain level of expressive control over an electronic score. [...] A dynamic relationship between performer, musical material, and the computer can become an important aspect of the man/machine interface for the composer, performer, and listener, in an environment where musical expression is used to control an electronic score.” (Lippe and Puckette 1994: 64)
I proposed to contribute to defining what this “musical expression” might mean by enhancing the decision-making potential of the electronic part, even in pieces where that part was designed to act autonomously in response to the signal of the traditional performer. I returned to the ideas I had used when creating a method for my sonification pieces, and defined a very simple and systematic procedure:
- Design of the instrument
- Selection of the repertoire
- Definition of the instrumentation
Following Marcelo Wanderley’s model for digital instrument design, which focuses on describing the performer’s “expert interaction by means of the use of input devices to control real-time sound synthesis software” (Wanderley 2001: 3), one can split the task of creating the instrument into two components: software and hardware. The software side includes composing or programming the software environment, defining the tasks and limitations of the instrument, and its parametrisation. The hardware side includes the design of the gesture-acquisition interface (for example, hand motions, hitting keys), the design of the gesture-acquisition platform (sensors, cameras, trigger interfaces) and the analogue-to-digital conversion of the gestures.
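The sketch below illustrates this software/hardware split under stated assumptions: the hardware side is stubbed out as a single fader value arriving after analogue-to-digital conversion, the 7-bit range mimics a MIDI-style controller, and the sine generator is a trivial stand-in for real-time sound synthesis software. The mapping layer, which decides what the gesture controls, belongs to the parametrisation of the instrument.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def map_gesture(raw: int, lo: int = 0, hi: int = 127) -> float:
    """Normalise a raw controller value (e.g., a 7-bit fader) to 0..1."""
    return (raw - lo) / (hi - lo)

def synthesise(freq: float, amp: float, dur: float = 0.5) -> np.ndarray:
    """Trivial sine 'synthesis software' driven by the mapped gesture."""
    t = np.arange(int(SR * dur)) / SR
    return amp * np.sin(2 * np.pi * freq * t)

if __name__ == "__main__":
    # Hardware side (stubbed): a fader position after A/D conversion.
    fader = 96
    # Software side: the parametrisation decides that this fader
    # controls amplitude, scaled to a safe output range.
    amp = 0.8 * map_gesture(fader)
    audio = synthesise(freq=440.0, amp=amp)
    print(f"rendered {len(audio)} samples, peak {audio.max():.2f}")
```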