During the twentieth century the recording of sound via microphone and its amplification via loudspeaker became the main means of listening to music. For live music these tools became important on stage due to their ability to amplify acoustic instruments and to make electronic sound audible. At home, in the living room, recorded sound replaced the piano to create ‘concerts at home’, with the most famous musicians playing all kinds of music to a private audience. Although important for the perception of music, the microphone and loudspeaker are often seen, especially in the living-room situation described above, as ‘neutral’ devices for recording and reproducing music. Of course the recording is always influenced by the characteristics of the microphones and loudspeakers. The aim of sound engineers, however, is to keep awareness of this technology’s mediating role to a minimum, so that the audience is not conscious of the recording process. The recording should sound ‘as if you have an orchestra playing at home’. Even on stage, an amplified voice is often perceived as a voice that is ‘just a little bit louder’.
At the same time, in the field of art practice, many artists consciously use microphones and loudspeakers as compositional devices as well as for the production of music in general. For my doctoral research in the DocArtes Program at the Orpheus Institute and the University of Leiden, my practice focuses on this approach to loudspeakers and microphones. My research is divided into two related parts: a theoretical framework that considers loudspeakers and microphones as musical instruments, and an experimental praxis in which I develop my own projects with microphones and loudspeakers.
In this text I focus on one of my performance projects, entitled Song No 3. On this page I give an overview of the concept of the performance as well as its research context. On the surrounding pages, readers can find more detailed information on several aspects of the performance and the theoretical framework, including technical details of the performance, theories on movement and music, and analyses of similar performances.
Loudspeakers and microphones as musical instruments
I call the conscious use of loudspeakers and microphones as sound-shaping devices the ‘instrumental approach’. Artists who take this approach are clearly aware of the interaction between these devices and all other elements of a performance (other sound-producing technologies, the performance space, the performer themself, etc.) and use these characteristics as a central element that shapes the performance itself. During my research I looked at the difference between a performance that uses loudspeakers and microphones more in the sense of ‘neutral’ devices and a performance that uses them more as ‘instruments’. I came to the conclusion, obvious though it may seem, that the difference arises from the moment that loudspeakers and microphones are ‘played’. Playing should be understood here as changing the parameters of an instrument to achieve an audible difference in the sound it produces, as when a violinist changes the length of a string with a finger, resulting in a different pitch, or a clarinettist changes the air pressure, resulting in a different loudness. The same kind of change can be found, for example, when members of Gentle Fire, a group from the 1970s specialised in performing electronic music, played Quintet by Hugh Davies. This piece was played by moving the microphone towards and away from the loudspeaker: as the distance between the two changed, so did the length of the standing wave between them, which determines the pitch of the feedback.
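The relationship Davies exploited can be approximated with a simple formula: if the air between loudspeaker and microphone is treated as an idealised standing wave whose half-wavelength equals their distance, then halving the distance doubles the pitch. The sketch below is a crude first-order model of my own; the pitch of real feedback also depends on room reflections and on the frequency response of the equipment.

```python
# Crude model of how mic-loudspeaker distance relates to feedback pitch.
# Assumes an idealised standing wave whose half-wavelength fits the
# distance; this ignores room reflections and equipment response.

SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 °C

def feedback_fundamental(distance_m: float) -> float:
    """Fundamental frequency (Hz) of a standing wave spanning distance_m."""
    return SPEED_OF_SOUND / (2.0 * distance_m)

# Moving the microphone from 2 m to 0.5 m from the loudspeaker
# raises the modelled pitch by two octaves:
print(round(feedback_fundamental(2.0)))   # ~86 Hz
print(round(feedback_fundamental(0.5)))   # ~343 Hz
```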
After comparing more conventional performance practices, like playing the clarinet, with more experimental performances, like Quintet by Hugh Davies, I started to investigate the role of playing an instrument, and especially how several artists played microphones and loudspeakers. The movement of the performer’s body is integral to the playing of acoustic instruments. As soon as electricity is used, many of these sound-producing movements are no longer necessary. The same sound can be played by different kinds of interfaces, and therefore by using different movements, something that had been practically impossible before. This presents, of course, an enormous opportunity for redefining the relationship between sound and movement and for using these new relationships as a compositional parameter.
For the remainder of this paper I consider the loudspeaker and microphone as musical instruments. I recognise that this is a simplification, since neither can be a musical instrument on its own: an amplifier is needed to amplify the signal of the microphone. Moreover, other elements are often part of the musical instrument, like the performance space, digital sound processing or all kinds of objects. To undertake research on loudspeakers and microphones as musical instruments, I aim to focus on their characteristics as musical instruments without neglecting their interaction with other elements. Since this kind of differentiation is beyond the scope of this text, I will refer to a simplified concept of the musical instrument loudspeaker-microphone.
In my practical work, I look for ways to play loudspeakers and microphones. I study what kinds of movements could be used and what kind of cultural meaning these movements have. An example of this can be found in the piece Groene Ruis. In addition, I employ the theoretical framework to analyse and compare performances by other composers who used specific movements in order to play loudspeakers and microphones, such as Alvin Lucier in his 1976 piece Bird and Person Dyning. During this piece, Lucier used head movements normally associated with listening to a sound to control the sound of the performance.
Song No 3 explores just such an approach to combining movements and sounds. During this performance the gestures of the performer are used in a way that is unusual for the performance context. I look for ways to use the singer’s arm and head gestures as a means to produce sound. Within traditional performance practice a singer’s movements are incidental to the production of sound. I decided to use these movements for sound production in Song No 3 because they are related to making sound but not perceived as its main cause: a singer might be able to sing without any arm movements (although it will probably sound different), but definitely not without breathing and pushing air through the vocal cords. To bring movement in music to the foreground of my practice I have developed three categories of movement. The borders between these categories are not easily distinguished and should therefore not be applied rigidly to musical performances; I utilise them, however, as an aid for conceptualising and composing performances. This categorisation is a tool that helps to define the movements and their role in my performances.
To render my arm and head movements audible, I decided to use a loudspeaker in front of my mouth, instead of my singing voice, to diffuse the sound during the performance. A large piece of white paper was glued on to the loudspeaker to obscure my face. The audience’s attention was thus diverted from the mouth, which signals the presence of sound visually, to a focus on the gestures of the arms and head. The paper was attached directly to the loudspeaker membrane so that it resonated together with it, giving the sound a more readily recognisable physical source. (It should be noted that this is unfortunately not audible in the videos in this exposition, since their audio was, for practical reasons, recorded from the computer output.) To make the sound interact with the movements of my arm, I used a microphone, exactly as singers do. The distance between the microphone and loudspeaker can be manipulated to produce different sound-shaping processes. Sound can therefore be controlled using the gestures that normally only accompany a singing voice.
Although no singing was done, the performer made movements similar to those of a singer during performance. The microphone was, for example, brought to her mouth, and she moved her arms and head in a similar way. These movements, instead of the singing voice, produced the sound of the performance. Every time the performer moved her hands or head the sound changed. It is possible to perform these movements without singing, and it is possible to sing with nearly no movement of the head or arms. These movements therefore remain in a certain way silent, since their result is not necessarily audible, unlike many other movements made during musical performances.
In Song No 3 there is thus a change of focus: as mentioned above, the gestures accompanying the singing voice have become the central focus, and the singing voice itself is not audible through the mouth and body of the performer, but in a simulated form through the loudspeaker in front of the mouth. By using several kinds of gestures for controlling the sound, and also different sound-processing possibilities on the computer, the relationship between sound and gesture changes during the performance. In this way not only is the means of producing sound different from how a traditional singer performs, but so are the changes in sound normally produced by movements with a microphone. Whereas during a normal singing performance moving the microphone closer to the mouth will amplify the sound of the voice, during Song No 3 moving the microphone closer to the loudspeaker attached in front of the mouth can result in all kinds of audible changes. The sound itself can change in a number of ways, since it is the amplitude of the microphone signal that is mapped on to several parameter changes. By moving the microphone, the sound may not only become louder or softer, but also faster, slower or noisier, or undergo a change in pitch. In fact every possible sound is available, and it is now the task of the composer to choose what kind of sound should be produced by what kind of movement. I can therefore decide which gesture makes what kind of sound, in a manner quite different from a conventional acoustic instrument.
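This freedom of mapping can be illustrated in a few lines of Python. The function names and the particular mappings below are invented for illustration; they are not the actual patch used in Song No 3.

```python
# Hypothetical sketch of the mapping idea: the amplitude of the microphone
# signal (not the distance itself) drives the chosen parameters.
# Function names and mappings are invented examples, not the real patch.

def rms(block):
    """Root-mean-square amplitude of one block of microphone samples."""
    return (sum(x * x for x in block) / len(block)) ** 0.5

# The composer is free to attach any parameter to the amplitude:
def map_direct(amp):
    """Louder input -> louder, noisier, faster output."""
    return {"gain": amp, "noise_amount": amp, "speed": 1.0 + amp}

def map_inverted(amp):
    """Louder input -> softer, calmer output: the opposite relationship."""
    return {"gain": 1.0 - amp, "noise_amount": 0.0, "speed": 1.0 - 0.5 * amp}

loud_block = [0.8] * 64            # microphone close to the loudspeaker
print(map_direct(rms(loud_block)))
print(map_inverted(rms(loud_block)))
```

The same input amplitude thus yields entirely different sounds depending on which mapping is active, which is what allows the gesture-sound relationship to be recomposed over the course of a performance.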
In the performance Song No 3, and also in the other performances which I discuss in this text, the gestures clearly have their own cultural connotations. In my performance Groene Ruis the movements may be recognised as those characteristic of playing a harp, while in Song No 3 the arm movements of a singer are the main focus. In my view, these movements can be seen as symbolic gestures: they have a specific meaning which points to an activity or something outside of the movements themselves. It is for this reason that I started to work with them, since they have a strong auditive connotation; we can almost hear a harp when we see the movements for playing the instrument and, similarly, we can almost hear a singer singing when we see the movements of a singer’s arm. These movements can be seen as part of a certain vocabulary of gesture. To use them to control sound is to intervene in their normal meaning by affixing a different sound not previously associated with that gesture. The different relationships between sound and gesture can be composed, and relationships established during the piece can be questioned within the same performance by establishing another gesture-sound relationship. In Song No 3, for example, at the beginning of the performance, every time the microphone comes closer to the mouth-loudspeaker the sound becomes much louder and more chaotic. At the end of the performance, the opposite is the case: only when the microphone is held very close to the mouth-loudspeaker does the sound become quite soft and tranquil. As soon as the microphone is taken away from the mouth, a loud singing voice can be heard through the loudspeaker. The consequences of certain movements thus change during the performance: in this case, from consequences we expect due to our cultural context (a microphone close to the sound source will normally amplify this sound source) to consequences we do not necessarily expect.
This change of consequences forms a dramatic line during the performance, and through these changes the performer’s characteristics change as well. These changes happen through playing the instrument: by playing it, the identities of instrument and performer shift.
To compose such changing consequences between movements and sound, electricity is not only needed; loudspeakers and microphones can also be the means of producing the shifts in identity described above. Since we do not connect a specific sound with them, as we would with, for example, a violin, they act like chameleons, changing their identity depending on the context. This context is formed by playing them, which gives them an identity. This change of identity is exactly what calls for a different compositional approach: by using microphones and loudspeakers as musical instruments, aspects of composition can come into focus that differ from those of acoustic instruments. A composition can focus on identity shifts of the instrument itself, since the instrument does not have an identity as rigid as that of the piano or violin. The microphone-loudspeaker instrument is flexible in sound as well as in playing method.
One could argue that this kind of identity shift might also be achieved by using sensors. Indeed, the microphone acts like a sensor for the amplitude of the diffused sound during Song No 3. What is significantly different, however, is that a sensor is outside the sound world: it cannot communicate in an auditive way. Both microphone and loudspeaker can. In Song No 3 they have a direct relationship with each other, and this can be clearly experienced during the performance. Using sensors, for example, to measure the distance between the hand and head of the performer, or the speed of the movements, would relate much more to the playing of acoustic musical instruments, since the parameter changes are predictable: the output of a sensor measuring the distance between head and hand will always be the same whenever hand and head are at the same distance. This is not the case for the microphone in Song No 3, since it measures not the distance but the amplitude of the diffused signal. The amplitude of the diffused signal changes as the microphone moves towards or away from the loudspeaker. But the amplitude will not necessarily change in direct proportion to the distance between microphone and loudspeaker, since the sound of the loudspeaker itself also changes with the amplitude of the signal picked up by the microphone. The signal of the loudspeaker can therefore become louder even though the microphone is moving away from it. This can result in a relationship between the hand-head distance and the amplitude of the microphone signal that is not directly proportional. Microphone and loudspeaker have a feedback relationship: every change in the sound diffused by the loudspeaker causes changes in the audio signal picked up by the microphone, which in turn causes changes in the loudspeaker sound.
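This non-proportional behaviour can be demonstrated with a toy simulation. The constants and the attenuation model below are entirely my own simplifications, chosen only to show that the level picked up by the microphone need not fall as the microphone moves away, because the loudspeaker's level depends on what the microphone has already fed back.

```python
# Toy simulation of the mic-loudspeaker feedback loop described above.
# All numbers are illustrative; the point is only that the picked-up
# amplitude need not track the distance monotonically.

def simulate(distances, gain=1.5, steps_per_distance=3):
    speaker_level = 0.2          # initial sound diffused by the loudspeaker
    history = []
    for d in distances:
        for _ in range(steps_per_distance):
            mic_level = speaker_level / (1.0 + d)       # attenuation with distance
            speaker_level = min(1.0, gain * mic_level)  # fed back, clipped
        history.append(mic_level)
    return history

# Even as the microphone moves steadily away, the picked-up level can
# first rise, because the loop keeps pushing the loudspeaker louder:
print(simulate([0.2, 0.4, 0.6, 0.8]))
```

In this sketch the loop gain exceeds 1 at short distances, so the loudspeaker grows louder from step to step; only at larger distances does the loop decay, which is why the microphone level is not a simple function of position.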
This is exactly what makes working with loudspeakers and microphones interesting for me: using them in performances instead of sensors causes an interaction with the sound of the performance itself. Sensors, in contrast, control the sound without being able to react to it. (With these last sentences I do not want to imply that performances with sensors are uninteresting, on the contrary; but they do demand compositional strategies different from those that prove useful when working with loudspeakers and microphones.)
As mentioned above, the microphone in Song No 3 is used only to detect the amplitude of the signal diffused by the loudspeaker. A part of my future research will concentrate on using other parameters of the audio signals picked up by the microphone and integrating them into the performance as well. These parameters might include pitch tracking and spectral analysis, but I am also considering taking the whole audio signal as an input that is fed back to the loudspeaker after some digital signal processing.
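As a sketch of what such pitch tracking might involve, here is a minimal autocorrelation-based estimator in Python. It illustrates the general technique only; it is not the analysis method planned for the piece.

```python
import numpy as np

# Minimal autocorrelation pitch estimator: find the lag at which the
# signal best matches a delayed copy of itself, within a plausible range.

def estimate_pitch(signal: np.ndarray, sample_rate: int,
                   fmin: float = 50.0, fmax: float = 1000.0) -> float:
    """Return the strongest periodicity in Hz within [fmin, fmax]."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)   # shortest period considered
    lag_max = int(sample_rate / fmin)   # longest period considered
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

sr = 44100
t = np.arange(sr // 10) / sr                 # 100 ms test signal
tone = np.sin(2 * np.pi * 220.0 * t)         # a 220 Hz sine
print(estimate_pitch(tone, sr))              # close to 220 Hz
```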