Exploring a strategy of shared listening within a live electroacoustic free improvisation ensemble

This chapter focuses on the use of a shared stereo mix as the basic listening strategy during rehearsals, recording sessions and concerts for the T-EMP ensemble. The ensemble consists of both acoustic and electronic instruments performing improvised live electroacoustic music. The chapter describes observations and experiences related to the exploration of a shared listening strategy within this context.
“The extensive range of sound sources admitted into music via the electroacoustic medium has initiated a revolution in the sounding content of musical works, appealing to a variety of listening responses not fully encompassed in previously existing musics.” (Smalley 1996)
“In something as rich and diverse as electroacoustic music, the basic carriers of musical meaning may be ambiguous on the listeners part. As with any music, we need to know what aspect of the material defines structure; such a basic understanding is absolutely necessary to its effective communication” (MacDonald 1995)
Within live electronic music there is an ongoing discourse concerning the audible and visual relationship between an ensemble and the audience (Cascone 2003). Even though the music in this context is presented through speakers, and can be compared with acousmatic listening (Emmerson 1994), both the audience and the musicians expect to find a relationship between what they hear and what they see, especially when the music is performed by live musicians.
Setting aside the audible/visual relationship, and looking at the T-EMP ensemble or live electronic music in general, there is also a potential challenge in distinguishing between performers, because of the extensive variation in musical sound. Since every musician within a live electronic ensemble theoretically has the potential of producing every sound, it can be difficult to define which musician is doing what, both from an audience perspective and between the musicians interacting inside the ensemble. Additionally, the instrumentation within such an ensemble produces a large degree of schizophonia, a term established by R. Murray Schafer describing the disconnection between the physical mechanisms attached to the creation of acoustical sound and their reproduction independent of time and space, whether through recordings or electroacoustic transmissions (Schafer 1977). This aspect could be perceived as ambiguous both by the audience and by the performers. In addition to these aspects, the nature of improvisation leads to a large degree of unpredictability concerning both musical parameters and sound production elements.
The absence of roles such as a conductor or a sound engineer, who could potentially balance or separate elements in this context, leaves the musicians with the responsibility of balancing and producing the output through their instruments and sound production systems. The responsibility for sound production, both individually and globally during performance, has under this framework shifted from the “traditional” sound engineer to the musicians.
Given these role changes, together with the ambiguities in sound sources within the ensemble, constructing an optimal listening system is a challenge. Is it even possible to construct an optimal listening environment for the musicians, or to find general rules for how such a monitoring system should be constructed within this context?
Within the T-EMP ensemble we have chosen to use and explore a strategy of shared listening that potentially could highlight the listening environment as a collective responsibility within the ensemble. The strategy has been to deliver a shared monitor mix to the performers during rehearsals, studio sessions and concerts, giving the musicians the responsibility to balance their instruments into the total sound image. The basic idea is that if everyone hears the same mix, they will adjust individually and consequently globally to the sound image as a whole through their instrumental and sound production output.
This figure can be seen as a simplistic representation of the signal flow behind the shared listening strategy. As shown in the figure, the sound production takes place before the mixer, in contrast to a “traditional” sound production chain. For a DJ or a purely electronic laptop orchestra this signal flow and monitoring system may seem familiar, but introducing this model into an ensemble of several musicians with both acoustic and electronic instruments leads to a number of challenges, both musically and in terms of sound production.
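To make the contrast concrete, the difference between the two chains can be sketched in a few lines of Python. This is a hedged illustration only, not software used by the ensemble; the function names and the use of scalar values as stand-ins for audio signals are assumptions introduced for this sketch.

```python
# Sketch of the two signal-flow models described above.
# Scalar values stand in for audio signal levels (an assumption).

def traditional_chain(raw_sources, engineer_gains):
    """Raw instrument signals reach the mixer unbalanced;
    a sound engineer sets the gains that shape the output."""
    return sum(gain * src for gain, src in zip(engineer_gains, raw_sources))

def shared_listening_chain(produced_sources):
    """Each performer balances (produces) their own output *before*
    the mixer; the shared bus only sums, and every performer
    monitors this same sum."""
    shared_mix = sum(produced_sources)  # unity summing, no engineer
    return shared_mix

# Two performers: the engineer balances in the first model,
# while in the second the performers pre-balance their own output.
print(traditional_chain([1.0, 0.5], [0.8, 1.2]))
print(shared_listening_chain([0.8, 0.6]))
```

The point of the sketch is structural: in the shared-listening model the balancing decisions have moved from the mixer stage into each performer's own output.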
Before discussing the different challenges in question, or turning to concrete examples of the experiences we have had, I will describe the different roles and instrumental/technical setups within the ensemble.
Roles and technical setup
All the musicians in T-EMP have experience from different musical genres and are well trained instrumental performers through previous work in their different fields. In addition, several of the members work with composition and sound production as part of their musical repertoire. What is common to all the performers is that they are experienced in the use of music technology in their work. These experiences and traditions affect both their playing style and aesthetic expression, but also how they design the music technological solutions for their instruments in terms of interfaces, processing tools, system design and instrument augmentations. Most of the musicians are instrumentalists and producers at the same time. This means that the individual aesthetics can be traced not only through instrumental practice, but also through individual sound production aspects.
The ensemble also consists of both acoustic and electronic instruments, and all these elements, acoustic and electronic, are mixed together as part of the ensemble's aesthetic expression.
Since the ensemble has concentrated on performing free improvisation, the main focus lies on sound production in real time, which excludes reproduction through fixed media. The absence of pre-recorded material supports the performers' flexibility, both musically and in terms of sound production, since the challenges attached to the use of fixed media are absent. On the other hand, this choice increases the level of unpredictability in both the musical and the sound production domain because of the lack of predetermined form. It can therefore be argued that, lacking the rigidity of predetermined form, the listening framework in T-EMP is closer to improvisation within an acoustic ensemble than to what we see in electronic music or electroacoustic mixed works.
Before the project started, all the members were experienced with the most common listening systems found at venues and in sound studios. This includes the usual practices where each performer receives individual foldback, either through their placement in the room, or through monitors or headphones returning an individual mix. The instrumental balance received through such systems is most often chosen by the individual performer, and tends to emphasize their own instrument in order to gain detailed control over their own output. By nature, such a strategy draws more attention to the individual performance than to the total output.
Such a solution would be a difficult starting point for the T-EMP ensemble, given that the global sound production responsibilities were transferred from the producer/conductor to the performers. The idea of using a shared listening strategy was thus outside the performers' normal practice before we started the project, but it was seen as a necessity in order to investigate the collective responsibility for the sound production as a whole.
Within the ensemble, the roles and technical setups of the different performers vary considerably. I will not go into detail on the different systems, but briefly describe each member's signal input and output to clarify how the total signal flow of the ensemble works in connection with the monitor foldback received by the performers.
Carl Haakon (drums) produces acoustic sound from his instrument; in addition, several microphones capturing the drum kit are amplified, summed and distributed through the shared monitor mix. The direct sound of the drums can also be further live processed in up to three different instances through Øyvind's, Bernt's or Trond's systems. These signals (three pairs of stereo output) can also be distributed to the shared monitor mix.
Tone (vocals/electronics) produces processed electronic sounds based on vocal input, distributed to the shared monitor mix from a stereo output. The direct sound of the voice can also be further live processed in up to three different instances through Øyvind's, Bernt's or Trond's systems. These signals (three pairs of stereo output) can also be distributed to the shared monitor mix.
Bernt (guitar/electronics) produces electronic sounds through individual speaker(s) (sound in the room). The speaker(s) are in turn captured by microphones, summed and distributed to the shared monitor mix. In addition, he produces processed electronic sound based on his instrumental input or on direct sound input from Carl Haakon, Tone or additional guest musicians. These are distributed to the shared monitor mix through a stereo line signal, and/or the speaker(s) signal chain.
Trond (guitar/electronics) produces processed electronic sound based on his instrumental input or on direct sound input from Carl Haakon, Tone or additional guest musicians. These are distributed to the shared monitor mix through a stereo signal.
Øyvind (electronics) produces processed electronic sound based on his voice or on direct sound input from Carl Haakon, Tone or additional guest musicians. These are distributed to the shared monitor mix through a stereo signal.
Figure of the signal flow within the ensemble:
The figure makes it clear that the number of input and output signals flowing within the ensemble's signal chain is large compared with the number of performers. A T-EMP concert or studio session normally occupies between 16 and 20 output lines (not counting additional guest musicians or internal signal summing within the individual systems). This means that each musician's individual sound production responsibility inside the total sound image is significant.
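As a rough illustration of how the line count adds up, the routing described above can be tallied in a short Python sketch. The per-performer numbers and the assumption that two of the three optional live-processing sends are active at a time are my own, chosen only to show how the total lands in the 16–20 range; they are not an exact inventory of the ensemble.

```python
# Hypothetical tally of mono output lines feeding the shared monitor mix.
# The counts below are illustrative assumptions based on the setup
# described in the text.

ROUTING = {
    "Carl Haakon (drums)":  {"direct": 2, "optional_stereo_sends": 3},  # summed drum mics
    "Tone (vocals)":        {"direct": 2, "optional_stereo_sends": 3},
    "Bernt (guitar)":       {"direct": 4, "optional_stereo_sends": 0},  # mic'd speakers + stereo line
    "Trond (guitar)":       {"direct": 2, "optional_stereo_sends": 0},
    "Oyvind (electronics)": {"direct": 2, "optional_stereo_sends": 0},
}

def count_output_lines(routing, active_sends=2):
    """Count mono lines: direct outputs plus any active live-processing
    sends, each send being a stereo pair (two mono lines)."""
    total = 0
    for source in routing.values():
        total += source["direct"]
        total += 2 * min(source["optional_stereo_sends"], active_sends)
    return total

print(count_output_lines(ROUTING))  # 12 direct + 8 processed lines = 20
```

With one active send per source the same model gives 16 lines, matching the lower end of the range mentioned above.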
I will now pinpoint some of the challenges we have faced when exploring this listening strategy, focusing mainly on the production-related and human factors that interfere with the collective and individual perception of what is experienced through a shared listening environment.
First of all, the collective perception changes depending on the framework for performance, whether during (a) rehearsals, (b) concerts or (c) studio recordings. Even though the number of input and output lines is largely the same in all three situations, the listening conditions can be very different. To give some examples: during several rehearsals in Trondheim and in Maynooth we have used a semi-amplified solution where the drums and guitar amplifiers have been kept out of the monitor mix, adjusting the monitor mix against these instruments' direct sound. During concerts we have used either one pair of shared sidefield monitors or the front of house speakers as the shared stereo foldback. The listening during studio sessions has without exception been delivered through headphones.
I will now look more closely at the three scenarios, seen from the perspective of a shared listening strategy in its purest form: a situation where the performers ultimately hear the exact same stereo mix.
(a) In the rehearsal situation the listening conditions differ between the performers. First of all, the direct sound of some of the instruments is blended with the monitor mix. The perceived balance therefore depends on which instrument each performer plays, and on where they are physically placed in the room, both in terms of instrumental balance and stereo perspective. It can therefore be difficult to know whether you navigate against the total sound image on the same premises as the others. Secondly, it is an open question to what degree the level of detachment from the direct sound of the different performers' instruments influences their perception of the total mix. This point will be examined more closely later in this chapter.
(b) The concert situation is where the listening conditions have varied the most, mainly because of room size and the technical facilities at the different venues. We have practised the semi-amplified solution during two concerts in Trondheim (Kammersalen and Orgelsalen). The same challenges as in the rehearsal situation appeared, but they were magnified by the size of the room and the performers' placement in it. Because of a more spread-out placement, the stereo image was lost depending on how the different performers were placed relative to the sweet spot of the stereo speakers. At the same time, the distance from each performer to the direct sound of the drums and the guitar amplifier determined the perceived instrumental balance. Most venues have had a PA system that has allowed us to exceed the direct sound of the instruments, thereby avoiding some of the instrumental balance issues. Since the monitoring systems in most venues are designed for mono foldback, it was natural to follow up the strategy with a shared sidefield solution. As mentioned, this tidies up the perceived instrumental balance, but the stereo image depends heavily on where you are placed on the stage. In addition, live processing and the use of electronic instruments allow performers to place a source in the stereo image independently of their placement on stage, which in turn can be confusing when navigating the placement from outside the sweet spot. Placement in both the width and the depth perspective of the mix thus becomes complicated. Issues have also been raised concerning the relationship between the physical distance to the sidefield speakers and the sense of nearness to the instrument, which has interfered with the feeling of instrumental control. This was most evident during the concert in Sweden (see 3.10), where the monitor speakers were placed at a large distance.
In addition, bleeding between microphones leaves traces of the same source in several output systems. This solution also increases the risk of feedback, given the number of microphones open at the same time. In a room where several sources are unintentionally amplified through open microphones and blended into the mix, it becomes challenging to control what is sent out to the monitoring.
(c) Seen from the listening strategy in its purest form, the studio sessions have come closest to the ideal of a shared listening experience.
During the studio sessions, bleeding between microphones has to some degree been avoided by separating the drums and the guitar amplifiers in different rooms. The headphone level has also been turned up to a point where the direct sound of the instruments was exceeded by the monitor foldback. This controlled environment has given us time to work in more detail on the separation of the instruments.
First of all, there have been issues with unclear definition in the tonal balance, because several performers tend to seek out the same frequency area and consequently blur the total sound image. The introduction of inter-processing has led to several lines of sound filling the same place in the mix. As mentioned in the section T-EMP and I, it is harder to distinguish between instruments in the same category, for example Bernt/Trond and Tone/Ingrid, though not necessarily for the musicians coupled within the same category. This distinction becomes clearer when presented through headphones. When doing live processing, there are also several lines reinforcing the audible focus towards the performer being processed, while at the same time taking up much more space in the mix.
At the same time, the more “controlled” monitoring draws attention to other factors. The level of detachment from the direct sound of the instrument leads to other ways of interacting (Engum 2012), since the relationship between what you do and what you expect to hear in response can change as a consequence of the different processing applied to the input source. This was most evident during the pre-session with David Moss, where the direct sound of Carl Haakon's drums was taken out of the monitor mix, leaving the processed feedback as the only response.
Looking at the musicians within the band, and the discussions we have had about the shared listening strategy during the project, several issues relate to different perceptions of what is received, especially when it comes to individual instruments during performance. There have been, and still are, different perspectives on how loud each instrument is placed in the mix at a given time. Even though everyone has the responsibility to balance their own instrument into the mix at any time, the other musicians do not necessarily perceive the balance in the same way. Furthermore, the ability to focus on individual performance is compromised by this strategy, which could reduce the level of detail in individual playing (McNutt 2003), an aspect also mentioned by both Øyvind and Tone in the section T-EMP and I.
To sum up some of the challenges of using this strategy at the present time:
- It is still hard to find a compromise in instrumental balance during rehearsals, studio sessions and concerts.
- It is more difficult for the performers to control details in their own instruments when using the shared listening.
- We have still not reached the point where the individual and the global sound production are seamlessly integrated, since it is still a challenge to balance individual playing and production.
Given these premises, it has so far been impossible to achieve a “pure” shared listening environment between the performers.
Premises for shared listening within the ensemble
Following up on the idea of shared listening in its purest form, where everyone hears the same thing, I will describe some premises, and some possible individual and collective responsibilities, for achieving this.
The performers need to be detached from the direct sound of their instruments in order to achieve a neutral listening environment for instrumental balance. This means that the monitoring, whether through headphones or speakers, needs to exceed the direct sound of the instruments. At the same time, a level of trust needs to be established between the performers, so that everyone knows that everyone works for the total sound image. From the audience's perspective, the sound from the PA speakers needs to be louder than the direct sound from the instruments in order to convey the correct instrumental balance.
- So what are the individual responsibilities of the performers in order to realize the described listening strategy?
On the most basic level, balancing the sound of the instruments, both acoustically and electronically, throughout the performance. Compared with studio production in the traditional sense (an engineer balancing, adjusting and editing a track during repeated listening, or an electroacoustic concert where the composer balances and adjusts pre-recorded material through speakers), the challenge increases for individual musicians who must perform a finished mix while playing. This means maintaining control over numerous inputs and outputs at the same time as treating the instrument.
Each performer must take responsibility for tuning their individual instrument (output level, internal summing, limiter, level range and the multiple levels of control in the individual signal chains). At this point every performer has a high level of control over their individual production, but making a studio production in the traditional sense through this strategy is much harder to achieve. It can also be questioned whether we even want the same thing at the same time.
When it comes to the collective responsibility, everyone has to overcome their need to hear themselves loudest.
This needs to be practiced. It is crucial that there is a level of trust between the performers; if not, we can end up with a purposeless mass, or we can miss strong individual musical statements.
This chapter has described the T-EMP ensemble's exploration of a shared listening strategy. The strategy has been to deliver a shared monitor mix to the performers during rehearsals, studio sessions and concerts, giving the musicians the responsibility to balance their instruments into the total sound image. The basic idea is that if everyone hears the same mix, they will adjust individually, and consequently globally, to the sound image as a whole through their instrumental and sound production output.
How has this strategy affected the performance of the individual musicians and the ensemble as a whole?
There are still many unanswered questions attached to the use of a shared listening strategy, and, as described, the optimal technical/acoustical framework for exploring this strategy in its purest form remains unrealized. The strategy could be further explored through physical separation of the acoustic sources, both to avoid bleeding between microphones and to gain a larger degree of detachment from the direct sound of the instruments (although this solution would leave out the drummer's perspective). An alternative solution could be headphone listening.
At the same time, we believe that this strategy has helped us in balancing the ensemble when it comes to awareness of the individual instruments and how they affect the total sound image. The use of this strategy has also strengthened the awareness of the individual sound production elements, and of how they affect the sound production as a whole. There are still challenges in the gap between “traditional” instrumental practice and sound production where this approach could be used for further exploration.
Even though the aim is for the instrumental and sound production aspects to be seamlessly integrated, it could be an idea to separate the two elements during rehearsals.
Possible rehearsal exercises for focusing on the difference between individual and collective perception could be:
(1) The musicians simultaneously perform a live mix of multitrack recorded material, each balancing/producing their own instrument (discussion within the ensemble, and comparison with the original stereo live mix).
(2) Every performer makes a production of a multitrack recording of the ensemble (compare within the ensemble: what is highlighted?).
(3) Each performer starts with his/her instrument at level 0 and increases the volume until he/she feels it has the right balance in the mix (discuss with the ensemble).
Other possible solutions for the ensemble, if this strategy is abandoned, could be to explore a multichannel solution instead of a stereo feed, which could help with instrument separation. Another solution could be for each performer to send their output from individual speakers during performance (close to an acoustic ensemble). This would make it easier to localise each musician and would reduce the degree of schizophonia, but would remove the possibility of choosing placement in the room. It could also be an idea to explore cooperation with either a conductor or a sound engineer to balance the total output, but such a solution would change the “democratic” power structure within the band.
Smalley, D. (1996). "The listening imagination: Listening in the electroacoustic era." Contemporary Music Review 13(2): 77-107.
MacDonald, A. (1995). "Performance Practice in the Presentation of Electroacoustic Music." Computer Music Journal 19(4): 88-92.
Cascone, K. (2003). "Grain, Sequence, System: Three Levels of Reception in the Performance of Laptop Music." Contemporary Music Review 22(4): 101-104.
Emmerson, S. (1994). "'Live' versus 'real-time'." Contemporary Music Review 10(2): 95-101.
Schafer, R. M. (1977). The Tuning of the World. p. 90.
Engum, T. (2012). Beat The Distance: music technological strategies for composition and production. NTNU.
McNutt, E. (2003). "Performing electroacoustic music: a wider view of interactivity." Organised Sound 8(3): 297-304.