Control intimacy


What is control intimacy and why is it relevant in this context?
Latency: technical and mental.
Subtle musical control, mapping
Alternative modes of control
Audience: can they follow?

References

What is control intimacy and why is it relevant in this context?

In exploring the various modes of musical performance, communication and interaction, we have also needed to develop new musical instruments or new ways of using existing musical instruments and technology. Performing with these new instruments while at the same time dealing with new stylistic and aesthetic challenges, we have repeatedly faced the question:

 

“Why does it (always) seem to take (too much) time to respond appropriately to a musical impulse?”

 

As we have pondered this, we have investigated and reflected on the different issues that contribute to this time lag. The question of what constitutes an appropriate response is of course just as relevant as when you are able to contribute it.

To shed some light on these issues we will now look into some qualities and aspects by which musical instruments can be evaluated in terms of playability, control and flexibility.

 

"For subtle musical control to be possible, an instrument must respond in consistent ways that are well matched to the psychophysiological capabilities of highly practiced performers. (...) The best traditional musical instruments are ones whose control systems exhibit an important quality that I call "intimacy." Control intimacy determines the match between the variety of musically desirable sounds produced and the psychophysiological capabilities of a practiced performer. It is based on the performer's subjective impression of the feedback control lag between the moment a sound is heard, a change is made by the performer, and the time when the effect of that control change is heard." (Moore 1988)[1]

 

The term control intimacy, coined by Moore in 1988, relates both to the subtle and precise control of musically desirable sounds, and to the time it takes from the impulse to produce such a sound until the sound can be heard. In the following we will look at possible causes of time lag, possible roadblocks to subtlety of control in our mode of music making, and what kinds of musical parameters we may want to control. Finally, we will look briefly into how an audience may relate differently to a performance with these kinds of new instruments.



Latency: technical and mental.

It is common to think of the time lag, or latency, in a musical instrument as the time between when “a change is made by the performer, and the time when the effect of that control change is heard” (ibid.). This is especially true within the realm of music technology, where this kind of latency has received a lot of attention during the last 10-15 years, since realtime processing on affordable computers became possible.

 

"Few practitioners of live performance computer music would deny that low latency is essential. Just how low is the subject of considerable debate. We place the acceptable upper bound on the computer’s audible reaction to gesture at 10 milliseconds (ms)" (Wessel and Wright 2002)[2]

 

Now, let us take one more look at the quotation from Moore and see whether there is something we have overlooked in our eagerness to evaluate and refine our technological tools. Moore includes the time “between the moment a sound is heard, a change is made by the performer, and the time when the effect of that control change is heard”. So the mental reaction time of the performer, i.e. the time between the moment a sound is heard and the moment an instrumental action is taken, is indeed part of the equation.

Cognitive research [3, 4] tells us that the reaction time to unexpected stimuli is on the order of 300 ms (results vary from around 150 ms for simple tests to 600 ms for the perception of NOW, or the “thickness of the present”), so the technical latency of the instrument is actually responsible for just a tiny fraction of the total musical reaction time of the performer+instrument system. We might use the term mental latency to describe the time lag from when a sound is heard until an instrumental action is taken by the performer, and the term technical latency to describe the time lag in the instrument itself.

The causes of technical latency are well known (e.g. data rate, bandwidth and buffering of control and audio signals), and it diminishes with each new iteration of technical development. Mental latency, by contrast, is by and large caused by processes in the body and brain of the performer, and is in most cases reduced through years of continued practice and experience on an instrument. A well trained musician will in many cases exhibit a direct bodily reaction to musical stimuli, expressed directly on the instrument. The musician does not think about how this needs to be done, which valves to press, how hard to bite on the reed, which keys to strike and so on; the reaction is as if there were no separation between body and instrument. This allows the musician to react fluidly and rapidly, and facilitates instant musical communication.

When working with new technology, continuously developing new instruments to facilitate new sounds and new ways of playing music, the process of diminishing mental latency is much more difficult. Typically, we may change the instrument from day to day, because that is part of the artistic process of exploring how the instrument can be at its best and how it can be used to make the music we had in mind when we started building it. Even the music we try to make with it will change as part of the development process. This is part of the natural dialogue between technological development and artistic expression. It would be conceptually possible to develop a new instrument, freeze the design completely, and only then start the process of becoming familiar with it and developing a performance technique that diminishes mental latency. However, in our work we feel it is imperative to keep the dialogue between artistic intention and technological development open, so that the two may continually follow and benefit from each other. In this context, the years of practice needed to create an immediate physical relationship to the instrument are not practically attainable. Perhaps we need to search for new ways of diminishing mental latency, for instance by practising to enhance physical and mental adaptability to new instruments. In some of the ensemble exercises in this project, we have tried to focus on shortening the musical reaction time to unexpected stimuli.
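To put the technical side in perspective, here is a minimal sketch (in Python, with illustrative figures rather than measurements from our own setup) of the latency contributed by audio buffering alone:

```python
# Rough estimate of the latency caused by audio buffering alone.
# Figures are illustrative; real systems add driver, converter and
# (in networked setups) transmission delays on top of this.

def buffer_latency_ms(buffer_size_samples, sample_rate_hz):
    """Duration of one audio buffer, in milliseconds."""
    return 1000.0 * buffer_size_samples / sample_rate_hz

SAMPLE_RATE = 44100.0

# Assume one input and one output buffer (hence the factor of 2).
for size in (64, 256, 1024):
    total = 2 * buffer_latency_ms(size, SAMPLE_RATE)
    print(f"2 x {size:4d} samples @ {SAMPLE_RATE:.0f} Hz -> {total:5.1f} ms")

# Prints roughly 2.9 ms, 11.6 ms and 46.4 ms respectively -- small
# numbers compared with a human reaction time of 150-600 ms to
# unexpected stimuli.
```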

 

Subtle musical control, mapping

To be able to perform expressively on a musical instrument, we must be able to control fine details of the produced sound, both in terms of timbral nuances and in the articulation and phrasing of longer statements. As Moore mentions in the quote above: "For subtle musical control to be possible, an instrument must respond in consistent ways that are well matched to the psychophysiological capabilities of highly practiced performers.” The instrument’s response to performer actions is commonly described by the controller mapping. The field of mapping has been well researched during the last few years [5-8]. We commonly find several types of mapping in an instrument: one-to-one mapping (where one control input is used to control one instrument parameter), one-to-many (also called divergent mapping, where one input controls several instrument parameters), and many-to-one (convergent mapping, where several inputs control one instrument parameter). A typical mapping from input to instrument parameter also involves a transfer function, describing how changes in the input will affect the output (or instrument) parameter. The transfer function may be linear (such as a scaling and offset) or it may have an exponential, logarithmic or otherwise nonlinear shape. Hunt and Kirk [5] point out that complex mappings can often be more engaging for the musician than simple, technology-based one-to-one mappings.
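To make these terms concrete, the following minimal sketch illustrates the three mapping types and a simple transfer function; the control inputs and synthesis parameters are hypothetical examples of our own, not taken from any particular instrument:

```python
def scale(x, out_min, out_max, curve=1.0):
    """Transfer function: map a normalized input 0..1 to out_min..out_max.
    curve=1.0 gives a linear mapping; curve > 1 approximates an
    exponential-like response, curve < 1 a logarithmic-like one."""
    return out_min + (out_max - out_min) * (x ** curve)

def map_controls(pressure, position, wheel):
    """Hypothetical mapping layer for an imagined instrument.
    All inputs are normalized to the range 0..1."""
    params = {}

    # one-to-one: a single input controls a single parameter
    params["filter_cutoff_hz"] = scale(position, 200.0, 8000.0, curve=2.0)

    # one-to-many (divergent): one input fans out to several parameters
    params["amplitude"] = scale(pressure, 0.0, 1.0)
    params["vibrato_depth"] = scale(pressure, 0.0, 0.3, curve=3.0)

    # many-to-one (convergent): several inputs combine into one parameter
    params["brightness"] = 0.7 * wheel + 0.3 * pressure

    return params

print(map_controls(pressure=0.8, position=0.5, wheel=0.2))
```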

 

"One of the main characteristics of such a [performance] mode of operation is that it allows humans to explore an environment in a continuous manner, rather than to ‘perform a series of unit tasks’. Explorative operation means that the user discovers how to control a device by exploring different input control positions and combinations, thus gaining an immediate response from the system. The user may appear to be ‘playing around’ with the control, but they are actually discovering hidden relationships between parameters within the system. (...) This is the experience of a typical acoustic instrumental musician; the instrument stays constant whilst the focus is on the improvement of the human player." (Hunt  and Kirk 2000)[5]

 

The explorative operation of an instrument is vital for establishing a close connection between performer and instrument, as needed both for subtle and precise control and for diminishing mental latency. In this performance mode, the primary feedback is sonic, tactile and kinesthetic (ibid.). In T-EMP, when we process each other’s sound and simultaneously use the balanced monitor mix philosophy, the sonic feedback is less detailed than what one would normally expect from a musical instrument. The instruments also typically do not give tactile or kinesthetic feedback in a manner that reflects the actual sonic output (though this may very well be a fruitful area to explore). In sum, this creates an immensely complex situation for understanding and relating to one’s own instrumental actions, which obviously affects the ability to produce clear and unambiguous musical statements. This in turn affects the ability to familiarize oneself with the instrument, and finally has an impact on the clarity of communication between performers.

 

"The total effect of all these convergent and divergent mappings, with various weightings and biasing, is to make a traditional acoustic instrument into a highly non-linear device. Such a device will necessarily take a substantial time to learn, but will give the user (and the listener) a rich and rewarding experience." (ibid)

 

As we can see, in the case of T-EMP not only the instruments but also the overall musical situation is unusually complex. It is obvious that it will take substantial time to learn how to operate in such a situation, and during the research project we have merely started a process that will go on for much longer. The task of comprehending and being able to operate/play within this paradigm (of processing the sound of others in a freely improvised musical setting) requires time and devotion to an even higher degree than "simply" playing a musical instrument.

A possible mapping strategy for future exploration of tactile or kinesthetic feedback in instrument design may be sound symbolism as explored by Liam O'Sullivan [9], where the perceived “shape” of a timbre is translated into a visual or tactile shape. Although he has explored this in the context of correlations between virtual controller shapes and the associated output audio spectrum, we suggest that the same metaphors could be applied to tactile or kinesthetic signals.
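As a purely speculative sketch of what such a mapping might look like, one could derive a crude “shape” descriptor from the audio (here the spectral centroid, a common brightness measure, standing in for perceived timbral shape) and scale it to a tactile parameter such as vibration intensity; the function names and ranges are assumptions of ours, not taken from O'Sullivan's work:

```python
import numpy as np

def spectral_centroid(frame: np.ndarray, sample_rate: float) -> float:
    """Spectral centroid of one audio frame, in Hz: a rough proxy
    for the perceived brightness or 'sharpness' of the timbre."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    if spectrum.sum() == 0:
        return 0.0
    return float((freqs * spectrum).sum() / spectrum.sum())

def centroid_to_vibration(centroid_hz: float,
                          lo_hz: float = 200.0,
                          hi_hz: float = 6000.0) -> float:
    """Map brightness to a normalized vibration intensity 0..1.
    A brighter ('sharper') sound gives a stronger tactile signal;
    the range limits are arbitrary assumptions."""
    norm = (centroid_hz - lo_hz) / (hi_hz - lo_hz)
    return float(np.clip(norm, 0.0, 1.0))

# Example: a noisy frame reads as 'bright', hence a strong vibration.
rng = np.random.default_rng(0)
frame = rng.standard_normal(1024)
c = spectral_centroid(frame, 44100.0)
print(f"centroid {c:.0f} Hz -> vibration {centroid_to_vibration(c):.2f}")
```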



Alternative modes of control

Closely related to the mapping issues is the question of “What do we want to control?”, or perhaps “What modes of control do we want to facilitate?”

When playing a traditional instrument, such as the piano or violin, the musician has direct control over each note produced by the instrument. Even though the timbral control mapping for a violin has both convergent and divergent elements, and as such is a complex mapping, the control over musical content is still a one-to-one mapping in the sense that the performer generates musical events (notes) one by one. We have become so accustomed to this instrumental paradigm that it may seem to be the most valid and accepted way of making music on an instrument. As we will see, new instruments also provide new modes of control, leading to different perspectives on the music played and a creative potential to be explored.

Composers and conductors exert other kinds of control over the music than the musician does. A composer has other control dimensions and other time scales of interaction with the music, and a conductor has still other ways of affecting the musical production. Each has a quite well defined role and mode of control. With technologically based instruments, these roles can and will be blurred, as the musician’s potential for control over all these musical levels and dimensions can be realized in real time, at the moment of performance. Peter Kirn offers some pertinent reflections on this in his online article "What Does it Mean to Be an Electronic Instrument?":

 

"Digital music technology, most fundamentally, creates a level of abstraction between what you’re directly manipulating (such as a knob, mouse, or touchscreen), and the resulting sound. As such, it challenges designers to provide the feeling of manipulating something directly, making those abstractions seem an extension of your thoughts and physical body. It can also arouse suspicion and confusion in audiences, who may be unsure of what they’re really watching."

(...)

"Composers and conductors had experienced these kinds of interaction, though not in the same way or all at once. Composers using paper scores can construct musical materials for other musicians, imagining more than what they can play themselves. They have (hopefully) heard those results, too, though they haven’t been able to control them in real-time. Conductors working with acoustic musicians can make gestures, interpreted by the human players, that shape music without directly making those sounds."

(...)

"The role of musician and composer are now effectively merged. The choice of where to sit on this spectrum is now your choice." (Kirn 2013)[10]

 

As a consequence, we see some new roles emerging for the performer of music. In his master's thesis, Evan Bogunia terms these roles “Performer as Sound Assembler”, “Performer as Sound Initiator” and “Performer as Sound Modifier”. There may be several ways of defining and dividing these roles, but Bogunia’s division is useful for shedding light on the different modes of control. The different modes (or performer roles) have deep implications for how the performer may organize, change or affect the music, especially in relation to the level at which musical structure is controlled. Regarding the different time strata at which musical ideas are initiated and performed, Bogunia states:

 

"When a DJ is spinning tracks, they can go minutes without having to make a substantial decision regarding the presentation of the music. Along the same lines, a sound assembler can be working with loops of varying length, but may have some as long as, let’s say, eight bars. While they are making decisions about the musical structure, ordering and arrangement, these decisions may still come about slowly. Finely tuned rhythmic perfection may not be that important in this case, due to software quantization settings, or other choices made prior to the performance. A performer that is functioning as a sound initiator, on the other hand, is making decisions all the time, perhaps at the same rate an acoustic instrumentalist would. They are playing all of the lines in real-time using one shot samples, or shorter musical material. I think that the difference between long-term decision-making and musical ideas, such as that utilized by a DJ, and short-term decisions and ideas used by instrumentalists, and hopefully some computer musicians, is important to

understand." (Bogunia 2012) [11]

 

The role of Sound Modifier is somewhat played down in Bogunia's thesis; however, it is one we have concentrated on developing further in T-EMP. This role, as we have taken it on, is not merely to modify an unchanging input from an external source. For us, it opens up a new mode of interplay, in that it facilitates interaction between the role of Sound Initiator and the role of Sound Modifier. The way the Sound Modifier processes the sound may directly affect what the Sound Initiator chooses to do, or even is able to do.

 

Audience: can they follow?

 

The current research project has focused more on music production and the investigation of different performer roles than on the relationship to the audience. Still, reflections on how the new kinds of instruments and the actions performed on them are perceived by an external viewer are relevant when investigating how the instruments affect communication within the ensemble. This has been the focus of an empirical investigation of audience experience by Andreas Bergsland, see 5.6. Moreover, as Schloss writes:

 

"From the beginning of the archeological evidence of music until now, music was played acoustically, and thus it was always physically evident how the sound was produced; there was a nearly one-to-one relationship between gesture and result. Now we don’t have to follow the laws of physics anymore (ultimately we do, but not in terms of what the observer observes), because we have the full power of computers as interpreter and intermediary between our physical body and the sound production. Because of this, the link between gesture and result can be completely lost, if indeed there is a link at all. This means that we can go so far beyond the usual cause-and-effect relationship between performer and instrument that it seems like magic. Magic is great; too much magic is fatal." (Schloss 2003) [12]

 

The issue of too much magic was also raised by David Moss during the workshop in May 2013. After some days of interacting with the ensemble, Moss pointed out that our kind of music can be perceived as “flat” because there is so much going on simultaneously: complex interactions, inventive processing methods, and unseen interactions and communications within the ensemble. Moss, like Schloss, used the phrase “too much magic is fatal” to describe the inherent danger of this way of working.

Schloss also mentions a juggling troupe called The Flying Karamazov Brothers, who developed a highly refined juggling technique by “wearing special gloves that made each catch deliberately audible to the audience, and inventing special juggling patterns that generated complex rhythms”. This artistic idea somehow got lost when they developed the act further, adding sophisticated sensor and MIDI triggering gear to enhance the show. The audience got confused by the technology and thought they were simply juggling to a tape. Visible physical effort can enhance the communication of intent to a significant degree, and also tells the audience that the performer is committed and deeply engaged. The computer musician has some problems to overcome in this respect, as there is often very little visible sign of effort.

In T-EMP, the lack of visible cues, together with the complex processing, the monitor mix strategy, the freely improvised content and the changing instrument designs, all contribute to challenging efficient communication within the ensemble during performance. It would not be unreasonable to raise objections to the chosen strategy, as so many elements work against the normal modes of interplay. We have, however, chosen to stay with these problems and work to solve them rather than change strategy. Specific exercises (commented on elsewhere in this report) have been used to shed light on each problem, not for the purpose of solving it once and for all, but to experience the different ways the problem may manifest itself. As these are complex and interacting problems, we have not settled on any hard and fast conclusions about what one should do in any given situation. Rather, our experience of being with the problems and tackling them with the means at hand is our product, as (hopefully) conveyed by the recorded process documentation.




References:

 

[1] Moore, F.R., The Dysfunctions of MIDI. Computer Music Journal, 1988. 12(1): p. 19-28.

[2] Wessel, D. and M. Wright, Problems and Prospects for Intimate Musical Control of Computers. Computer Music Journal, 2002. 26(3): p. 11-22.

[3] Roads, C., Microsound. 2001: MIT Press. p. 4.

[4] Kosinski, R.J., A Literature Review on Reaction Time. web resource, 2012. http://biae.clemson.edu/bpc/bp/lab/110/reaction.htm

[5] Hunt, A. and R. Kirk, Mapping Strategies for Musical Performance, in Trends in Gestural Control of Music, M.M. Wanderley and M. Battier, eds., 2000.

[6] Hunt, A., M.M. Wanderley, and M. Paradis, The importance of parameter mapping in electronic instrument design. Journal of New Music Research, 2003. 32(4): p. 429-440.

[7] Momeni, A. and D. Wessel, Characterizing and controlling musical material intuitively with geometric models, in Proceedings of the 2003 conference on New interfaces for musical expression. 2003, National University of Singapore: Montreal, Quebec, Canada. p. 54-62.

[8] Brandtsegg, Ø., S. Saue, and T. Johansen, A modulation matrix for complex parameter sets. Proceedings of New Interfaces for Music Expression (NIME), Oslo, Norway, 2011.

[9] O'Sullivan, L. and F. Boland, Visualizing and Controlling Sound with Graphical Interfaces. Audio Engineering Society Conference: 41st International Conference: Audio for Games, 2011.

[10] Kirn, P., What Does it Mean to Be an Electronic Instrument? Create Digital Music, 2013. http://createdigitalmusic.com/2013/03/what-does-it-mean-to-be-an-electronic-instrument/

[11] Bogunia, E., Computer Performance: The Continuum of Time and the Art of Compromise. 2012, Mills College: Ann Arbor. p. 39.

[12] Schloss, W.A., Using Contemporary Technology in Live Performance: The Dilemma of the Performer. Journal of New Music Research, 2003. 32(3): p. 239-242.