1. Convolution


This article concerns my work with convolution. The exploration of this technique led my artistic research project in several unexpected directions. First, it shaped how I incorporated concrete sounds, both in the compositional work and as an improvisational element in real-time performance. Secondly, it led me towards the development of the instrument augmentations that are discussed further in the article “Real time control”. Through this process I have found new strategies for exploring the technique of convolution, and I propose those strategies in this article. The main content of this article is also described in the proceedings of the NIME conference 2011 [1]. Since the results described in this text were presented in 2011, the topic has been developed further, both through the creation of new software and through practical use in several artistic works [2].

 

Convolution tools have been available to composers since the early nineties [3]. In popular music they are most commonly used in reverberation units, where they are based on recorded impulse responses from different rooms (the response you get from a room when sending out an impulse such as a handclap). These impulse responses are stored in order to be convolved with a desired input. Another application is to imitate known analogue equipment by recording impulse responses from its outputs. Beyond these approaches there is no limit to which sounds can be convolved with each other, and the exploitation of these possibilities is where the research of this project is aimed. Other examples of approaches to this technique are Roberto Aimi's percussion instrument [4] and “The Sound of Touch” [5]. A more artistic use of the technique can be found in some of Barry Truax's works [6] within art music, or The Soundbyte's “City of Glass” [7] within a popular music genre. As far as I am aware, there has been very limited documented artistic research on convolving different sound sources with each other, and consequently most descriptions of its use focus more on technical than aesthetic aspects [8].

 

1.1 Basic principles

When convolving two digital sound files in the time domain, each sample of the output is formed by multiplying the samples of one file, pairwise, with a time-reversed and shifted copy of the other, and summing the products. The output is a third sound with innate attributes from both sources, and it is longer than either input (the sum of the two lengths, minus one sample). You can convolve any sound with any other sound. The technique has traditionally been used as a postproduction tool because of the amount of processing power such an application demands; only recently has it become possible to use it in real time without too much latency between the input and output signals.
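To make this arithmetic concrete, here is a minimal sketch of direct time-domain convolution in Python/NumPy. The function name and the final normalisation are my own illustration, not part of any software used in the project; NumPy's built-in `np.convolve` performs the same computation in one call.

```python
import numpy as np

def convolve_time_domain(x, h):
    """Direct convolution: each output sample is a sum of pairwise
    products between x and a shifted copy of h."""
    y = np.zeros(len(x) + len(h) - 1)   # output spans both inputs
    for k, hk in enumerate(h):
        y[k:k + len(x)] += hk * x       # accumulate one shifted, scaled copy of x
    return y / np.max(np.abs(y))        # normalise to avoid clipping

# Equivalent built-in: y = np.convolve(x, h)
```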


1.2 Software and conventions

Because of its extensive use within reverberation software, most commercially available convolution programs are built around the same logic and visualisation, and, as I will explain, this limits the exploration of the technique's full potential. In addition, many convolution reverbs lock their impulse responses to an internal format, preventing most users from recording and loading their own impulse responses. On top of this, several of the “open” solutions, having been created for postproduction, still introduce a large amount of latency and are therefore impossible to use in real time. The basic function of most of this software is the possibility to mix the balance between the dry and the wet signal, the dry being the input and the wet being the convolution between the input and the stored impulse response. The pre-programmed presets mainly consist of recordings of different room responses, or recordings of the same room with different time spans or envelopes. In general, most impulse responses in the presets have a clear attack point with a decreasing tail, modelled on a normal room response.
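The following sketch shows this common dry/wet topology; the function and parameter names are hypothetical, not taken from any particular product.

```python
import numpy as np

def dry_wet_mix(dry, impulse_response, mix=0.5):
    """Blend the unprocessed input ('dry') with its convolution
    against a stored impulse response ('wet'), as most convolution
    reverbs do. mix=0 gives only the input, mix=1 only the convolution."""
    wet = np.convolve(dry, impulse_response)
    wet /= np.max(np.abs(wet))          # normalise the wet path
    out = mix * wet                     # wet is longer than dry (the tail)
    out[:len(dry)] += (1.0 - mix) * dry
    return out
```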

 

1.3 Terminology and focus areas

When entering the topic of convolution by convolving different environmental and instrumental sounds with each other, some of the established terminology can be misleading. First of all, the term impulse response refers to something that has been given an impulse and, as a consequence, sends out a response (the recording of a handclap in a room, or the output of an equalizer when fed an impulse). An impulse response therefore has a decreasing intensity curve: a clear attack point followed by a decaying tail. The term becomes problematic if you convolve a guitar with the recording of a train, using the train as the stored “impulse response”. The sound of the train is not a response to an impulse, and it does not necessarily have a clear attack point with a decreasing tail. The next point that can be confusing is the relationship between the dry/wet parameters. The dry signal represents the input, and the wet signal represents the convolution between the guitar and the train. When using reverberation in music production, the balance between the dry and wet signals is an important parameter when, for example, you want more or less room simulation on the guitar. This does not apply in the same way if you want more or less “guitrain” on your guitar! As I will show through the sound and video examples in this text, treating convolution only as a traditional reverb technique is limiting when working with the approaches described in this article. During this project, convolution has in many ways functioned as a translator between instrumental and concrete sounds, both in terms of sonic expression and when interacting with these sources during real-time performance.

 

2. Practical approach

Using a wide variety of concrete sounds instead of impulse responses, this project has explored the possibilities and limitations that convolution between digital sound files implies, both at a micro level and in a broader musical context. A sound used instead of an impulse response in the convolution process will in the following be called a convolutor. This work has resulted in three different approaches, which I describe below according to their technical setups.

 

2.1 Convolution in postproduction

This approach is the most common way of exploring the technique. It was also described in an article in the recording technology magazine Sound on Sound in September 2010 [9], but still with a clear preconception of treating it like a reverberation unit. The method I used started with empirical experiments with a variety of pre-recorded concrete sounds, consisting mainly of urban and industrial content.

The central aim of these experiments was to be able to predict how different inputs and convolutors would interact with each other, in order to control these parameters towards a predetermined output.

Figure 1. First setup


Video example 1: Convolution in postproduction - click right side menu to watch video

 

This is an example of different combinations of sounds convolved together, also used in a broader musical context. (Musicians: Trond Engum, Rune Hoemsnes and Arild Følstad.)

 

2.2 Real-time convolution

In convolution software the impulse response is stored in a static state during the process, meaning that the only sound that can be changed dynamically during the operation is the input sound. In a real-time situation you can therefore interact with, and vary, the sound in a much more direct and intuitive way than in postproduction, where the input signal is already recorded and stored. Using a live input from the performer enables direct interaction with the output of the convolution process. Used conventionally, this could be compared to the interaction between your instrument and a reverb, but when replacing a room response with a train recording you would interact differently. Put another way, you interact differently depending on which response you get from your instrument. Another aspect of using concrete sounds instead of impulse responses in this direct interaction is that the performer can interact dynamically with the different concrete sounds through his/her own instrument. This is very distinct from, for example, triggering or playing back the sounds during performance. Using drums as the input source through this system opens up another interesting perspective. When a drum is hit, the sound has a similar acoustic behaviour over time to a recording of a room impulse (a clear attack point followed by a decreasing tail). This means that when using drums you can invert the preconception of what is the input and what is the stored impulse response.
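As a sketch of how a live input can be convolved block by block with a fixed convolutor, the following overlap-add scheme uses the FFT per audio block; the class and parameter names are my own illustration. Real low-latency engines typically partition the convolutor as well; this sketch keeps it whole for clarity.

```python
import numpy as np

class BlockConvolver:
    """Overlap-add convolution of a live input stream with a fixed
    convolutor, processed one audio block at a time."""
    def __init__(self, convolutor, block_size=512):
        self.block = block_size
        n = block_size + len(convolutor) - 1
        self.nfft = 1 << (n - 1).bit_length()          # next power of two >= n
        self.H = np.fft.rfft(convolutor, self.nfft)    # convolutor spectrum
        self.tail = np.zeros(self.nfft - block_size)   # overlap from earlier blocks

    def process(self, x):
        """x: one block of live input samples; returns one output block."""
        y = np.fft.irfft(np.fft.rfft(x, self.nfft) * self.H, self.nfft)
        y[:len(self.tail)] += self.tail                # add overlap from the past
        out = y[:self.block]
        self.tail = y[self.block:].copy()              # save overlap for next call
        return out
```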

The search for ways to interact with this technique in real time was a turning point in my project, and it eventually led both to an augmentation of the drum kit and to the development of the convolutor sampler. The interaction model is based upon opening a two-way communication between the musician and the output. To achieve this, it was crucial that the musician was separated from the acoustic sound of the instrument in order to interact with the processed signal; this was achieved by feeding the processed signal back to the musician through headphones (see Figure 2). By changing the convolutors, and tailoring them to suit the instrument at hand, it was possible to affect the performance without the musician feeling unfamiliar with the playing techniques of his own instrument. During these experiments both dry input signals and processed signals were recorded, in order to analyse what caused the sonic changes, but also what made the musician make different artistic choices when interacting through this two-way communication.

 

Figure 2. Second setup

 

Video example 2: Real-time convolution drums - click right side menu to watch video

 

This is an example of real-time use of this setup. It shows guitar and concrete sounds playing together with real-time convolution between drums and the recording of a crane. This improvisation session was a turning point in the project, and the recording shows the first time the technique was tried out in interplay. (Musicians: Trond Engum and Rune Hoemsnes.)

 

Video example 3: Real-time convolution drum testing - click right side menu to watch video

This is an example where the drummer, Rune Hoemsnes, uses the drum system for real-time convolution. The video and convolutors were recorded by Rune Hoemsnes.


2.3 Convolutor sampler

The third approach came as a result of the experiences gained from the first two setups. The idea was to be able to record a convolutor and interact with it in a real-time situation. This setup provided the opportunity to sample convolutors from my own instrument, from other musicians or from other sound sources, and convolve them directly with another chosen sound source in real time.

This setup gave several advantages. Firstly, it made the whole process of trying different sounds against each other much faster and more effective. Secondly, the artistic value of being able to convolve samples of fellow musicians with my own instrument in real time opened up some exciting possibilities and yielded very interesting results. As opposed to traditional live sampling, which is based upon reproducing recorded material in different ways, the convolutor sampler enables a performer to interact dynamically with the recorded material through his/her own instrument. The program was implemented in Csound, and runs in Ableton Live as a Max for Live device [10].
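The sketch below illustrates the convolutor-sampler idea in the same Python terms, reusing the BlockConvolver sketch from section 2.2: record a convolutor from one live source, then convolve another live source with it. This is a hypothetical illustration of the concept, not the actual Csound/Max for Live code.

```python
import numpy as np

class ConvolutorSampler:
    """Record a convolutor from a live source, then convolve a second
    live source with it, block by block."""
    def __init__(self, block_size=512):
        self.block = block_size
        self.buffer = []      # blocks captured while sampling
        self.engine = None    # becomes a BlockConvolver once a convolutor exists

    def record(self, block):
        self.buffer.append(block)                 # capture one block of the source

    def stop_recording(self):
        convolutor = np.concatenate(self.buffer)  # assemble the sampled convolutor
        self.buffer = []
        self.engine = BlockConvolver(convolutor, self.block)

    def process(self, live_block):
        if self.engine is None:
            return live_block                     # pass through until sampled
        return self.engine.process(live_block)
```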

Figure 3. Convolutor sampler

 

Video example 4: Convolutor sampler - click right side menu to watch video


This video demonstrates the functions and some of the possibilities of the convolutor sampler plug-in.


3. Summary

As explained throughout this article, the exploration of artistic possibilities within cross-synthesis convolution has been fruitful for many aspects of my research project, especially when it comes to the interaction between instrumental and concrete sounds. The results of this work are audible both in the recorded compositions, "Trilogy", delivered as the artistic result, and as part of the live setup. I still believe there is a lot of unexplored potential in this technique, especially when it comes to real-time convolution.

I would therefore propose some ideas for further work, both artistic and technical, where the technical proposals concern further development of the convolutor sampler.

Artistically, the individual performances could benefit from democratizing the control of the technology even further. By treating this technique as part of the individual musician's instrument, the musician could provide his/her own convolutors based on different playing techniques and instruments. An example of this can be seen in video example 4, where drummer Rune Hoemsnes interacts with his own recordings. Ideas for further development of the convolutor sampler include the possibility to store several samples at the same time, the possibility to alter the length and envelopes of the samples in real time, and the implementation of a transient detector with a real-time function for switching between the different transients. As mentioned in the introduction, several of the development ideas from 2011 were implemented in a live convolver during 2012.

 

References

[1] Engum, Trond (2011): "Real-time control and creative convolution". In Proceedings of the 2011 Conference on New Interfaces for Musical Expression (NIME), Oslo, Norway.

[2] T-EMP: Communication and Interplay in an Electronically Based Ensemble. http://www.researchcatalogue.net/editor?research=48123&weave=53023#Dynamic_convolution

[3] Roads, Curtis (1996): The Computer Music Tutorial. The MIT Press.

[4] Aimi, Roberto M. (2007): "Hybrid Percussion: Extending Physical Instruments Using Sampled Acoustics". PhD thesis, Massachusetts Institute of Technology.

[5] Merrill, D., Raffle, H. and Aimi, R. (2008): "The Sound of Touch: Physical Manipulation of Digital Sound". In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '08), Florence, Italy.

[6] Truax, Barry: http://www.sfu.ca/~truax/conv.html. Last viewed 10 April 2011.

[7] The Soundbyte/Irgens (2007): City of Glass. Voices Music Publishing.

[8] Roads, Curtis (2004): Microsound. The MIT Press, pp. 209-221.

[9] Deruty, Emmanuel (2010): "Creative Convolution". Sound On Sound, Volume 25, Issue 11.

[10] Brandtsegg, Øyvind (2011): The impulse sampler was implemented in Csound and Max For Live by Øyvind Brandtsegg. oyvind.brandtsegg@ntnu.no

 

 

Video examples

 

Video examples 1-4 can be watched by clicking the right side menu