1.    Real-time Control - The transition from studio to live performance


What is a real-time performance? Is it possible to maintain a real-time performance when working with fixed media (pre-recorded/stored sound)? How can a band musician dynamically control digital technology during a performance without interfering with the playing techniques on his/her instrument? What are the technical frameworks for live performances within my genre, and is it possible to expand these boundaries?

A precise and general definition of what a real-time performance in music means is problematic from a music-technological point of view. The definition would differ between the distinct practices of different musical genres, and would probably get different responses when asking a DJ, a jazz guitarist, a live electronics artist, a classical violinist or a composer involved in algorithmic composition. Throughout this text I have chosen to use the term real-time to describe the direct and continuous interaction between musicians and digital media during performance.

This article is not an attempt to make a framework for this definition, but an attempt to describe some of the concrete practical challenges connected to my genre when transferring studio techniques for use in a live performance.

I will, by way of this article, describe how I have approached this topic during the project period, and propose a set of strategies and reflections that brought my project closer to controlling the digital technology when performing and rehearsing with other musicians in a real-time situation.

 

1.1    Mixed music

In electroacoustic terminology mixed music[1] describes music that combines traditional instruments and pre-recorded elements. In this text the term refers to the combination and interplay between traditional instruments and fixed media.

In 1993 my band, The 3rd and The Mortal, started experimenting with fixed media as part of the instrumentation, supplementing parts of our music with recordings of concrete sounds and remixes of our own studio productions[2]. With a line-up of electric guitars, bass, drum kit and vocals, these experiments could be seen as a parallel to work already done by many of our sources of inspiration, such as Pink Floyd[3] or Slayer[4], to name a couple. Our first attempts were made by playing back pre-recorded sounds from analogue tape, DAT or CD through a PA system, treating the sounds that came out of the speakers as a supplemental soundtrack beneath our own instrumental output. Even though the fixed media element introduced several possibilities for enriching the sound universe within the band context, it remained a passive element in the musical communication compared with instruments dynamically performed through human interaction. Another challenge that arose at the same time concerned the temporal coordination between the fixed media and the instruments played alongside it. The integration of fixed media, and in particular the need for temporal coordination, has since changed working procedures dramatically, for me and my peers within the genre, but also for popular music in general. There are several strategies for negotiating these challenges, and bands within my genre have chosen different solutions and paths when incorporating fixed media as part of their musical expression. Since the introduction of samplers and DAWs, the efficiency and flexibility of working with pre-recorded material have increased dramatically, but the prevailing approach in contemporary beat-based popular music still follows a common strategy for temporal coordination between fixed media and played instruments. This strategy for synchronisation is mainly based on using and measuring rhythmical content to set cue points for the musical material. Following this strategy to its outer limits, by accompanying a fixed timeline with defined bars and beats, reduces the possibility for individual and collective musical phrasing, both rhythmically (for example rubato) and dynamically (for example crescendo).

To look more closely into mixed music within my genre, I would first like to distinguish between different types of fixed media, based both on their ability to be controlled through human interaction during a performance and on their potential for interplay in real time.

 

2.    Fixed media

In the first category of fixed media, in its most basic form, the pre-recorded/pre-produced content is played back statically from start to end from, for example, CD, DAT or hard disc recorders. Even though these are digital descendants of analogue media such as the tape machine and the gramophone record, they have less potential for human interaction than their analogue ancestors, seen from a performer's point of view. Despite their “locked” content these are still relevant media for many performers working with mixed music within my genre. Looking further into this category of media, the content cannot be affected or changed directly in real time after it has been stored, and human interaction is therefore limited to controlling the audio output during playback. Another point is that, because of its rigid time aspect and absolute predictability during playback, performers are required to follow the same arrangement pattern each time in order to synchronize the musical interaction. This interplay is maintained either by the fact that the stored content contains a continuous rhythmical structure, or by a separate click track played back in sync with the fixed media. As mentioned above, this solution limits the possibility of collective interaction, both rhythmically and dynamically, and it also removes the possibility of altering the length or key of individual parts, or of the composition as a whole, during performance.

Even though most bands working with fixed media have adopted contemporary DAW solutions for playback, there is still a tendency to play back the content in fixed time and key, leaving the instrumentalists to follow the framework of the media as their conductor.

 

The second category is fixed media customized for dynamic playback, where one can interact directly with the stored content, as in samplers and DAWs. Because of these qualities I would categorize them as semi-fixed media. In recent years the software sampler has outcompeted its older hardware sibling and become integrated into most contemporary DAW solutions, but I would still like to distinguish between the logic of these methods based on their levels of control during playback.

The sampler gives the performer the ability to work more independently in the temporal domain because of its flexible start/stop functions on the stored content. This offers wider and more flexible opportunities to coordinate vertical musical occurrences through human interaction. This way of working in a live setting is well known, and is widely used by most DJs. However, when comparing the levels of control over the stored content, the conventional analogue DJ set-up is quite poor compared to the sampler, because it cannot record or digitally process stored sound, and because of its limited channel output (stereo). Inside the sampler category I would also like to mention the various loop machines widely used within contemporary popular music; these have a certain rigidity through the limitation of a set timeline, since the first recorded layer decides the time span for all additional sound-on-sound layers.
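To illustrate this rigidity, the following minimal sketch (in Python, not part of the original project set-up) shows the sound-on-sound logic of a typical loop machine: whatever length the first recorded layer has becomes the fixed time span into which every later layer is folded.

```python
import numpy as np

class LoopMachine:
    """Minimal sound-on-sound looper: the first take fixes the loop length."""

    def __init__(self):
        self.loop = None

    def overdub(self, take):
        if self.loop is None:
            self.loop = np.array(take, dtype=float)   # first layer sets the time span
        else:
            for i, x in enumerate(take):              # later layers wrap around that span
                self.loop[i % len(self.loop)] += x
        return self.loop

looper = LoopMachine()
looper.overdub(np.ones(48000) * 0.1)   # a 1-second take at 48 kHz fixes the loop length
looper.overdub(np.ones(96000) * 0.1)   # a 2-second overdub folds back into that 1 second
print(len(looper.loop))                # 48000 - the time span never grows
```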

In addition to its temporal flexibility, the sampler also gives us the possibility to control pitch, dynamics and effects directly on the stored content (not only on the output), and playback can therefore be controlled in closer proximity to traditional instruments by receiving control messages from different interfaces such as, for example, a synthesiser or a digital mixer.

The DAW can be seen as a mixture of fixed and semi-fixed media at the same time. Most DAWs are tailored to conventional studio procedures and post-production techniques, but there has been a trend in recent years towards adapting their efficiency and logic for live use. I would still point out some benefits and limitations when discussing the possibilities for human interaction in a live mixed-music context. Most programs are based around a grid system consisting of a timeline that successively organizes the horizontal events of musical content over time, and a vertical axis represented by a track list separating the different layers of the musical content. The operator can set the level of concurrency between these two axes. The horizontal timeline can be prearranged in a static state by predefining a set time unit and tempo, or by drawing a set tempo automation. It can also be operated dynamically during playback by using, for example, tap tempo, or by changing the tempo gradually through an interface or directly in the software. The sound layers can be edited, processed, moved and rearranged individually, both dependent on and independent of the timeline. In a live situation this enables the operator to arrange content into the timeline both during, and in advance of, the actual playback.
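As a minimal illustration (not taken from the project) of how such a prearranged timeline behaves, the sketch below converts a position in beats into clock time under a stepwise tempo map, which is essentially what a DAW or click track does when fixed media and performers are synchronised against bars and beats. The tempo values are hypothetical.

```python
def beats_to_seconds(beat, tempo_map):
    """Convert a beat position to seconds under a stepwise tempo map.

    tempo_map: list of (start_beat, bpm) pairs, sorted by start_beat,
    describing the kind of static tempo automation a DAW timeline stores.
    """
    seconds = 0.0
    for i, (start_beat, bpm) in enumerate(tempo_map):
        end_beat = tempo_map[i + 1][0] if i + 1 < len(tempo_map) else beat
        span = min(beat, end_beat) - start_beat
        if span <= 0:
            break
        seconds += span * 60.0 / bpm        # span beats at bpm -> seconds
        if beat <= end_beat:
            break
    return seconds

# Example: 4/4 at 120 bpm, slowing to 90 bpm at bar 9 (beat 32).
tempo_map = [(0, 120), (32, 90)]
print(beats_to_seconds(36.0, tempo_map))    # 32 beats at 120 bpm + 4 beats at 90 bpm = 18.67 s
```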

Despite the flexibility of this medium, there is still a tendency to treat these possibilities as post-production techniques rather than as tools for live use.

 

The integration of fixed media in a band context undoubtedly adds an extra dimension to the aesthetic expression together with the traditional instruments. Examples include the virtually limitless use of different sound sources, precision and complexity in the rhythmical structures, the possibility of performing large compositions with small ensembles, the integration of instruments from outside the ensemble, and so on. The paradox is that these possibilities also make the integration problematic, both as a performer and as part of an audience.


“We spend a great deal of time trying to discipline ourselves to perform like machines: our idea of technical perfection and efficiency is something akin to our idea of a perfectly working machine, and yet, we also have another entirely negative viewpoint towards anything human that is too machine-like.” Cort Lippe, 2002 [5]


In addition to the danger of confining a performance within the rigid framework of the fixed media, there is also a lack of relationship between the sounds and their source, and a lack of connection between the gestures of the human performers and the musical output.


This relationship and connection are traditionally important factors in my genre's live aesthetics, and their absence challenges the authenticity of the genre.

 

Is it possible to integrate fixed media as a seamless part of a real-time performance within my musical expression?

Before entering the practical approach to this question I would like to point out some frameworks and premises for live performances within my genre.

 

3.    Framework and premises for live performances within my genre

3.1    Acoustic representation:

As in electroacoustic and computer music, my genre's expression depends on representation through loudspeakers. In contrast to the multichannel set-ups of most contemporary electroacoustic concerts, most venues within my genre are equipped with conventional stereo systems. The PA system and the live technician take care of the balance and placement between instruments, leaving the audience with an electronic reproduction of the sound sources. The direct sound from the instruments is therefore separated from the audience.

 

3.2    Fold back listening:

Most instruments in a traditional rock band are tailored for electrical reproduction, for example the electric guitar and bass, and the sound of these instruments is therefore already detached from its source, transferred and represented through amplifiers before entering a PA system. In a traditional rock band consisting of electric guitar, bass and drum kit, the only musician still in direct contact with the original acoustics of the instrument is the drummer. To combine and balance these instruments into preferred listening conditions for the musicians, the live technician compensates through monitors placed on the stage. This means that the musicians are unable to control the balance of their own instrument to the same degree as they could in an acoustic ensemble.

 

3.3    Ensemble structure:

In contrast to the traditional composer's role in electroacoustic music, many band constellations involve a democratic structure where all performers share responsibility for the overall aesthetic expression. In most cases there is therefore no composer, conductor or score to be followed during a performance.

 

3.4    Interfaces and levels of control:

Most contemporary and commercially available interfaces are designed to complement the logic of the different DAW software packages, and are made as standalone instruments, mixers or controllers. Most of these interfaces depend on the performer's visual interaction with both the software and the interface itself. The relationship between the tactile perception of the interface and the audible output is limited compared with traditional instruments.

 

4.    A practical approach to real-time control

What started as an attempt to seamlessly integrate fixed media, and in particular environmental sounds, into my genre took several unexpected directions that will be discussed further in this text.

The majority of the work during the practical approach has revolved around rehearsals and conversations with several other musicians from different genres. Many of these sessions, discussions and reflections were recorded, and a selection of them accompanies the text as examples of the different phases of this work.

 

4.1  Controlling the DAW

A challenge throughout this project has been finding ways to control the digital techniques in real time, in order to use them in a musical dialogue with other musicians as an extended part of the conventional instrumentation. Since studio work in recent years has to a large degree changed from manoeuvring large mixing consoles to controlling everything in the DAW with a mouse and keyboard, it felt natural to pursue this workflow in a real-time situation. Even though there are several custom-made interfaces for these operations on the market, few of them are made for integration on an existing instrument. As a guitarist, both hands and feet are already occupied with the guitar and foot pedals, making it impossible to handle a separate standalone interface at the same time. The first step in this process was to place a numerical keyboard directly on the guitar in order to control the DAW without interfering with conventional playing. This solution opened up two different directions: one for controlling fixed media, the other for expanding the sonic expressions of the instrument. As will be described in this text, the second direction affected the whole instrumental set-up in the band. A technical description of the systems that were developed is also given in the proceedings of the NIME conference in Oslo 2011.[6]

 

Direction 1 – controlling fixed media

In a conventional guitar set-up the closest solution for controlling a DAW lies in the use of a MIDI floorboard. Many of these floorboards already contain most of the functions needed for controlling both static and dynamic parameters in a software environment through their different pedals. At the same time this approach led to a practical challenge in operating both the DAW and external hardware guitar processors from the same interface. The first approach was therefore to attach a keypad directly onto the guitar to take care of the non-guitar operations in the DAW, implemented through Ableton Live, and at the same time to separate the control of the guitar processors by using a separate MIDI floorboard.
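The keypad-to-DAW translation can be sketched in many ways; the Python fragment below is only an illustration of the principle (the translators actually used in the project were implemented as Max For Live devices, see Interface output below). It assumes the third-party packages pynput (for reading the keypad) and mido with a MIDI backend (for sending messages that Ableton Live can MIDI-map); the key-to-note assignments are invented for the example.

```python
# Hedged sketch: turn keystrokes from a numeric keypad into MIDI notes for DAW mapping.
# Assumes `pip install mido python-rtmidi pynput`; key/note assignments are hypothetical.
import mido
from pynput import keyboard

port = mido.open_output()                              # default MIDI output port
KEY_TO_NOTE = {str(d): 59 + d for d in range(1, 10)}   # keys 1-9 -> MIDI notes 60-68

def on_press(key):
    char = getattr(key, "char", None)                  # special keys have no .char
    if char in KEY_TO_NOTE:
        port.send(mido.Message("note_on", note=KEY_TO_NOTE[char], velocity=100))

def on_release(key):
    char = getattr(key, "char", None)
    if char in KEY_TO_NOTE:
        port.send(mido.Message("note_off", note=KEY_TO_NOTE[char]))

with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()                                    # run until interrupted
```

In the DAW these note messages can then be mapped to clip launching, track muting or preset changes, just as messages from a hardware controller would be.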

 

 

 



Figure 1. First setup

 

This figure can be seen as a miniature version of a conventional studio set-up, and as a first attempt at bringing the traditional studio environment into the real-time domain. Through this solution the traditional roles of producer and musician are merged, but the system still consists of two parallel lines of control. This led to a search for a new solution where these roles were more seamlessly integrated with each other, and at the same time more individually flexible and comprehensive.

 

Video example 1: Magnify The Sound, Trondheim 2010 (click the right-side menu to watch the video).

The set-up in use during a performance with Magnify The Sound, an ensemble consisting of Carl Haakon Waadeland on drums, Claus Sohn Andersen on computer and real-time sampling, and Trond Engum on electric guitar and samples.

 

Direction 2 - expanding the sonic expressions of the instrument


4.2  Augmentation of the electric guitar based on extended techniques

At this point augmentation refers to devices physically attached to the instrument, while extended techniques refers to the performer's expansion of playing techniques.

The functionality and practical use of contemporary digital guitar controllers are mainly based on a heritage of electrical reproduction conventions (different effect boxes and expression pedals), resulting in a large number of digital floorboard and multi-effects solutions. There are other approaches to digital augmentation of the electric guitar, such as the multimodal guitar[7][8], based on pressure and gesture sensors, or the Manson guitar[9], which builds a touch screen (Kaoss Pad) into the guitar body. Beyond these, there has been limited documented research on digital augmentation solutions attached to and controlled directly from the electric guitar. At the same time, the possibilities and functionality of tailor-made guitar software are poor compared to the tools found in most DAW programs. Such software is to a large degree designed to imitate conventional guitar effects and speaker simulations, and it would therefore be more advantageous to start with the DAW as a processing engine controlled from the guitar. From a musician's point of view it would be natural to integrate interfaces directly into the instrument, enabling real-time control over the digital functionality without interfering with the playing of the instrument. The approach in this part of the project has been to put together well-known and intuitive interfaces and attach them directly to the instrument in order to control the digital software in real time. The direct integration of a keypad and a track-pad enables a player to send both static and dynamic control messages to different software and hardware in real time without removing the physical focus from the instrument or interfering with the idiomatic characteristics of the guitar. These interfaces are also very intuitive because of their daily use in other applications, and they are quite inexpensive compared to custom-made solutions.

 



Figure 2. Placement of the interfaces

 

The physical placement of the two pads was decided upon based on two well-established extended guitar techniques. The keypad was positioned in the typical guitar channel-selector area, based on an on/off technique known as the kill switch[10], where you move the channel selector up and down after striking the strings. The track-pad was placed in the volume/tone control area of the guitar, based on an extended technique called volume swell[11], where you use one of your right-hand fingers to adjust the volume control while playing. The volume swell technique enables the player to use the volume knob dynamically without removing the right hand from its position on the instrument. This was the basis for the second guitar setup.

 

 



Figure 3. Second setup

This set-up gave several advantages. First of all, it made it possible to remove some of the components from the first set-up without compromising DAW control or preset changing in the guitar-FX hardware. This was done by running all incoming control messages from the different interfaces directly through the DAW for mapping and further distribution.

Secondly, the track-pad opened up an easier and more intuitive way of controlling XY parameters compared with using two expression pedals at the same time. The physical placement of the track-pad also made it possible to use the XY parameters without removing the right hand from its position, in contrast to the Manson guitar system.

Interface output

In this system both keypad and track-pad outputs are translated to MIDI signals through two different Max For Live devices[12]. The keypad can be used to perform static operations such as on/off and momentary messages. The track-pad can be used to perform dynamic operations such as volume, morphing between different effects, surround sound operations or other applications demanding XY control.
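As an illustration of the track-pad principle, the sketch below (again Python, not the actual Max For Live device[12]) scales pointer X/Y positions into two MIDI controller streams, so that a single finger movement can morph two parameters at once. The screen resolution and controller numbers are assumptions made for the example.

```python
# Hedged sketch: read the track-pad as an ordinary pointer and emit two MIDI CC streams.
# Assumes `mido` (with python-rtmidi) and `pynput`; resolution and CC numbers are illustrative.
import mido
from pynput import mouse

port = mido.open_output()
SCREEN_W, SCREEN_H = 1440, 900      # assumed pointer coordinate range
CC_X, CC_Y = 20, 21                 # two freely mappable controller numbers

def on_move(x, y):
    cx = max(0, min(127, int(127 * x / SCREEN_W)))
    cy = max(0, min(127, int(127 * (1 - y / SCREEN_H))))   # invert so "up" means "more"
    port.send(mido.Message("control_change", control=CC_X, value=cx))
    port.send(mido.Message("control_change", control=CC_Y, value=cy))

with mouse.Listener(on_move=on_move) as listener:
    listener.join()
```

Mapped in the DAW, the two controller numbers can, for instance, crossfade between effects on one axis and control a send level on the other.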

 

Video example 2: Demonstration of a guitar preset during a concert, October 2011 (click the right-side menu to watch the video).

 

This is an example of a guitar preset consisting of three different effects in the software, played individually or in combination with each other depending on the selection from the floorboard. Effect 1 is a pitch shifter, effect 2 is a recording of a train convolved with the guitar signal in real time, and effect 3 is a recording of an angle grinder convolved with the guitar signal in real time. (The musicians are Arild Følstad, Trond Engum, Rune Hoemsnes and Kirsti Huke.)
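The convolution idea can be approximated in a few lines of code; the sketch below is a simplified offline stand-in (block-wise convolution of a recorded guitar take with a stored train recording), not the actual Ableton Live effect chain heard in the video. File names are placeholders, and both files are assumed to be mono recordings at the same sample rate.

```python
# Hedged sketch: convolve a guitar take with a stored concrete sound, block by block.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

guitar, sr = sf.read("guitar_take.wav")       # stand-in for the live guitar signal
train, _ = sf.read("train_recording.wav")     # stored concrete sound, same sample rate assumed
if guitar.ndim > 1:
    guitar = guitar.mean(axis=1)              # fold to mono for simplicity
if train.ndim > 1:
    train = train.mean(axis=1)

train = train[: 2 * sr]                       # truncate to a 2-second "impulse response"
train = train / np.sum(np.abs(train))         # scale so the output cannot exceed the input peak

block = 1024
out = np.zeros(len(guitar) + len(train) - 1)
for start in range(0, len(guitar), block):    # block-wise convolution, overlap-added
    chunk = guitar[start:start + block]
    out[start:start + len(chunk) + len(train) - 1] += fftconvolve(chunk, train)

sf.write("guitar_convolved_with_train.wav", out, sr)
```

A real-time version would do the same thing per audio buffer with a partitioned impulse response, which is what dedicated convolution plug-ins take care of.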

 

4.3  Augmentation of the drum kit

Exploring the possibilities of sonic expansion in the electric guitar through its augmentation directed my attention towards the drum kit. In terms of its acoustic sound the drum kit has remained basically unchanged throughout the history of popular music, but if we look at the technical arrangements needed for electrical reproduction, the demands and preparations go far beyond those of the other instruments in a traditional band set-up. The extensive use of close miking in modern music production requires multiple microphones to capture the individual drum sounds of a kit. Unlike the electric guitar, the multiple outputs from the kit microphones need to be balanced and produced against each other before being further distributed through speakers and fold back systems. This means that the technician is in charge of the sound of the performer's instrument. A further observation is that the drum kit's sound production normally remains in a static state after being balanced, and that this preset is neither controlled dynamically nor changed by the instrumentalist during performance. The sound pressure level from the drums also sets the premises for the drummer's fold back solution, since the direct sound of the kit blends with the electrical reproduction, unlike, for example, an electric guitar.

There are already several customized digital alternatives, such as electronic drum kits or different trigger solutions, but none of them are capable of authentically recreating the dynamic range and playing techniques of acoustic drums, which are an important part of the expression within my genre. Electronic drums can recreate a sound similar to acoustic drums when their MIDI output is used to trigger drum samples, but the limited velocity resolution and the limited number of samples assigned to each drum narrow the dynamic possibilities and therefore exclude expressions such as, for example, a crescendo on the cymbals. Since the sensitivity is measured by the energy in each stroke, the trigger system is unable to detect the difference between using sticks, brushes or your hands. There are other approaches to drum augmentation which preserve the idiomatic characteristics of the acoustic drums, such as “Hybrid Percussion”, Aimi R. M. (2007)[13], which is based on convolution techniques, or the live set-up for Depeche Mode[14], where the drum microphones go through a DAW before entering the PA system, but as far as I know these are not established solutions that are widespread within my genre.

 

When entering this part of the project I tried to figure out whether it was possible to treat the sound of the drum kit according to the same principles as an electric guitar during a performance, without changing the idiomatic playing style of the instrument. A crucial point during this part of the work has been the close cooperation with the trained drummer Rune Hoemsnes, in order to maintain a practical approach when designing a framework for the augmentation. This has been important for sustaining ongoing discussions that could lead to further technical improvements, and, even more importantly, to an expansion of the aesthetic expression.

Looking at the technical framework, the course of events can be separated into three main premises.

 

a)         Separating the direct sound of the drums from the performer and the audience

In popular music the drum sound produced by electrical reproduction has normally been explored in the sound studio, or by the live technician in a concert situation. These conventions are still valuable in all modern music production using acoustic drum kits. It is problematic for the drummer to directly mix his own sound during performance, partly because of the loud output from the instrument, but also because both hands and feet are occupied with the instrument at the same time.

The first step towards exploring the full sonic potential was therefore to separate the direct sound of the drums from the performer as well as from the audience.

This can be achieved by feeding the electronic reproduction back to the drummer through headphones at a sound pressure level that exceeds the direct sound of the drums. As already mentioned in section 3, the audience is already separated from the direct sound from the stage by the PA speakers, as long as the sound pressure level exceeds the instruments' direct output.

 

b)         Setting up a flexible fold back system

Quite early in the process it became clear that if the drummer was to be able to control the sound with the required degree of flexibility, he needed a dedicated DAW system customized for the drum set-up. This system had to handle a flexible fold back setup, but also run and dynamically control the applications needed for the drum processing. The solution was to model an internal system, detached from the output to the PA system, with inputs from all drum microphones, the other musicians, pre-recorded material and a separate click track for synchronisation with fixed media. This system enabled the drummer to balance and mix his own fold back directly in the DAW.

 



Figure 4. Fold back system

 

c)         Designing a technical framework

Since the drum kit already has a tradition of close miking in live concerts, the step of moving the microphone outputs from the PA mixer to the DAW was quite short in terms of microphone placement. The augmentation set-up consists of inserting a DAW into the signal chain between the microphone outputs and the mixing desk. In practice this means that all pre-amplification, signal routing, balancing, panning, effects and processing are done in the computer, controlled by the drummer through a MIDI interface, before the signals are distributed to the PA system. This solution leaves the performer with a large degree of control over the overall sound production of the instrument.



Figure 5. Signal and system set up

 

The basic architecture of this system is that the first six of the eight outputs are treated as a conventional drum production, but the balancing between channels and the application of typical insert effects such as equalizers and compressors are done in the DAW before the signals enter the PA mixer. The last two outputs are dedicated to typical time-based effects such as reverbs and delays. The basic principle behind this idea is that all outputs are already produced before entering the PA mixer. Since all signal routing and effects are handled in the software, the drummer can change settings during performance through a MIDI interface (in these experiments we used a Trigger Finger[15][16] and an Octapad[17]). This framework enables several possibilities in a performance situation (a sketch of the routing idea follows after the list below):

- The possibility of pre-producing different drum sound presets, and then switching between them during performance.

- Since muting the output channels does not interfere with the microphone input signals, the first six conventional outputs can be muted while their inputs are still used for more unconventional drum effects, such as granular synthesis and convolution, sent out on channels 7 and 8. The system was implemented using Ableton Live.
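The routing idea can be summarised in a small, hypothetical sketch outside any specific DAW: eight microphone inputs feed a gain matrix whose first six outputs carry the conventional drum production and whose last two feed the processing; muting an output only zeroes its column in the matrix, so the inputs remain available to the effects. Preset names and gain values are invented for the example.

```python
# Hedged sketch of the 8-in/8-out routing principle; not the actual Ableton Live session.
import numpy as np

N_IN, N_OUT = 8, 8

def make_preset(direct_gains, fx_sends):
    """Build an (inputs x outputs) gain matrix for one drum-sound preset."""
    m = np.zeros((N_IN, N_OUT))
    for i, g in enumerate(direct_gains[:6]):
        m[i, i] = g                     # mic i -> conventional output i
    m[:, 6] = fx_sends[0]               # all mics -> effect output 7
    m[:, 7] = fx_sends[1]               # all mics -> effect output 8
    return m

presets = {
    "dry_rock": make_preset([1.0, 1.0, 0.9, 0.8, 0.8, 0.7], (0.0, 0.1)),
    "granular": make_preset([0.0] * 6, (0.8, 0.8)),   # outputs 1-6 muted, effects still fed
}

def route(block, matrix):
    """block: (frames x 8) array of mic samples -> (frames x 8) output buses."""
    return block @ matrix

frames = np.random.randn(256, N_IN) * 0.1              # stand-in for one audio block
out = route(frames, presets["granular"])               # preset switchable from a MIDI pad
print(out.shape, np.allclose(out[:, :6], 0.0))         # (256, 8) True
```

Switching presets from the Trigger Finger or Octapad then corresponds to swapping which gain matrix is applied to the incoming microphone signals.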

 

4.4  Concrete sounds - transforming past to present

The integration of environmental sounds within the musical expression has been fruitful during the compositional studio work, enabling me to organize and control the content over time, but introducing this element into the real-time domain was more challenging. While the guitar and drums were being pushed in a live electronic direction, the sampler still challenged the boundaries of what could be defined as real time when working with stored sound.

From my experience with the guitar keypad it was apparent that the stored sounds needed to be treated more dynamically and expressively. This did not exclude the role of the keypad as a trigger device, but it had clear limitations as a musical instrument. This directed the focus towards the MIDI keyboard, which is still one of the most established and intuitive digital musical interfaces available. This gave several advantages in this particular part of the project, both because of its widespread integration in digital software and because it gave the opportunity to work closely with a professionally trained piano and synthesiser player, Arild Følstad, when building up different instruments from stored content[18]. This transformation is discussed further in the article “Environmental sound”.
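One simple way of turning a stored environmental sound into a keyboard-playable instrument is to repitch the recording for each MIDI note, as in the sketch below. This is only an illustration of the principle, not the sampler instruments that were actually built in the project; the file name and root note are placeholders.

```python
# Hedged sketch: a chromatically playable "sample instrument" built from one stored recording.
import numpy as np
import soundfile as sf

sample, sr = sf.read("environmental_sound.wav")
if sample.ndim > 1:
    sample = sample.mean(axis=1)            # fold to mono for simplicity

ROOT_NOTE = 60                              # MIDI note at which the sample plays unaltered

def play_note(midi_note):
    """Return the sample resampled so that it sounds at `midi_note`."""
    ratio = 2 ** ((midi_note - ROOT_NOTE) / 12)             # semitone pitch ratio
    positions = np.arange(0, len(sample) - 1, ratio)         # fractional read positions
    return np.interp(positions, np.arange(len(sample)), sample)

# Example: a minor triad built from the same recording.
chord = [play_note(n) for n in (60, 63, 67)]
shortest = min(len(v) for v in chord)
mix = sum(v[:shortest] for v in chord) / len(chord)
sf.write("sample_instrument_chord.wav", mix, sr)
```

In practice the same mapping is what a DAW sampler does when a recording is stretched across the keyboard, with envelopes and key zones added on top.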

 

5.    Constructing a live electroacoustic rock band

During the process, the points listed above did not form a chronological development but rather several parallel directions, which ended up as the basis for the instrumentation of the band “The Soundbyte”. In retrospect I will try to pinpoint what I believe were the main turning points in the transition from the work done in the studio environment to constructing and rehearsing with the band. Bringing together the augmented instruments mentioned in this text could be seen as the project's last stage in translating electroacoustic techniques for use within my genre. Building the band was not about inviting people from other musical styles to contribute within an already predetermined genre; it has been about changing the whole expression of a band by introducing new frameworks that enable all musicians to interact with the music technology without losing control over their traditional instruments. I have chosen to describe this transformation in the following four steps:

 

5.1    Bringing the sound studio into the rehearsal room

In my genre there is still a distinction between the processes of playing together in the rehearsal room and entering a sound studio. In its most basic form, the first step taken in this project was to separate the guitar set-up from the recording chain in order to construct a dedicated system for the instrument. Just the idea that the computer was part of the instrument, and not a subpart of a recording chain, was a big step away from my personal preconception of how the system should be put together. This idea in turn led to separating every instrument from a mindset based on post-production techniques and from the way one normally enters the studio.

 

5.2    Democratizing the technology within the band

The next aspect is how the performers adapted to combining the roles of musician, technician and producer at the same time. As discussed in “Studio as a compositional tool”, introducing a producer role in the rehearsal room strongly directs the structuring of a band's creative process.

Opening up shared technical responsibility within this ensemble has led to a greater variety in the overall expression, and has without doubt been advantageous for the improvisational interplay.

 

5.3    Augmentations of instruments

The augmentation of the instruments has been crucial for implementing the techniques explored in the project. Each of the three instruments in focus had specific demands and different solutions tailored to their idiomatic characteristics.

The common thread through the augmentation has been that the musicians should be able to respond directly to the output without compromising musical phrasing in the interplay.

The interpretation of the repertoire has been based on practising, and on dynamically improving and developing the different system solutions in response to the demands experienced.

Enabling each performer to control not only the acoustic sound of the instrument but also its electrical reproduction and digital processing has led to a sonic expansion of the band's performances.

You could say that all the musicians in the band have widened their instrumental repertoire, both sonically and in the way they alter their playing style in order to fulfil the demands of individual sound presets.[20]

The sonic transformation of the individual instruments and how they are placed and sound together is probably where the aesthetic output differs most compared with other bands within my genre. 

 

5.4    Bringing the sound studio to the stage

Since bringing the band to the stage was the last part of the project, the technical frameworks were already set up and tested both in a studio environment and the rehearsal room.

Since all sound production from the band is done before entering the PA system, the interaction with the live technician was turned around compared with a conventional live situation.

Instead of treating and adjusting the signals through the live mixer[21], the sound checks have been used to treat and adjust the signals individually in the different systems before they enter the PA mixer. This solution gives greater control over the sound production during a live performance, but it also introduces some constraints. First of all, the drum system is customized for a specific drum kit with specific microphones and microphone placements; a change of drum kit or microphones therefore requires a new set-up adjustment. There is also a danger of feedback through the drum system because of the loop that is created when monitors are introduced on the stage.

 

Video example 3: Demonstration of one of the sample instruments during performance (click the right-side menu to watch the video).

This is an example of how one of the sample instruments is used during performance. Arild Følstad plays the sample instrument.

 

6.    Summary

This project has been about finding new strategies for composing and producing music within my genre using digital music technology. Throughout this article I have tried to explain how I have constructed a technical framework that gives access to electroacoustic techniques without changing the basic instrumentation of the band, and how this system can transfer studio production techniques into real-time use during a performance.

Since this transformation was the last stage of the project, I believe that the band is only at the start of revealing its full potential.

I also believe that the different technical frameworks are flexible solutions that could be transferred and adapted to other instruments and genres, and that it is possible to push the sonic boundaries of the instruments much further using the same framework. Since the project ended in 2011, several of these aspects have been developed further.

 

 

         REFERENCES

 

[1] Emmerson, Simon (2007). Living Electronic Music. Ashgate Publishing Limited, p. 104.

[2] One example is “Oceana”, The 3rd and The Mortal, Tears Laid in Earth, VME, 1994.

[3] One example is “Money”, Pink Floyd, The Dark Side of the Moon, Harvest, 1973.

[4] One example is “Reign in Blood”, Slayer, Reign in Blood, Polydor, 1986.

[5]  Lippe, C. 2002. “Real-Time Interaction Among Composers, Performers, and Computer Systems.” Information Processing Society of Japan SIG Notes, Volume 2002, Number 123, pp. 1-6.

[6] Engum, Trond (2011). “Real-time control and creative convolution.” In Proc. of the 2011 Conf. on New Interfaces for Musical Expression (NIME 2011).

[7] http://www.numediart.org/projects/07-1-multimodal-guitar/ Last visited 10. April 2011.

[8] Lahdeoja, O. (2008). “An approach to instrument augmentation: the electric guitar.” In Proc. of the 2008 Conf. on New Interfaces for Musical Expression (NIME08).

[9] http://www.mansonguitars.co.uk/ last visited 10. April 2011

[10] http://www.instructables.com/id/Guitar-Killswitch-Strat.-design/ Last visited 10. April 2011 

[11]  http://en.wikipedia.org/wiki/Volume_swell Last visited 10. April 2011

[12] Wærstad, Bernt Isak (2010). The trackpad translator was implemented in Max For Live by Bernt Isak Wærstad. http://partikkelaudio.com/extras/mfl/

[13] Aimi, R. M. (2007). “Hybrid Percussion: Extending Physical Instruments Using Sampled Acoustics.” PhD thesis, Massachusetts Institute of Technology.

[14] http://www.youtube.com/watch?v=YvuCp1lZIBw Last visited 10. September 2011

[15] http://www.soundonsound.com/sos/sep05/articles/triggerfinger.htm Last visited 10. October 2011

[16] A demonstration of the triggerfinger in use:  Video example 5

[17] http://www.rolandus.com/products/productdetails.php?ProductId=1059 Last visited 10. October 2011

[18] A demonstration of one of the instruments can be seen in video example 6.

[19] See video example 1: “Studio as a compositional tool”, 8.45–9.27, conversations with Michael Tibes.

[20] See video example 2: 4.10-6.28

[21] Equalizing, compressing, effects, balance etc.

 

Video examples

 

Video examples 1–3 can be watched by clicking in the right-side menu.