Introduction


Musicians regularly make use of technology in their creative practices. They design different kinds of digital instruments and interfaces and develop the necessary skills to play with them, forming an almost intimate relationship with these tools. Technology has become an integral part of the creative process – an extension of the musician's body, perception, and imagination. In fact, the practice of music has become so intertwined with technology that it would be virtually impossible to discuss the former without mentioning at least one digital apparatus used during the work process to create, record, play, or analyze sounds.


The ubiquitous presence of technology in music raises certain fundamental questions: Why do we choose to use these kinds of tools in the first place? What purpose do they serve? How do we choose to engage with our surrounding environment using our computers, interfaces, and algorithms? Philosopher and sociologist of science Bruno Latour suggests that “[technology] has never ceased to introduce a history of enfoldings, detours, drifts, openings and translations that abolish the idea of function as much as that of neutrality” (2002, pp. 253–254). His approach invites us to explore the yet-undiscovered possibilities hidden in our apparatuses, and to imagine the ways they open up unforeseen paths, reshape our very existence, and affect our decisions and actions. He adds: “The moral law is in our hearts, but it is also in our apparatuses. To the super-ego of tradition we may well add the under-ego of technologies,” underlining the moral implications brought up by our continuous adventures with technology. How we understand the world and how we interact with one another are determined by our entanglement with our digital apparatuses, and music can offer a valuable perspective on these issues.


The following sections document the ARC session held on 13 April 2021. Anıl Çamcı, Jenn Kirkby, and Ilya Ziblat Shay present their work and discuss how they use digital tools in their creative practices and in their approaches to music technology.

 


References

Latour, B. (2002). Morality and technology: The end of the means (C. Venn, Trans.). Theory, Culture & Society, 19(5–6), 247–260.

Musicians Playing With Computers | Musicians Interacting With The World

with Ilya Ziblat

(13 April 2021, online)

In his presentation, Ilya described his work with sample-based processing. The samples often consist of spoken vocal content extracted from social-media and video-sharing platforms: news items, reports and speeches by politicians, and other politically 'charged' materials. Reworking the original samples allows him to deconstruct biases and supposed meanings, question the dispositions of the speakers, and reconstruct the original content in a musical, rather than syntactical, order. The processing is done in real time using live-electronics interfaces, allowing for musical interaction with other improvising musicians.

Trump.et Legacy is a composition in five movements for trumpet and live electronics, released in 2021 immediately after the US elections (🔗Trump.et Legacy on Bandcamp). Samples of pre-recorded trumpet sounds and of speech segments by former president D. Trump form the source material for the processing. Using a computer interface, the original samples are sliced into smaller fragments. This sample-based synthesis technique, built on a granular synthesis audio engine, gives the computer musician the possibility to divide words into syllables, consonants, and vowels, or even into smaller speech components (for example, the very beginning of a plosive T consonant), and to use these phonetic atoms as musical material. In this sense, the digital interface makes it possible to approach language itself as a sonic material.
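The slicing described above can be sketched in a few lines of code. The following is a purely illustrative toy, not Ilya Ziblat Shay's actual granular engine, and all function names are hypothetical: it cuts a mono sample array into fixed-size grains and reorders them, the way a speech recording might be broken into phonetic atoms and rearranged in a musical rather than syntactical order.

```python
import random

def slice_into_grains(samples, grain_size):
    """Split a list of audio samples into fixed-size grains."""
    return [samples[i:i + grain_size]
            for i in range(0, len(samples), grain_size)]

def reorder_grains(grains, seed=0):
    """Rearrange the grains (here: a simple seeded random permutation),
    standing in for the musical reordering of phonetic atoms."""
    rng = random.Random(seed)
    shuffled = grains[:]
    rng.shuffle(shuffled)
    return shuffled

def flatten(grains):
    """Concatenate grains back into a single sample stream."""
    return [s for grain in grains for s in grain]

# Toy "recording": 12 samples cut into grains of 4 samples each
audio = list(range(12))
grains = slice_into_grains(audio, 4)
rearranged = flatten(reorder_grains(grains, seed=1))
```

In a real-time setting the grains would of course be drawn from a live audio buffer and played back through a synthesis engine; the sketch only shows the slice-and-reorder idea.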


Working with speech as compositional material requires divided attention: not only to sonic properties and musical aesthetics, but also to semantic content. Processing and reconstructing spoken language creates a sense of defamiliarization or estrangement. The original material is taken out of its normal context; it can be stretched or augmented infinitely, giving listeners the opportunity to re-examine their experience of the world. The role of the musician is “that of the reporter who enjoys perverting reality as he captures it,” as writer and documentary filmmaker Jacqueline Caux describes the work of electronic music composer Luc Ferrari. In this case, technology opens the possibility to explore the sonic and semantic potential of the samples in real time, during improvisation.


In the following excerpt from Trump.et Legacy, played by Chloë Abbott (trumpet) and Ilya Ziblat Shay (computer), Trump’s speech upon his release from hospital (after his supposedly miraculous recovery from Coronavirus) is used as source material for the processing.

Ilya Ziblat Shay is a composer, performer, and researcher. His works are open in form, combining open notation or other forms of guided improvisation for musicians with interactive live-electronics systems. Ilya’s music has been performed by ensembles and soloists around Europe and featured in international concert halls and festivals. He holds a PhD from Leiden University and is active as an artistic researcher.

 

 

http://ilyaziblatshay.com/


 

Jenn Kirby is a composer, performer, lecturer, and music technologist. She is active as a performer of live electronics and is involved in the creative coding community. Jenn’s work has been performed across Europe and the US. She is the President of the Irish Sound, Science & Technology Association, and the Programme Leader for the MA Music at the University of the West of Scotland.

 


https://www.jennkirby.com/


 

 

 

The following video excerpt is of a live set by Jenn, in which she plays one of her self-designed interfaces:

Jenn’s research focuses on the idea of performer agency in live electronic music, and on utilising audio-visual symbiosis to enhance audience engagement. She designs and performs on various gestural software instruments that embody her approach to music technology.


Jenn discussed two ideas that guide her in designing a new electronic instrument: performer agency, achieved by creating an intuitive relation between the physical or virtual properties of a digital instrument, the physical gestures used to perform with it, and the resulting sound; and audience engagement, achieved by paying attention to how the audience experiences the performance. Jenn takes a creative approach to the difference between performing with a novel technological instrument and with a traditional, and thus more familiar, musical instrument, playing with the set of expectations brought up in each case.

Anıl discussed the relationship between his creative work and technology, outlining how research into extended realities can influence both our notions of creativity and audience engagement.


The idea of immersion is a common thread in Anıl’s work, present across different modes of artistic expression, from audiovisual pieces to interactive tools. His basic assertion is that, while sound is immersive by nature, and we can therefore assume a common, innate ability to deal with spatial sound in our daily lives, the concept of spatial audio remains exclusive and requires expert knowledge. In his work with immersive technology, Anıl tries to bridge this gap. His works feature both sound and visuals, through which he explores the idea of immersion in a spatial environment. He presented several projects that exemplify this approach:

 

  • Temas is a stochastic synthesizer that allows its user to control sound by modifying probabilities, rather than through direct control such as turning a knob on a conventional synthesizer. The work also has a visual component, generated by the same stochastic means.

  • Inviso is an interactive, web-based interface: the audience is invited to design virtual sonic environments using their computer browsers. Crowdscapes is a soundscape installation created using Inviso.
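The principle behind Temas, shaping a probability distribution rather than setting a value directly, can be sketched roughly as follows. This is an illustrative toy under my own assumptions, not Anıl Çamcı's actual synthesizer, and all names (`StochasticVoice`, `set_distribution`) are hypothetical: the performer's gesture adjusts the mean and spread of a distribution, and each new sonic event draws its pitch from it.

```python
import random

class StochasticVoice:
    """Toy model of probability-based control: the performer shapes a
    distribution; the instrument draws each event's pitch from it."""

    def __init__(self, mean_pitch=440.0, spread=0.0, seed=None):
        self.mean_pitch = mean_pitch  # centre of the distribution (Hz)
        self.spread = spread          # standard deviation (Hz)
        self.rng = random.Random(seed)

    def set_distribution(self, mean_pitch, spread):
        """The performer's gesture: reshape probabilities, not the sound."""
        self.mean_pitch = mean_pitch
        self.spread = spread

    def next_pitch(self):
        """Draw the pitch (Hz) for the next sonic event."""
        return self.rng.gauss(self.mean_pitch, self.spread)

voice = StochasticVoice(seed=42)
voice.set_distribution(mean_pitch=440.0, spread=0.0)   # zero spread: certainty
stable = [voice.next_pitch() for _ in range(3)]        # every event is 440 Hz
voice.set_distribution(mean_pitch=440.0, spread=30.0)  # widen the spread
varied = [voice.next_pitch() for _ in range(3)]        # events scatter around 440 Hz
```

The point of the design is indirection: the same gesture (widening the spread) changes the character of the output without the performer ever choosing an individual pitch.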

Anıl Çamcı is a Professor of Performing Arts Technology at the University of Michigan. His research is in the areas of virtual reality, human-computer interaction, immersive systems, and spatial audio. He is the founder of the Sonic Arts Program at the Centre for Advanced Studies in Music (MIAM) in Istanbul, and an alumnus of the ACPA PhD programme.

 

 

http://anilcamci.com


During the discussion following the presentations, Dr. Paul Craenen (Royal Conservatoire, The Hague, and Academy of Creative and Performing Arts, Leiden University) suggested a thread connecting the different perspectives on music technology: the way each of the presenters approaches their audience. The attempt to establish communication with the listener is common to all of the works presented; the way of approaching the listener, however, differs in each case. In Ilya’s Trump piece, it is achieved by employing semantics and creating a common, extra-musical context; in Jenn’s case, by enhancing the traditional performative medium through a focus on performer agency in gestural software instruments; and in Anıl’s work, through the idea of spatial immersion, creating a virtual sonic-visual space inhabited by the audience. In each of the three approaches, the use of technology is a means to engage with the listener/viewer and to establish a common reference point.