The iteration with Jonathan Reus was embedded in Algorithms that Matter (ALMAT). ALMAT is a three-year project running from 2017 to 2020, within the framework of the Austrian Science Fund (FWF) – PEEK AR 403-GBL – and funded by the Austrian National Foundation for Research, Technology and Development (FTE) and by the State of Styria. It is hosted by the Institute of Electronic Music and Acoustics (IEM) at the University of Music and Performing Arts Graz.

Points of Entry

ALMAT's ⬀methodology is based on iterative reconfiguration. A configuration encompasses the members of the team, Hanns Holger Rutz and David Pirrò; an invited guest artist along with a proposal within which she or he works; and a "machinery", including two software systems developed by Rutz (SoundProcesses) and Pirrò (rattle), and possibly further systems brought into the experiment by the guest artist, which we aim to couple and to explore from different perspectives. An iteration takes the form of an online preparation phase, in which the core team and the guest artist engage in a dialogue about their practice and their relation to ALMAT, as well as the specific preparation of a two-month residency period, in which algorithmic sound studies are developed in situ at the IEM in Graz.

The third iteration is conducted with Dutch-American sound artist and art-technology researcher ⬀Jonathan Reus. This Research Catalogue entry documents our work process.

Musical background

The performative dimension of music often plays a central role in Jon's works. He grew up in the area of the Appalachian Mountains, where "music is very lived [...], it is played and has some sort of role in daily life [1]". Coming from this southern American folk tradition, he still carries with him this way of playing and thinking about music. He describes his approach to electronic music as follows: "My introduction into electronic music has always been about how do we play these things. What is the role of the objects that create these sounds in my daily life? [1]".


Having worked as a software engineer for a number of years, Jon has made computers an important component of his daily life as well as a fundamental part of his musical practice. One representative example of how he approaches these machines artistically is his performance iMac Music, in which he turns old iMac G3 computers into musical instruments. The work involves a long process of going into the electronics and transforming the machine into something that can be physically played. As Jon explains: "You can think of it a bit as a preparation you might do on a guitar or a piano, developing specific hardware and techniques to turn this kind of live circuit systems into something you can perform with [1]".

Live coding

Another way Jon talks with computers is through live coding. He has been live coding for a while, has taught classes on live coding, and has even organised Algoraves. Apart from the performative dimension of writing code on stage, he also treats live coding as a method for developing work, a tool for experimenting with sound in real time. Describing the way he approached the earthquake vests in his work The Intimate Earthquake Archive, he says: "These earthquake vests have a little embedded computer in their backpacks that has a system that was completely live coded. The development of it was live coded. I was plugged into the backpack, sending commands in real-time and feeling the difference and understanding what needed to be changed. For me this way of working is the most enjoyable, the most fruitful. And maybe this has something to do with this sort of folk music ways of thinking about creating [1]".

[1] Wordweaving, Signale Soirée, Graz, 26 November 2018

Jonathan Reus

[iteration 3]

Topics and Transcriptions

note: still heavily in progress!

Live coding

During the ALMAT residency Jonathan is working on several aspects of live coding practice, addressing technical, aesthetic and poetic questions. For example, with the aim of overcoming a specific limitation that derives from the ephemeral quality of live coding - namely that many processes get lost in the flow of writing and rewriting - he developed a SuperCollider library that allows him to collect and catalogue the synths played during live performances. These synthesis processes are then available to be reused and modified at a later point, bridging the gap between live coding and (out-of-time) composition.
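The core idea of such a catalogue can be sketched as follows - here in Python rather than SuperCollider, with hypothetical class and method names, not Jon's actual library: each synth definition is logged with a timestamp as it is evaluated during the performance, so that any version of it can be recalled afterwards.

```python
import time

class SynthCatalogue:
    """Collects synth definitions played during a live set so they can
    be recalled and modified after the performance (illustrative sketch)."""

    def __init__(self):
        self._entries = []  # list of (timestamp, name, source) tuples

    def log(self, name, source):
        """Record a synth definition at the moment it is evaluated."""
        self._entries.append((time.time(), name, source))

    def recall(self, name):
        """Return the most recently logged source for a given name."""
        for _, n, src in reversed(self._entries):
            if n == name:
                return src
        return None

    def history(self):
        """All logged names in performance order."""
        return [n for _, n, _ in self._entries]

catalogue = SynthCatalogue()
catalogue.log("drone", "SinOsc.ar(110) * 0.2")
catalogue.log("click", "Impulse.ar(8)")
catalogue.log("drone", "SinOsc.ar([110, 111]) * 0.2")  # a later rewrite of the same synth

print(catalogue.history())        # ['drone', 'click', 'drone']
print(catalogue.recall("drone"))  # the latest 'drone' version
```

Keeping every rewrite rather than only the last state is what makes the archive compositional: the trajectory of the performance itself becomes material that can be revisited out of time.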


Live coding and storytelling

Jon is also interested in incorporating storytelling into live coding performances. He wants to explore what performative musical possibilities storytelling offers. "I really like what story telling is, as a performative practice. It's a bit like weaving. Usually there are anecdotes that are sort of woven together, and it's very improvisational [1]".


Voice and language

Working on this concept, he came to the idea of exploring voice and language, in particular analysis of the voice. After surveying different open-source speech recognition packages, he wrote a Python frontend with a SuperCollider backend - based on Sphinx - for getting some of the speech analysis data into the computer. Feeding the phonemes from the recognition algorithm into SuperCollider, he can use them to transform parameters of the synthesis process, using his voice to modify sounds in real time.
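The mapping stage of this pipeline can be sketched as a small Python fragment - the phoneme symbols follow the ARPAbet notation that Sphinx emits, but the formant values and parameter names here are illustrative assumptions, and in the actual setup the resulting parameters would be sent on to SuperCollider (for instance over OSC) rather than printed:

```python
# Rough first/second formant frequencies (Hz) for a few vowel phonemes,
# keyed by ARPAbet symbols as a recogniser like Sphinx might emit them.
# Values are textbook approximations, used here only for illustration.
PHONEME_PARAMS = {
    "AA": {"formant1": 730, "formant2": 1090},  # as in 'father'
    "IY": {"formant1": 270, "formant2": 2290},  # as in 'see'
    "UW": {"formant1": 300, "formant2": 870},   # as in 'you'
}

def params_for(phonemes):
    """Turn a phoneme stream into a list of synthesis-parameter dicts,
    skipping symbols (silence, unmapped consonants) we have no mapping for."""
    return [PHONEME_PARAMS[p] for p in phonemes if p in PHONEME_PARAMS]

# A recognised utterance arrives as a stream of symbols ('SIL' = silence)
stream = ["IY", "AA", "SIL", "UW"]
print(params_for(stream))
```

Each incoming phoneme thus becomes a discrete control event: the voice steers the synthesis, but only through the recogniser's fixed symbolic grid - exactly the limitation the next section addresses.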


Speech recognition

Confronted with the limitations of tools like Sphinx, Jon has been trying to find ways of dealing with his voice that are a little less discrete and a little more open. Aiming to create a more complex classifier or feature detector, he wants to build a tool capable of capturing specific inflections and peculiar vocal utterances, pulling speech apart into its phonetic, sub-phonetic or non-phonetic qualities.

In Progress