Nov 21, 2022
RITMO Centre for Interdisciplinary Studies in Time, Rhythm and Motion
https://www.uio.no/ritmo/english/news-and-events/events/workshops/2022/embodied-ai/
Embodied Perspectives on Musical AI (EmAI)
In this two-day workshop consisting of keynote speeches, performances, and thematic sessions, we explore musical artificial intelligence's past, present, and future through the lenses of embodied cognition.
Jonathan Chaim Reus
Performing Voices Without Bodies

When
Thematic Session 2: Software and Synthesis (Monday, 15:40)
Abstract
From the use of vocal deepfakes to commit crimes to artificial voice actors, data-driven generative machine learning is challenging the embodied situation of vocal identity. As part of my ongoing doctoral research into the uncanny embodiment of generative AI, I am developing musical and theatrical performances using vocal doppelgangers and voice transformations, and addressing the artistic practice of dataset creation.
My recent work on this theme includes a year-long generative radio broadcast featuring an artificial BroadCaster, a bespoke synthesis instrument combining fine-tuned GPT, TTS, and RAVE models to create an ongoing vocal performance that morphs in real-time between human and non-human speech, and whose models are continuously fine-tuned throughout the year using audience contributions.
Currently I am building an extensive annotated vocal dataset from my own voice, with the goal of creating a controllable, extended, and expressive voice synthesis model building upon current TTS methods. I would like to present both of these projects as works in progress, and to use this opportunity to discuss the conceptual scaffold grounding them, to gain insight from others, and to test out my own emergent concepts.
Bio
Jonathan Chaim Reus (b. US) is a transmedia musician known for his use of expanded electronic instrumentation in live and theatrical contexts. He is a co-founder of the instrument inventors initiative [iii] in The Hague, of Netherlands Coding Live [nl_cl], and a recipient of a Fulbright Fellowship for his work on electronic instruments at the former Studio for Electro-Instrumental Music [STEIM]. He is an affiliate of the Intelligent Instruments Lab (Reykjavik) and a PhD candidate within the interdisciplinary Sensation and Perception to Awareness doctoral programme at the University of Sussex.
May 31, 2024
https://2024.fiberfestival.nl/news/context-programme-day-2
https://2024.fiberfestival.nl/context-programme-2
Artificial Togetherness – On AI and Collective Embodied Practices
13:15 - 14:25, de Brakke Grond
Guests: Diana Neranti & Jonathan Chaim Reus
Technological capabilities, accelerated largely by machine learning and AI applications, have made it possible to analyse, simulate, and recombine body-based creation processes. This has led to an explosion of AI content – rapidly generated films, videos, and music – made with black-boxed style-transfer software. As a counter-reaction to this overload of generic images, several artists are experimenting and performing with AI and the various aspects of the body. We are working from different kinds of knowledge systems and forms of collaboration, in which the ownership and sharing of datasets and AI technologies are an important concern.
https://2024.fiberfestival.nl/jonathan-chaim-reus-talk
Jonathan Chaim Reus (Talk)
he/him
Fri May 31 | de Brakke Grond
From hearing voices in spiritual transcendence to the invention of the phonograph, the presence of the voice without a body has been an element of human cultures for millennia. In electronic music, the premier art form for sounds without bodies, the disembodied, amplified, processed, and synthesised voice is the starting point, not the exception. But even in these anything-goes sonic landscapes, the traces of bodies still linger, and the idea of the singular human voice still holds a special aura and economic certainty. Jonathan Chaim Reus is seeking to build bridges between the human voice and the complex ecologies of experimental computing that exist across datasets, hardware, software, land, and air. Jonathan will give a talk presenting some of his recent artistic work around artificial human-sounding voices, the ecology of voice data (and human bodies) fundamental to the creation of data-driven AI, and some ways we might imagine voice otherwise – beyond genres, traditions, or species.
About the artist
Jonathan Chaim Reus is a transmedia artist and musician known for his use of expanded digital instrumentation in live, theatrical, and virtual (online) contexts. Thematically, his work often explores nuances and contradictions within single-story narratives around technology and culture. He has done extensive artistic research relating to musical instrument heritages and how concepts of tradition and folk art become transformed through technological change.
Since 2018, his artistic work has explored relationships between the human voice and its artificial reproduction. In 2022, he received the CTM Radiolab commission for the collective generative radio epic In Search of Good Ancestors. He is currently working on the S+T+ARTS EU-funded research project DADAsets, exploring ecologies of vocal datasets in artistic practice. He is a PhD candidate at the Experimental Music Technologies Lab (EMUTE) at the University of Sussex, and an affiliate of the Intelligent Instruments Lab in Reykjavik.
This artist is part of the session “Artificial Togetherness”
Date: 1 June 2024
Time: 13:15 - 14:25
Location: de Brakke Grond
July 13 2024
British Academy Summer Showcase
Dear friends in London or nearby: I'll be part of the interdisciplinary panel/workshop "Voice Clones: Performance, perception and potential" tomorrow (July 13th) at the two-day British Academy Summer Showcase. This will be an interdisciplinary workshop, panel, and presentation on the topic of AI voice cloning technologies, with a mix of perspectives from vocal identity perception, technology ethics, and the performing arts. I'll be one of the main presenters, alongside Tanvi Dinkar, Cennydd Bowles, and Steven Bloch. The panel is organised by Carolyn McGettigan of University College London, who is investigating how people perceive artificial clones of their own voices and those of people close to them.
Key questions in the workshop are: How can we make voice technology more accessible? What responsible and productive uses can we find for it? What are key ethical considerations?
The workshop takes place at the Library of the British Academy, 3:15-4:15pm UK time.
