Electrifying Opera, Amplifying Agency. Artistic results, reflection and public presentations (PhD)
(2023)
author(s): Kristin Norderval
published in: Research Catalogue
Exposition of the PhD research of Kristin Norderval, PhD fellow at the Academy of Opera, Oslo National Academy of the Arts.
This artistic research project examines the artistic, technical, and pedagogical challenges of developing a performer-controlled interactive technology for real-time vocal processing of the operatic voice. As a classically trained singer-composer, I have explored ways to merge the compositional aspects of transforming electronic sound with the performative aspects of embodied singing.
I set out to design, develop, and test a prototype for an interactive vocal processing system using sampling and audio processing methods. The aim was to foreground and accommodate an unamplified operatic voice interacting with the room's acoustics and the extended disembodied voices of the same performer. The iterative prototyping explored the
performer's relationship to the acoustic space, the relationship between the embodied acoustic voice and disembodied processed voice(s), and the relationship to memory and time.
One of the core challenges was to design a system that would accommodate mobility and allow interaction based on auditory and haptic cues rather than visual ones: in other words, a system allowing the singer to control their sonic output without standing behind a laptop. I wished to highlight and amplify the performer's agency with a system that would enable nuanced and variable vocal processing while being robust, teachable, and suitable for use in various settings: solo performances, ensembles of various types and sizes, and opera. This entailed mediating the different needs, training, and working methods of both electronic music and opera practitioners.
One key finding was that even simple audio processing could achieve complex musical results. The audio processes used were primarily combinations of feedback and delay lines, yet performers could quickly produce complex musical results through continuous gestural control and the ability to route signals to four channels. This complexity sometimes led to surprising results, eliciting improvisatory responses even from singers without prior experience of musical improvisation.
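The feedback-and-delay processing described above can be sketched in a few lines. The following Python/NumPy fragment is a minimal illustration only, not the project's actual implementation: it shows how a single delay line with feedback already turns one input event into a decaying train of echoes, the kind of simple process that can accumulate into complex results.

```python
import numpy as np

def feedback_delay(x, delay_samples, feedback, mix=0.5):
    """Apply a single feedback delay line to a mono signal.

    x             : input signal (1-D float array)
    delay_samples : delay length in samples
    feedback      : fraction of the delayed signal fed back (0 <= feedback < 1)
    mix           : wet/dry balance (0 = dry only, 1 = wet only)
    """
    y = np.zeros(len(x))
    buf = np.zeros(delay_samples)      # circular delay buffer
    idx = 0
    for n in range(len(x)):
        delayed = buf[idx]             # read the echo from delay_samples ago
        buf[idx] = x[n] + feedback * delayed   # write input plus feedback
        idx = (idx + 1) % delay_samples
        y[n] = (1 - mix) * x[n] + mix * delayed
    return y

# A single impulse produces echoes at multiples of the delay time,
# each scaled by the feedback factor.
x = np.zeros(16)
x[0] = 1.0
y = feedback_delay(x, delay_samples=4, feedback=0.5, mix=1.0)
```

With feedback approaching 1 the line rings ever longer and edges toward self-oscillation, which hints at why even this elementary structure, under continuous gestural control, can yield surprising results.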
The project has resulted in numerous vocal solo, chamber, and operatic performances in Norway, the Netherlands, Belgium, and the United States. The research contributes to developing emerging technologies for live electronic vocal processing in opera, developing the improvisational performance skills needed to engage with those technologies, and exploring alternatives for sound diffusion conducive to working with unamplified operatic voices.
The Sonified Textiles within the Text(il)ura Performance: Cross-cultural Tangible Interfaces as Phenomenological Artifacts
(2022)
author(s): Paola Torres Núñez del Prado
published in: VIS - Nordic Journal for Artistic Research
This research works with Sonified Textile Controllers, which can be described as textile sensor-based tangible interfaces. Some of them are part of a disruptive sound performance, an experimental sound art concert called "Text(il)ura"[1] that has been presented in various cities in Europe and the Americas. The three fiber-based controllers presented as part of this performance, The Shipibo Conibo-Style Textile, The Hanap Pacha Quipu, and the Unkuña of Noise, all reference the cultures of Paola Torres Núñez del Prado's country of origin, Perú. These textile artifacts are works of art as well as devices that fall within the scope of smart textiles and human-computer interaction, as they allow the user to experiment with alternative ways of approaching the execution of sound art or electronic music. Referring to aesthetically and symbolically distinct forms of expression from diverse human communities, these cross-cultural devices are characteristically hybrid and ever-changing: history, myth, craft and technology, gender, transhumanism, sound, visuals and tactility are all intertwined in an amalgamation of human knowledges and experiences aiming for a type of universality that does not impose one thought system over another. The design of the interfaces proposes a different approach to tactile manipulation of electronic sound instruments, a field in which controllers made of metal or plastic are common; the aural and tactile sensitivity with which these interfaces are made transforms the performance itself. The Sonified Textiles aim to redefine musical interfaces within the e-textile realm, both conceptually and in their design. The research uses a phenomenological framework to move beyond mind-body dualism and to reconnect with the natural world.
Between Data and Breath: Machine Learning, Musical Embodiment and the Emergence of Voice
(last edited: 2026)
author(s): Jonathan Reus
This exposition is in progress and its share status is: visible to all.
From vocal deepfakes to artificial voice actors and pop star avatars, data-driven machine learning has intensified embodied, musical, and social complexities of voice. While disembodiment and decontextualisation of voice have been musical concerns since the invention of sound recording, AI voice synthesis accelerates these processes and adds new perceptual, cognitive, and social layers.
Many ontologies from voice studies imagine voice as resisting fixity, yet in today's technological climate this resistance may be losing its ontological imperative. Voice is in transformation, possibly crisis, requiring both curiosity and care in paradoxical tension. These changes also unfold within a technological arms race for innovation, profit, and global AI supremacy. Artists are not only early adopters but experimentalists and bards who participate in the narratives around AI and vocality.
This thesis evaluates the changing vocal condition through first-person artistic research with AI voice technologies, exploring their poetics and potentials in three artworks created between 2021–2025. In Search of Good Ancestors / Ahnen in Arbeit was a year-long generative radio broadcast exploring machine learning as an intergenerational vocal memory. iː ɡoʊ weɪ is a hybrid extended voice performance practice using real-time voice transfer to unravel vocal identity on stage. DadaSets investigates the invisibilized vocal labour of AI voice through collaborations with artists, new scoring systems, the absurdist dataset-making performance Bla Blavatar vs Jaap Blonk, and the invention of the voice synthesis instrument Tungnaá.
These works are analyzed through an interdisciplinary lens: experimental vocal traditions and the embodied musical-technological ethos of STEIM, alongside philosophies of voice, cognitive neuroscience, and material anthropology; predictive coding theory, meanwhile, frames compositional notions of uncanny, pathological and convivial technologisations of voice. Voice data emerges as paradoxical, at once disembodied and relational, material and emergent, gift and commodity, functioning as the basis for musical animacy and collaboration within a rapidly changing socio-technical landscape.
The Data-driven Voice-Body in Performance: AI Voices as Materials, Mediators, and Gifts
(last edited: 2026)
author(s): JC Reus
This exposition is in progress and its share status is: visible to all.
Data-driven, realistic and identity-bearing AI voice technologies have been proliferating widely in recent years. Voice, a multiply embodied phenomenon situated within and across human bodies in space, is deeply disrupted by the disembodying tendencies of AI voice technologies and their processes of data creation and collection, resulting in the need for a re-evaluation of perceptual, cognitive and cultural factors. This paper addresses this need by synthesizing ideas from embodied cognition, voice studies, and material anthropology to analyze real-time, AI-mediated voice as a form of embodied cognition that is an enactive, intersubjective, extended, materially and socially distributed phenomenon. We examine AI-mediated voice through the case study of the live music performance iː ɡoʊ weɪ, which uses real-time AI voice transfer systems trained on carefully curated vocal datasets that represent diverse forms of vocal alterity relations. The live performance system integrates custom RAVE models within a SuperCollider environment, enabling dynamic real-time transformation of the performer's voice through a tactile control interface. This technical architecture facilitates immediate feedback loops between biological vocalization, AI processing, and auditory perception. The domain of audience perception becomes a key point of reflection, where the formation of new perceptions of voice and body is repeatedly challenged by the fluid voice-body gestalt of the performer, a process of donning vocal masks that modulate perceptual dissonances and resolutions. We further address the complex situation of voice AI through the concept of identity-bearing technologies, which leverages theories from voice studies, speech science, material anthropology and embodied cognition that address identity perception. 
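As a rough illustration of the block-based architecture described here, the sketch below shows a minimal real-time-style processing loop in Python. The TimbreTransfer class is a purely hypothetical stand-in for a learned model such as RAVE (here just a crude spectral tilt), and the morph parameter mimics a tactile controller crossfading between the dry and transformed voice; the actual performance system runs custom RAVE models inside SuperCollider and is not reproduced here.

```python
import numpy as np

BLOCK = 2048  # a typical real-time audio block size, in samples

class TimbreTransfer:
    """Hypothetical stand-in for a learned voice-transfer model.

    A real system would run a neural encoder/decoder here; this
    placeholder just applies a spectral tilt so the loop is runnable.
    """
    def __init__(self, tilt=0.5):
        self.tilt = tilt

    def process(self, block):
        spec = np.fft.rfft(block)
        # Crude 'timbre change': attenuate high frequencies linearly.
        weights = np.linspace(1.0, self.tilt, spec.size)
        return np.fft.irfft(spec * weights, n=block.size)

def run(stream_blocks, model, morph=1.0):
    """Route each input block through the model.

    'morph' crossfades between the dry voice (0.0) and the fully
    transformed voice (1.0), as a tactile controller might in
    performance, closing the loop between vocalization and output.
    """
    for block in stream_blocks:
        wet = model.process(block)
        yield (1 - morph) * block + morph * wet
```

The immediate auditory feedback loop the paper describes arises because each processed block is heard while the next is being sung, so the performer can adjust phonation and the morph control in response to the transformed voice.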
Finally, we address the key ethical dilemmas of such systems, particularly as they relate to the problem of designing technologies that reproduce the illusion of a singular essential vocal identity. We respond to the problem of the representation of vocal bodies through a speculative framework of the vocal gift, analyzing how gift relationships are at play within the creation of the AI systems of iː ɡoʊ weɪ, and suggesting further directions for research into developing technological systems that honor gift relationships as a fundamental principle for ethical design of AI voice technologies.
Fashioning the Voice
(last edited: 2020)
author(s): Jennifer Anyan, Yvon Bonenfant, Katie Daley-Yates
This exposition is in progress and its share status is: visible to all.
Fashioning the Voice is an interdisciplinary research project that brings together the expertise of Yvon Bonenfant, an artist-academic who extends voice across media to explore innovative ways of creating (University College Cork); Dr. Tychonas Michailidis, whose work focuses on sensor technology and interactions (Research Fellow at Solent University); and myself.
From Singer to Reflective Practitioner: Performing and Composing in a Multimedia Environment
(last edited: 2016)
author(s): Aleksandra Popovska
This exposition is in progress and its share status is: visible to all.
This exegesis comes as a result of performing, composing and researching in a multimedia environment over the past few years. I started working on my projects with the following question in mind: how can I improve my own practice? Asking this led to the identification of problems related to making a live media performance, and prompted discussions about the types of knowledge necessary for producing a work of art that includes more than one medium.
Rather than attempting to draw definitive conclusions regarding topics as broad as live media performance, I summarize and reflect upon the creation process as I experienced it, elucidating my personal contemplations regarding my experience and practice in a multimedia environment.
I will present three projects, ‘Bukefalus’, ‘Tribute to Morty’, and ‘Every. When’, all designed as electroacoustic pieces with video displays. I shall take a closer look at how the pieces were developed, discuss their cross-disciplinary character, and examine the good and bad practices involved in them. Finally, I shall focus my attention on how things unfold in practice when one is composing and designing an interactive piece using improvisation and different media. In all three projects I have been involved in different roles, as a performer, composer or designer, and in all of them collaboration played an important role. I made particular choices, sometimes blending my roles and the roles of the other participants.
I hope that my experience as a musician who has passed through both classical and technology-based educational systems and participated in them in different roles – as performer, concept designer, composer, producer and teacher – will be useful for all creative people coming from the conservatory who want to work in the field of multimedia performance.
With this exegesis I would also like to make my own contribution to reflective practice and living theory (Whitehead, 1998) by exploring improvisation and experimentation in my projects. I will write about how this has enhanced my own identity as performer, composer and designer, and why and how I have been committed to sharing its transformational potential with the people I collaborate with. The exegesis's claim to originality is that it arrives at a living concept of knowledge transformation through multidimensional reflection: as a singer who is a composer, as a composer who is a researcher, as a student who has been a teacher, as an artist who has lived in an imaginative world creating her works, and as a researcher dealing with institutional policy and educational change. “I am my own informant into different perspectives, and will try through these personae to have a dialogue between several positions and arrive at a concept that is tested and lived from several perspectives.” (Spiro, 2008, p. 29)