Granular Synthesis

 

I chose granular synthesis as the method to produce the music for this research project. Even though the project is ultimately about game audio, as a first step I am mostly interested in game music. There are many reasons for using granular synthesis. It exists in many forms, as Curtis Roads described in his book Microsound (Cambridge, Mass.: MIT Press, 2001). It is rich in parameters: even the simplest granular synthesis methods, built from just the basic building blocks, have many variables and yield a rich timbral spectrum.

Parameters can be changed continuously to produce neighboring timbres, which is harder to do with other synthesis methods. Granular synthesis is intrinsically timbral rather than melodic. This makes it harder to use in a score-and-instrument context, but it also opens up non-melodic possibilities.

Also, even though until recently my work on it was mostly private rather than academic research, I already have considerable experience with granular synthesis.
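The basic building blocks mentioned above can be made concrete in a minimal sketch: grains are short, enveloped slices of a source sound, scattered in time by a density parameter and resampled by a pitch factor. This is only an illustration of the parameter space, not an implementation used in the project; all names and values are mine.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def grain_cloud(source, n_grains=200, dur=0.05, density=40.0,
                pitch=1.0, seed=0):
    """Scatter short, enveloped grains read from `source` into an
    output buffer. `dur` is the grain length in seconds, `density`
    the average number of grain onsets per second, `pitch` a
    resampling factor applied inside each grain."""
    rng = np.random.default_rng(seed)
    total = n_grains / density + dur           # rough output length (s)
    out = np.zeros(int(total * SR) + 1)
    glen = int(dur * SR)
    env = np.hanning(glen)                     # smooth grain envelope
    # read offsets inside the source, resampled by `pitch`
    read = (np.arange(glen) * pitch).astype(int)
    for _ in range(n_grains):
        onset = int(rng.uniform(0, (total - dur) * SR))
        pos = int(rng.integers(0, len(source) - read[-1] - 1))
        out[onset:onset + glen] += source[pos + read] * env
    return out

# a one-second sine wave as source material
t = np.arange(SR) / SR
sine = np.sin(2 * np.pi * 220 * t)
cloud = grain_cloud(sine, n_grains=100, dur=0.03, density=50.0, pitch=1.5)
```

Already this stripped-down version exposes five continuously variable parameters (grain count, duration, density, pitch, source position), which hints at why the timbral space is so rich.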

 

A test for Hey!

 

 

 

{kind: paragraph, function: contextual, keywords: [_, granular, synthesis, sound, timbre]}

Procedural Game Audio

 

A game is dynamic: it changes with time and with the player's interaction.

 

Oftentimes game audio, specifically the music of a game, is not. It is pre-composed and played back as a fixed piece of music.

 

There are easy ways to make the music more dynamic. Most game engines, Unity among them, provide tools to change volume and pitch and even to apply filters.

 

The next step would be to use modular pieces of music and layer them according to the state of the game and what the game creator wants to communicate to the player, be it a simple confirmation of a user-interface action, the sounds that anchor the player's perception in the game world (like footsteps), or music complementing and enhancing the mood.
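Such state-driven layering can be sketched in a few lines: each stem has a target volume per game state, and a mixer crossfades toward those targets over time. The state names, stems and rate here are purely illustrative, not taken from any particular engine or from the project itself.

```python
# Target volume of each music stem per game state (illustrative).
LAYER_TARGETS = {
    "explore": {"pads": 1.0, "percussion": 0.2, "lead": 0.0},
    "combat":  {"pads": 0.4, "percussion": 1.0, "lead": 0.8},
    "victory": {"pads": 0.8, "percussion": 0.5, "lead": 1.0},
}

def mix_step(volumes, state, rate=0.1):
    """Move every layer volume a fraction `rate` toward the target
    for the current game state (a simple exponential crossfade)."""
    targets = LAYER_TARGETS[state]
    return {layer: v + (targets[layer] - v) * rate
            for layer, v in volumes.items()}

volumes = {"pads": 0.0, "percussion": 0.0, "lead": 0.0}
for _ in range(50):            # e.g. one step per audio update tick
    volumes = mix_step(volumes, "combat")
```

After enough steps the mix settles on the "combat" targets; switching the state mid-game simply crossfades the same stems toward a different balance.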

 

At this point the sound designer already needs to know a lot about the state of the game in order to choose the right musical building blocks. But this approach still relies heavily on samples. Even though memory is no longer really a problem on today's computers, samples have to be recorded and put into the game, and they offer only limited possibilities to, for instance, change the melody.

 

An exception is to follow the example of classical music and use samples consisting of single tones, playing melodies from a score with these very basic building blocks. This still fixes the timbre, but it allows a much wider range of melodic content, especially when combined with automatic composition techniques researched in the early to mid 20th century, such as Schönberg's twelve-tone technique.

 

A step further would be procedural music: music completely composed as a process. This is seldom done entirely without samples, but it opens interesting possibilities not only for music but also for game sounds such as footsteps, as Andy Farnell showed in his book Designing Sound (Cambridge, Mass.: MIT Press, 2010).

 

{kind: paragraph, function: contextual, keywords: [_,  sound design, video, game, soundtrack, interaction]}

Dragica Kahlina 

Lucerne University of Applied Sciences and Arts 

Ludic Lab Lucerne

 

Hey! A Game

 

To test the assumptions I am making, I need a game that incorporates at least some of these ideas. The idea of the autonomous system is more or less independent of the exact game being played.

Hey! is a research game from the Ludic Lab Lucerne that is meant to be played in a public setting, at an exhibition or a festival. It immerses the audience in the game visually and aurally. Part of the concept is that the game is the space. Instead of a merely metaphorical magic circle (see J. Huizinga, Homo Ludens: A Study of the Play-Element in Culture, 1951) it uses physical space as its "Magic Circle". The player or players enter the room created by the installation and thereby enter the game itself. It is not virtual space and it is not augmented reality, but the audience still gets the feeling of being immersed in the game.

Since Hey! already has an autonomous sound system controlled by an AI, we went full circle and chose nurturing an artificial intelligence as the main setting, which also makes it possible to playfully engage people in a discourse about our algorithmic helpers.

 

Video of the first test at the Zurich Biennale 2019

 

{keywords: [_, game, space, installation, artificial intelligence, sound, timbre]}

Autonomous


Procedural game audio gives us a lot of freedom. But we still control it tightly and bind it to the data, which means the data structure has to be fixed. In the music community there is a lot of effort to use artificial intelligence as a composing tool or as a musician, a performer in its own right.

But what if we could push this even further and have a music system that is not just procedural and intelligent, but also autonomous and alive? A system that goes sniffing through the game for data it could use, alive because it learns, grows and evolves on its own inside the game?

The research on this is still at the beginning. First there has to be a game and a procedural music framework to build upon, and then there needs to be some intelligence. Parts of this already exist and build on my earlier (non-academic or commercial) research, but it is nowhere near an answer to the question.


{kind: paragraph, function: contextual, keywords: [_,  sound design, video, game, soundtrack, interaction, artificial intelligence]}

Game Audio

as an

Autonomous System

Thank you to my collaborators,

the Ludic Lab team 

 

Francois Chalet (Hey! art work)

Richard Wetzel

Sebastian Hollstein

Artificial Life or A-Life

 

is an expansion of artificial intelligence. Where Artificial Intelligence (AI) uses its algorithms for decision making, A-Life tries to simulate complete autonomous organisms and even ecosystems, incorporating AI for decision making, but also genetics, evolution and often learning and interaction with other such organisms.

The simulation of A-Life was originally motivated mostly by biological research (see Christopher G. Langton, ed., Artificial Life: An Overview, Complex Adaptive Systems, Cambridge, Mass.: MIT Press, 1995), but nowadays it is not only about simulating real life.

In games, A-Life is used to populate otherwise empty worlds, following rules that make sense in the game world. The AI in games no longer controls only things like the navigation or decision making of simulated enemies, but whole ecosystems with daily and yearly life cycles, sometimes even generation-spanning cycles that simulate life, death, procreation, genetics and thus evolution.
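The generation-spanning cycle described above can be reduced to a toy loop: a genome, a fitness measure, survival, procreation with crossover, and mutation. Everything here is invented for illustration (the fitness function in particular is arbitrary) and is not a model from the project or from any cited work.

```python
import random

rng = random.Random(42)
TARGET = 10.0  # arbitrary "environmental" optimum for illustration

def fitness(genome):
    """Closer to TARGET is fitter (a made-up fitness landscape)."""
    return -abs(sum(genome) - TARGET)

def procreate(a, b):
    """One-point crossover of two parent genomes, with mutation."""
    cut = rng.randrange(1, len(a))
    child = a[:cut] + b[cut:]
    if rng.random() < 0.3:                   # occasional mutation
        i = rng.randrange(len(child))
        child[i] += rng.uniform(-1, 1)
    return child

# initial population: 20 organisms with 4-gene genomes
population = [[rng.uniform(0, 5) for _ in range(4)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]              # the less fit die off
    offspring = [procreate(rng.choice(survivors), rng.choice(survivors))
                 for _ in range(10)]
    population = survivors + offspring       # next generation

best = max(population, key=fitness)
```

In a game, the fitness function would instead encode rules that make sense in the game world, and the loop would run across in-game days or years rather than abstract generations.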


{keywords: [_,  artificial intelligence, artificial life, complex adaptive systems, interaction, game design]}

A Game

in the context of this research is understood as:

 

A system that has no use or reason beyond existing and interacting with another system outside of itself, normally an entity called the player.

 

A system that has rules. These rules are often fixed, but future research may also consider changeable rules; it remains to be seen whether this can be done outside of something that can be interpreted as meta-rules.

 

A system having a state, determined by data. The state can change according to the rules. These changes may be triggered by interactions with the player.

 

A system that gives feedback on its state through visual, audio or haptic cues.

 

Everything else may be part of the system, but is not necessary for this work. None of the above can be left out.
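The four requirements above can be condensed into a minimal data structure: a state determined by data, rules that map an interaction to a new state, and feedback derived from the state. This is only a sketch of the definition, with all names and the audio cue invented for illustration.

```python
class Game:
    """A minimal game per the definition: state, rules, feedback."""

    def __init__(self, state, rules, feedback):
        self.state = state        # data determining the current state
        self.rules = rules        # {action: function(state) -> state}
        self.feedback = feedback  # function(state) -> cue for the player

    def interact(self, action):
        """A player interaction triggers a state change via the rules
        and returns the feedback cue for the new state."""
        if action in self.rules:
            self.state = self.rules[action](self.state)
        return self.feedback(self.state)

game = Game(
    state={"score": 0},
    rules={"hit": lambda s: {"score": s["score"] + 1}},
    feedback=lambda s: f"ping at pitch {220 + 10 * s['score']} Hz",
)
cue = game.interact("hit")
```

Note that the feedback here is an audio cue whose pitch depends on the state: exactly the coupling between game data and sound that the rest of this research builds on.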

 
{function: definition, keywords: [game, player, system, rules, interaction]}

---
meta: true
event: almat2020
kind: essay
date: 200919
author: Dragica Kahlina
place: Online

keywords: [game audio, autonomy, system, procedural]
---