4. Reactive Spaces, Strange Spatialization and Cross-synthesis

 

In the last part of this composition, the main tools in use are the sound-oriented programming platform SuperCollider and the DAW Reaper. The latter was used to record and arrange the live-processed multi-channel tracks produced by the former. Together they form a powerful alliance, offering both the option to touch the source code, the abstracted bare bones of digital sound processing, and the option to work in a graphically oriented software environment with all the convenience it delivers.

4.1. Reactive Spaces

The theoretical research led to the idea of an unreal space or location that could quickly change shape, size and materiality. Moreover, these abrupt changes would be triggered by the sounds that are literally thrown into that space or location. In this way, the space's answer is not only a passive reflection of a given sound but, ultimately, a spatial physical gesture. This is the dominant approach in the fourth section of the composition.

Fig. 4.1 – Diagram of a reactive acoustic/virtual delay network

This particular structure implements different forms of feedback, constructing a live reactive hybrid system that is used as a tool for a subtle embodied performance in the physical space. The actual presence of the author's body in the space is the imaginary yet real sound source of that layer.

4.2. Strange Spatialization

Building on this, a fixed six-minute recording was then fed through custom software for eight-channel distribution. The underlying SuperCollider code relies on the precedence effect.

The monophonic recording was sent through eight independent all-pass delay lines, and each delay time was modulated by a Perlin noise source. The range of this modulation was limited to a maximum of 200 ms, and the position within the Perlin noise source changed once every tenth of a second.
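
A minimal SuperCollider sketch of this stage could look as follows. It is a sketch under assumptions: core SuperCollider has no Perlin noise UGen, so LFNoise2 (interpolated noise emitting a new value every 0.1 s) stands in for the Perlin source, the input is taken from a sound-card channel rather than the original fixed recording, and the output is written to the first eight hardware channels.

    // Sketch: eight all-pass delay lines with slowly drifting delay times (0–200 ms),
    // approximating the precedence-effect spatializer described above.
    (
    SynthDef(\strangeSpat, {
        var mono, delayed;
        mono = SoundIn.ar(0);    // stand-in for the monophonic six-minute recording
        delayed = 8.collect {
            // one independent interpolated noise per channel; new value every 0.1 s
            AllpassC.ar(mono, 0.2, LFNoise2.kr(10).range(0, 0.2), 0.2)
        };
        Out.ar(0, delayed);      // channels 0–7 of an eight-channel output
    }).add;
    )
    // x = Synth(\strangeSpat);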

This very slow but continuous change of the mutual delays between the channels repeatedly establishes or breaks up the precedence-effect relationship between the listener and the individual loudspeakers. As a result, the listener's perception of where the sound source sits in space is continuously called into doubt. The character of the processed sound is radically transformed, not spectrally, but spatially. The output of this utility builds up the feeling that the sound is not in fact happening at the listener's physical location, but somewhere outside it, at a distance, behind the walls, upside down.

Fig. 4.2 – Diagram of the spatialization utility

4.3. Cross-synthesis

Fig. 4.3 – Block diagram of the sample-driven noise oscillator and convolution network

A custom sound, originally sampled from one of the hybrid-system recordings mentioned above, was used as the kernel for the convolution reverb described in the second step below. The sample was manipulated so that its amplitude envelope roughly resembles the natural decay of a sound propagating in a space.

Sound 4.1 – Sample of the kernel sound
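
Purely as an illustration, the following SuperCollider fragment imposes a rough exponential decay on a sampled kernel before it is written back to a buffer; the file path, the decay curve and the buffer handling are assumptions, not the original procedure.

    // Sketch: shape the sampled kernel so its amplitude envelope resembles a natural decay.
    (
    Buffer.read(s, "~/samples/kernel_raw.wav".standardizePath, action: { |buf|
        buf.loadToFloatArray(action: { |data|
            // exponential decay from full level towards near silence
            var env = Env([1, 0.001], [1], \exp).asSignal(data.size);
            var shaped = data * env;
            Buffer.loadCollection(s, shaped, buf.numChannels);   // shaped kernel for the reverb
        });
    });
    )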

In the third step, the outcome of this convolution, based solely on iteration, is fed through another custom spatializer and amplitude modulator, which randomizes the channel order, spreads the sound among the channels, and sets a virtual center between them.

This spatialization iteration brought a certain level of strangeness to the sound, mainly by disordering the channels and thus introducing non-linear movement through the listener’s more or less linear spatial experience.
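
A hedged sketch of such a spatializer is given below: PanAz spreads the sound over a ring of eight outputs around an adjustable center, and a scrambled channel order reproduces the deliberate disordering. The bus number and the parameter names (center, width, amp) are illustrative, not taken from the original code.

    // Sketch: spatializer with a randomized loudspeaker order and a movable virtual center.
    (
    var order = (0..7).scramble;                        // random channel order, fixed per run
    SynthDef(\scrambledPan, {
        arg inBus = 10, center = 0, width = 2, amp = 1;
        var in, ring;
        in = In.ar(inBus, 1) * amp;                     // simple amplitude-modulation stage
        ring = PanAz.ar(8, in, center, width: width);   // spread around a virtual ring of speakers
        Out.ar(0, order.collect { |i| ring[i] });       // disordered speaker assignment
    }).add;
    )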

Besides the frontal layer, there is a subtle background sonic texture. This layer was created predominantly through the concepts of intermodulation and cross-synthesis. The process of creation can be divided into three discrete steps:

First, a sound recorded in an acoustic environment is fed into a digital SuperCollider synthesizer. A three-dimensional Perlin noise source is used as the main oscillator of this synthesizer, and an audio input is mapped to modulate the frequency of the base oscillation. The input signal is further decomposed and used as a spectral source for continuous two-channel convolution.
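
A compact SuperCollider sketch of this first step is given below, again under assumptions: LFNoise2 replaces the three-dimensional Perlin source, the input's amplitude envelope (rather than the raw signal) drives the frequency modulation here, and Convolution performs the continuous two-channel convolution with the live input acting as an ever-changing kernel. The frequency range and FFT frame size are illustrative.

    // Sketch: noise oscillator frequency-modulated by the audio input,
    // then convolved with that same input (one convolver per output channel).
    (
    SynthDef(\crossSynth, {
        arg baseFreq = 80, fmDepth = 400;
        var input, osc, conv;
        input = SoundIn.ar(0);                                   // recorded acoustic environment
        // interpolated noise as the main oscillator; its rate follows the input level
        osc = LFNoise2.ar(baseFreq + (Amplitude.kr(input) * fmDepth));
        // continuous convolution: the live input serves as the spectral source
        conv = Convolution.ar(osc ! 2, input ! 2, 2048) * 0.1;
        Out.ar(0, conv);
    }).add;
    )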

In the second step, the result of the previous process is fed through Reaper's native convolution reverb device.

Fig. 4.4 – Diagram of strange spatializer v2

The core of the virtual structure (i.e. the source code) is based on three comb filters with a very high rate of internal feedback and a variable delay time. If the amplitude of the input sound exceeded a threshold, the parameters of these filters were randomly modified (mainly the delay duration, as a foretaste of changing the size of the space). The outputs of these filters were then mixed and further manipulated. Thereafter, the resulting sound was sent from the digital domain into the real physical space, where it was recorded and thus re-entered the digital domain, and so on. In this live process, the feedback was established through an acoustic as well as a virtual domain. The input was gradually normalized and increased, which, under certain conditions, caused changes in the processing chain; the feedback loop then temporarily lost its stability.
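
A minimal sketch of this structure, assuming the acoustic side of the loop is closed by a room microphone (SoundIn) and a pair of loudspeakers (Out), might read as follows; the threshold, the delay range and the limiting stage are illustrative stand-ins for the original settings and the "further manipulation" mentioned above.

    // Sketch: three high-feedback comb filters whose delay times jump to new random
    // values whenever the input level crosses a threshold.
    (
    SynthDef(\reactiveSpace, {
        arg thresh = 0.3;
        var in, trig, combs, mix;
        in = SoundIn.ar(0);                        // room microphone
        trig = Amplitude.kr(in) > thresh;          // input loudness crosses the threshold
        combs = 3.collect {
            // on each trigger, jump to a new random delay time ("size of the space")
            var dt = TExpRand.kr(0.01, 0.5, trig);
            CombC.ar(in, 0.5, dt, 8)               // long decay time = high internal feedback
        };
        mix = LeakDC.ar(Mix(combs) * 0.3).tanh;    // mix, remove DC, tame runaway feedback
        Out.ar(0, mix ! 2);                        // back into the room; the loop closes acoustically
    }).add;
    )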