The conception of Imperfect Reconstruction began with the wish to create a compound sound installation in which the systems that I (SoundProcesses) and David (rattle) had created were put together. SoundProcesses is more of a programming environment than a particularly opinionated piece of sound art in itself; a specific compositional strategy has to be implemented on top of it. The approach I came up with was given the name Negatum, a play on the word mutagen, which I had used in the preceding year to denote my experimentation with genetic algorithms for sound synthesis processes. Through an evolutionary algorithm, sound structures are found by automatic programming of signal-processing blocks, which are evaluated with respect to their potential to create sounds resembling a given (real-world) sound. This type of algorithm had been employed, each time in a different way, in the installation Configuration and in the fixed-media piece Grenzwerte.

Negatum consists of two basic cyclic algorithms that run in parallel: the first produces the corpus of sounds, the second projects the sounds into space. The first maintains the current population of the GP; if it becomes "too old" (too many sounds have been selected from it, too many iterations have run), a new sound recording is obtained from the microphones, and the GP is restarted. After a number of iterations in the GP, the SVM marks interesting and uninteresting sounds, then adds the interesting sounds to the SOM, organising them in a two-dimensional space. If the SOM becomes too large, a new empty SOM is created, so over time a number of different sound spaces are built up. The projecting algorithm chooses from the SOM instances and traverses them in a diagonal movement (from one of the four sides to another), picking up the sounds. The sounds thus obtained are placed on a timeline, forming a sequence of sounds that sometimes overlap and are sometimes spaced apart. This sequence is then projected into the space, again using a diagonal movement across the grid of 24 channels in the gallery (each of the two systems had been assigned half of the 48 channels). A variant of vector-based amplitude panning is applied, using a Delaunay triangulation of the speaker positions, a reference to the physical triangle structure of the visual projection mesh installed in the space.
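The panning step can be illustrated with a small sketch: once a Delaunay triangulation has selected the triangle of speakers enclosing a virtual source position, the three channel gains can be derived from the barycentric coordinates of that position, normalised for constant power. The coordinates, function names, and speaker layout below are invented for illustration; the installation itself is implemented in SoundProcesses, not like this.

```python
import math

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w_a, w_b, 1.0 - w_a - w_b

def pan_gains(p, tri):
    """Constant-power gains for a source at p inside the speaker triangle tri."""
    w = barycentric(p, *tri)
    w = [max(0.0, wi) for wi in w]               # clip if the source lies outside
    norm = math.sqrt(sum(wi * wi for wi in w))   # constant-power normalisation
    return [wi / norm for wi in w]

# hypothetical speaker triangle, as one cell of a Delaunay triangulation
# of the 24-channel grid (positions invented)
tri = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
print(pan_gains((2.0, 1.0), tri))  # source at the triangle's centroid
```

Because the gains are power-normalised, their squares always sum to one, so the perceived loudness stays roughly constant as the diagonal movement crosses from one triangle of the mesh to the next.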

Delaunay triangulation of speaker channels

In Negatum, I took two of the constituents I had used before, the genetic programming (GP) and the self-organising map (SOM), added another component, a support vector machine (SVM), and formalised the three components as objects within the SoundProcesses framework, so I could compose with them from "within" the system. The GP is instructed to retrace a sound picked up from the exhibition space itself, somewhat similar to what I did in Configuration, but this time new sounds are recorded from time to time during the exhibition, using the four microphones placed inside the gallery. This way, the sound becomes a reconstruction both of the acoustic space and of the sounds emitted from rattle, resembling the visual approach taken with the Hough video installation.
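The SOM's task of organising sounds in a two-dimensional space can be sketched in a few lines: each grid node holds a weight vector, the best-matching node for a sound's feature vector is found, and that node and its neighbours are nudged toward the vector, so that similar sounds end up in nearby positions. This is a generic, minimal SOM in Python with invented toy data, not the component as it exists inside SoundProcesses.

```python
import math, random

class SOM:
    """Minimal self-organising map: places feature vectors on a 2-D grid."""
    def __init__(self, width, height, dim, seed=0):
        rnd = random.Random(seed)
        self.w, self.h = width, height
        # one randomly initialised weight vector per grid node
        self.nodes = {(x, y): [rnd.random() for _ in range(dim)]
                      for x in range(width) for y in range(height)}

    def best_matching(self, vec):
        """Grid position whose weight vector is closest to vec."""
        return min(self.nodes, key=lambda n:
                   sum((a - b) ** 2 for a, b in zip(self.nodes[n], vec)))

    def train(self, data, iterations=500, lr0=0.5):
        radius0 = max(self.w, self.h) / 2.0
        rnd = random.Random(1)
        for t in range(iterations):
            frac = t / iterations
            lr = lr0 * (1.0 - frac)              # decaying learning rate
            radius = radius0 * (1.0 - frac) + 1.0  # shrinking neighbourhood
            vec = rnd.choice(data)
            bx, by = self.best_matching(vec)
            for (x, y), wgt in self.nodes.items():
                d2 = (x - bx) ** 2 + (y - by) ** 2
                if d2 <= radius * radius:
                    theta = math.exp(-d2 / (2 * radius * radius))
                    for i in range(len(wgt)):
                        wgt[i] += lr * theta * (vec[i] - wgt[i])

# toy "sound features": two clusters, e.g. noisy versus tonal sounds (invented)
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.85], [0.95, 0.9]]
som = SOM(6, 6, dim=2)
som.train(data)
# dissimilar sounds should land on different grid positions
print(som.best_matching([0.1, 0.1]), som.best_matching([0.9, 0.9]))
```

The diagonal traversal described above then amounts to walking such a grid from one edge to the opposite edge and collecting the sounds stored at the nodes along the way.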

The SVM is inserted between the GP and the SOM. It is a component that either selects or rejects sounds produced by the GP. It is the result of a supervised learning stage, in which I skimmed through thousands of sounds produced by the GP, selecting those that I thought were interesting to hear in the space and those that I would rather not project. In other words, the SVM is something like a crude approximation of my aesthetic judgement. It was instructed to focus on rhythmical and dynamic sounds, avoiding, for example, very steady sine tones or accumulations of particular sounds that the system emits too often.
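The idea of such an accept/reject filter can be sketched with a linear SVM trained by stochastic sub-gradient descent on the hinge loss (Pegasos-style). Everything here is invented for illustration: the feature names, the toy data, and the choice of a linear kernel are assumptions, not the actual model or features used in the installation.

```python
import random

def train_linear_svm(samples, labels, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM via hinge-loss sub-gradient descent (Pegasos-style).
    Labels are +1 ("interesting") or -1 ("reject")."""
    rnd = random.Random(seed)
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    idx = list(range(len(samples)))
    for _ in range(epochs):
        rnd.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            x, y = samples[i], labels[i]
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            # regularisation always shrinks w; margin violations additionally
            # push w and b toward classifying the sample correctly
            w = [wj * (1.0 - eta * lam) for wj in w]
            if margin < 1.0:
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y
    return w, b

def interesting(w, b, x):
    """Decision function: keep the sound if the SVM score is positive."""
    return sum(wj * xj for wj, xj in zip(w, x)) + b > 0.0

# invented toy features, e.g. (rhythmic activity, spectral steadiness):
# dynamic sounds labelled +1, steady sine-like sounds labelled -1
samples = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.9], [0.2, 0.8]]
labels = [1, 1, -1, -1]
w, b = train_linear_svm(samples, labels)
print([interesting(w, b, x) for x in samples])
```

In this sketch the supervised stage corresponds to assembling the labelled pairs by hand; afterwards only the decision function is needed to filter the stream of new GP sounds.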

Originally, I wanted to superimpose multiple sound layers and movements, but after we began listening to the compound installation with both systems running, it was clear that we had already found a clear structure that did not require more density. The two systems complement each other: David's produces an overall sound texture, always projecting from all 24 channels at the same time, but with subtle phase modulations that produce spatial movements, while mine can be distinguished by sounds having a punctiform location in space that slowly moves from one side to another. The synthetic sound structures of the two systems are at once distinct and similar, and often they seem to be in dialogue with each other, even though there is no strict correlation or synchronisation between them.

Isolated sound sequences

[hh 20/03/17]

Population of the genetic programming (spectrograms in the middle column, feature vector in the right column)