In Negatum, I took two of the constituents I had used before, genetic programming (GP) and the self-organising map (SOM), and added a third component, a support vector machine (SVM). I formalised the three components as objects within the SoundProcesses framework, so that I could compose with them from "within" the system. The GP is instructed to retrace a sound picked up from the exhibition space itself, somewhat similar to what I did in Configuration, but this time new sounds are recorded from time to time during the exhibition, using the four microphones placed inside the gallery. This way, the sound becomes a reconstruction both of the acoustic space and of the sounds emitted by the rattle. In this respect, it resembles the visual approach taken with the Hough video installation.
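The idea of a GP "retracing" a recorded sound rests on a fitness measure that scores how closely a candidate sound matches the target recording. The actual measure used in Negatum is not specified here; as a purely illustrative sketch, one could score candidates by the distance between their log-magnitude spectra (the function name and parameters below are hypothetical):

```python
import numpy as np

def spectral_fitness(candidate: np.ndarray, target: np.ndarray,
                     n_fft: int = 1024) -> float:
    """Illustrative GP fitness: negative mean squared log-spectral
    distance between a candidate sound and the recorded target.
    Higher is better; a perfect match scores 0. This is a sketch,
    not the measure actually used in Negatum."""
    def log_mag(x: np.ndarray) -> np.ndarray:
        # log-compressed magnitude spectrum of the first n_fft samples
        return np.log1p(np.abs(np.fft.rfft(x[:n_fft], n=n_fft)))
    return -float(np.mean((log_mag(candidate) - log_mag(target)) ** 2))

# A candidate identical to the target scores 0; anything else scores lower.
rng = np.random.default_rng(0)
target = rng.standard_normal(1024)
print(spectral_fitness(target, target) == 0.0)
print(spectral_fitness(rng.standard_normal(1024), target) < 0)
```

The GP would evaluate each evolved synthesis graph by rendering it and scoring the result this way, keeping the best-scoring individuals for the next generation.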
The SVM is inserted between the GP and the SOM. It is a component that either selects or rejects sounds produced by the GP. It is the result of a supervised learning stage, in which I skimmed through thousands of sounds produced by the GP, selecting those that I thought were interesting to hear in the space and those that I would rather not project. In other words, the SVM is something like a crude approximation of my aesthetic judgment. It was instructed to focus on rhythmical and dynamic sounds, avoiding, for example, very steady sine tones or accumulations of particular sounds that the system emits too often.
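Structurally, this is a binary classifier trained on hand-labelled examples and then used as a gate on new GP output. A minimal sketch of that arrangement, using scikit-learn rather than the SoundProcesses implementation, and with invented feature vectors standing in for whatever audio descriptors were actually extracted:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: one feature vector per GP-generated sound
# (imagine descriptors such as loudness, spectral flatness, onset rate),
# labelled during the manual listening stage: 1 = project, 0 = reject.
rng = np.random.default_rng(42)
accepted = rng.normal(loc=1.0, scale=0.3, size=(50, 3))   # "rhythmical, dynamic"
rejected = rng.normal(loc=-1.0, scale=0.3, size=(50, 3))  # "steady sine tones"
X = np.vstack([accepted, rejected])
y = np.array([1] * 50 + [0] * 50)

# Train the SVM on the labelled examples.
clf = SVC(kernel="rbf").fit(X, y)

# At exhibition time, the trained SVM gates each new GP sound:
# only accepted sounds are passed on to the SOM stage.
candidate = np.array([[1.0, 1.0, 1.0]])
if clf.predict(candidate)[0] == 1:
    print("accept: forward to SOM")
else:
    print("reject")
```

The point of the sketch is the topology, not the features: the classifier sits as a filter between the generator (GP) and the organiser (SOM), replaying the author's recorded accept/reject decisions on unseen material.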