COMPOSITION

The recordings were analysed using two methods: computer-based feature recognition, also known as computer listening, and traditional composer listening. While the workings of computer listening can become quite technical, simply put, we either instruct the computer to listen for specific information in the spatial, spectral, or temporal domains of the sounds, or we ask it to ‘learn’ about the sound through machine learning and neural networks. In contrast, composers traditionally follow an autoethnographic approach to their sound, that is, a process guided by listening and decision-making based on experiential self-reflection.
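To make the directed form of computer listening concrete, the following minimal sketch extracts spectral descriptors (centroid and flatness) and temporal event onsets from a mono excerpt of a recording. It uses the Python library librosa; the filename recording.wav is a placeholder, and the particular descriptors are illustrative choices, not necessarily those used in this project.

```python
# A minimal sketch of directed computer listening: extracting spectral and
# temporal descriptors from an excerpt of a field recording.
# "recording.wav" is a placeholder filename; librosa is assumed installed.
import librosa
import numpy as np

y, sr = librosa.load("recording.wav", sr=None, mono=True)

# Spectral domain: brightness (centroid) and noisiness (flatness) per frame.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
flatness = librosa.feature.spectral_flatness(y=y)[0]

# Temporal domain: onset times mark discrete sound events.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

print(f"mean centroid: {np.mean(centroid):.1f} Hz")
print(f"mean flatness: {np.mean(flatness):.3f}")
print(f"{len(onsets)} onsets detected")
```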

When considering ecological theory and working with extended recordings, we are limited by the ear’s and body’s variable awareness. After an hour or more of listening, a decision I make one day may differ from the one I would make repeating the same procedure on another day. This implies that the autoethnographic approach is prone to inconsistency. Computer listening offers greater consistency, but this may come at the expense of potentially interesting details that the computer ignores. Combining both methods can result in a balanced outcome, revealing both one-off features and main archetypes.
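One way such a combination could be supported computationally, offered here only as a sketch and not as the procedure used in this work: cluster frame-level listening features so that cluster centres stand in for recurring archetypes, while frames far from every centre are flagged as one-off candidates for the composer to audition. The cluster count and outlier threshold below are arbitrary illustrative choices.

```python
# A sketch of machine-assisted triage for long recordings: clustering exposes
# recurring archetypes; distant outliers are rare, one-off events worth a
# human listen. Feature rows could come from the previous sketch.
import numpy as np
from sklearn.cluster import KMeans

def archetypes_and_outliers(features: np.ndarray, n_clusters: int = 5,
                            outlier_quantile: float = 0.99):
    """features: (n_frames, n_descriptors) matrix of listening features."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    # Distance from each frame to its nearest cluster centre.
    dists = np.min(km.transform(features), axis=1)
    threshold = np.quantile(dists, outlier_quantile)
    outliers = np.where(dists > threshold)[0]  # rare, one-off frames
    return km.cluster_centers_, outliers
```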

In the analysis, I devised a method to separate multiple simultaneous sources and their motion features from the 3D recording. This process outputs individual mono sound streams together with their spatial data. In addition to isolating clear features, I can alter the spatial-frequency spectrum to reveal information usually masked by noise. In composition, I decide when and how to enhance interesting elements and musically reflect the changing spatial and temporal event fingerprints of the natural sound environment. The process is described in Barrett (2021); although it was applied to some extent in the earlier installations "Subliminal Throwback" and "Speaking Spaces 2: Surfaces from Graz", here the method was used in its most advanced state.
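The separation method itself is documented in Barrett (2021); the sketch below is not that method, only a rough illustration of the kind of operation involved. It steers a virtual cardioid microphone within a first-order Ambisonic (B-format) recording to obtain a mono stream for one direction, and estimates a frame-wise direction of arrival from the acoustic intensity vector. The WXYZ channel order, the SN3D-style scaling, and the frame length are assumptions; the published method goes much further, separating multiple simultaneous moving sources.

```python
# NOT the method of Barrett (2021): a simplified illustration of extracting a
# mono stream plus spatial data from a first-order B-format (WXYZ) recording.
import numpy as np

def steer_cardioid(w, x, y, z, azimuth, elevation):
    """Mono virtual-cardioid signal aimed at (azimuth, elevation), radians."""
    dx = np.cos(azimuth) * np.cos(elevation)
    dy = np.sin(azimuth) * np.cos(elevation)
    dz = np.sin(elevation)
    return 0.5 * (w + dx * x + dy * y + dz * z)

def frame_doa(w, x, y, z, frame_len=2048):
    """Per-frame azimuth/elevation from the time-averaged intensity vector."""
    n = len(w) // frame_len
    doas = []
    for i in range(n):
        s = slice(i * frame_len, (i + 1) * frame_len)
        # Intensity components: pressure (W) times each velocity channel.
        ix, iy, iz = (np.mean(w[s] * c[s]) for c in (x, y, z))
        doas.append((np.arctan2(iy, ix),                    # azimuth
                     np.arctan2(iz, np.hypot(ix, iy))))     # elevation
    return np.array(doas)
```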