This accessible page is a derivative of https://www.researchcatalogue.net/view/2938321/3620174 which it is meant to support and not replace.
Video description: A video demonstrating the construction of a database. Sounds are analysed and written to a database, forming an archive that can be queried. The video traces the exact process of designating a directory and transforming it into a functional database that can then be queried and accessed through the software tools developed for it.
Click on https://www.researchcatalogue.net/view/2938321/3620174#tool-3622214 to watch the video.
Exploratory Analysis
In my work with digital synthesis, I have become interested in using machine-listening approaches where sound processes auto-regulate based on analysis data. While real-time analysis provides valuable insights, relying heavily on it for crucial behaviours can result in overly smooth and generalised outcomes. This tendency contrasted with my objective at the time of achieving sharper and more distinctly articulated results. Using the UPIC material, I wanted to analyse more sounds in depth, compare them, and find gaps and unique perspectives that would be harder to reach with real-time approaches. My methodology was significantly influenced by concepts from data science, especially those introduced by John Tukey in his groundbreaking work ‘Exploratory Data Analysis’ (1977). His methodology focuses on discovering the unknown and deeply exploring the inherent qualities of data to build a thorough understanding of the dataset in question. His approach emphasises delving into data with curiosity, allowing the data itself to guide the analytical journey and reveal its underlying story.
The UPIC sounds would become my dataset. Could this extensive archive of past sounds serve as a foundational or contextual basis for the development of compositional algorithms? What made the UPIC archive interesting as a starting point was the strong character of the sound material. Most of the sounds had some sort of pitch movement or otherwise noisier parts. There were very few ‘neutral’ sounds or static textures that would easily combine or blend with similar sounds. A great number of recordings also still existed, organised in the same directory structure as fifteen years earlier.
Image description: In white text on a black background, various features (e.g. Frequency, Spectral Flatness and Loudness) are arranged in an oval box under the title 'Data Dimensions.' The analysis process is based on a set of features that are extracted for each sound as well as for each of the segments a particular sound consists of. The feature space forms the backbone for all the database queries. A sketch of such per-sound and per-segment feature extraction follows the image link below.
Click on https://www.researchcatalogue.net/view/2938321/3620174#tool-3622220 to see the image.
Image description: In white text and numbers on a black background, a database is depicted, with the numbers corresponding to particular features. Each feature is stored for sound segments as well as for entire sounds, and all values are normalised to the range between 0 and 1 to facilitate queries and provide a standard interface. A sketch of this normalisation and a simple query follows the image link below.
Click on https://www.researchcatalogue.net/view/2938321/3620174#tool-3622219 to see the image.
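As a rough illustration of the normalisation and querying described above, here is a minimal sketch. It assumes flat records, one dictionary per sound with a file name and raw feature values, such as the per-sound summaries from the previous sketch. The scaling to the range 0 to 1 follows the image description, while the in-memory list storage and the nearest-neighbour query are assumptions rather than the software tools shown in the video.

import numpy as np

FEATURES = ["frequency", "spectral_flatness", "loudness"]   # assumed subset of the data dimensions

def normalise(records):
    """Scale every feature to the range 0 to 1 across the whole archive."""
    table = np.array([[r[f] for f in FEATURES] for r in records], dtype=float)
    lo, hi = table.min(axis=0), table.max(axis=0)
    scaled = (table - lo) / np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    return [dict(zip(FEATURES, row), file=r["file"]) for row, r in zip(scaled, records)]

def query_nearest(db, target, k=5):
    """Return the k entries whose normalised features are closest to the target point."""
    t = np.array([target[f] for f in FEATURES])
    def distance(r):
        return float(np.linalg.norm(np.array([r[f] for f in FEATURES]) - t))
    return sorted(db, key=distance)[:k]

# Example: look for high-pitched, noisy, quiet sounds (all values in the normalised 0-1 space).
# query_nearest(normalise(records), {"frequency": 0.9, "spectral_flatness": 0.8, "loudness": 0.2})

Because every value lives in the same 0-1 range, queries can mix features freely without worrying about their original units, which is the standard interface the database description refers to.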