Towards a Rendering Loop
I want to give you an update on what I have been working on recently; I hope to arrive soon at the stage we discussed in our last phone meeting, namely being able to render half an hour of "example material".
The logic for starting and coordinating the processes is simple to explain but less simple to implement correctly, and since I am still pursuing the idea of doing this implementation fully within a Mellite workspace (instead of as a separate standalone project), progress is slower than I had hoped. But I'm getting there. A lot of the prototyping I did for the new version of another piece, Writing (simultan), actually applies here, so there is already a "pattern" for how these processes work.
Logically, I am dividing the processes into three layers: one is real-time input gathering, one is rendering the various transformations and analyses, and the third is real-time sound production. The idea is that each can run on its own, just exchanging data. The input gathering thus writes recordings into sound files which are "filled up" as new material is needed (and available, given the planned "common listening" mode!). The rendering layer then takes the current "database" (the gathered input sound), if it has sufficient length, and starts non-realtime rendering processes, whose output forms another pool of sounds. This pool is then used by the third layer.
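To make the coordination concrete, here is a minimal standalone Scala sketch of the three-layer pattern. This is not the actual Mellite workspace code: all names (`RenderingLoopSketch`, `minDbFrames`, `pool`) are illustrative, and recording, rendering and playback are reduced to placeholders. What it shows is the data flow: layer one grows the database, layer two waits until the database is long enough and turns it into pool entries, and layer three blocks on the pool.

```scala
import java.io.File
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.atomic.AtomicLong

// Illustrative sketch only; the real implementation lives inside a Mellite workspace.
object RenderingLoopSketch {
  val minDbFrames = 44100L * 60                      // assumed: render once one minute of input exists
  val dbFrames    = new AtomicLong(0L)               // current length of the input "database"
  val pool        = new LinkedBlockingQueue[File]()  // rendered sounds handed to playback

  // Layer 1: real-time input gathering; simulated here by growing a frame counter.
  val input = new Thread(() => while (true) {
    dbFrames.addAndGet(44100L)                       // pretend one second of sound was recorded
    Thread.sleep(1000)
  })

  // Layer 2: non-realtime rendering; consumes the database once it is long enough.
  val render = new Thread(() => while (true) {
    if (dbFrames.get() >= minDbFrames) {
      dbFrames.set(0L)                               // take the current database
      val out = File.createTempFile("render", ".aif")
      // ... offline transformations / analyses would happen here ...
      pool.put(out)                                  // hand the result to layer 3
    } else Thread.sleep(1000)
  })

  // Layer 3: real-time sound production; plays whatever the pool offers.
  val player = new Thread(() => while (true) {
    val f = pool.take()                              // blocks until material exists
    println(s"playing ${f.getName}")
  })

  def main(args: Array[String]): Unit = {
    input.start(); render.start(); player.start()
  }
}
```

In the piece itself the layers exchange sound files on disk, as described above; the in-memory queue here just stands in for that hand-over.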
The first rendering process I am implementing is the one described on the left here: applying energy / spectral flatness thresholds, transforming the thus filtered material into pure resonances (minimum phase spikes), and then ordering them using the complete graph of timbral similarities of the segments along with the Lin-Kernighan algorithm. Most of the work I am doing now is making sure that this runs by itself: drawing input sounds from the microphone, rendering into the pool, playing from the pool. I should be finished with this in the next few days.
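As an illustration of the ordering step only, here is a sketch that, given a hypothetical matrix of pairwise timbral similarities, produces a tour through all segments. Note that a greedy nearest-neighbour pass stands in for the Lin-Kernighan algorithm, which would start from such a tour and improve it by exchanging edges; the thresholding and resonance transformation are not shown.

```scala
// Sketch of the segment-ordering step. A greedy nearest-neighbour tour
// stands in for Lin-Kernighan; all names and the toy data are illustrative.
object SegmentOrdering {
  /** Orders segment indices so that consecutive segments are timbrally similar.
    * `sim(i)(j)` is assumed to be a symmetric similarity in [0, 1].
    */
  def order(sim: Array[Array[Double]]): Vector[Int] = {
    val n = sim.length
    if (n == 0) return Vector.empty
    val remaining = collection.mutable.Set((1 until n): _*)
    var tour = Vector(0)                             // start arbitrarily at segment 0
    while (remaining.nonEmpty) {
      val last = tour.last
      val next = remaining.maxBy(j => sim(last)(j))  // most similar unvisited segment
      remaining -= next
      tour :+= next
    }
    tour
  }

  def main(args: Array[String]): Unit = {
    // toy similarity matrix for four segments
    val sim = Array(
      Array(1.0, 0.2, 0.9, 0.1),
      Array(0.2, 1.0, 0.3, 0.8),
      Array(0.9, 0.3, 1.0, 0.4),
      Array(0.1, 0.8, 0.4, 1.0)
    )
    println(order(sim))                              // Vector(0, 2, 3, 1)
  }
}
```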