fdn studies on large time scales


a question I find challenging in this work is how to deal with very large time scales. the processes I've been using so far work with small to medium length sound forms. one possible strategy is to organise these processes over time, composing their temporal occurrences and relating them to the sound events in the staircase. another approach, conceptually very different, is to work on the processes themselves until they can deal with large time scales, and only then relate them to the environment. the experiments on this page are an attempt at taking this second direction.


these experiments are based on two interconnected sound modules that I've been using in my first mockup and other sketches. the first is an fdn reverberator, the second an r/w table that cuts audio at zero-crossing occurrences. the process runs on eight channels, but here you hear only the first two.
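for reference, a minimal sketch of the fdn part in faust, just to show the structure of the loop (delay lengths, gains and channel count are made up here, and the zero-crossing r/w table is left out):

```
// minimal two-channel fdn loop (sketch, not the actual network)
import("stdfaust.lib");

fbGain = hslider("feedback", 0.9, 0, 0.999, 0.001);
d1 = 2399;                                   // delay lengths in samples, arbitrary
d2 = 3181;

// 2x2 orthogonal mixing of the delayed channels (sum / difference)
mix2(a, b) = (a + b)/sqrt(2), (a - b)/sqrt(2);

fdn2 = (si.bus(4) :> si.bus(2)               // sum feedback and input, channel by channel
        : de.delay(65536, d1), de.delay(65536, d2)
        : mix2
        : par(i, 2, *(fbGain)))
       ~ si.bus(2);

process = fdn2;
```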


{poz, 200520, function: contextual, keywords:[time scales, process, time, fdn, experiment]}

time experiments

examples in this column are purely autonomous, meaning that the only input to the system is a single impulse at t0
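in faust terms that kind of excitation is just a one-sample impulse (a sketch of the excitation only, fanned out to the channels of the network):

```
import("stdfaust.lib");
impulse = 1 - 1';                // 1 on the very first sample, 0 ever after
process = impulse <: si.bus(8);  // the impulse copied to the eight network inputs
```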

SC_200518_235434.aiff

link to git

short excerpt from a small room

 

SC_200519_003256.aiff

link to git

another small room. this process curiously takes exactly 1h 36m and 14s to die



SC_200519_102510.aiff

link to git

same process in a much bigger room. this one never dies, or at least it didn't when I let it run for more than four hours. what I find interesting is that small transitions from one state to another are made audible, while the process still retains a good margin of unpredictability and small details. even tiny sound events happening in the network might then evolve into large sound forms. I also find the way the system transitions between states, and some rhythmicalities that emerge, quite peculiar and 'musical'

{poz, 200520} so I'll take this model and see if it makes sense to couple it to the staircase. first attempts (I tried some quite irrational insertions of the staircase into the network) are rather uninteresting and harsh

{poz, 210520} meanwhile, I completed the port of network2 to faust. it still needs some tuning, but the behaviour is well reproduced. I found out that a crucial part for its developments in time (one I wasn't aware of before) is the inversion and summation of the array of channels that happens at the beginning of the loop, highlighted in the diagram below. that's basically what creates the transition from a 'reverb model' to a generative network. the funny thing is that I don't even remember why I inserted this part in the original network, and then I forgot about it
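as a rough illustration of that step (not the actual network2 code: channel count, delay lengths and gains are made up), an 'inversion and summation' at the head of a small fdn loop could look like this in faust:

```
import("stdfaust.lib");

N = 4;                                        // sketch with four channels instead of eight
fbGain = 0.2;                                 // kept well below 1/N so the sketch stays bounded

// each channel is replaced by the negated sum of all channels:
// not an energy-preserving mixing matrix, which is presumably what
// pushes the loop from 'reverb' towards generative behaviour
invsum(n) = si.bus(n) <: par(i, n, (si.bus(n) :> *(-1)));

delays(n) = par(i, n, de.delay(65536, 1447*(i+1) + 331));   // arbitrary lengths

network(n) = (si.bus(2*n) :> si.bus(n)        // sum feedback and input, channel by channel
              : invsum(n)                     // the step highlighted in the diagram
              : delays(n)
              : par(i, n, *(fbGain)))
             ~ si.bus(n);

process = network(N);
```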


CPU usage is roughly half compared to the SuperCollider implementation

---
meta: true
artwork: ThroughSegments
project: AlgorithmicSegments

keywords: [time, feedback, network]
---