update 200629

parallelising on different servers

this patch couldn't run at blocksize 1 on the Raspberry Pi 4. I tried a complete port to Faust, but all the slight differences add up and quite radically change the sound character of the thing. Doing the Walsh transform on the server side rather than in the language, especially, makes quite a difference. Since I prefer to maintain the original version, I figured out that I could parallelise just a small part of the process, and that did the trick. This configuration runs fine on the Pi, even though the server that runs at blocksize 1 is constantly at 80-90 % CPU usage. But I didn't experience any clicks. The two servers are connected via JACK, and the second one takes as input the 8-channel output of the first server plus 16 channels of RMS values derived from within the FDN.
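For reference, the Walsh (Hadamard) transform the patch relies on can be sketched in a few lines of Python. This is only an illustration of the transform itself, not the actual server-side or language-side implementation:

```python
def fwht(xs):
    """In-place fast Walsh-Hadamard transform (natural order).

    Input length must be a power of two. A reference sketch of
    the transform discussed above, not the patch code itself.
    """
    xs = list(xs)
    n = len(xs)
    assert n and (n & (n - 1)) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # butterfly stage: combine pairs h apart
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = xs[j], xs[j + h]
                xs[j], xs[j + h] = a + b, a - b
        h *= 2
    return xs
```

The transform is its own inverse up to a factor of n, which is one reason it is cheap enough to consider at blocksize 1: it needs only additions and subtractions.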


from David


* Fragmentation

Reading and re-reading your comments (especially Ji's) and ideas,
thinking further, I am more and more intrigued by the idea of
introducing some "moments" or controlled segmentation or, trying to
find another word, /fragmentation/. That is, moments in which
something is stopped and then restarted, for instance. Or a clear
change in the disposition or organization of the segments.


My intuition here is that, while, /probably/, our processes will engage
somehow in a rather continuous or more or less smooth transformation
of the aural space (and of themselves), these clear "cuts" could be an
interesting change of gesture.


Also, I think that this start/stop, pause/reset could even contribute
to making the transformational process of the single segments, and the
installation as a whole, more clearly audible. Maybe it could
contribute to making the algorithm even more "present" or tangible? In
any case I think that these moments should be really clear, well
perceivable, that is, not just "technical" in the sense that their
effect would remain rather internal to the installation.


Further, it would allow us to start composing on a different formal
level, to think in (bigger) temporal forms.


A sort of cross-over between an installation form and multiple pieces.

{kind: quote, quoted: DP}

fragmentation: acoustic snapshots from microphone


Going back to your comments from the last months made me reflect again on the question of segmentation and the idea of using some kind of "bridges" limited in time and space, not only in terms of how our segments will communicate with each other, but also within each one's own segment. Especially the comment from David on fragmentation rang a bell and made me think about an alternative strategy to connect my network to the space of the staircase. In my previous attempts I tried to create a sort of direct coupling between the two (the sound from the microphone continuously modulating some parameters in the network). Apart from the fact that the sound results didn't really convince me, I was also not satisfied with this concept of direct coupling.


So now the new idea is: substituting the one-sample impulse that puts the network into self-oscillation with a very short (128 samples) "impulse" grabbed from the microphone in the staircase. The idea is to start the process with a very short sound snippet that represents a very specific acoustic situation at a very specific point in time, and to let the network evolve into sound forms which are precisely determined by this initial condition. Then stop the network, take another "acoustic snapshot" and restart the process. I think it would be very interesting if I could find a way of amplifying the differences between the snapshots through the network, making them perceivable through different time developments.

Today I did a test without modifying the network. I took it as it is, simply substituted the impulse with these audio snapshots, and checked whether I got perceptually different responses. It works, and I also like how it sounds. Nevertheless, even if the sound forms that emerge out of different snapshots vary, I perceive the overall behavior of the network as quite similar. I'd like to find a way of amplifying differences not only through the diversity between sound forms, but also through a difference in terms of the overall behavior (different "musical" forms). But at the same time, I think it's interesting that the overall form is similar: it starts from "impulse trains", then there is a grainy part in which you can hear the network kind of synchronising to itself, or stabilising, and then it goes into feedback-like forms. This process takes 2 to 3 minutes and, even if it is always different in its micro articulation, the overall form is more or less the same for different initial conditions. Probably keeping this sort of shared "preamble" could be a good idea to make the act of stopping and restarting the process perceivable.
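The seeding idea can be caricatured outside the actual network. Below is a minimal Python sketch, a toy feedback delay line with a tanh nonlinearity that stands in for the real FDN, showing that writing different seeds into the same structure yields different output trajectories while the structure, and hence the coarse behavior, stays the same:

```python
import math

def run_network(seed, steps=1000, delay=31, fb=0.995):
    """Toy feedback delay line with a soft nonlinearity.

    `seed` is written into the delay line as the initial condition
    (standing in for the 128-sample acoustic snapshot); the output
    then evolves from that seed alone. Purely illustrative: the
    delay length, feedback gain and topology are made up.
    """
    buf = [0.0] * delay
    for i, s in enumerate(seed[:delay]):
        buf[i] = s
    out = []
    pos = 0
    for _ in range(steps):
        y = math.tanh(fb * buf[pos])   # nonlinear feedback
        buf[pos] = y
        out.append(y)
        pos = (pos + 1) % delay
    return out
```

Two different seeds produce different sample-level trajectories, yet both decay through the same qualitative stages, which mirrors the shared "preamble" observed above.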


I also find David's idea of thinking about the work as a sort of cross-over between an installation form and multiple pieces very intriguing.

acoustic snapshot 1

from David's recording 20191217_1015.wav

128 samples starting at 1.041 seconds

acoustic snapshot 2

from David's recording 20191217_1015.wav

128 samples starting at 23.095 seconds
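The snapshot offsets above translate into sample indices only once a sample rate is fixed; the rate of David's recording is not stated here, so the 44.1 kHz in the usage example is an assumption. A minimal sketch of the extraction:

```python
def grab_snapshot(samples, sr, t_start, n=128):
    """Cut an n-sample snapshot starting at t_start seconds.

    `samples` is a mono signal as a list of floats, `sr` its
    sample rate in Hz. The offset in seconds is rounded to the
    nearest sample index.
    """
    i = round(t_start * sr)
    return samples[i:i + n]
```

For snapshot 1 above this would be something like `grab_snapshot(sig, sr, 1.041)`, with `sig` and `sr` read from 20191217_1015.wav.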

poz: Observations Round 3


from David


I do not think of a /complete/ restart such that it somehow erases
everything, or such that the new instance "forgets" or neglects what
has happened before. I think that maybe parts of the segments'
processes should /survive/ the cut, not being touched by it: the
parts that "listen to" and determine the long-time evolution of the
installation.





from David


some kind of recursive, reentrant system, a system that re-enters
itself at different scales



from David


This kind of memory is therefore not simply "passively" receiving or
exactly recording some input. Rather it "metabolizes" it, eats it up,
makes it part of itself. I think of a process of sedimentation of the
input into the algorithm. A sort of "engraving" of resonances into its
space of potentials, into its /phase space/.


{kind: quote, quoted: DP}

I will pin some inspiring parts here

{kind: memo}

meta: true
title: poz: Observations Round 3
author: POZ
date: 200604
artwork: ThroughSegments
project: AlgorithmicSegments

kind: conversation

The idea of a system that re-enters itself I find really fascinating, but also very hard to think about. Here I'm referring to a system in terms of its structure rather than its behavior. I'll try to think about it; maybe I'll get some clearer ideas.

{kind: memo}

This is also something I want to try soon

{kind: memo}


from Ji

I naturally would imagine them having the same character and being bi-directional, but it's interesting to think they could be different: one-dimensional and even, as you suggest, existing for just some time. In a way, re-inscribing some sort of segmentarity into these "bridges" by limiting them, in time and extension. I think it could be a way to make these connections between the works more "perceptible".


Yes, I think we could create these bridges, like a pipeline with certain rules (e.g. directions, time duration, filtering out, etc.). Then it can contribute to creating a certain order between us. Or we could limit ourselves in order to fit our 'segmentations' -or whatever we call our modules- into the bridges. Then we can 'organize' our works (not in the manner of cleaning up, but to create a structure).

How could we create this? Are we going to? Maybe each one of us could make one for the connection to the next person?

Thinking about what Poz suggested, I think the audio channel can be a medium, but the bridge can be a module for the medium. I think the bridge can do a bit more than being a channel, or a pathway for data (OSC), as written above.

{kind: quote, quoted: JYK}

fragmentation: acoustic snapshots via OSC?


Another thing I was thinking is that for me it would be cool to take this idea and transpose it also to the "bridges" among segments we are planning to have. I remember we had this conversation about the feasibility of exchanging channels of sound, and then we decided on OSC for many different reasons. Now I think it would be interesting (at least, I'd like to try it with my network) to have a sort of hybrid communication where we would stream, every now and then, audio snapshots from one segment to the other over the OSC protocol, in the form of packets of 128 floats. Since these audio snippets are so short, and since this snapshot transmission would only happen from time to time, I don't think it would be a problem to go over OSC. It's a proposal; let me know if you like the idea.
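As a sanity check on size, a 128-float snapshot fits easily in a single OSC message. The sketch below hand-encodes such a message with only the standard library, following the OSC 1.0 byte layout (null-terminated strings padded to multiples of 4 bytes, big-endian 32-bit floats); the address pattern "/snapshot" is made up, and in practice one would of course use an OSC library rather than raw bytes:

```python
import struct

def osc_snapshot(samples, address="/snapshot"):
    """Encode 128 floats as one OSC message.

    Address and type-tag strings are null-terminated and padded
    to a multiple of 4 bytes; each sample becomes a big-endian
    32-bit IEEE 754 float. "/snapshot" is a placeholder address.
    """
    def osc_str(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)

    assert len(samples) == 128
    tags = "," + "f" * len(samples)
    return (osc_str(address)
            + osc_str(tags)
            + struct.pack(">128f", *samples))
```

The whole message comes to a few hundred bytes, well under a typical UDP datagram, so occasional transmission between segments should be unproblematic.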

system response to acoustic snapshot 1

system response to acoustic snapshot 2

density of events

from Hanns


it's interesting how sound events are disposed in time in this sketch by Hanns Holger. Reading the description, I didn't fully understand why it happens this way. I'll check the code and see if I can wrap my head around it.

{kind: note}

June 12th


I'm going back to this idea of using simple linear regression to detect "sound exceptions" in the staircase. I think it could be interesting to combine this with the network, for example: whenever an "exceptional event" happens in the space, stop the network and restart it using 128 samples of this sound event as a seed for the network. I need to find two meaningful descriptors for a charting that makes some sense there.
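A minimal sketch of such an exception detector, assuming two hypothetical per-event descriptors (which descriptors to use is exactly the open question above): fit a least-squares line through the descriptor pairs and flag events whose residual is large.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def exceptions(xs, ys, k=2.0):
    """Indices of events whose residual exceeds k standard deviations.

    xs, ys hold one (hypothetical) descriptor each per sound event,
    e.g. loudness and spectral centroid. Events far from the fitted
    line are the candidate "sound exceptions".
    """
    a, b = fit_line(xs, ys)
    res = [y - (a * x + b) for x, y in zip(xs, ys)]
    sd = (sum(r * r for r in res) / len(res)) ** 0.5
    return [i for i, r in enumerate(res) if abs(r) > k * sd]
```

The threshold `k` would need tuning against real staircase recordings; this only fixes the shape of the computation.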

June 12th


Or maybe, instead of looking for differences among sound events, I could look for differences in the overall sound atmosphere. I mean, instead of comparing 1-second-long events, comparing periods of 20 minutes or so. I wonder how the acoustic conditions vary over time; maybe it's not directly perceivable, but I could try to "amplify differences" through the network.