This Mellite workspace contains an experiment to recreate and transform the piece, with the software now fully implemented within Mellite. The first instance used an early version of Sound Processes; the second used a recent version of Sound Processes, but the algorithm’s logic was still implemented as plain Scala code, using the software API and writing the code in a regular code editor. This new version attempts to understand how working in a live programming environment such as Mellite changes my approach to developing the piece, as well as to understand and test new abstractions for formulating the algorithms (notably the so-called Control object with its Ex, Act, and Trig abstractions).
The current version is an effort to translate the algorithm of wr_t_ng m_ch_n_: I took the plain Scala code of the control logic and translated it into Control objects. Finally, the structure is triplicated in order to represent the three nodes A, B, and C on one Raspberry Pi.
To run the piece, you have to configure the Mellite preferences so that the audio section has at least one input and one output channel. Then make sure to boot the audio server in the main window before proceeding (you can choose ‘automatic boot’ in the preferences). Route a radio signal to the sound card’s input; the workspace simply uses the first input channel of the sound card.
The workspace itself contains a small ‘control’ user interface that you can open. But first, the location of the sound file directory has to be set using the ‘initialize’ UI. Open it, select a directory, and press ‘Initialize’; then you can close that window.
In the exhibition, I set up a RAM disk large enough to hold the ongoing sound database. You should also set this as the temporary directory in the Mellite preferences.
The ‘main’ control object launches the three control instances A, B, C. It begins by playing the plain radio (input) signal in order to ensure that the channel is set correctly. A hardware foot switch in the exhibition is then used to start the actual sound algorithm. The three Raspberry Pis per table are interleaved so that each has enough time to render its new phrases. After each phrase, action is yielded to the next Pi, so the sequence is (Pi 1, node A), (Pi 2, node A), (Pi 3, node A), (Pi 1, node B), (Pi 2, node B), (Pi 3, node B), (Pi 1, node C), (Pi 2, node C), (Pi 3, node C), and back to the beginning. Each Pi selects one of its three Petri dish groups using two mechanical relays.
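The interleaving described above amounts to a round-robin over (Pi, node) pairs, with the node advancing only after every Pi has taken a turn. The following is a plain-Scala sketch of that schedule for illustration only; the labels `pis` and `nodes` are made up here, and the actual workspace expresses this logic through Control objects instead:

```scala
object Schedule {
  def main(args: Array[String]): Unit = {
    val pis   = Seq("Pi 1", "Pi 2", "Pi 3")  // hypothetical labels
    val nodes = Seq("A", "B", "C")

    // For each node, every Pi takes a turn before moving on to the
    // next node; the whole nine-step cycle then repeats from the start.
    val cycle = for {
      node <- nodes
      pi   <- pis
    } yield (pi, node)

    cycle.foreach { case (pi, node) => println(s"($pi, node $node)") }
  }
}
```

Running this prints the nine pairs in the order given above, after which the real system wraps around to (Pi 1, node A) again.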
In each of the ‘control’ objects A, B, C, the main search-and-replace algorithm is activated. Initially, the database and phrase files will be empty, so it will take a while until the database has been filled to its given maximum (currently three minutes).
The algorithm currently alternates between two database files and two phrase files, so that the space used on the RAM disk or SD card does not grow over time. At any time, one of the two files is the current one and the other is the previous one; this is necessary so that a file can be played while the next one is rendered, and so that a database can be rewritten. Upon boot and shutdown of the Pi, these files are backed up to the SD card, so the same database continually evolves over the duration of the exhibition.
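The two-file alternation is a simple double-buffering scheme: the roles of ‘current’ and ‘previous’ flip after each phrase, so the disk footprint stays constant. Here is a plain-Scala sketch of the idea, with hypothetical file names; it is not the actual workspace code:

```scala
object DoubleBuffer {
  def main(args: Array[String]): Unit = {
    // Two slots per role; only the index flips, the files are reused.
    var current = 0                      // index of the 'current' file
    def previous: Int = 1 - current      // the other file is 'previous'

    def fileName(base: String, idx: Int): String = s"$base-$idx.bin"

    for (phrase <- 1 to 4) {
      // Play from the previous file while rendering into the current one.
      println(s"phrase $phrase: play ${fileName("phrase", previous)}, " +
              s"render ${fileName("phrase", current)}")
      current = previous                 // flip roles for the next round
    }
  }
}
```

The same flip applies analogously to the database files, which is what allows a database to be rewritten while its predecessor is still in use.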
Please feel free to look into all the partial code objects inside the ‘aux’ folder, and if you want to learn more about them, get in touch.