Wolkenpumpe, like so many other “components” implemented on top of Sound Processes and available in Mellite, was originally a standalone project.

 

I started moving my live improvisation practice from the analog domain to digital computers and software around 2004. I think Blind Operators, a short-lived hybrid analog/computer project with Martin Kuentz, may have been the first time I employed an initial Max/MSP patch that would form the basis of Wolkenpumpe. I then reimplemented the patch in SuperCollider, once version 3 had become open source. It was a fixed “matrix” of sound generators and filters that could be enabled and configured using a MIDI controller, following the spirit of my earlier analog practice with music cassette walkmen, various filtering and distortion modules, loopers, and a mixer.

 

The matrix then became more dynamic, as a variable number of modules could be added, and soon after, the fixed grid layout of modules became too constraining. I moved to a SwingOSC-based interface written in Java, with the sound processing still happening in the SuperCollider server. In the new interface, one could place modules anywhere on the screen and wire them up with patch cords. I began operating them with a graphic tablet.

 

Some years later, around 2008, I began implementing my new SuperCollider client ScalaCollider, and consequently Wolkenpumpe obtained a new form in the Scala programming language, now using a much more dynamic pan-and-zoom interface, with the visual module representations positioned algorithmically by a force-directed (physics-inspired) layout. In the following years, Sound Processes was iterated in close relation to my PhD, and Wolkenpumpe was adapted to the new framework. If I have counted correctly, this would be Wolkenpumpe generation 6. Although it is continuously developed in the context of my improvisation practice, I still consider the current version part of this “6th generation”.

{date: 210113, keywords: [ _, history ]}

First pure SuperCollider version (c. 2006).

{kind: caption}

First SwingOSC hybrid Java / SuperCollider version (c. 2007).

{kind: caption}

Performance at SMC 2010 in Barcelona, where I improvised with sounds retrieved through live freesound.org database queries.

{kind: caption}

Live analog composite video mix of Wolkenpumpe and rattle, for Anemone Actiniaria (2015).

{kind: caption}

Wolkenpumpe

{kind: title}

Wolkenpumpe is named after the 1920 poem by the Dadaist Hans (Jean) Arp. I do not remember exactly why I picked it (after all, it is a machine for producing sound clouds), but at the time I was very interested in Dada and Surrealism, and I had been reading a collection of Arp's poems. Strangely, the organic amoeba-like visual structures typical of the current version came only much later than the name was chosen; the original version had rigid rectangular blocks. An alternative name is Pompe de Nuages, or just Nuages, and if you look into the source code, the type names usually begin with Nuages.

{date: 210113, persons: Hans Arp, keywords: [ _, name, dada, cloud, amoeba, poem ]}

A video tutorial for Mellite.

{kind: caption}

The icon is a crop from Arp's ‘self-portrait’ (c. 1922).

{kind: caption}

---

meta: true

author: hhr

keywords: [Wolkenpumpe]

---

Wolkenpumpe is essentially a modular-synth kind of system. It distinguishes sound generators (which do not necessarily require sound input from within the patch), filters, and sinks / outputs, although the categories are fluid. The repertoire of generators, filters, and sinks sits in lists that can be brought up in the interface, or their names can simply be typed into the text interface. At any point on the display, you can create and delete these processes and interconnect them with virtual wires. Every process exposes a number of parameters that can be either manually adjusted or patched into other processes.
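To make the distinction concrete, here is a minimal ScalaCollider-style sketch of the three categories. The function names are hypothetical, not the actual Wolkenpumpe DSL, which wraps such synth-graph functions in named processes whose parameters are exposed in the interface.

```scala
import de.sciss.synth._
import de.sciss.synth.ugen._

// generator: produces sound without requiring input from within the patch
def dust(density: GE): GE = Dust.ar(density)

// filter: transforms the signal of an upstream process
def lowPass(in: GE, cutoff: GE): GE = LPF.ar(in, cutoff)

// sink / output: terminates a chain at the speakers
def toSpeakers(in: GE): Unit = Out.ar(0, Pan2.ar(in))

// wiring them up, as the virtual patch cords in the interface would do
val graph = SynthGraph {
  toSpeakers(lowPass(dust(density = 400), cutoff = 1000))
}
```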

 

Is Wolkenpumpe a live coding system? Perhaps not in the narrow sense; one rarely writes new functions from scratch during a performance, although it is technically possible (and I used to have a pop-up code editor around for a while). At least in my practice, I create macroscopic sound structures by assembling the prepared building blocks. These blocks are not static: I adjust them to the necessities of the gig, or make specific selections for the kind of project, and I continually experiment with new processes. As an open source system, it is fully programmable, and someone could take Wolkenpumpe and make entirely different music with it than what emerges from the structures I typically employ.

 

Since this approach is not unlike patching in a system like Pd (if you imagine bringing up previously written abstractions), and since many things can be done via keyboard control and text input, Wolkenpumpe could be called a live-coding-ish system. But then, does the audience become a viewer? Do they actually see what is happening? I rarely project the screen, although many people find it visually pleasing, and it certainly has its distinct aesthetic quality. Sometimes I sit in a space where people can walk around; they usually notice the interface, look over my shoulder, and after the concert I get questions about Wolkenpumpe and how to interpret the moving amoebas. I somehow find that nicer than forcing the visual channel on the audience. Since the visual interface is fully independent of the sound synthesis layer, it would also be possible to develop a second view, only for the audience, that is less technical and more abstract or poetic.

{date: 210113, keywords: [ _, practice, interface, live coding, modular synthesizer, visual, audience ]}

“NuagesPad”

{kind: title}

In my improvisation practice, usually each player employs their own self-developed instrument, and I have come to appreciate this heterogeneity. Often the systems/instruments are wired together, cross-feeding audio signals that allow complex feedback and mutual influence. On the other hand, this approach is limiting, and I am interested in exploring how playing together “in the same virtual space” could work, if it is not understood as a form of being “synchronously” engaged in a causal relation that produces a “unified” piece.

 

This working-together-in-independence would be a form where distance among the players can be kept, where they can remain “inassimilable”. Sharing then is a constant circulation and curious spacing to one another without being totalised (as described by Jean-Luc Nancy in Being Singular Plural).

{date: 210114, keywords: [ _, speculative ]}

I recently took part in an experimental Etherpad session and thought it was quite interesting, given the absolute minimalism of its interface. It was interesting because it was not about making “a coherent text”, but about using the pad as an experimental platform where different people could leave their different marks, but also disappear “out of sight”, more like a multiplayer RPG in which the world stretches beyond your screen.
 
How could Wolkenpumpe be transformed from a single-player live performing/coding interface into a new multi-player situation, where players continually negotiate their proximity in the virtual space, either cooperating on sound structures, or coupling them with “weak ties”, or withdrawing to niches?

 

I imagine this to be possible by augmenting the interface with visual markup that shows the different players operating. In the simplest case, there would still be one audio engine, but multiple views could be attached to it, so that every player can focus on different parts of the overall sound structure.
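As a rough illustration of this idea, the following sketch uses a plain observer pattern (this is not the actual SoundProcesses API, which is transactional): one shared model stands in for the single audio engine's patch state, and any number of views attach to it.

```scala
// illustrative only: a shared model notifying multiple attached views
final class SharedPatch {
  private var views = List.empty[String => Unit]  // each view is a callback
  private var procs = Set.empty[String]           // names of running processes

  def attachView(onChange: String => Unit): Unit =
    views ::= onChange

  def addProcess(name: String): Unit = {
    procs += name
    views.foreach(_(s"added: $name"))             // every view sees the change
  }
}

// two players attach independent views to the same engine/model
val patch = new SharedPatch
patch.attachView(msg => println(s"view A: $msg"))
patch.attachView(msg => println(s"view B: $msg"))
patch.addProcess("gen-dust")
```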

{date: 210114, keywords: [ _, speculative ]}

Another configuration would be one where each player has their own audio engine. They could be remotely connected over the Internet, although this would be less trivial in terms of the latency of actions: one can no longer open a “cross-player transaction”, I presume; one would have to live with glitches, perhaps with a situation where the sound on one node is not exactly the same as the sound on another node. This could be interesting as an experiment.
 
Perhaps it is not even desirable that each engine plays the same sound. Two players could be placed apart in a space, each with their own set of speakers or transducers, and each “sub-tree” in the patch would run on one or the other sound system (again visually marked). Perhaps there would be a limited number of buses (patch cords) that could be used to bridge the nodes. This is very experimental and very intriguing.
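A hypothetical sketch of such loose coupling, using plain UDP (the peer address and port are made up): instead of opening a cross-player transaction, each node announces its actions fire-and-forget, and the other node applies them whenever they arrive, accepting transient divergence between the engines.

```scala
import java.net.{DatagramPacket, DatagramSocket, InetAddress}

val sock = new DatagramSocket()
val peer = InetAddress.getByName("192.168.0.2")  // assumed address of the other node

// announce an action with no acknowledgement and no rollback
def announce(action: String): Unit = {
  val bytes = action.getBytes("UTF-8")
  sock.send(new DatagramPacket(bytes, bytes.length, peer, 57120))
}

announce("connect gen-dust -> flt-lowpass")  // the peer may apply this late, or never
```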

{date: 210114, keywords: [ _, speculative ]}

I could return to the unsolved question of what the audience “sees” of the performance. I have often hesitated about the visuality of the interface, in the sense that I was weighing the usability for the player against the aesthetic quality for the viewer. A viewer does not need the same rendering of details as the performer; it is boring to have 100% alignment. Text input has been placed at the bottom of the screen, so that it could be excluded by an analog camera filming the screen. This could be radicalised if multiple views can be connected. Why not add a dedicated audience view? It could render the action more abstract (or more concrete), more poetic. More reduced, preventing the visual sense from overtaking the attention economy and clogging the ears.

{date: 210114, keywords: [ _, speculative ]}

Two of the three videos in Configuration (2015) are based on the layout engine underlying Wolkenpumpe.

{kind: caption}

I have taken up this project again, hoping that it can produce some relevant experiences in the new Simultaneous Arrivals project, to which it seems intimately tied, even though it is not integrated into its formal structure. It means going again into the mess of the Wolkenpumpe code base; as I get older and the code bases get older, I feel a stronger resistance to “opening up” these containers with thousands and thousands of lines of code. As always, you ask yourself whether you will manage to refactor these things, or whether you should rather start from scratch. After two days of looking into texts written years ago, it seems the code is “unfreezing” and becoming manageable again. At least, I could implement a first proof of concept in which multiple views can be attached to the same SoundProcesses object (model). This is shown in a short video.

{date: 220918, keywords: [ _, simularr ]}