Research Steps

Platforms and Software

Choice of Platforms

 

In the research proposal, the infrastructure is described as mobile devices connecting to a hub (a server) placed in the same space as the musicians, which in turn connects to other hubs, and thereby to other musicians, elsewhere. After having contemplated developing software running in a browser on mobile devices, which would have provided a platform- and software-independent experience, I opted for a different route. Most importantly, browser-run software had the disadvantage of not allowing for sound input. In addition, my experience in this field was too limited, with the risk of either not being able to achieve my goals or having to hire a programmer.

 

Because I felt comfortable with Pure Data (Pd), and because a platform for executing Pd programs on mobile phones, PdParty, already existed, I decided to opt for this framework. It would allow me to be in charge of both the design of the communication protocol and the development of the applications to be run on mobile devices and/or laptops.

 

The biggest challenge was going to be the creation of the hub function, the server, which I had imagined developing in Node.js. This platform, intended for server-client connections and often summarized as a “JavaScript everywhere” paradigm, had recently become available as an extension to the Max programming environment. This combination, called ‘Node for Max’, appeared to be a good candidate, and I had meanwhile gained some experience with it in smaller projects.

 

Due to the Covid-19 outbreak, the idea of space had to be re-evaluated: for a while, it was not going to be possible to gather musicians in the same physical space. Naturally, the research question was adjusted to consider all participants being in different locations. At that same time a service was introduced to me by the name of netpd. This framework has been developed “for making music collaboratively and in real-time” and is “written in Pure Data (Pd)”.

The main feature of netpd is termed state synchronicity. In netpd’s intended use, the activity of musicians, all making use of the same software platform, is not communicated through an audio channel. Instead, sound is generated locally by the software, and the states of the software instruments are synchronized through data exchange. The benefit is that only a fraction of the data that would normally be required for audio is needed to maintain this state synchronicity.
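To illustrate the principle, here is a minimal sketch in Python (the actual software is written in Pd, and the message format and names below are hypothetical): each client keeps a local copy of every instrument’s parameters, sends small parameter messages when the local musician acts, and applies incoming messages so that its local synthesis stays in sync.

```python
# Sketch of state synchronicity (hypothetical message format, not netpd's actual API).
# Each client holds the full state of every instrument and synthesizes sound locally;
# only small parameter messages travel over the network.

state = {}  # instrument id -> {parameter name: value}

def local_change(instrument, param, value, send):
    """The local musician changes a parameter: update the state and broadcast it."""
    state.setdefault(instrument, {})[param] = value
    send((instrument, param, value))  # a few bytes instead of an audio stream

def remote_change(message):
    """A message from another client: apply it so local synthesis stays in sync."""
    instrument, param, value = message
    state.setdefault(instrument, {})[param] = value
```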

 

The example of netpd made clear that the original idea of storing data on servers, accessible to musicians and other servers, was overly complex. Adopting the idea of state synchronicity, I decided to follow a development path based on each client locally storing the state of all participants, through simple data exchange. Moreover, it allowed me to make use of an existing server application, the netpd-server, which I installed on a dedicated Linux computer at a fixed IP address.
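As far as the server is concerned, the essential job is to forward each client’s messages to all other connected clients. A minimal sketch of such a relay, in Python with asyncio (the actual netpd-server is written in Pd and uses its own protocol; the port number below is hypothetical):

```python
import asyncio

clients = set()  # stream writers of all connected clients

async def handle(reader, writer):
    """Forward every line-delimited message from one client to all the others."""
    clients.add(writer)
    try:
        while line := await reader.readline():  # b'' signals the client disconnected
            for other in list(clients):
                if other is not writer:
                    other.write(line)
                    await other.drain()
    finally:
        clients.discard(writer)
        writer.close()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 9000)  # hypothetical port
    async with server:
        await server.serve_forever()

asyncio.run(main())
```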

 

 

First Challenge

 

In order to get a better understanding of the concept of state synchronicity, the first challenge I set myself was to have two mobile devices connect to each other and create a joint activity. Following the original idea of netpd, the two devices would run software that allowed a musician to create sound output by interacting with the device, while at the same time making audible what the other musician was doing. Each mobile device therefore generated two audio streams: one controlled locally, one controlled remotely.

 

Read here the report on this implementation.

Implementation of the first challenge: software running on a mobile device that connects with another, making the joint activity audible.

Interaction Scheme

 

When it became clear that the software to be developed was going to run on laptops (and not on mobile devices), I came to understand that it had to meet one important requirement: it had to be usable without having to touch it too much, in order not to interfere with the music making. Interacting with a mobile device through a touch interface would have been acceptable, but the use of a pointer on a computer would simply be too clumsy.

 

Having pondered this limitation for a while, I decided that the music making itself could become the means of interaction. To achieve that, the software had to be able to ‘listen’ to the musician, derive data from that listening process, and turn that data into a useful model of interaction. In other words, control would be derived from audio analysis.
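A sketch of the kind of analysis involved (in Pd one would typically use an envelope follower such as [env~]; the threshold below is a hypothetical value):

```python
import math

THRESHOLD_DB = -40.0  # hypothetical threshold separating 'silent' from 'playing'

def block_loudness_db(samples):
    """RMS loudness of one block of audio samples, in dB full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-10))  # clamp to avoid log(0)

def playing(samples):
    """Binary control decision derived from the audio input."""
    return block_loudness_db(samples) > THRESHOLD_DB
```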

 

In addition, the environment that was going to be controlled needed a clear graphical component, in order to allow musicians to obtain useful information from the user interface. Moreover, the actions of all other participants would have to be represented graphically in that interface. These two requirements, not depending on physical interaction and offering clear graphics, led to the decision to incorporate a canvas on which the musicians would be able to paint.

 

Since development at that time was still taking place in Pd, the options for creating such a canvas were quite limited due to the scarcity of graphical interface components, one of the drawbacks of this software. From this limitation the idea was born to use a matrix consisting of squares that could be painted either black or white, depending on a measurement of the loudness of the audio input. No audio, or soft levels, would leave a square white; a louder level would paint it black. Step by step, from top to bottom, the squares in the leftmost column of the matrix would be painted, and after the last square had been colored, the entire column would be copied to the one next to it, after which the process would repeat itself.
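A sketch of this filling procedure (the matrix dimensions are hypothetical; ‘playing’ is the binary decision derived from the loudness measurement above):

```python
ROWS, COLS = 16, 12  # hypothetical matrix dimensions
grid = [[0] * COLS for _ in range(ROWS)]  # 0 = white, 1 = black
cursor = 0  # index of the next square to paint in the leftmost column

def step(playing):
    """Paint one square in the leftmost column; shift columns when it is full."""
    global cursor
    grid[cursor][0] = 1 if playing else 0
    cursor += 1
    if cursor == ROWS:  # column finished: copy each column to its neighbour
        cursor = 0
        for col in range(COLS - 1, 0, -1):
            for row in range(ROWS):
                grid[row][col] = grid[row][col - 1]
```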

 

On the one hand, this created a situation in which musicians had some very basic control over creating graphical patterns based on their own activity: making sound or being silent. On the other hand, the canvases of the various musicians combined would build a graphical representation of everyone’s contribution, and therefore of the musical activity as a whole. It could be said that all musicians were together involved in painting a single canvas, although the control of each musician was limited to only a portion of that canvas.

Apart from leaving graphical traces of the music, the canvas could also turn into a score, or a graphical guide for musical activity. It could be imagined that, at least to some degree, the graphical representation would influence each musician’s decisions. For instance, the continuous playing of one musician would leave a fully black canvas, exposing to all the others how much space was being taken. Better still, the continuously changing canvas could invite a search to coincide or synchronize with someone else.

 

This was probably the most important takeaway of this experiment, and I decided that this aspect would be further developed in the next software version. To enrich the canvas interaction in the current version, some enhancements were made to how the canvas would fill. First of all, cellular automaton rules were applied, which allowed for creating variations on how one column is copied to its adjacent one. Furthermore, coloring rules were introduced, making use of the rather limited and functionally oriented set of default colors in Pd.
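For example, the copy from one column to the next can be passed through an elementary cellular automaton rule, so that the copy becomes a transformation rather than a duplicate (Rule 90 below is an illustrative choice, not necessarily one of the rules actually used):

```python
RULE = 90  # illustrative choice out of the 256 elementary cellular automaton rules

def next_column(column):
    """Derive the adjacent column from a finished one using an elementary CA rule."""
    n = len(column)
    new = []
    for i in range(n):
        # read the 3-cell neighbourhood (wrapping at the edges) as a number 0..7
        pattern = (column[(i - 1) % n] << 2) | (column[i] << 1) | column[(i + 1) % n]
        new.append((RULE >> pattern) & 1)  # look up the corresponding bit of the rule
    return new
```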

 

Read here the report on this version.

An example of the canvas actively being filled during a performance. The portion representing each musician can clearly be seen, as well as the corresponding part of the canvas. A coloring rule is being applied in this example. The audio levels are displayed just left of the canvas, showing activity for Sean and Ranjith at this specific moment. The image is taken from a video stream, hence the poor quality. Barely visible is an arrow at the side of the left column, pointing to the square that is about to be colored.

Basic Navigation Scheme

 

As mentioned before, being involved in a joint improvisation while at the same time leaving traces of musical activity on a canvas became the central idea of the next version of the software. One of the major drawbacks of the previous version was that the canvas of each performer remained separate from those of the others. Although together they created some sense of togetherness, it was clear that this aspect needed to be developed. Another disadvantage was that each performer witnessed the creation of the canvas in a different way. The software was set up so that, upon launch, the improvisor who started it would always be listed first on their own computer, followed by the others who had already joined or were about to do so. The order of performers was therefore listed differently for each of them.

 

It was decided that all performers would join on a single canvas. On this canvas, basic navigation would give them the ability to traverse it and, as a consequence, the opportunity to encounter others. Initially, each performer was given a random position on the canvas, recognisable as an indexed dot, and the act of playing, or of being silent instead, would steer their simple avatar. By synchronising the activity on the screens of all players involved, each would witness the construction of the same coloured canvas.
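A sketch of this navigation logic (the headings and wrapping behaviour are assumptions for illustration; positions would be kept in sync through the same message exchange described earlier):

```python
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # up, right, down, left

class Avatar:
    """A performer's position on the shared canvas (hypothetical representation)."""

    def __init__(self, x, y):
        self.x, self.y, self.heading = x, y, 1  # start facing right

    def step(self, playing, width, height):
        """Traverse the canvas one cell at a time while the performer is silent."""
        if not playing:
            dx, dy = DIRS[self.heading]
            self.x = (self.x + dx) % width  # wrap around the canvas edges
            self.y = (self.y + dy) % height
```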

 

The colouring scheme in this rendition of the software was a procedure developed from the description of Langton’s Ant, “a two-dimensional universal Turing machine with a very simple set of rules but complex emergent behaviour,” according to Wikipedia. I had come across this pattern while preparing for programming classes, and once I started working with it, I realised its potential for generating complex structures. In short, each of the performers would become an ant, dragging a bucket of paint along with it.
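A sketch of a Langton’s Ant step as applied per performer while they play (binary colours here for simplicity; the actual version worked with multiple colours, one ‘bucket’ per performer):

```python
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # up, right, down, left

def ant_step(grid, ant):
    """One Langton's Ant step: turn by the cell colour, flip it, move forward."""
    cell = grid[ant["y"]][ant["x"]]
    # on a white cell turn right, on a black cell turn left
    ant["heading"] = (ant["heading"] + (1 if cell == 0 else 3)) % 4
    grid[ant["y"]][ant["x"]] = 1 - cell  # 'drop paint': flip the cell colour
    dx, dy = DIRS[ant["heading"]]
    ant["x"] = (ant["x"] + dx) % len(grid[0])
    ant["y"] = (ant["y"] + dy) % len(grid)
```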

Three performers on a canvas. When playing, their role is to paint. When silent, it is possible to traverse the canvas and have an encounter. A position on the canvas becomes increasingly coloured with repeated visits from performers.

next: Results