First result

 11 June 2020

Setting up the communication channel described in the outline requires a node to which all performers can connect. This node does not need to be in the physical space(s) involved and in this case is realized in the form of a netpd server. The netpd infrastructure, built in Pure Data, realizes state synchronization between the various connected clients. This means that rather than communicating sound, the activity of performers through a graphical user interface (GUI) is shared between participants, dramatically reducing the amount of data compared to audio (and video). In addition, netpd offers peer-to-peer file sharing and text communication through chat.
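The principle of sharing state rather than sound can be sketched as follows. This is a conceptual illustration only, not netpd's actual protocol: a central relay forwards every small parameter-change message to all connected clients, so each client holds an identical state dictionary from which its local synthesis is rendered.

```python
# Conceptual sketch (not netpd's actual protocol): instead of streaming
# audio, each client broadcasts tiny parameter-change messages, and a
# relay keeps every client's state identical.

class Relay:
    """Central node: forwards every state change to all connected clients."""
    def __init__(self):
        self.clients = []

    def connect(self, client):
        self.clients.append(client)

    def broadcast(self, key, value):
        for client in self.clients:
            client.receive(key, value)   # sender included, so all stay in sync


class Client:
    def __init__(self, relay):
        self.state = {}
        self.relay = relay
        relay.connect(self)

    def set_param(self, key, value):
        # A GUI action sends a few bytes, not an audio stream.
        self.relay.broadcast(key, value)

    def receive(self, key, value):
        self.state[key] = value          # local synth re-renders from state


relay = Relay()
a, b = Client(relay), Client(relay)
a.set_param("loop/speed", 0.5)
b.set_param("loop/sample", 2)
assert a.state == b.state == {"loop/speed": 0.5, "loop/sample": 2}
```

Because only parameter changes travel over the network, the bandwidth needed is a tiny fraction of what an audio or video stream would require.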

During a session on 11 June, a simple software instrument running on an iOS mobile device was demonstrated that realized state synchronization between the devices. In this specific case, two users engaged in a musical activity without the need to set up an audio channel between them. The software instrument itself rendered the combined sound of both performers on each mobile. The infrastructure of this setup is represented graphically below.

The basic netpd interface

The image above shows the interface of the PdParty application running the software instrument mentioned before. As mentioned, for this test two iOS devices were used, held as game controllers. The interaction was based on moving both thumbs over the surface of the phone and rotation around its x and y axes. These movements would adjust various parameters of a simple sound looping mechanism. In addition, the selector at the lower part allowed one of three different sounds to be chosen as the source for the loop.
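The mapping described above can be sketched as a single function from control inputs to loop parameters. The parameter names and ranges below are assumptions for illustration, not the instrument's actual values.

```python
# Hypothetical illustration of the control mapping described above:
# thumb positions and device tilt are scaled onto loop parameters.
# Names and ranges are assumptions, not taken from the instrument.

def map_controls(thumb_x, thumb_y, tilt_x, tilt_y, sample_index):
    """All inputs normalized to 0.0-1.0, except sample_index (0, 1 or 2)."""
    return {
        "loop/start":  thumb_x,                 # thumb: loop window position
        "loop/length": 0.05 + 0.95 * thumb_y,   # never fully zero-length
        "loop/speed":  0.25 + 3.75 * tilt_x,    # tilt: playback rate
        "loop/volume": tilt_y,
        "loop/sample": sample_index,            # selector: one of three sounds
    }

params = map_controls(0.2, 0.5, 0.5, 1.0, 1)
```

In a state-synchronized setup, a dictionary like this is all that needs to travel between the devices for both to produce the same sound.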

 

The application would not only produce sound from the performer's own interaction but also, as a secondary layer, render the sound resulting from the interaction of the other performer. Both devices, some twenty kilometers apart, in this way produced exactly the same sound. The image below shows the Pure Data patch, running on both computers, which managed the communication with the netpd server.

Improv One

 3 November 2020

On the 3rd of November, four musicians met for an improvisation session. For this session, the first version of a software application was used that served as a meeting place and exchange platform. Each performer had a copy of this software installed on their own computer; launching it would automatically connect to a server. This session was broadcast on YouTube, where it can be viewed. On 20 November another session was held with the same musicians, broadcast on the LOOS YouTube channel.

Broadcast on LOOS channel

The shared point of connection that the software provided was a canvas that would be filled square by square based on the activity of each player. Over time, as the canvas filled up, unexpected visual modifications would be applied, such as transformation rules and color changes. The canvas therefore not only served as a visual representation of each musician's activity but became a graphical tool that performers could relate to and even play with. Alongside the regular audio and visual channels, this canvas became an important characteristic of the performance.
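The canvas mechanism can be sketched in a few lines. The grid size and the transformation rule below are assumptions chosen for illustration; the actual rules used in the software are not documented here.

```python
# Minimal sketch of the shared-canvas idea: each player's activity
# claims the next free square, and once the grid is full a
# transformation pass modifies it. Grid size and the rotation rule
# are illustrative assumptions, not the software's actual behavior.

GRID = 4  # assumed size

def new_canvas():
    return [[None] * GRID for _ in range(GRID)]

def fill_next(canvas, player):
    """Claim the first empty square for `player`; return False when full."""
    for row in canvas:
        for i, cell in enumerate(row):
            if cell is None:
                row[i] = player
                return True
    return False

def transform(canvas):
    """Example transformation rule: rotate the grid a quarter turn."""
    return [list(row) for row in zip(*canvas[::-1])]

canvas = new_canvas()
players = ["A", "B", "C", "D"]
step = 0
while fill_next(canvas, players[step % 4]):
    step += 1
canvas = transform(canvas)  # applied once the canvas is full
```

The essential property is that the canvas is itself just shared state: every fill and every transformation is a small message, so all performers see an identical image at all times.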

With this performance, a shift in the focus of the research took place. Whereas until this moment the idea had been central that the software would have a role alongside (optimized) audio and video connections, it became clear that a well-defined joint activity and a reasonable-quality audio connection provided all the means to establish a sense of togetherness. The joint activity here, apart from everyone playing their instrument, could be defined as a rather reduced contribution to creating a graphical canvas, and the simultaneous interpretation of it.

 

Based on this experience, it was decided that in the next rendition of the software this aspect would play an even more prominent role. Although the first version worked quite well, it became clear that the model should be extended toward an originally imagined quality of the software: the ability of individual musicians to flexibly team up with others during an improvisation.

 

As mentioned earlier, with this version of the software two improvisations were conducted, in both cases bringing the same musicians together. In the period leading up to the first presentation, however, the software appeared to be unstable. The exact reason was never found, but at some point it started to crash regularly. Sometimes it would run fine for an hour; sometimes the first crash happened after a minute. The first broadcast reveals such a crash on my own computer right at the start. This instability led me to rewrite the entire software in Max, leaving the Pd platform altogether, at least for this project.

 

Because of the crashes, the audio recordings made locally on every computer were rendered useless. During the second session, the recording didn't start well, most likely because the message got lost. (This was solved in a later version, where buffering of the communication was implemented, as well as a confirmation on starting a recording.) Regretfully, no recordings of either session exist except for what was broadcast. Since one of the goals of the research had been to compare the two, this was not possible with this specific version.
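The later fix mentioned in parentheses combines two ideas: buffering outgoing control messages and requiring a confirmation before trusting that an action such as starting a recording has taken effect. The sketch below illustrates that pattern; it is an assumption-laden illustration, not the project's actual code.

```python
# Sketch of buffered, confirmed control messages: outgoing messages are
# kept until acknowledged, so a lost "record/start" can be resent.
# This illustrates the pattern only, not the project's actual code.

class ReliableSender:
    def __init__(self, transport):
        self.transport = transport   # callable that may drop messages
        self.pending = {}            # seq -> message, awaiting confirmation
        self.seq = 0

    def send(self, message):
        self.seq += 1
        self.pending[self.seq] = message   # buffered until acknowledged
        self.transport(self.seq, message)
        return self.seq

    def acknowledge(self, seq):
        self.pending.pop(seq, None)        # confirmed: stop retrying

    def resend_unconfirmed(self):
        for seq, message in self.pending.items():
            self.transport(seq, message)   # retry lost messages


delivered = []
drop_first = [True]

def lossy_transport(seq, message):
    if drop_first and drop_first.pop():    # simulate one lost message
        return
    delivered.append((seq, message))

sender = ReliableSender(lossy_transport)
seq = sender.send("record/start")          # lost the first time
sender.resend_unconfirmed()                # retry succeeds
sender.acknowledge(seq)
assert delivered == [(1, "record/start")] and not sender.pending
```

Without the acknowledgment, a lost start message fails silently, which is exactly the situation described above: nobody notices the recording never began until it is too late.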

The image above shows the software during a testing session. Sadly, the performance that was scheduled as part of a public presentation of this research didn't happen due to technical issues. The interface, however, gives a clear overview of this version of the software which, apart from employing the active navigation of the canvas, came with many improvements.

 

First of all, the entire software had been rewritten for the Max platform, dropping the Pure Data version altogether. The latter introduced too many limitations when working with graphical aspects. Moreover, by rewriting the foundation for server interaction in NodeJS (the server itself still being netpd), the communication became more stable and error-free.

 

Another major difference was the abolition of Zoom as the platform for audio and video communication. As can be seen in the image above, the video communication was built into the software. An extremely reduced version of each camera image would be communicated to all participants. This reduction meant that very little data was needed to construct the image, limiting the bandwidth requirements.
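The kind of reduction described can be sketched as a simple block-averaging downsample. The exact resolution and encoding used in the software are not documented here; the numbers below are illustrative assumptions.

```python
# Sketch of the video reduction described above: a camera frame is
# downsampled to a tiny grayscale grid, so each update is a few dozen
# bytes instead of a full video frame. Resolution and encoding are
# assumptions for illustration.

def reduce_frame(frame, out_w=16, out_h=12):
    """Downsample a grayscale frame (list of rows of 0-255 ints)
    by averaging blocks, yielding an out_w x out_h grid."""
    in_h, in_w = len(frame), len(frame[0])
    bh, bw = in_h // out_h, in_w // out_w
    reduced = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            block = [frame[y * bh + j][x * bw + i]
                     for j in range(bh) for i in range(bw)]
            row.append(sum(block) // len(block))
        reduced.append(row)
    return reduced

# A 640x480 frame (307,200 pixels) becomes 16x12 = 192 values per update.
frame = [[(x + y) % 256 for x in range(640)] for y in range(480)]
tiny = reduce_frame(frame)
```

At one byte per pixel, such an update is roughly 1,600 times smaller than the raw frame, which is what makes sending it to all participants alongside audio feasible on modest connections.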

 

The audio communication built into this version of the software was provided by SonoBus, "an easy to use application for streaming high-quality, low-latency peer-to-peer audio between devices over the internet or a local network." SonoBus integrated seamlessly with the other functionality in the software.

Improv Two

9 February 2021
