Daniel Mayer

 

Algorithms in Sound Synthesis, Processing, and Composition:

a Dialectic Game

The Working Process as Dialectic Game


A work of art is the result of a working process, and I'm convinced that its aesthetic quality, however hard to grasp admittedly, rises and falls with the intensity and quality of the maturation – of work and artist – that happens along this process. But how could aesthetic quality and maturation be described? To approach this, I'd like to go back to a thought that lies entirely outside the sphere of computer algorithms. Theodor W. Adorno drafted it brilliantly: new works of art – at least in the context of the Western tradition – will always be understood and judged in relation to their ancestors. In this way, the formal languages of art are a narrative of power and rebellion, linked to the conditions of real power in subtle ways. Searching for the new is a must, but how the new is new, namely compared to the old – that's the crucial point. Quality shows up in the sensitivity for this dialectic game – and it certainly has to do with the subconscious.

In practice, an artistic workflow probably consists of thousands of decisions, many of which are a result of the artist's relation to existing and remembered works. A workflow with automated algorithms, though, is a real novelty, as it can dramatically accelerate the game of dialectic decision-making. The constructive attitude is a starting point to get things running; the habit of experimentation produces results that question the aesthetic conscience – the subject has to react and deal with them, maybe with new intermediate answers of a constructive type. Hence, construction and experimentation are modes of calibrating the subject's positioning towards the past via algorithms in the fields of sound production and organization.

The graphic sketches the idea of a balanced working process as a phase diagram: the artist oscillates between the four quadrants following a very strange attractor; stochastic dialectic impulses trigger new quasi-periodic states. Continuity is a simplification – jumps will commonly occur.

For the sake of clarity, I assume only one single human agent. Collaborative working processes induce additional feedback between the acting persons. However, the principle of the subject's dialectic reaction on the basis of implicit and explicit historical knowledge remains. I've also omitted the treatment of space. Experimentation and feedback are relevant in this area as well, but probably less so the historic-dialectic aspects (there might be exceptions: circular movements are a stereotype one might want to avoid – or to design in a more refined way).


{kind: paragraph, persons: Theodor W. Adorno, keywords: [workflow, algorithms, synthesis, memory, experimentation, construction]}

The ubiquity and versatility of computer algorithms in the field of sound and music – continuing and extending the historical tradition of rule-based musical composition – make it necessary to reflect on their role, especially for artists using them in their daily practice. Although this seems self-evident, it's possible, and widespread, to use algorithms naively, also in various one-dimensional ways. I have done so myself in the past, and I'd like to explain my revised views on the subject by suggesting and discussing a categorization.

The following analysis emerges from observations of my artistic practice in the fields of electro-acoustic as well as instrumental composition. Certainly, it is not valid for all possible usages of algorithms regarding sound and music; e.g., I have omitted the role of algorithms in performance. However, I tend to think that the outlined abstractions of human-machine interaction can be applied to other working processes as well, even beyond the domain of audio – I'm looking forward to discussing this with other participants of the ALMAT symposium at the upcoming round table.

First, there are two poles of algorithmic functionality: the production and the organization of sound. Both can be intertwined, but they can also be separated, which, I think, can be more problematic in aesthetic terms. Secondly, there are two poles of attitude when working with algorithms: construction and experimentation; both can apply to sound organization and production. I will argue that a successful workflow will mostly include a meaningful balance of all these elements, whereas an over-emphasis of either pole risks leading into an artistic cul-de-sac. In this sense, a dialectic relation between the poles of functionality and the poles of attitude is welcome, but this is not the only kind of dialectics I'm arguing for: I'd like to show how, via repeated judgments, especially the historic-dialectic aspect comes into play and influences, maybe even dominates, a workflow which, on the surface, might appear to be mainly driven by an abstract – the algorithmic – paradigm. To a large extent, it's the unconscious, sedimented knowledge that influences the individual's relation to new material – which can hardly ever be new in any sense.

At this point, I should say that I'm talking about my practice of programming and adjusting algorithms. I've observed that many colleagues act similarly, though I have also seen people taking algorithmically generated output for granted, thus judging the algorithm by its abstract value (beauty?) and not by its materialized results. I have to admit that I'd feel extremely uncomfortable with such an approach.



Furthermore, it's worth mentioning that using algorithms doesn't necessarily require programming them oneself: algorithms can be chosen and combined in a higher-level syntax, graphically or not, and the future will bring more such tools. However, I think that being able to access at least a minimum of lower-level syntax ultimately gives the artist more choices.



To exemplify the aforementioned categories, I will finally include a current snapshot of my artistic work. Right now – summer 2020 – I'm at the starting point of composing new pieces with totally synthesized material, something I haven't done for a long time, during which I preferred synthesis based on recorded samples. I'm intensively experimenting with three methods that are all rather new to me, and I'm not sure what I will decide on in the end. However, I will comment on the experiences I have made so far. That might clarify the abstractions of the described determining forces – which are a result of similar experiences in the past.



As I regard the dialectic reaction as the driving force behind the algorithmic working process, I'd like to start here.


{kind: paragraph, function: introduction, keywords: [synthesis, processing, sound, production, organization, construction, experimentation, material, unconscious, dialectic]}

The Attitude of Experimentation


Unintentional experimentation is one of the most fascinating and satisfying ways to spend time with the computer. The surprises of unexpected sounds, surreal gestures, and formal developments can be mind-blowing. In such moments we enjoy the flow, but after honest reflection, we must confess that experimentation is a lucky fruit of construction, its dialectic twin. There is no algorithm to set in motion that isn't, at least to a minimal degree, made to perform something, whatever it is. So it's the balance of construction and experimentation, in other words, of intentional and unintentional working, that is essential for achieving convincing results in the fields of sound and structure. It's important to note that, although experimentation builds upon efforts of previous construction, it doesn't only happen after it. Typically, construction and experimentation alternate – the latter can trigger ideas to extend existing algorithms in a new direction. The decision to return to a constructive attitude will happen if the results of experimentation are not satisfying enough to keep them, and the reason is most likely (I suppose virtually always) a conscious or unconscious comparison with known structures and sounds.

Now, how can a balance in the alternation of construction and experimentation appear? That's probably very personal and depends on the project. It might be more fruitful to describe possible pitfalls and imbalances: first, a tunnel vision of an algorithm can block any progress in a workflow. The belief in a specific procedure can become quasi-religious and thereby hinder any alteration or abandonment – if at the same time the experimental results are dissatisfying, we have a classical appetence-aversion conflict. Secondly, the alternation between the two attitudes can also happen too fast: if there's not enough time for experimentation with a specific algorithm, it's unlikely that satisfying results – ones that survive the dialectic game – can emerge. It's essential to give the process enough time and to step back repeatedly from the occupation with the work to avoid the extremes. More than once, I have been experimenting for several consecutive days with a specific algorithm, and after a break of some days, a quick re-evaluation of the sounding results immediately showed the insufficiency of the approach. It has also happened that overlooked experiments of a late-night computer session turned out to be much more promising than thought at first.

Further, random changes and even mistakes are a vital part of experimentation. If they lead to new results that are pleasing, there's no reason to reject chance finds. On the contrary, it's a good reason to continue investigating the modified procedures. This strategy bears a structural similarity to science and technology, where, time and again, errors lead to discoveries.

Not least, we should reflect on the combination of algorithmic and non-algorithmic strategies. Again, the balance of contrasting working modes is essential, and reciprocal interference should rather be welcomed than eliminated. For example, it's rather common to arrange algorithmically generated layers of sound in a digital audio workstation. One might take these layers for granted and try to improve the editing and mixing. However, it could turn out that certain timbral qualities are awkward to combine. Then, it might be better to go back to the algorithmic working mode and change the parameters or even the design of the algorithm.

 

{kind: paragraph, keywords: [synthesis, processing, sound, algorithm, experimentation, intention, workflow, attitude, procedural, composition]}

The Organization of Sound


This chapter refers to the meso- and meta-level of the time scale. Timbre is subject to a combinatorial directive – algorithms can structure or sequence arbitrary sound events, independent of their material characteristics. With the computer, this opens the possibility of experimental design of gestures, phrases, and even overall musical forms, which is a great achievement. However, there is a specific danger: designing meso- and meta-structures can fail by neglecting the underlying sound material, which might or might not work in this or that structural coat.

Algorithmic attempts to organize sound are not bound to the computer: rule-based composition has been a topic for centuries, though with changing emphasis. It was less relevant in the classical-romantic era, when the role model of the individual composer and – related – the idea of (self-)expression emerged and finally became dominant. A revival of formalization happened in the disruptive period of the early 20th century. Different variants of dodecaphonic techniques (Schönberg, Berg, Webern, Hauer) are the most famous examples; at the same time, other inventions came into being (e.g., synthetic chords defined by Scriabin and subsequent composers of the Russian avant-garde). In this time of change, the paradigms of rule and expression fought each other, paradoxically even within the world of rules itself – think of Schönberg's still "romantic" usage of 12-tone rows in contrast to Josef Matthias Hauer's anti-Wagnerian musical machinery. No matter how composers use rules in their workflows – with the intention of expression or its affirmative refusal – formal frameworks, often designed very individually, became increasingly important during the 20th century and especially after World War II. The rise of rules in Western art music starts with the dissolution of a mandatory musical syntax, especially the disintegration of functional harmony around the year 1900, and it seems that these developments are strongly related: the design of rules replaces their general validity.

The invention of the computer happened when music history had already been moving in the direction of formalization. Nevertheless, a new quality came into play through the speed of calculation and the possibility of rapid feedback, even in the non-electronic domain. However imperfect an instrumental simulation of a score might be, the sheer possibility of checking calculated compound gestures or long-term developments within a piece can change the compositional workflow dramatically.

It's worth regarding these feedback loops in more detail: results of a calculated organization of sound – electronic or not – are judged. Changes to the algorithm might follow, to push things in a new direction – or to push nothing at all, following the spirit of intention-free experimentation. However, judgments happen in every cycle of the loop. Now, what constitutes such judgments? They can refer to different properties of the calculated output: e.g., harmonic, melodic, rhythmic, contrapuntal, timbral, or meta-properties like formal development. Depending on the reference, the historic-dialectic aspect is of different relevance. Probably, it is most influential in the areas of harmony or rhythm. Think of an algorithm that tends to produce harmonic fields of major triads or even traditional cadences. It is likely that an algorithmic composer would change the algorithm to avoid this – or, embracing the occasion, play with it so that the distance to the known model produces a welcome tension. In any case, it's the mighty ghost of functional harmony that determines the individual reaction, which can vary between allergy and ironic quasi-citation. Regarding the property of formal development, the historic-dialectic aspect seems less relevant: transitions between different states, accelerations as well as decelerations, changes of density, the sequencing of contrasts, etc. are so ubiquitous and unspecific that a historic-dialectic reaction cannot immediately be supposed. A decision in such cases will be less predetermined, which doesn't mean that things are easy: the composer has to develop a feeling for the balance of the material dynamics and to trust his/her perception.

 

{kind: paragraph, keywords: [synthesis, processing, sound, organization, form, rules, algorithmic, computer, calculation, simulation, workflow, intention, judgement, experimentation, history, dialectic, harmony, rhythm, material]}

Links:


  1. Personal website
  2. Credo, shortlong, two programmatic texts
  3. miSCellaneous lib @ GitHub
  4. miSCellaneous lib @ home
  5. Description Fb1_ODE
  6. Description ZeroXBufRd
  7. Description of ZeroXBufRd extension
  8. Download TU Berlin workshop material
 
{kind: reference, keywords: [reference, SuperCollider, library, synthesis, techniques]}


The second synthesis method I'm experimenting with refers to a class that I developed for reading half wavesets between zero crossings (ZeroXBufRd). By analyzing the zero crossings of the slope, one can also use it to move forward and backward between local minima and maxima (or turning points). Using a random walk for the movement, we get a concatenation of symmetric segments, a particular kind of buffer modulation (or buffer scratching). Technically, this is a form of waveshaping. I'm applying this method to a synthesized source of quickly changing sounds. For experimentation, I'm switching between choosing new sources and adjusting the subsequent processing. Many years ago, though with granulation, I already used such a ping-pong strategy and called it double synthesis. Reading this in an old program text of mine might have been the trigger for the new experiments.
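As a language-neutral illustration of the core idea – not the actual ZeroXBufRd implementation – here is a minimal Python sketch: a signal is segmented at the zero crossings of its slope (the local extrema), and a random walk over the segment boundaries concatenates the segments, reading backwards-travelled segments in reverse. The test signal and walk length are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Source buffer: a quickly varying test signal (stand-in for synthesized material).
n = 2000
t = np.linspace(0, 1, n)
buf = np.sin(2 * np.pi * 40 * t) * np.sin(2 * np.pi * 7 * t)

# Segment boundaries: zero crossings of the slope, i.e. local minima and maxima.
slope = np.diff(buf)
extrema = np.where(np.diff(np.sign(slope)) != 0)[0] + 1
bounds = np.concatenate(([0], extrema, [n - 1]))

# Random walk over the boundary indices; moving backward reads the traversed
# half waveset in reverse, yielding the concatenation of symmetric segments.
pos, out = len(bounds) // 2, []
for _ in range(200):
    step = rng.choice([-1, 1])
    nxt = min(max(pos + step, 0), len(bounds) - 2)
    lo = min(pos, nxt)
    seg = buf[bounds[lo]:bounds[lo + 1]]
    out.append(seg if nxt >= pos else seg[::-1])
    pos = nxt
result = np.concatenate(out)
```

Since every output sample is drawn from the source buffer, the result keeps the source's local timbre while scrambling its temporal order – the "buffer scratching" described above.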

 

I'm attracted to this model in a similar way as to the previous one: the fissured quality of the sounds is appealing. The combination of two options – producing different sources versus using different processing parameters – allows for a large variety of results, which is nice with regard to formal planning.

 

 

 


{kind: paragraph, keywords: [synthesis, sound, algorithm, experimentation, composition, techniques, buffer, scratching, waveshaping, zero crossing, SuperCollider, UGen]}

The third track that I'm following: a year ago, my colleague David Pirrò gave me a nudge to dive into the field of using (systems of) ordinary differential equations (ODEs) for synthesis – an accidental coincidence of implementations and thoughts. It became clear to me that I could use my previous implementation of single-sample feedback (the class Fb1) to implement a class for the integration of ODEs (Fb1_ODE) – David Pirrò helped me with valuable tips on the numerical methods. I was happy to manage the implementation successfully and to audify some well-known models from physics, population dynamics, and electrical engineering. The literature has recommended some of these models for sound synthesis over the last decades, but so far, I have been unable to find ones that gave me satisfying sounds. Meanwhile, I found a different type of equation that turned out to be promising for me:


y'(t) = g(y) * sin(t * f0(src)) * sin(t * f1(src)) * sin(t * f2(src))


The system is non-autonomous, as it also explicitly depends on the time t. src denotes an external multichannel exciter; so far, I have used sine waves. The fi represent linear functions. I won't go into the details of possible variations, but I would like to verbalize the sounding results: what's most exciting to me at the moment is the kind of emerging spectral developments. The linear functions cause an ongoing spectral enrichment, starting from sine mixtures and ending up with dense, noisy textures. There are similarities to FM and additive synthesis, but the global tendencies of spectral development would probably be hard to achieve with these methods.
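To make the sample-by-sample integration concrete, here is a rough forward-Euler sketch of such a non-autonomous system in Python (Fb1_ODE works along these lines, though with more refined numerical methods). The choice g(y) = 1 - y^2, the coefficients of the linear functions fi, and the 5 Hz sine exciter are purely illustrative assumptions, not the parameters of the actual pieces.

```python
import math

sr = 8000                  # sample rate (illustrative)
dt = 1.0 / sr

g = lambda y: 1.0 - y * y  # assumed bounding nonlinearity g(y)

# Three linear functions f_i(src) = a * src + b with assumed coefficients.
fs = [lambda s, a=a, b=b: a * s + b
      for a, b in [(200.0, 300.0), (150.0, 420.0), (90.0, 510.0)]]

y, t, out = 0.0, 0.0, []
for _ in range(sr):        # integrate one second of audio
    src = math.sin(2 * math.pi * 5.0 * t)    # slow sine exciter
    dy = g(y)
    for f in fs:
        dy *= math.sin(t * f(src))           # the sin(t * f_i(src)) factors
    y += dt * dy           # forward Euler step: y(t+dt) = y(t) + dt * y'(t)
    t += dt
    out.append(y)
```

Because the fi scale with t inside the sine arguments, the instantaneous frequencies of the factors drift upward over time – a simple way to see where the ongoing spectral enrichment comes from.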

 

 

 

 

{kind: paragraph, persons: David Pirrò, keywords: [synthesis, sound, algorithm, experimentation, composition, techniques, ode, feedback, differential equation]}

I have ideas in mind about assembling several such developments in a digital audio workstation. Furthermore, I have identified some parameter groups that would produce contrasting families of them. I've also identified the parameter relations responsible for approaching harmonic content, though I'm still not sure about the sounds. Although I like their quality and especially their development over time, the similarity to well-known additive and FM textures might be critical. Also, there's the proximity to the drone genre, and I wouldn't like to fall into particular stereotypes. Anyway, I will continue experimenting, and I'm curious about what will happen.

Gérard Grisey: " ... our model is sound, not literature, sound not mathematics, sound not theatre, visual arts, quantum physics, geology, astrology, or acupuncture."

 

{kind: paragraph, keywords: [assemblage, DAW, stereotypes, experimentation, sound]}

Snapshot: Current Synthesis Experiments


During recent years, I've collected some ideas for unusual sound synthesis algorithms. Some of them are related to classes that I've programmed for the miSCellaneous library; others are not, though they can mostly be implemented with just a few lines of code. I've taught this material as "Unorthodox Synthesis", first in a workshop format at TU Berlin, then as a lecture at IEM Graz.

Currently, I'm experimenting with different procedures, and I intend to produce some new works based on pure synthesis. Right now, I have ended up with three favorite synthesis options. I always start with sound experiments first. I abandoned the habit of starting with considerations that lie outside algorithmic experimentation a long time ago. I've also developed an allergy to concepts that come from the extra-musical world.

The first approach stems from a procedure I named "buffer rewriting" – I'm not aware of any commonly used name for it. The idea is very simple: take a buffer and read and write data to it at the same time and at different rates. Under "normal" conditions – for another purpose – one would have to take care that reading and writing don't mess up the expected order of stored data. Here, it's the exact opposite: the reading scrambles and omits written data; the result is a kind of totally messed-up delay line. The algorithm can be fed with any audio signal and extended with overdub, feedback, and other options. Still, it is easy to realize with SC's classes for buffer reading and writing (BufRd, BufWr).
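In SC this is a few lines with BufRd and BufWr; as a language-neutral sketch of the core idea, the following Python loop models a write head and a read head moving through the same circular buffer at mismatched rates, so that reading scrambles and omits the written data. Buffer size, rates, and the input signal are arbitrary illustrative choices.

```python
import math

size = 1024
buf = [0.0] * size
write_rate, read_rate = 1.0, 1.37   # mismatched rates cause the scrambling
wpos, rpos, out = 0.0, 0.0, []

for i in range(4096):
    src = math.sin(2 * math.pi * 220.0 * i / 44100.0)  # any input signal works
    buf[int(wpos) % size] = src         # write head stores the current sample
    out.append(buf[int(rpos) % size])   # read head picks up reordered data
    wpos += write_rate
    rpos += read_rate
```

An overdub extension would mix instead of overwrite, e.g. `buf[w] = 0.5 * buf[w] + src`; feedback would route part of `out` back into `src`.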

 

At the moment of writing these lines (as said, such judgments can change), the quality of the synthesized sounds is very appealing to me. They are hefty and lack the typical purity of many synthetic sounds, which I often dislike. The variations, even from tiny parameter changes, can be surprising. That's usually a good reason for me to decide on a material, as I'm optimistic about getting enough variation for a large structure. What I also like very much is that I am not reminded of other synthesis procedures – if I heard it for the first time, I'd probably have no idea about the underlying algorithms.


Buffer rewriting, extended with overdub, fed with decayed impulses:




{kind: paragraph, keywords: [synthesis, sound, algorithm, experimentation, composition, techniques, buffer, writing, rewriting, SuperCollider, UGen]}

The Attitude of Construction


Now let's switch from the algorithm's material functionality – or in other words: its type of content – to the meta-level of the subject's attitude towards algorithms, which can vary between the poles of construction and experimentation. Again I should emphasize that I'm inspecting my artistic practice and not following any concept, e.g., of human-machine interaction. There are psychologically informed theories in this area, but the embroilment resulting from historic-dialectic aspects is probably too specific to be relevant for psychologists. Most likely, it wouldn't fit into empirical metrics either.

Concerning the naming: intention could be an alternative to the term construction. The design of algorithms usually requires having a clear purpose in mind, be it a sort of structure-building or the production of formal developments (organizational functionality), or a specific synthesis procedure (sound-production-oriented functionality). The purpose can also be cloudier; however, there remains an intention to be realized in a programming language through rational work. This labor, though, isn't enough: the emergence of artistic value then requires practical experimentation – under laissez-faire conditions and with a mind open to the dialectic game with the ghosts of the past.

A typical example of a constructive attitude is the writing of classes, which are a fundamental building block of many programming languages. Some serve a clearly defined aim – e.g., a GUI abstraction – while others are so general that the specific intention might be hard to verbalize for non-programmers. However, most experienced programmers would agree that writing a class is an act of deliberate intention and far away from, let's say, half-conscious dreaming or écriture automatique. But the latter practices – in their digital form – might in turn inspire the design of new classes. As peculiarities of an attitude of experimentation (or laissez-faire), they could even be regarded as targets of class abstraction.
 
When I write classes – e.g., for my SuperCollider library miSCellaneous – I aim to add implementations that are not yet part of SC's main library and that I regard as useful to have. But this kind of desired usefulness mostly differs from a solely technical sort, e.g., an archiving optimization. The undertaking is instead often closely linked to historic-dialectic aspects and the hope for a surplus in usage under experimental conditions. Some time ago, I implemented Iannis Xenakis' sieves in original and extended (live-controllable) form. I was aware of the fact that they can produce structures without obvious repetition, but with a controllable degree of self-similarity. In this way, I thought, they must be useful, e.g., for granular rhythms (though Xenakis tended to use sieves in other ways) and capable of producing structures that are fundamentally different from random or widely used combinatorics. This difference was the dialectic aim I had in mind, and it has turned out to be promising in experiments so far. Such granular rhythms are also an example of the inseparability of algorithms between the domains of sound organization and sound production: obviously, they belong to both.
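As a minimal, language-neutral sketch of the sieve principle (not my live-controllable SC implementation), a Python fragment: residue classes (modulus, residue) are combined by union and intersection, and the intervals between the resulting points could, e.g., drive granular rhythms. The particular sieve below is an arbitrary example.

```python
def residue_class(modulus, residue, limit):
    """Integers n < limit with n % modulus == residue."""
    return {n for n in range(limit) if n % modulus == residue}

limit = 40
# Example sieve: (3, 1) union ((4, 0) intersection (2, 0))
sieve = sorted(residue_class(3, 1, limit)
               | (residue_class(4, 0, limit) & residue_class(2, 0, limit)))

# The interval sequence between sieve points: structured, but without
# obvious repetition - usable, e.g., as durations of a granular rhythm.
intervals = [b - a for a, b in zip(sieve, sieve[1:])]
```

Changing the moduli and the logical combination tunes the degree of self-similarity – the controllable parameter mentioned above.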

 

{kind: paragraph, keywords: [synthesis, processing, sound, algorithm, functionality, attitude, organization, experimentation, composition, intention, construction, dialectic, abstraction]}

The Production of Sound


This chapter refers to the micro-level of the time scale, and synthesis will always include processing as an option. As the topic is extensive, there is a habit of regarding it as completely separated from questions of formal organization. Indeed, it's possible to arrange or organize synthesized material without algorithms at all, as well as with algorithms that have nothing to do with the synthesis process. On the other hand, there are synthesis algorithms that have a strong tendency to project beyond the micro-level, for example, transition effects from timbre to a perceptible rhythm. It might also be a question of personal taste whether one prefers a bottom-up (sound production first) or a top-down (organization first) workflow. I tend to believe that in the age of timbral malleability, it is an advantage to favor bottom-up – following formal tendencies that grow out of the sounding material – and to be careful with a strict separation of sound organization and sound production. In any case, focusing only on one algorithmic type is problematic: a fractal structure applied to General MIDI sounds appears as strange as an experimental physical-model synthesis constrained into a folk tune.



But what's the role of the historic-dialectic aspect within the production of sound itself?



Probably even more than with the organization of sound, it's typical that algorithms for synthesis come into being in feedback loops, where aesthetic judgments influence further refinements. These judgments are in turn influenced by conscious and – probably much more – unconscious knowledge of history, causing attraction and repulsion in a complicated linkage. Although the history of electronic sounds is relatively short – at least in comparison with music history in general – one should not overlook that the omnipresence of electronic sounds in media during the last decades has already impregnated several generations. Typical sine mixtures from early electronic music or FM sounds from the 80s immediately trigger a whole cloud of strong associations, from iconic pop songs to science fiction. Sensitive composers will avoid proximity to such "standard sounds" except for purposes of citation, assemblage, and the like. They will also avoid electronic sounds that resemble a stereotypical copy of acoustic instruments. They will, on the other hand, often favor sounds that, at least in one aspect, enter uncharted territory.


Sometimes, even in the electronic domain, it is stated that there are no new sounds, or that the novelty of sounds is irrelevant in comparison with the context of their usage. While admitting that context does matter, I'm strongly opposed to this viewpoint. Regarding the big picture of music history, there is an ongoing development towards refining and composing timbre, maybe starting with Claude Debussy as an outstanding figure. The music history of the 20th century continues this trend, not least documented by the importance of extended instrumental playing techniques in new music – a phenomenon, though, which, I tend to think, has reached a kind of dead end in the domain of instrumental-acoustic music. I regard the possibilities of electronic music as a straight continuation of this development. One could even say that with today's algorithms, we are, for the first time in history, at a point where composition can coincide with the creation and shaping of timbre.


Against the background of this statement, it might not be surprising that I'm voting for a bottom-up strategy when it comes to the junction of algorithms for synthesis and organization. I can't give a recipe for this, but I'm clear about my priority: let's start with sound first!


{kind: paragraph, keywords: [workflow, algorithms, synthesis, experimentation, construction, timbre, history]}

 

[Diagram: phase plane of the working process, with the quadrants Organization of Sound, Production of Sound, Construction, and Experimentation]

---
meta: true
event: almat2020
function: proposal
origin: contribution
date: 200919
author: Daniel Mayer
place: Online

keywords: [algorithms, sound, synthesis, processing, composition]
---