Compositional methods were explored through IRCAM's programming environment Open Music.
Hocket is a medieval compositional technique where materials alternate between multiple voices. A dense microtonal material can be distributed between instrumental groups with different concert pitches. This split is handled by the function hocket-multiseq:
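The basic idea can be sketched in a few lines of Python. This is an illustration of the hocket principle only, not the actual hocket-multiseq implementation, and the cyclic distribution between voices is an assumption:

```python
def hocket(notes, n_voices):
    """Distribute a melodic line cyclically between voices.

    Each voice keeps the notes at its own positions and rests (None)
    everywhere else, so the material alternates between the voices."""
    voices = [[None] * len(notes) for _ in range(n_voices)]
    for i, note in enumerate(notes):
        voices[i % n_voices][i] = note
    return voices

# A short line in midicents (MIDI pitch * 100, as in Open Music)
line = [6000, 6200, 6400, 6500, 6700]
for voice in hocket(line, 2):
    print(voice)
```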
A different hocket technique is possible, using a curve shape to split a melodic line between multiple MIDI channels, and finally into separate parts:
The shape of the MIDI channel curve can be recognized in the resulting score:
Adding orchestral attack/decay envelopes with multiseq-env-poly after the hockets can distort the timing. A monophonic line is folded into polyphony or heterophony.
The initial melodic line.
The same line split between multiple parts, and time distorted through these envelopes.
Morphing within sound processing is available in AudioSculpt, SoundHack, Csound and other software. Properties of one sound are applied to a second sound to form hybrid sounds. There are numerous morphing techniques. One of them is to match amplitudes from one sound with the remaining data (pitch and time) of another sound. In Csound, this is done with the opcode pvcross or pvscross. The example below extracts parameters of two tam-tam sounds, performed with different sticks and techniques. The hybrid tam-tam has the pitch of tam-tam 1 with the rhythms of tam-tam 2.
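The principle of such a hybrid can be illustrated with a small Python sketch. The frame layout (lists of (frequency, amplitude) pairs) and the function name are assumptions chosen for illustration, not the pvscross implementation:

```python
def hybrid(pitch_frames, amp_frames):
    """Frame-by-frame hybrid: keep the frequencies of one spectral
    analysis and replace its amplitudes with those of another."""
    return [[(f, amp) for (f, _), (_, amp) in zip(pf, af)]
            for pf, af in zip(pitch_frames, amp_frames)]

# One analysis frame per sound, as (frequency in Hz, amplitude) pairs
tam1 = [[(98.0, 0.8), (196.5, 0.4)]]   # struck: these pitches are kept
tam2 = [[(110.0, 0.1), (220.0, 0.9)]]  # scraped: these dynamics are kept
print(hybrid(tam1, tam2))  # [[(98.0, 0.1), (196.5, 0.9)]]
```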
Tam-tam analysis 1 (struck).
Tam-tam analysis 2 (scraped).
The result is a hybrid tam-tam sound.
Partial trackings can easily be treated the same way as other musical materials. Control data is gathered from multiple sources: the input material is reordered by new pitch curves. These curves come from the spectral dynamics of a second tam-tam sound, and from melodic curves of Ravel's Sonatina, used in many of the following examples. Such connections can be considered 'morphing' of musical materials. We can use the term in a wider sense for a range of fluid transformations of musical materials available through the Ruben-OM patch library. Inputs can be partial trackings, composed fragments or historical quotations.
Filtering by criteria
Partial trackings from the pm2 library can contain a huge amount of information. We can select only partials with a duration longer than a minimum. This is helpful for removing background noise from the analysis of timbrally stable sounds. Keeping only the short partials means keeping only the noisiest and most unstable components of a sound.
Dynamic levels of partials can be a selection criterion.
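A minimal sketch of this kind of duration filtering, assuming each partial is stored as an (onset, duration, midicents, amplitude) tuple:

```python
def filter_partials(partials, min_dur=None, max_dur=None):
    """Keep only partials whose duration lies within the given bounds.
    min_dur removes short, noisy components; max_dur keeps only them."""
    out = []
    for onset, dur, pitch, amp in partials:
        if min_dur is not None and dur < min_dur:
            continue
        if max_dur is not None and dur > max_dur:
            continue
        out.append((onset, dur, pitch, amp))
    return out

partials = [(0.0, 0.05, 6000, 0.2),   # short, noisy component
            (0.0, 2.5, 6890, 0.8),    # long, stable partial
            (0.3, 1.2, 7205, 0.5)]
print(filter_partials(partials, min_dur=1.0))
```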
While the previous examples were static filters, the following patches reorder information by curves. These processes work well on spectral analysis, due to their multidimensional nature. Sorting by pitch, duration or dynamics criteria will return different results.
In this example, a chaotic curve is used as the new pitch curve, pointing up or down through the total reservoir of pitches.
Duration is the next parameter. Through simple ascending curves, all quiet and short components can be made audible on their own, transitioning from iterated pointillistic textures through progressively longer notes until reaching homophonic chords.
Finally, curves can be pointers to the dynamic levels of the material. Six different simple shapes will create a heterophonic crescendo and decrescendo. The attack portion of a percussion instrument is the strongest, so what comes at the top of the curves comes from the beginning of the tam-tam sound.
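The mechanism shared by these curve reorderings can be sketched as follows. The 0-1 curve range and the sorted reservoir are illustrative assumptions, not the library's actual interface:

```python
def reorder_by_curve(pitches, curve):
    """Read a pitch reservoir through a curve: each curve value in 0..1
    points into the sorted reservoir (0 = lowest pitch, 1 = highest).
    Sorting by duration or dynamics instead gives different results."""
    reservoir = sorted(pitches)
    hi = len(reservoir) - 1
    return [reservoir[round(y * hi)] for y in curve]

reservoir = [6000, 7200, 6600, 6900, 6300]                       # midicents
print(reorder_by_curve(reservoir, [0.0, 0.25, 0.5, 0.75, 1.0]))  # ascending
print(reorder_by_curve(reservoir, [0.0, 1.0, 0.2, 0.9]))         # chaotic
```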
Examples using partial trackings have natural variation in all parameters. Manually edited scores have pitch and rhythm, while dynamics are static. It is possible, though tedious, to change MIDI velocities manually. bpfs-dyn-multiseq applies dynamic curves to each chord-seq. This is useful for finding a more musical MIDI playback, and directly relevant if the material is exported to Csound synthesis. The Om2Csound library exports Csound scores from Open Music.
Time scaling could be considered for a less rigid performance. Introduction of irregularities to a MIDI playback is often called 'human playback'.
Score with dynamic shapes:
Basic Open Music patches can deal with particular types of musical gestures, while the maquette is designed to work on a macro level. I have chosen not to work with the maquette, as rhythmic synchronization and communication between different patches are complicated to handle.
gestures-chordseq organizes a composition of patches through a macro melody, a repertoire of gestures, and scripts for gesture choices, duration of each, speed and silence before each.
Two piano gestures used by Salvatore Sciarrino have been used as an example. The input patches are lambda abstractions, and it is possible to expand a repertoire of specific gestures.
This is the "Sciarrino" score. The scripts are randomized in this patch, and the score will be different at each evaluation. The gestures have dynamic shapes, for a more expressive playback.
Another idea could be to use a macro melody, and let a number of new melodies start in proximity to each note of the macro melody. knoppskyting was an early version of the gesture patch idea. Gesture patches with four inputs are used, making it possible to control interval sizes.
Principles of these tree branches are:
- Each submelody starts simultaneously with its source note (time blur could be added in the future).
- The submelody's pitch range equals the interval to the next note of the macro melody, except for the last macro note (which repeats the last interval).
- The resulting submelody is transposed into proximity with its source note.
- The macro melody is there as an organizing principle, but it is not itself present in the output. It has been split into a heterophony.
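A rough Python sketch of these principles. The random pitch choices stand in for the actual gesture patches, and the function shape and seed are assumptions for illustration:

```python
import random

def knoppskyting(macro, sub_len=4, seed=1):
    """Grow a submelody ('knoppskyting', budding) from each note of a
    macro melody. The submelody's pitch range equals the interval to
    the next macro note (the last note reuses the last interval), and
    each submelody begins on its source note. Pitches are midicents."""
    random.seed(seed)
    parts = []
    for i, note in enumerate(macro):
        if i < len(macro) - 1:
            interval = macro[i + 1] - note
        # for the last macro note, the previous interval is reused
        direction = 1 if interval >= 0 else -1
        sub = [note + direction * random.randint(0, abs(interval))
               for _ in range(sub_len)]
        sub[0] = note  # each submelody starts with its source note
        parts.append(sub)
    return parts

for part in knoppskyting([6000, 6400, 6100]):
    print(part)
```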
The result of this patch is a short orchestral outburst created with knoppskyting and orchestral envelopes.
A long macro melody would create a vast number of parts. The ensemble-circles function makes it easier to control the number of parts.
ensemble-circles contains the gestures-chordseq function, with additional structural organization to handle multiple parts. Curve shapes create orchestral arpeggios of chords, or serve as spatialization trajectories for a spatially distributed ensemble. Blurring parameters can distort the attack envelopes or control overlaps between parts.
This triggering curve...
...can be recognized as the attack trigger for this score:
ensemble-circles-lists makes it possible to use more than one gesture for each part.
A script will, for each part, select from the available gesture patches.
ensemble-circles-lists-gliss contains the gesture script, and an additional interpolation of the base notes.
The envelope functions in Ruben-OM do not generate material, but scale existing material. Orchestral sculptures are shaped by fitting input parts within envelopes of attack and decay. If the input is a chord with pitches sorted from high to low, the envelopes serve as lopass and hipass filters.
Musical fragments can be squeezed in between similar envelopes. This example shows the Ravel Sonatina, shifted and stretched with ascending envelope shapes.
We will hear a canon where the original parts are out of sync.
multi-env-poly-legato-tie joins a list of textures through a list of envelopes. After the initial attack, the decays are the attack shapes of the next texture, until a final decay. This can create fluid changes of textures within an orchestra section. The list of textures can be of any length.
The principle of granulation of sound is extracting, reordering and superposing fragments of a sound. We can do a similar thing by extracting random segments from Ravel's Sonatina. This could be called 'musical granulation'. Through superposition of many extracts without synchronization to common beats, a blurring of the music can be achieved. Used in a simpler way, extract-onset-range-multiseq can serve as an editing tool.
Superposed fragments from Ravel's Sonatina:
A spectral analysis will contain precise intonations. Intervals can be approximated to a particular equal-tempered interval size, which makes it easier to identify intervals in instrumental parts.
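The idea of musical granulation can be sketched like this. Notes are (onset, midicents) pairs, the short fragment is invented, and the grain count, length and onset spread are arbitrary assumptions, not the behaviour of extract-onset-range-multiseq:

```python
import random

def musical_granulation(notes, n_grains=4, grain_len=3, seed=2):
    """Extract random segments (grains) from a note list and give each
    a new random onset, superposing them without synchronization to
    any common beat."""
    random.seed(seed)
    grains = []
    for _ in range(n_grains):
        start = random.randint(0, len(notes) - grain_len)
        segment = notes[start:start + grain_len]
        offset = random.uniform(0.0, 4.0)   # unsynchronized new onset
        t0 = segment[0][0]
        grains.append([(offset + t - t0, p) for t, p in segment])
    return grains

fragment = [(0.0, 6000), (0.5, 6200), (1.0, 6400), (1.5, 6500), (2.0, 6700)]
for grain in musical_granulation(fragment):
    print(grain)
```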
Another possibility is extracting only notes within a certain proximity to notes of an instrument. Lists of possible midicent notes on the instruments can make this a useful orchestration tool.
Notes can be extracted through simple hipass, lopass, bandpass and bandreject filters.
The soft filters do not remove everything (unless filter pass percent is 0), but thins out regions by random selection of notes.
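A minimal sketch of such a soft filter, assuming midicent pitches and a pass percentage applied to the notes outside the band:

```python
import random

def soft_bandpass(notes, low, high, pass_percent=50, seed=3):
    """Soft bandpass: notes inside the band always pass; notes outside
    survive with probability pass_percent (0 = hard filter, 100 = no
    filtering), thinning out the stop band rather than removing it."""
    random.seed(seed)
    return [n for n in notes
            if low <= n <= high or random.random() * 100 < pass_percent]

notes = [5000, 6000, 6500, 7000, 8000]                     # midicents
print(soft_bandpass(notes, 6000, 7000, pass_percent=0))    # hard bandpass
print(soft_bandpass(notes, 6000, 7000, pass_percent=50))   # soft bandpass
```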
Bpf curves are used to control time variable filters.
Interval inversion curves
Modus Quaternion offers four versions of a material:
- The original.
- Retrograde (time curve from end to beginning).
- Inversion (intervals are multiplied by -1).
- Retrograde inversion.
Through a combination of inversion functions and time pointers, all four versions are available with Ruben-OM. Controlling inversion degree and time pointers over time by curve shapes offers not just four, but endless variations of a material. At some point, the original identity of the material may be obscured.
This patch demonstrates behaviours of the inversion curves.
The original fragment from Ravel's Sonatina.
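The two basic operations behind these versions can be written out directly; a sketch in midicents, inverting around the first note:

```python
def retrograde(pitches):
    """Time curve from end to beginning."""
    return pitches[::-1]

def inversion(pitches):
    """Multiply every interval from the first note by -1."""
    first = pitches[0]
    return [first - (p - first) for p in pitches]

fragment = [6000, 6200, 6400, 6500]             # midicents
print(retrograde(fragment))                     # [6500, 6400, 6200, 6000]
print(inversion(fragment))                      # [6000, 5800, 5600, 5500]
print(retrograde(inversion(fragment)))          # retrograde inversion
```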
A pure inversion of the same fragment.
A transition from original to inversion. This extract is short; the whole process can be studied in the patch. Intervals reach microtonal in-between states, and the fragment ends with inverted intervals.
Original is interval multiplied by 1, inversion is multiplication by -1. A transition can go through 0. Intervals multiplied by 0 will give only note repetitions.
Halftone materials can be transformed to quarter-tones by diminishing the interval sizes. This version gradually transforms from quarter-tones to eighth-tones.
There are two possible 'pitch multiplications'.
The first is literally multiplying intervals by a single floating-point number, for new microtonal scalings. A tam-tam spectrum loses some of the original sonority when multiplied by a floating-point number, as the stretched spectrum becomes more artificial.
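Interval multiplication with an arbitrary factor covers all of these cases; a sketch, again in midicents and anchored on the first note:

```python
def multiply_intervals(pitches, factor):
    """Scale every interval from the first note: 1 = original,
    -1 = inversion, 0 = only note repetitions, 0.5 = halftones become
    quarter-tones. In-between factors give microtonal states."""
    first = pitches[0]
    return [first + (p - first) * factor for p in pitches]

fragment = [6000, 6200, 6100, 6500]
print(multiply_intervals(fragment, -1))    # inversion
print(multiply_intervals(fragment, 0))     # only note repetitions
print(multiply_intervals(fragment, 0.5))   # quarter-tone compression
```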
A second possibility is what Pierre Boulez defined as 'chord multiplication'. This is pure transposition, but the number of notes gets multiplied.
If we were to follow up the use of chord multiplication in Boulez's Le Marteau sans maître, this patch would need to be developed further. The piece involves not just one chord to 'chord multiply' with the others, but intricate matrices of combinations.
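One common reading of chord multiplication, sketched in midicents: chord A is transposed by every interval of chord B above its lowest note, and the resulting pitches are collected (duplicates merge, so the note count multiplies at most). The anchoring on the lowest note is an assumption of this sketch:

```python
def chord_multiplication(chord_a, chord_b):
    """Transpose chord_a by each interval of chord_b above its lowest
    note and take the union of the resulting pitches."""
    root = min(chord_b)
    return sorted({a + (b - root) for a in chord_a for b in chord_b})

a = [6000, 6300, 6700]   # C, Eb, G in midicents
b = [6000, 6400]         # C, E
print(chord_multiplication(a, b))   # [6000, 6300, 6400, 6700, 7100]
```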
Through pitch-shifting, the overall range of the music can be dynamically shaped over time. If the overall pitch shifting range is similar to the input, a transposition will happen. Otherwise the music will be twisted out of shape.
Ravel's Sonatina is used as an example.
This is a microtonally distorted Ravel. The melodic gestures can still be recognized.
A pitch shifted Ravel fragment also scrambled with a time pointer would be harder to identify, while it would contain imprints of the general activity of the music.
We can create a glissando from one chord to another, to hear harmony of the transitions, or for csound synthesis.
A trill glissando will require four chords, when additional trill notes at beginning and end are included.
The trill glissando score.
The trill glissando is combined with addition of rests. Generally ascending bpfs increase probability and duration of rests over time. Finally the rhythms are quantified.
A trill glissando with progressively more rests. The whole score can be seen and heard in the patch.
Different processes are combined to create a larger orchestral texture. Notes of a tam-tam spectrum serve as beginning and end of a glissando. This could be a static sound, but the bpfs create a constant glissando in all parts. The texture is thinned out by rests, and finally scaled to a single orchestral attack/decay envelope.
A human performance would have constant irregularities and tempo changes. A MIDI performance can get more flexible through curves of tempo. The music is split into a large number of time windows to create smooth transitions.
Micro polyphonies in works by Ligeti are dense textures made through numerous simultaneous rhythmic versions of the same melodic line, like a cluster of delays. We can attempt to thicken a line through time scaling with a number of randomized curve shapes. This does not reproduce Ligeti's techniques, but a related textural approach.
Multiple melodies can start at the same time, every string player in five sections has a separate time scaling curve. This creates a large score, which can be heard in the patch, and exported to Finale for editing.
A time pointing curve is used for linear or non linear readings of a musical material.
A curve where y moves 0-1000 will return the original order:
A curve with y moving 1000-0 returns a retrograde curve:
This is the resulting retrograde of the Ravel Sonatina:
Other curves can create non linear readings of the music.
... will create a scrambled version of Ravel's Sonatina. The pitches are all from the original, while irregular scannings through the music are visible. At points Ravel is rapidly read from beginning to end, multiple times, while calmer curves can make the music stop, or dwell on shorter fragments.
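The time-pointer mechanism can be sketched as a simple lookup. The 0-1000 curve range follows the examples above, while the rounding is an assumption of this sketch:

```python
def time_pointer(notes, curve):
    """Read a note list through a time-pointing curve: y values 0-1000
    map onto positions from the first to the last note, so an ascending
    line returns the original order and a descending one a retrograde."""
    hi = len(notes) - 1
    return [notes[round(y / 1000 * hi)] for y in curve]

notes = ['a', 'b', 'c', 'd', 'e']
print(time_pointer(notes, [0, 250, 500, 750, 1000]))   # original order
print(time_pointer(notes, [1000, 750, 500, 250, 0]))   # retrograde
print(time_pointer(notes, [0, 1000, 0, 500, 1000]))    # scrambled reading
```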
Open Music uses rhythm trees as a format for time signatures, measures and proportions. Quantifying absolute durations, or irrational lists of numbers, to a tempo and rhythm is complicated. The standard function omquantify tends to lose a lot of information. mktree is more accurate, but it is hard to control the complexity of the quantifications. I needed a method that handles any list of irrational proportions well.
The tree-quant functions are new alternatives included with Ruben-OM. A list of number proportions is converted to a rhythm tree without loss of notes. Changing the overall duration makes it easy to augment or diminish rhythms by any ratio.
The first example is simple. Proportions add up to 9, duration is 9 seconds at tempo 60. There is no need for approximation.
But if we for instance changed tempo to 59, or duration to 9.238724, we would see how different multiplications affect the complexity and precision of the approximation.
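The core trade-off can be illustrated with a few lines of Python. This is only a sketch of the rounding step, not the tree-quant implementation; raising tiny values to 1 mirrors how small proportions are 'saved':

```python
def quantize_proportions(proportions, mult):
    """Scale proportions by a multiplication factor and round to
    integers; values that would round to 0 are raised to 1 so that no
    note is lost. Larger factors approximate the input more precisely
    but yield more complex beat subdivisions."""
    return [max(1, round(p * mult)) for p in proportions]

props = [1.0, 0.62, 2.1, 0.007]
print(quantize_proportions(props, 1))    # [1, 1, 2, 1]
print(quantize_proportions(props, 20))   # [20, 12, 42, 1]
```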
A hierarchy of lists gives measures with time signatures and proportions. Floats like 1.0 mean that the note is tied from the previous beat; negatives like -1 mean a rest. It is possible to add further levels of lists within lists to create subdivisions within subdivisions. I have chosen not to go below the level of beat subdivision, but this could be a possible direction for further developing the quantifier.
A list of irregular proportions makes it possible to experiment with different multiplication factors. Tiny values could risk bringing calculations outside the numerical range and causing errors. These are increased and "saved" by tree-quant.
Let's show these examples as rhythm trees.
With multiplication 1, proportions are simplified to 1 in most cases:
(5/2 (((3 4) ((1 (-1 1 -1 1)) (1 (3.0 -1)) (1 (-1 1 1 -1 1 -1)))) ((2 8) ((1 (-2 1)) (1 (1.0)))) ((3 4) ((1 (1.0)) (1 (1.0)) (1 (1.0)))) ((3 4) ((1 (3.0 -1)) -2))))
With multiplication 4.323, proportions vary more. The structure of the list is the same, but the contrasts between proportions are what make the notation increasingly complex:
(5/2 (((3 4) ((1 (-2 2 -4 1)) (1 (8.0 -1)) (1 (-2 4 1 -1 1 -1)))) ((2 8) ((1 (-5 1)) (1 (1.0)))) ((3 4) ((1 (1.0)) (1 (1.0)) (1 (1.0)))) ((3 4) ((1 (7.0 -2)) -2))))
With multiplication 5:
(5/2 (((3 4) ((1 (-2 2 -4 2)) (1 (9.0 -1)) (1 (-2 4 1 -1 1 -1)))) ((2 8) ((1 (-5 1)) (1 (1.0)))) ((3 4) ((1 (1.0)) (1 (1.0)) (1 (1.0)))) ((3 4) ((1 (8.0 -2)) -2))))
With multiplication 20:
(5/2 (((3 4) ((1 (-7 5 -14 5)) (1 (31.0 -2)) (1 (-6 14 1 -1 1 -1)))) ((2 8) ((1 (-16 2)) (1 (1.0)))) ((3 4) ((1 (1.0)) (1 (1.0)) (1 (1.0)))) ((3 4) ((1 (31.0 -8)) -2))))
Since rhythm trees build notation from proportions, Open Music is capable of notating results far beyond what is musically practical. This is a quantification with multiplication 3000:
These rhythms arise through the more precise approximation of the irrational values coming in:
(5/2 (((3 4) ((1 (-1047 749 -2094 613)) (1 (4501.0 -225)) (1 (-823 2094 3 -6 6 -3)))) ((2 8) ((1 (-2251 161)) (1 (1.0)))) ((3 4) ((1 (1.0)) (1 (1.0)) (1 (1.0)))) ((3 4) ((1 (4501.0 -1051)) -2))))
The sum of the numbers creates the subdivision of each beat.
Through the experiences with tree-quant, I started to see the need to limit short rests. tree-quant-legato makes it possible to filter out short or unintended rests.
The preceding functions are strict; if a note doesn't start exactly on the beat, there will be a tie. tree-quant-legato-tie introduces a duration threshold for tying a note to the next beat.
Numeric proportions, or macro rhythms, can easily be quantified.
Quantification can be a final step of transferring a spectrum to notation. multiseq2poly-legato-tie is a high-level function using tree-quant-legato-tie. Through hockets, a tam-tam spectrum is split by 1/16-tones. This gives each musician the chance to perform halftones at different tunings. Zero durations and overlaps are changed before quantification, and every part is time scaled down to a playable speed. Converting this spectrum to a score combines hockets, time scaling and quantification.
An approximation of only 1/8-tones will increase the amount of activity within each part, as notes are split between fewer parts.
The spectrum can also be converted to a monophonic part.
The tam-tam spectrum melody.
A spectrum can be approximated to available notes on an instrument.
The notes exist on a celesta, but the part would need time scaling to be playable. That would not complete a piece by itself. Further technical challenges on the instruments should be considered, and musical choices can be made.
I generally use the word 'parts' instead of 'voices' or 'instruments'. The reason is possible confusion with the Open Music score objects: a 'chord-seq' is pure proportional notation, a 'voice' has quantification, and a 'poly' consists of a list of voices.