When I started recording, I had two main objectives. The first was to develop a taxonomy of the sounds I create based on the aesthetic ideas outlined in the respective texts. Simultaneously, I planned to pursue a second objective: creating a dataset for machine learning purposes.
When I started making these recordings, I had not yet decided what the machine learning task might be, but I was considering the potential of fine-tuning a language-audio model such as MusicGen. Fine-tuning would allow the model to generate sounds similar to mine, a compositional process I was interested in exploring.
[Note: General-purpose large machine learning models are usually trained on very large amounts of data and are not specialized for a specific task. Instead, they are designed to perform well across a wide range of general tasks. These models are known as pre-trained models. However, pre-trained models often do not perform well on highly specialized tasks. To use a pre-trained model for a specialized task, the model is further trained on task-specific datasets, starting from its existing parameters. This process is called fine-tuning.]
As my understanding of machine learning methods is ever-evolving, at the time I was not yet familiar with LoRA (Low-Rank Adaptation), a well-known method that allows one to fine-tune a transformer-based model using minimal resources. Because I was unaware of LoRA, I believed that I needed at least ten hours of recordings in my dataset. Although the first objective could have been achieved with at most four hours of recorded music, I began a process that ultimately resulted in more than twenty hours. Recording twenty hours of music required exhausting all the possibilities of variation in all my techniques and musical ideas.
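The idea behind LoRA can be shown in a few lines of numpy (the dimensions and rank here are arbitrary illustrations, not MusicGen's actual sizes): instead of updating a large pre-trained weight matrix W, one freezes W and learns two small factors B and A whose product is added to it, so the number of trainable parameters shrinks dramatically.

```python
import numpy as np

d, k, r = 64, 64, 4                     # full weight dims and a small rank r
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # frozen pre-trained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # zero-initialized, so W is unchanged at start

def lora_forward(x):
    # Effective weight is W + B @ A, computed without forming the full update
    return x @ W.T + (x @ A.T) @ B.T

x = rng.standard_normal((1, k))
# At initialization the adapter contributes nothing:
assert np.allclose(lora_forward(x), x @ W.T)

# Only A and B are trained: r*(k+d) parameters instead of d*k
print(A.size + B.size, "trainable vs", W.size, "frozen")  # → 512 trainable vs 4096 frozen
```

With a rank of 4 on a 64x64 weight, only 512 of 4096 parameters are trained; at the scale of a real transformer the savings are what make fine-tuning feasible on small datasets and modest hardware.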
I am used to focusing on the smallest details of sound. Nevertheless, the process of recording twenty hours of music forced me to go even deeper into the potential of my sounds, in a way I had never done before and in a way that would not have happened otherwise. Having twenty hours of recordings of the same sound would have been useless, as a dataset must be as diverse as possible for the training to be meaningful. So, I spent hours exploring each material and striving to create new and different sounds from it.
The method used for creating the pieces and examples below is the following. During the recording I kept notes about what I was recording and why. At frequent intervals, I stopped recording to listen to the sounds, reflect on them, and modify my techniques and microphone placements accordingly. I kept notes from this process, which I call the first listening.
At the end of each recording day I listened back to everything I had recorded, made notes on what worked and what did not, and made a plan of things to be recorded the next day. This was the second listening.
After finishing all recordings, I listened back to all the takes, made detailed notes for each one, and marked the exact times of interesting events both in my notes and in Reaper (the music recording and editing software that I use). The granularity of these markings is events of 5-10 seconds. Considering that I had twenty hours of recordings, this task took many weeks to complete. This was the third listening.
Afterwards, for each different type of sound, I listened again to all the events marked as interesting and further narrowed down the set of interesting events. This was the fourth listening.
Then I started cutting the events that I would use for composition and for the dataset. Each sound in the dataset would be between 5 and 20 seconds long.
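The cutting step can be sketched as follows (the array, sample rate, and event times are stand-ins for illustration, not my actual session data): each marked event is sliced from the long take, capped at 20 seconds, and discarded if it is shorter than 5 seconds.

```python
import numpy as np

sr = 44100                                 # sample rate
recording = np.zeros(60 * sr)              # stand-in for one long take (one minute of silence)
events = [(12.0, 19.5), (31.2, 48.0)]      # (start, end) in seconds, from the listening notes

clips = []
for start, end in events:
    end = min(end, start + 20.0)           # cap each clip at 20 seconds
    if end - start < 5.0:
        continue                           # discard events shorter than 5 seconds
    clips.append(recording[round(start * sr):round(end * sr)])

print([len(c) / sr for c in clips])        # → [7.5, 16.8]
```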
The compositions and sketches presented below result from this process.
Some of the examples below are finished compositions and some are sketches. The sketches will be used for further processing with electronic music techniques. Unless otherwise stated, though, all of the examples below are acoustic sounds with no processing.
Mikado Beats
One of the techniques I use for exploring the complexity of the piano overtones is placing large wooden mikado sticks between the lower strings of the piano. Using rosin on my fingers and by means of friction, I make the strings resonate by moving my fingers on the sticks.
The competing oscillations between the two strings give rise to a complex system that is very hard to control. Tiny variations in the stick's angle and its placement on the string, which are inevitable in this real-time gestural control, as well as the smallest changes in finger pressure, give rise to unexpected timbral artefacts.
This leads to a performance where one must always expect the unexpected. When one attempts to repeat a sound, the result is almost always failure, as something else arises that was not anticipated. At other times, too much oscillation can result in the sticks falling and making sounds as they do, an effect that can ruin a concert or a good take in the recording. But there is also the experience of unexpectedly interesting sounds emerging when one least expects them.
Repetition fails and expectation recedes.
Sound happens - sometimes too much, sometimes not enough.
What remains is attention.
Sounds do not return; they arrive as something else.
And then without intention, something listens back.
Two hours into recording this technique, something I had not experienced before occurred. I found a position of the stick where very long beats emerged.
[Note: In acoustics, a beat is an interference pattern between two sounds of slightly different frequencies, perceived as a periodic variation in volume. https://en.wikipedia.org/wiki/Beat_(acoustics) ]
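The beat rate is simply the difference between the two frequencies, so nearly identical pitches produce very slow, long beats. A small numpy sketch (the frequencies are illustrative, not measured from my strings):

```python
import numpy as np

sr = 44100
f1, f2 = 220.0, 221.5                        # two slightly detuned tones
t = np.arange(0, 4.0, 1 / sr)                # four seconds of samples
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# By the sum-to-product identity, the mix equals a tone at the mean
# frequency with an amplitude envelope pulsing at half the difference:
envelope_tone = 2 * np.sin(np.pi * (f1 + f2) * t) * np.cos(np.pi * (f1 - f2) * t)
assert np.allclose(mix, envelope_tone)

beat_hz = abs(f1 - f2)                       # perceived beats per second
print(beat_hz)                               # → 1.5
```

As f2 approaches f1, beat_hz approaches zero and the beat period grows without bound, which is why a tiny change in stick position can turn fast shimmering into the very long beats described above.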
This was the point when I understood the implications of this new recording process. I continued searching for these beats, and here I present a composition that I made using some of these events.
Harmonics
I recorded many different types of harmonics on the bass strings. With and without pedal, with one or two fingers, and harmonics that would merge or not into one another. This sketch is created using harmonics with no pedal that merge into one another. This means that some of the overtones of one harmonic continue into the next one.
Fishing Line
(Lament for Petroloukas Chalkias)
One of my recording sessions took place in August 2025, a few days after Petroloukas Chalkias passed away. Petroloukas Chalkias was the most outstanding clarinet player in the traditional music of Epirus in my lifetime.
As I was recording the technique where I use fishing line to bow between strings of different pitches, I started recording a tone combination that reminded me of this traditional music. Although it is the oldest improvising tradition in Europe, this music remains alive and functional in everyday activities and celebrations, and it was all around me while I was growing up.
I then recorded many takes of an improvisation inspired by the laments (miroloyia), which are instrumental free-form improvisations on the clarinet. Musicologist Kostas Lolis writes about the form of these laments:
On the surface they do not rush; they are smooth,
they seem calm and still,
yet within them flow the most indescribable
continuous or momentary rhythmic patterns.
This is a very long piece, aiming to alter the listener's perception of time: calm and still, evolving through momentary rhythmic patterns. Each phrase is around 30 seconds long, comprising seven events (four and three) and a longer pause. This was an attempt to create a lyric-like form within the abstract piece, and it was decided and applied already at the time of the improvisation (it was not an editing choice made afterwards).
Two wooden sticks
This is an improvisation using two wooden sticks between the strings E and F# in the middle octave. Unlike the previous improvisations, this piece develops in dynamics, starting very soft and gradually shifting to loud and fast-evolving. Here I am trying to create the effect of a sound that is initially far away and gradually comes nearer until it fully develops in loudness.
Noise
When I create electronic music, I often like to start with a noisy sound and gradually filter out interesting tones. Here I recorded some noise sounds on the piano for this purpose.
The first one is created by using strings with metal rings threaded on them and moving them over the tuning pins of the piano (the metal pins inside the piano around which the strings are wound and tied). The second one is created by moving the bristles of a toothbrush over the same surface.
In my improvisations, I associated sounds like these with the sounds of birds. But these sounds were always created in an improvisatory manner, and, given the precarity of my techniques, I used to allow the timbres and rhythms to emerge according to what the technique afforded in the moment of performance. This heavily stochastic approach led to a vagueness in the sonic result that can be heard here.
The process of creating a dataset made me ask the question: how can each of these 5-10 second sounds in my dataset be distinguished from the next? During my recordings I started thinking about the idea of bird species. The rhythms and pitches of each species' song must be uniquely identifiable yet allow for slight variations.
This idea completely changed my recording process, and I started thinking more about the details of each sound event. It gave a different type of intention to my gestural articulations and resulted in me attempting different variations than I normally would have. Recording is a setting very different from performance. In recording, failure is not really a problem, especially if one has the privilege of time and of being one's own recording engineer. So I recorded a few hours of birds and experimented with different ideas. When listening to the recordings later, I identified discrete species, and I present some of them here.