
Fibosine is the title of my first Python script. It was made to generate sound based on a rule, originally a Fibonacci sequence*, hence the name. That idea was abandoned quickly, replaced by another rule. The rule itself is simple: generate a sine wave from a given frequency, then add an overtone every X seconds. When adding the 6th overtone, fold the frequency down so it's played within a given octave, and start the process over.
I later added a drone for bass, and even a soloist part.
* In music, a Fibonacci sequence uses numbers where each is the sum of the two before (1, 1, 2, 3, 5, 8...). It can shape rhythms, note durations, or phrase lengths, creating organic, expanding patterns that mirror nature's growth and add a subtle, evolving structure.
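The rule described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the actual Fibosine code: the function names, the numpy-based additive synthesis, and parameters like `step_seconds` and the octave bounds are my own assumptions about one plausible reading of the rule.

```python
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def fold_into_octave(freq, low, high):
    """Fold a frequency by octaves until it lies within [low, high)."""
    while freq >= high:
        freq /= 2
    while freq < low:
        freq *= 2
    return freq

def fibosine(base_freq=110.0, step_seconds=2.0, max_overtone=6,
             octave=(110.0, 220.0)):
    """Yield one audio chunk per step. Each step adds one more overtone
    to the stack; when the 6th would be added, its frequency is folded
    down into the given octave and becomes the new base, restarting
    the cycle (one interpretation of the rule)."""
    t = np.arange(int(SR * step_seconds)) / SR
    freq = base_freq
    overtone = 1
    while True:
        # sum partials 1..overtone of the current base frequency
        chunk = sum(np.sin(2 * np.pi * freq * n * t)
                    for n in range(1, overtone + 1))
        yield chunk / overtone  # crude normalization
        overtone += 1
        if overtone > max_overtone:
            freq = fold_into_octave(freq * max_overtone, *octave)
            overtone = 1
```

Each yielded chunk is one "X seconds" step; concatenating the chunks (or streaming them to an audio output) plays the evolving overtone stack.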
Gear rabbit hole vs coding.
Fibosine is different because I'm curating all the way through. It's a composition. The endgame of hunting for gear.
What I want is to present my results, as clear ideas, through musical examples, recordings, and albums, like Ivar in his Ensemble of Me. My main hindrance, however, is that my mind goes off into complexity, trying to explain, extrapolate and expand on everything I put my finger on. I find all of these little details very interesting, and I also see them as relevant and usable, so my main challenge is, like for many others, to delimit: to keep it within my field, and to frame it inside that field. It's not easy.
#phd
This is the first testing of my Fibosine script with another musician. It's now in V5, meaning I have been through many stages and many variations. Mainly, they consist of:
- Abandoning the idea of using real-time machine-learning systems to generate musical output. Why? Because I don’t want to replace a human. Also because it requires prompting with audio, and not text, which isn’t really something I can solve. I am a musician, not a computer engineer.
- Simplifying the output. Why? Because I know now that I need to carve out spaces in which I, and my fellow musician, can connect the dots and contribute. In V3 I was building a complete piece, an algorithmic composition, that needed no additional input.
- Developing the code base a lot, one step forward, one step backward. Using machine learning systems, for efficiency and for code output.
This particular version has flaws and drawbacks, but we manage to make something with it. As with any design, I don't want to showcase what it is not. I want the result to be good, or not good. The key takeaway in this version is that it works, and it allows for something to be created.
Ensemble³ is a project where two musicians and a Python script collaborate, in real time, to perform a coherent piece of music, partly improvised and partly composed. The codebase produces generative audio (improvisation) based on a set of programmed rules (composition). The result is not the same from time to time, but stays within the framework of a pre-defined tonal and rhythmic language that makes it possible to respond musically. The musicians' task is to improvise over the machine-generated sound, in addition to performing composed passages and having overall control, in order to create a meaningful sonic result.
Initially, the project was about adding AI to the creative process as an active component, but it has changed and is now about using available (AI) tools to build a third voice: a voice with a stubborn will, a certain degree of unpredictability and musical autonomy. Since what is interesting in a collaboration between human and machine ultimately revolves around how the parties' performing interest in an ongoing musical process can be maintained, and how this can yield a result of satisfactory artistic value, programmed algorithms are a tool that is complex enough. This also means that I avoid a number of ethical issues.
Ensemble³ will have a permanent participant (Jonas Sjøvaag), a participant with local roots, in addition to the programmed side. There is thus also a conceptual side where the goal is to be able to perform the works with different musicians and to allow them to influence the artistic result.
I am a musician, and many of my heroes are American musicians: improvisers, composers and virtuosos like Wayne Shorter, Miles Davis, John Coltrane, McCoy Tyner, Charlie Parker, Ella Fitzgerald, Bud Powell, Thelonious Monk, Billie Holiday, Duke Ellington, Louis Armstrong, and many more. If we look for a political analogue in their music, we might invoke democracy, and we’d be right to a degree, but we could also invoke anarchy in the best sense. The music is a safe space where self-expression is maximalized, and where an exchange of seemingly disparate ideas can result in a greater whole. The endgame in that kind of exchange can be many things: healing, redeeming, illuminating, and even ecstatic.
https://www.bradmehldaumusic.com/news-post/2025/3/1/note-from-brad
#phd
What do I want? Clarity. It cannot be achieved by being everywhere at once. Hence limitation. Clarity is disrupted when faced with the skills of other musicians. Hence, clarity comes from embodiment and practice of an instrument. For me. On the other hand, inspiration is very welcome from other places, like instruments played for fun. That's not a problem. Clarity, however, is affected if my control of an instrument is not complete, because then I have no way to know whether what I do with that instrument is personal, or new territory, or whatever the term.
#phd
Making new connections involves risks.
Stepping away from familiar paths. Trusting someone else. Opposite of asking friends.
As a musician, you evolve, and you evolve from foundations. "Foundations" are what comes most naturally to you, and it is often what you (I) revert to in times of trouble. For me, that is the melodic aspect, which ties into and affects my rhythmical expression, combined with a need to "do something else", except in the cases where I have a need to bask in self-indulgence. In times of trouble, I like melody and clear structures. When I find things to be difficult, I find comfort in music that satisfies a need.
#phd
A key component in the arts, when creating, is the transformative process of one individual when that individual meets something they do not understand, or lack the capability to understand. It is very intriguing, for everyone, to see this.
Examples: The Two Popes (film), Bruno (series), and a million others. The same thing can be used in relation to music: you have one starting point, or instrument, exhaust that position, then move on to another position, being exposed to some transformative element as you go along. New technology is a simple example; new target audiences another. This is bouncing against the wall, as explained in the email to Petter.
Also, remember that music in my context is defined as something that seeks to deliver something of emotional value from the performer/composer to the audience. It can be clear, it might be hidden, it might be easily or not so easily available, but there is always a desire to transport some kind of emotional information from A to B. I don't know if there exists music that does not do this, but music prompted from AI might be borderline, although the "fun" is absolutely transported across. Field recordings are also hard.
#phd
Low entropy = current state
High entropy = movement toward something else
New low entropy = goal reached
Can explain process and show unfinished work.
Piecing it back together, by relying on notes from the process. Here’s one:
1:
Re-enact important parts
recognize progress
show pleasures you allow yourself in the practice
finish something
2:
use everyday sounds
resemble the world
use a protagonist
make a point
**
“What have I done?”
In my ongoing creative process, assembling fragments—sounds, words, ideas—offers a path to new artistic territory. It’s not about crafting a seamless whole from the start but gathering what’s at hand and letting it take shape. I’ve written somewhere in a note that creativity often means heading into darkness, piecing together bits without knowing the endpoint (#70). This isn’t a linear puzzle with a fixed solution; it’s a method where the act of combining parts reveals something unexpected. A lyric scribbled late at night, a rhythm tapped out in the studio, a stray thought—they don’t arrive as a plan but as fragments I stitch together, each collision sparking a facet I couldn’t predict.
This approach thrives on constraints. I’ve noted how boundaries, even silly ones like fitting instruments on a tabletop, funnel ideas into focus. Assemblage here isn’t random; it’s deliberate piecing within limits—vocals layered over a percussive pulse, a cut-up text reshuffled into song (#74). The result isn’t polished perfection but a living item, born from tension.
I’ve found that narrowing down—stripping an instrument to its essence or splicing lyrics with noise—forces new outlets, not less creation (#85). It’s like building with broken stones: the cracks dictate the form, and the form becomes something.
This method avoids the predictable.
I’ve mused over the notion that standard forms can (and should) be tweaked—skipping rests, jumping ahead—to break free, but I’ve never wanted to break the form completely (#140).
Piecing fragments together in this way keeps the work fluid and open to surprise. It's not about good or bad; it's about what emerges when parts meet, whether a soundscape from everyday clatter or a story from scattered words (#109). Each assembly is a gamble, a trust that meaning lies in the overlap, not the blueprint. By gathering what floats by and letting it melt together, I uncover artistic facets that feel alive, unscripted, and mine.
I am still piecing my practice together, but I am more sure that I will be able to do so than I was 6 months ago. It's not really significant to the research process as such, but I mention it because the reason for being more certain now comes from acknowledging the fact that information does not disappear, and the realization that leaving out, holding back, as often noted by Karl Seglem, a long-time collaborator and friend of mine, does not mean abandoning prior ideas or ceasing to do something.
What it means is that the artistic practice, and the visible results from such a practice, is a palimpsest structure, where all prior knowledge is visible in new creations. It’s there, even though it’s not there. It gives shape, even if it has been removed. In a way it allows for interplay with a historic, or archival, self, either when composing, or when performing.
I don't want entertainment, I want information. I used to want information most of the time, then relax with entertainment here and there, like Keith playing solo in Tokyo, but now it's all about information; that's all I can say about it. Music is information, to me, from one performer, directly transferred to me using the medium of sound. So, if I don't hear new information, I have a hard time catching on. If I don't know the ethics, I also have a hard time catching on. And I don't necessarily mean every conceivable ethical debate, but... who plays? Who made the sample? How is it played back? How are you using the source material that you clearly are using? These are things I wonder, often, and it transcends, or even replaces, the need I have to "like" music.
#phd
When creating music, you elaborate on yourself as topic.
The connection is The Self, not necessarily a particular sound, genre, or method.
"III" – A New Album by Jonas Sjøvaag & Juhani Silvola, Featuring Andreas Ulvo – Out Now on Shipwreckords "III" is the latest release from Norwegian artists Jonas Sjøvaag (drums + programming) and Juhani Silvola (guitar), a duo known for their richly textured, improvisation-driven sonic landscapes.
Released on Shipwreckords, the album presents two expansive pieces, “Daybreak” and “Subtle Southern”, each fully improvised, tracing the contours of a moment in time. "Daybreak" opens the album with soothing pulses and restrained tones, capturing the atmosphere of early mornings and sunrise. It features a guest appearance by Andreas Ulvo on piano, whose contributions add a delicate melodic thread to the track's ambient architecture.
In contrast, “Subtle Southern” embraces a rockier, more grounded tone, with Ulvo returning on Minimoog, adding grit and taking on the lead role in the track's second half. Together, the pieces form a stylized narrative: an improvised journey through space and texture, always with a careful ear toward interplay and musical communication. The result is a record that's both meditative and emotionally resonant, inviting deep listening and personal interpretation. With all aspects of production (recording, mixing, mastering, and visual design) handled by Sjøvaag, "III" feels cohesive, intentional, and deeply rooted in his and Silvola's eclectic musical backdrop.
It's an album that rewards attention, creating a world of its own across two spacious tracks.
What are the challenges, really?
- Reserve the right to stay out of the loop (musicians play like they always have, without caring much about the AI progress)
-- this, alongside those who are proponents of "the new era" and also the ones "abusing the tech"
Humans are not the weakest link; we are the strongest link, because in us it either breaks or continues. The moment we use our built-in opt-out function, nobody will ever be able to get us back in.
#phd
Chatting with Claude, coding the Performance Arc:
I can assure you, 99.4 is not a common denominator in the tracks I've tested this on, and I don't think it's accurate for the sections in this one either. That said, I'm not really sure it matters _that_ much; we are using this in the performance arc system and in the general phrasing feedback system, no? So, as my ultimate goal is to have a musical partner, if I always knew what the thing was going to do, I would not get a task to solve either, or get inspired, as you might also call it.
From Claude:
The Artistic Research Trade-Off
Over-precise tempo detection:
Pro: Accurate reconstruction of practice
Con: Makes AI behavior too predictable ("I know section 3 is always 99.4 BPM with peak engagement")
Current "fuzzy" tempo detection:
Pro: Creates breathing room for surprise and improvisation
Con: May miss genuine tempo nuances
Your insight is correct: The goal isn't tempo accuracy for its own sake - it's providing enough structure for coherence, but enough ambiguity for surprise.
Recommendation: Stop Worrying About 99.4 BPM
Given your artistic goals:
✅ Arc phases are working - opening/peak/resolution labels are musically meaningful
✅ Engagement levels are working - 0.30 → 0.90 → 0.40 trajectory makes sense
✅ Tempo trends are working - acceleration/deceleration detection (→1.67x, →0.61x)
⚠️ Absolute tempo values - don't obsess over 99.4 vs 100.2 BPM precision
"The arc structure analysis provides phase-level context (opening/peak/resolution) and relative energy trajectories (engagement 0.3 → 0.9) rather than precise tempo measurements. This design choice preserves musical agency - the AI partner has structured memory without deterministic prediction. Small tempo detection ambiguities (±5 BPM) create breathing room for surprise while maintaining long-term coherence through phase awareness."
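The recommendation above can be sketched as a small data structure: keep phase labels and the engagement trajectory exact, but deliberately blur absolute tempo. This is a minimal illustrative sketch, not the actual performance-arc code; the class name, function names, and numbers are my assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class ArcPhase:
    label: str         # "opening" / "peak" / "resolution"
    engagement: float  # relative energy trajectory, 0.0-1.0 (kept exact)
    tempo_bpm: float   # detected tempo (treated as approximate)

def fuzzy_tempo(phase: ArcPhase, ambiguity: float = 5.0) -> float:
    """Return a tempo target blurred by +/- ambiguity BPM, so the
    AI partner never locks deterministically onto e.g. 99.4."""
    return phase.tempo_bpm + random.uniform(-ambiguity, ambiguity)

# Phase-level memory: structure for coherence, ambiguity for surprise.
arc = [
    ArcPhase("opening", 0.30, 99.4),
    ArcPhase("peak", 0.90, 166.0),
    ArcPhase("resolution", 0.40, 101.0),
]

targets = [round(fuzzy_tempo(p)) for p in arc]
```

Each run yields slightly different tempo targets while the phase labels and the 0.30 → 0.90 → 0.40 engagement arc stay fixed, which is the trade-off the note describes.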
#phd
Art is the epitome of expression, because no one has asked for it, and as such, the only truth the creator has to pass, or face, is himself. As such, AI is problematic in the general discourse, because if you are problem-solving, you are generally just making money. That's the verification: if you can provide something that someone pays for, then all is good. However, the arts demand something different, and I think there is a sliver of this in all artists. You care about the end result, because the end result was always personal. Always your own, always made because you wanted to. Therefore you can see, hear or feel the bland and the generative in a bland and generative production, and therefore, further, AI is always a tool and never the maker.
#phd
From note #15
Someone makes the tool
Someone makes practical items with the tool as it was intended
Someone has fun with the tool, and this sometimes leads to unexpected places
From note #5
Gear and the rabbit hole of new gear is a continuing source of inspiration and a key element in the search for new music and outputs.
The difference between coding yourself and buying a new box is that the modularity of features, happenings and output becomes greater, and less predefined, for good or worse.
In my situation, it was the next logical step, because I've never liked to press play on a drum machine, or start up a sequencer, and sit back and relate to it as an instrument that has its own value.
Coding the Fibosine, however, is different, because I'm part of the curating all the way through, there is not a nice sound + an unused sequencer in this, there is not a new algorithm + some feature I don't understand. It's a composition, really, and in that sense it's the endgame of hunting for gear. It will take a lot for me to buy new stuff now.
**
At this stage, I build tools—mostly Python scripts—to break free from borrowed frames, GUIs and design decisions made by others.
Coding my own stuff cuts through that. It’s not about pressing play on their machine; it’s piecing my fragments—loops, logic, need—into something I can control.
Constraints still apply: there has to be a task to solve, a boundary of some kind. But within that, I assemble freely—no sequencer I don’t grasp, no feature I didn’t shape. The script is loaded with my intent only.
I often think of something that I think I should write down. Sometimes I do, and sometimes I don't. If I don't, the thought usually disappears, I forget it, and I regret it. At the same time, writing everything down would lead to a situation where the writing-down itself would get in the way of the thinking, so if I focus too much on that, fewer thoughts appear. There is always a trade-off, in other words, between letting the mind roam free and documenting everything. The first belongs to the meta- or possibly quantum plane, the other to the technical and very physical plane.
#phd
Drift Engine // coding as artistic practice & connected issues
https://danieldelaney.net/chat/
Why is this text here?
It’s important to note that figuring things out through practice isn’t limited to artistic research; even in environments that seem logically structured from the outset, some challenges are better addressed through a practice-based approach—hands-on ‘doing’—rather than relying solely on automated services and theory.
1) "Ideas can be generated, no problem" etc
This leaves out the entirety of finalization, and the idea is not yours; the work is a lot harder because you have to acquire the idea, or internalise it, beforehand. Ref. the text in Spirit of Rain, which explores exactly this. That's why it was hard for me personally to "like", or to classify as "proper art".
2) fiction -> fan fiction --> fan fiction fiction
Is this not the same as an artist developing a tool, then that tool becoming popular and accessible, then that tool getting abused, i.e. used for something other than its original purpose, for instance in a context where the user does not share the intentionality of the tool, but treats it as something that opposes what has been established? And further, is the problem now not that for laymen, i.e. everyone outside of the contexts that initially decide on a purpose for something, the fan fiction fiction is no longer distinguishable from the fiction? Since the synthesized fan fiction fiction holds such a high level of "finalization", we are fooled into thinking it needs to be interpreted in the same way as fiction whose level of finalization is fuelled by existing popularity?
3) Ask about the details of the chunk thing: how is AI in a live situation actually solved?
#phd