Where Section 2 explores the digital aftereffects of hybridization, Section 3 does not share the same focus on sound analysis or on investigating the corporeal components of a simulated guitar model. Where the opening section uses source abstraction to draw recorded fragments ever further into an acousmatic world of sound, this part of the composition reciprocates by using a single synthetic guitar model to do things that are stereotypically guitaristic. Here, the main tool used to generate the foreground material is a software instrument from The Synthesis Toolkit called StifKarp, a generic model of a plucked string that can be fine-tuned to sound like an acoustic guitar using programmable controls for the sustain of the string, the strength used to pluck it, and whether it models a brighter (closer to the bridge of the guitar) or darker (further from the bridge) sound. The StifKarp instrument is controlled by custom algorithmic music patterns written in the ChucK programming language, in order to create an automatic sense of melodic/rhythmic development. Relative to the previous sections of the composition, much of the foreground material in Section 3 consists of sounds with a far more discernible sense of rhythm, melody, and harmony. While the music relies on more familiar constructs (e.g., fixed meter, scales) than are often found in electroacoustic compositions for fixed media, the presentation of the guitar in an electroacoustic context is more challenging, owing to the intelligibility of the plucked string sounds.
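The parameters described above map directly onto the control methods that ChucK exposes for the StifKarp unit generator. A minimal sketch follows; the specific parameter values are illustrative assumptions on my part, not settings taken from Obsession:

```chuck
// Minimal sketch: tuning the STK StifKarp plucked-string model in ChucK.
// All parameter values here are illustrative, not those used in the piece.
StifKarp s => dac;

0.6 => s.sustain;          // how long the string rings after the pluck
0.2 => s.pickupPosition;   // nearer the bridge = brighter timbre
330.0 => s.freq;           // fundamental pitch in Hz

// Trigger a pluck at a moderate strength (range 0.0-1.0).
0.8 => s.pluck;
2::second => now;          // let the note decay
```

Adjusting `sustain` and `pickupPosition` in this way is what allows the generic string model to approximate the decay and brightness of an acoustic guitar.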
In traditional acousmatic music, technological intervention and source abstraction work in tandem (e.g., transformation via sound effects, recording sounds with unusual microphone positioning), but in the third section of Obsession, the inherent ambiguity of the physical model already achieves the goal of moving between electronic and guitar sounds. In other words, the progression towards a sense of fidelity in the apparent source material is what drives the composition forward. This relates to the recent coinage post-acousmatic, a term that reflects how fixed media works for loudspeaker playback have critically engaged with the idea of source abstraction and incorporated nontraditional materials into acousmatic music, such as synthesized computer glitches and post-tonal harmonies that are more germane to contemporary acoustic music (Adkins, Scott and Tremblay 2016: 113). A recent presentation at The International Computer Music Conference argued that computer music (a term used for pieces whose sounds stem from computer-synthesized and algorithmic music software rather than acoustic recordings) 'could be defined within the genre boundaries of acousmatic music' – that using digital signal processing to generate musical material can also involve compositional thinking that plays with the causal identity of a sound (Holbrook and Rudi 2022). Obsession uses algorithmic music techniques to suggest that artificiality can be a new way of exploring sound sources in electroacoustic music for fixed media.
Section 3, which lasts from 5:04 until the end of the piece, begins with a kind of cadenza – a device from classical music in which the performer plays in an improvisatory fashion to convey a sense of virtuosity. The music is structured around exchanges between frenetic gestures, with the model processed through audio filters from the above-mentioned GRM Tools Classic plugins. These filters are made to give the illusion of an instrument with unusually large proportions – though this is an approximation rather than the result of any scientific modeling. Occasionally, an especially loud and percussive note is played, signaling an abrupt shift to the use of the StifKarp physical model to algorithmically reproduce specific guitar tropes (e.g., aggressive string bending).
The use of the ChucK programming language creates a web of interconnected musical gestures in which new sounds are triggered by the behavior of older ones. Many of the objects are sequences of notes and rhythms designed to reference sounds associated with more traditional notions of guitar virtuosity (i.e., relying on the strings rather than on components that foreground noise). Examples of these bursts of guitaristic tropes include pentatonic scales (which overwhelmingly dominate guitar solos in rock music), tremolo picking (used in heavy metal and even surf music, where the plectrum moves back and forth at extreme speed to give the illusion of a sustained pitch), exaggerated string bending, diminished chords in parallel motion (highly idiomatic of virtuosic jazz guitar playing), and power chords (essential building blocks for rhythm guitar playing in heavy music). Such references to guitar tropes as objects that can be arranged in an algorithmic music program build on the referential use of high-energy performance techniques in other acousmatic pieces. For example, composer Åke Parmerud's electroacoustic work Growl features the natural distortion of heavy metal screaming vocals, in what the composer describes as a piece for a 'growler's choir' (Parmerud 2015). In part, the use of these tropes as rhetorical devices or sound objects to be reorganized and augmented in software asks what might sound performative or germane to the digital realization of an instrument.
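To make concrete how such a trope can be treated as an object in an algorithmic program, the following sketch renders one of the tropes named above – a pentatonic run articulated with tremolo picking – on the StifKarp model. The scale, tempo, and parameter values are hypothetical stand-ins, not material from the piece:

```chuck
// Hypothetical sketch: a pentatonic run with tremolo picking on StifKarp.
// Scale choice, rhythm, and parameter values are illustrative only.
StifKarp s => dac;

[0, 3, 5, 7, 10] @=> int minorPent[];   // minor pentatonic, in semitones
45 => int root;                          // A2 as a MIDI note number

// Ascend the scale, tremolo-picking each degree.
for (0 => int i; i < minorPent.size(); i++)
{
    // convert MIDI note number to frequency
    Std.mtof(root + minorPent[i]) => s.freq;

    // rapid re-plucking approximates tremolo picking
    for (0 => int j; j < 8; j++)
    {
        Math.random2f(0.5, 0.9) => s.pluck;
        50::ms => now;
    }
}
```

Because each trope reduces to a short block of code like this, the program can trigger, reorder, and layer such gestures in response to the behavior of earlier ones, producing the web of interconnected gestures described above.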
When we listen to a recording, is performativity for a digital instrument model comparable to that of its acoustic counterpart? This question becomes more pertinent as the simulation of real-world sounds grows increasingly accurate with the help of new algorithmic tools. The StifKarp model used in Section 3 has existed for well over a decade, but some digital musicians have started to leverage AI to match the sounds of instruments in a way that feels more authentic. Music theorist Jonathan de Souza discusses how we might hear performativity on an instrument via a two-way connection between the auditory and motor domains. De Souza writes that 'an expert guitarist might imaginatively "hear" the chords in handshapes made away from the instrument. Conversely, the guitarist might imaginatively "feel" performative gestures while listening to guitar music' (2021: 4). Much like the opening gesture of Section 1, the ending of the piece juxtaposes a chord played with pitched tapping sounds against another instance of more exaggerated tapping sounds superimposed onto the real guitar using sound effects processing from the GRM Tools plugins. Rather than aiming for a sense of catharsis by presenting an unaltered guitar recording as the final sound of the piece, I feel that this juxtaposition supports the idea that digital music tools have destabilized conventional notions of instrumentality as tied to physical components and tactile interactions with resonating bodies. The way in which electronic/digital modification has become idiomatic within guitar playing makes the instrument well suited to charting a course for digital instrumentality.

