[EG May 8]
Against technologies of emotional categorization-optimization

Towards technologies of emotional amplification-materialization

 

It is typical to first encounter algorithms as tools that categorize, sort, and manage flows of information. Even a child knows what a search engine does. Search engines are useful because they impose a presumably ideal ranking system upon a contemporary glut of data online. Categorization is what makes communication, sharing, language, and the transmission of knowledge itself possible in an age where content curation and management are practiced in everyday life on equal footing with creation and analysis.

 

It seems fitting in some ways, as categorization comes long before creation: an infant learns to listen to and respond to language much earlier than it can express itself. Categorization by its nature requires human perception of what is categorized, along with a linguistic concept or a set of binary relationships. For example, we note colors appropriate to human sight; sizes are learned through interactions with rulers we can hold in our hands; our songs use scales and harmonic relationships that humans consider audible. Language itself is built from our perceiving bodies, competing with epistemological-cultural forces subject to colonial histories. Western science adopted Latin as its primary language, whereas if history had occurred otherwise, what is called science might be thought through Cree (at least in what is currently known as Canada).

 

So, speaking of Latin, we likely already know that the language and words we use are not neutral infrastructures in relation to knowledge generation in these fields. For instance, Lisa Feldman Barrett speaks to the influence of language on the scientific study of psychology: because the mind is predictive rather than reactive, human bodies use language in concert with memory to create appropriate emotional/physiological responses literally in the moment, as we process our senses and make sense of our own bodies by categorizing. The fact that language both emerges from our bodies and might also confuse our own sense of our bodily truths perhaps presents both a tragedy and a generative gift to the aesthetic lives of humans. I'm not sure computers have the luxury of this problem, as their bodies are almost always transparent to themselves, their languages equally transparent, until their bodies fail and are rendered sense-less.

 

I’m invested in working with communication and the body during a historical moment when great faith, energy, and cultural investment are being placed in the brain, processing, and yes, algorithms. The technophilic capitalist environment promotes fascination with the brain, the rational, and now “learning” as a height of human achievement, and perhaps as an evolutionary justification for the vast amount of wealth and resources poured into this field. Ultimately, users of the internet might have already placed too much faith in sorting algorithms that impose their own genre of super-human “techno-rational-naturalism” upon users: the dark fulfillment of the Enlightenment era’s promise. The dichotomous split between body and brain, emotion and reason, and nature and culture is well entrenched in the cultures I was raised in: indeed, I seem to remember learning English largely through binary relationships such as hot and cold, soft and hard, boy and girl. Feldman Barrett states that the capacity to experience emotion is itself based upon the language we know: going beyond what we might believe to be evolutionarily hard-wired and naturalistic, and shaping the vibrational forces of what it means to inhabit a body itself.

 

My research at ALMAT is part of a long investigation into algorithms and physiological markers of emotion that began during my Master’s thesis project in Studio Arts at Concordia University in 2012. Insisting on embodied experience as the primary consideration, I explored emotional physiology and DIY biosensors equally with musical languages/rules as means of channeling emotional biodata into robotic behavior in Swarming Emotional Pianos. I was interested in creating a robotic system, as opposed to representing the emotional data via sound or visualization software (or both), precisely because of the stubborn corporeality and physical presence of robots in relation to the human body. Machine senses are in some ways a means of approaching the “other”, but are typically a very controlled filter for our own sensory organs: these machine physicalities have always been a fundamental aspect of our own human physicality in the 21st century.

 

My goal was to use sound as a means of illuminating the machine’s sense of the physicality of emotion itself. As someone trained more in media arts than music, I understood that algorithms could be useful for mapping the physiological data to sound in novel ways, but was at a loss for how this could be accomplished.

 

My first instinct was to look at algorithmic music tutorials for Max/MSP, a programming language I was already familiar with. Greeted by numerous arpeggiators, tools, and tutorials that treated music as discrete building blocks to be assembled and reassembled, I found that the introductory techniques all tended to sound “good”, which is to say they were effective at communicating established musical ideas. The optimization of algorithmic music over several years has largely reified dominant modes of musical communication, which fall into simplistic emotional categories (e.g. fast tempos in minor keys are “angry”, slow tempos in major keys are “calming”). I suppose this is because humans do not necessarily trust computers to merely be themselves: a belief that the perceptual mode and language of a computer must be adapted to human scales in order to be made aesthetically useful. Algorithmic music has in some ways developed into a kind of musical Turing test, where human intelligence is seen as a benchmark for machine achievement: a means of demonstrating how humanity can master machine tools to further human aesthetic expression.


As such, I struggled to align my desire to illuminate the physicality of emotions with algorithmic systems based on pre-established models of music developed through engagement with Western socio-linguistic conventions. These conventions refer both to musical structure as well as emotional experiences that emerged from cultural understandings of our own bodies according to our physiological-cultural perception. But in the same way that I cannot divorce myself from my own body, one cannot completely abandon language and communication - only attempt new strategies. In many ways I think about how my practice both seeks to destroy these strategies through an impulse to “take up space”, and foreground its own problematic embodiments and contradictions.


I didn’t like these tools because I didn’t just want the emotions to “sound good”: I wanted to discover a way of processing music through the body. This isn’t unrelated to histories of monocultural participation in Western music practice, but once again, as a white woman, I should not be the loudest voice on this particular topic. After struggling with beats and various scale patterns imposed by tools presented to me both online and by people in my immediate community, I eventually felt that even the imposition of rhythmic interest was too sensational and distracting to be useful. I resorted to “breaking” the system, programming my Euclidean beat generators and Western harmonic melodic patterns in decidedly abstract ways. A very useful failure.
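For context, a Euclidean beat generator distributes a given number of onsets as evenly as possible across a cycle of steps. A minimal sketch of the idea, written here in Python purely for illustration (my own generators were built as Max/MSP patches), might read:

```python
def euclidean_rhythm(onsets, steps):
    """Distribute `onsets` hits as evenly as possible over `steps` slots.

    This floor-division formulation yields a rotation of the classic
    Bjorklund pattern: a step becomes a hit whenever the running
    quotient (i * onsets) // steps advances.
    """
    pattern, prev = [], -1
    for i in range(steps):
        cur = (i * onsets) // steps
        pattern.append(1 if cur != prev else 0)
        prev = cur
    return pattern

# the familiar tresillo-like figure:
# euclidean_rhythm(3, 8) → [1, 0, 0, 1, 0, 0, 1, 0]
```

“Breaking” a generator like this can be as simple as modulating its parameters faster than any groove can stabilize, dissolving the pattern into abstraction.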

 

After this stage I moved on to FM synthesis, in strict avoidance of laptops. I draw an analogy between the underappreciated physicality of the body and my use of microcontrollers: while microcontrollers such as the Teensy are not without processing power (or “minds”), they are relatively mindless, with little capacity for memory compared to the laptop computers that most algorithmic music is made on nowadays. I’ve always been against using a laptop to run the processing algorithms that map the physiological data into sound, perhaps also because visually or conceptually, a performance where people “pour their emotions” into a literal computer can be interpreted as rather mundane and everyday in an age of constant connection and social media. I’m interested in the direct 1:1 relationship between the tiny physiological signals and the use of hardware FM synthesis to generate micro-rhythms and clicks in sound. I’ve always been interested in how these physical relationships could affect and interrupt one another in space, and in how the sounding of these tones interacts with the environment (or room) to generate the final experience.
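To make the 1:1 mapping concrete: in two-operator FM, a carrier sine’s phase is modulated by a second sine, and the modulation index controls how rich the resulting spectrum becomes. A sketch of driving that index directly from a hypothetical, normalized physiological stream — in Python for illustration only, not my actual Teensy code — could look like:

```python
import math

def fm_sample(t, carrier_hz, mod_hz, index):
    # two-operator FM: the carrier's phase is pushed around by a sine
    # at mod_hz; `index` (the modulation index) sets sideband richness
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

def render_fm(bio_stream, sr=8000, carrier_hz=220.0, mod_hz=110.0):
    # hypothetical mapping: each normalized biosignal value in [0, 1]
    # is applied 1:1 as the modulation index, so timbral roughness
    # follows the physiological signal sample by sample
    return [fm_sample(n / sr, carrier_hz, mod_hz, index=5.0 * b)
            for n, b in enumerate(bio_stream)]
```

The names `render_fm` and `bio_stream` are illustrative inventions; the point is only that the biosignal touches the timbre directly, without an interpretive layer deciding what the data “means” first.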

 

I developed the BioSynth as a means of exploring FM synthesis and amplifying the movements of the data streams themselves, pondering the micro-rhythms and physical structures of larger wholes I take for granted in listening experiences.  In this way I feel that I’m in a process of moving away from algorithms as a means of categorization and increasingly towards algorithms as means of physical amplification of biological data.

 

Even though I’m interested in amplifying the biodata, I have never been interested in “presenting” the data on graphs during performances. I wouldn’t want to promote any audience’s (learned) instincts to predict and interpret the data according to narrative or statistical categorization. Scientists themselves have been struggling with emotional categorization in relation to valence, stress, and arousal, and are compelled to present research in terms of efficacy in algorithmic prediction (e.g. “in research studies our algorithms have performed at 98% accuracy”). I’m most interested in the research of Lisa Feldman Barrett for these reasons, as it addresses the role of aesthetic symbols in emotional physiology, which many assume to be an evolutionarily logical or empirical process. Partnerships with machine perspectives, languages, and analyses in ages of big data seem promising, but the question remains how to avoid the pitfalls of our own languages of communication and thought. Through my investment or non-investment in emotional-aesthetic tropes, I realize that algorithms that map and categorize re-create the failed divisions and categorizations that science itself struggles with in emotional physiology. In particular I am intrigued by how generative these ideas have been in concert with ideas briefly communicated to me during the ALMAT conversations: Herbert Brün’s theories of communication, and artistic strategies of anti-communication. So my challenge to myself is to move away from modes of interpretation and instead to emphasize materiality. This is not to say I’m satisfied with simple sonifications, but rather that I want to develop algorithms that amplify and demonstrate complex interactions of signals with one another as feedback builds between simple physiological elements: much the same as these entangled elements create the grey and noisy zones of emotion that humans experience every day.
It is through forms of anti-communication, or resisting the urges to map, that I might come across more satisfying ways of developing embodied algorithmic music, rather than algorithmic music that emerges from my pre-established ideas of emotions as inhabiting the body.

 

This withdrawal from my own agency as a composer working in algorithms is made in an attempt to bypass rational impulses or habits: “the body knows best” might be a bad mantra, but I recognize that my body operates thanks to my sensing brain. I am still working with creative feedback loops and mapping, but I am imposing less and less categorical expectation on the data. My attention instead is focused on infrastructures and embodiments: precisely, the biosensors as tools of data collection, designed for human comfort. Designing sensors specifically for the bodies I anticipate: is there a way to compose the bodies themselves? Previous conversations have elaborated upon my interest in ASMR as a means of body-programming, but I’m not sure I’ll be able to explore this fully in the scope of this residency. As noted by Rob Gallagher (2016), ASMR has a curious relationship to this approach of body-centric rationality: at the beginning of the genre there was no “ASMR video”, only a community of users who observed their physiological reactions to media materials they encountered online, materials never intended as ASMR. In this way, the genre emerged from embodied networks that “upvoted” and ranked both original and preexisting digital artifacts online, forming a larger social algorithm for quality control and super-communication. Out of the noise of the participatory Web 2.0, ASMR enthusiasts collaboratively developed the genre through pseudo-scientific means of ranking, categorization, and the curiously binary term of “triggering” (almost like a bang or a boolean). Despite all the contradictions in ASMR, particularly its blend of the scientific with new age and self-help content, I delight in the irony that ASMR itself is very irrational and frustrating to most observers: it presupposes a physiological relationship that may or may not be present in the viewer.
The sounds of a typical ASMR video appear to value noise over signal (or noise as signal) in relation to an intimacy that may or may not have narrative content, and finally to acknowledge subjective physiological reaction as a dominant means of judging ‘quality.’ As sound theorist Douglas Kahn noted in his book Noise Water Meat (1999), flaws and imperfections are an essential part of the desire and intimacy proposed by genres such as ASMR: it is through the amplification of this intimate “noise” (clicks, saliva pops, and tongue clicks) that one bears witness to the building blocks of intimacy without content.


I am equally fascinated by the status of ASMRtists as creators: largely female producers of material based on emotional labor and binaural audiophilia, they sit in a tension between their status as “composers” of novel sonic material and as products of an online movement that privileges adherence to statistically ‘successful’ ASMR formulas. This “optimization” of ASMR has occurred quickly enough to render the entire movement suspicious. Top ASMRtists produce almost mechanically, measured by and measuring the infrastructural rankings of YouTube itself, paid through the clicks and comments of networked body-viewers. While the original intent of such ranking on Reddit appeared to be to categorize and define a niche bodily phenomenon, exploring what “worked” and what didn’t in order to optimize the potential for instigating “tingles” in the viewer, the dark side of this algorithmic force is that it also reproduced dominant media norms. The technical barrier of entry to ASMR is as low as owning a smartphone, so one could imagine ASMRtists originating from any corner of the globe, belonging to any gender or age group (so long as the creator has access to a quiet, undisturbed space, which is in itself a luxury). Despite this potential for diversity, the seemingly uncontroversial nature of bodily response and affective preference in online viewers has largely reproduced the dominance of white, young women as the most influential and viewed in the genre. This demonstrates how the seemingly neutral instrumentalization of the audience-as-body tends to reify racist and heteronormative ideals. Indeed, bodies are never mere bodies divorced from their cultural context.


In the same way that I was drawn to create robotic systems as counterintuitive digital media for emotional representation, I’m similarly drawn to these feminized ASMR noise makers as subjects on the fringe of institutional attention in sound studies; producers who collaborate equally with the algorithmic/social aspects of their online communities while developing individual creative practices through attention to the (largely imagined) embodiment of the abstracted other. I take note of this genre of sound as I continue to develop my own collaborations with bodily reaction and algorithmic expression, fermenting in my mind as I work in this more institutional environment and write academic papers about giving more power to the materiality of the biodata through algorithmic amplification and feedback. To me, ASMRtists represent both a promise and a problem of algorithmic collaboration and communal involvement in individual creative practice, as well as of reliance on embodiment without observing the political entanglement of pleasure with social context.


There have been many conversations during the ALMAT residency about the influence of algorithmic tools and infrastructures upon the ultimate aesthetics of creation, but I want to speak about the role of the internet and our sharing communities as agents that influence what tools are available to researchers as they explore new languages.


The development of new biosensor tools divides my attention in this residency between what I term “open source hardware mornings” and “music software afternoons” (a dichotomy I find useful but also humorous). I argue that this engagement with the open-source community is an important conceptual part of my practice. Outside of educational institutions, many artists (such as myself!) are learning about algorithms through online tutorials and communities of lab users; these social connections are tools for thinking and forming aesthetic research as individuals. As a means of learning, one begins by imitating and adopting others’ tools, and moves forward in order to understand their capacities. That being said, building your own tools and instruments is a luxury for those who already possess powerful knowledge in technical domains that are statistically homogeneous in their maleness and whiteness. This is not necessarily a problem, as even engagement with black boxes can form a basis for productive anti-communication if performed critically. But conversely, there is a strong cultural force that rewards artists and technologists who develop technology as a super-communicative and dazzling array of tools for truth, equality, or entertainment; this includes the feel-good drama of hyper-communicative technology rendered cartoonishly Orwellian in science fiction and some media arts. These tools are software, artistic works, career opportunities, open-source tutorials, and networks of users that are very helpful, and necessary, if you are to communicate or use technology at all. A close-knit community of users can optimize processes and yet also reproduce sameness, whereas developing technologies of communication (or anti-communication) independently or in smaller communities of artists might be a slow, confusing, or isolating process. All the same, it is this diversity among bodies that brings about the continuous labor that democracy demands.
In this way there isn’t absolute comfort in open source culture as a provider of educational tools online, as the most popular tutorials and practices gather more users who solidify and contribute to these processes (think of the dominance of Arduino microcontrollers). To this effect, open source and independent software tools can also become normative through their user bases (think of the maker movement and its dependency on SEO rankings through crowdfunded products and advertisement campaigns). To quote Audre Lorde from her essay “The Master’s Tools Will Never Dismantle the Master’s House” (1984), it is only under conditions of difference that the “necessity for interdependency” becomes unthreatening: “Only within that interdependency of different strengths, acknowledged and equal, can the power to seek new ways of being in the world generate, as well as the courage and sustenance to act where there are no charters.” While Lorde was using these words to criticize the tools of academia in feminist practice, I see an application too for algorithmic tools in their social function.


In this way, algorithms for creative use are typically first filtered through online cultural networks and their attendant search engines, and first exposure to tools and aesthetic works can have a strong effect on the imagination of what an algorithm does or can do. The influence and importance of tutorials and open source technologies is why I am invested in developing these tools as part of my practice. The inexpensive biosensor interface and the open source Arduino library I’m developing do not in themselves solve any problems, but I feel it is a statement to meaningfully contribute to difference by making my tools as accessible as possible and letting them mix in with other open source communities. These contributions lend themselves to the possibility of difference in ways that I know will enrich my own practice, as I enable other artists to more easily explore the same things that I do.

 

I work with emotion and the body as a form of resistance against what I see as communicative tendencies and assumptions in algorithmic culture: with each work I develop I aim to expose different margins, noise, irrational behavior, generosity, as well as develop accessible, friendly, and transparent materials so someone else who might not already be in the social circle or shared studio space or electroacoustic music degree program can do it too. The conversations at ALMAT have been generative for emphasizing the role of feedback in design as well as how nature and culture, mind and body, society and individual, are neither undifferentiated nor dichotomous entities.


RESEARCH


  1. Development of a new hardware sensor tool (thermistor-respiration)

  2. Production of a new PCB that eliminates dependency on Teensy microcontroller - more shareable design

  3. Development of Respiration library and filtration tools for BioData library

  4. A temporary abandonment of Teensy microcontrollers to go back to fundamental connections. The Teensy is a great body, but Max/MSP does allow me to compose-experiment more quickly. I’m reserving the strategies I use in Max/MSP to those that I know will translate back to the memory-less Teensy. So I’m exploring these notions of “feedback” and “filtration, not categorization” right now just with the heart sensor, but will add the other signals next week.

  5. Filtration and exploration of subtractive FM synthesis are a bit new to me so I’m also looking up how various filters “work”.  This notion of filtration is really interesting to me as a means of “amplification” or “zooming in” on the data itself. The modes of sonic exploration just seem more in tune with the materiality of the data/body itself now.

  6. Some exploration with the physicality of a small transducer applied to the head as vehicle for new sounds

  7. I’m curious to experience/understand David’s compositional strategies more to see how they might influence my programming.  

  8. I suspect that after feedback from ALMAT participants I will start a translation of the Max/MSP code back into the Teensy environment to see what its processing limits are, and whether I can use multiple Teensy processors to create novel sounds in chorus.

  9. I need to determine if the Teensy’s limitations are interesting enough to continue exploring or if I should move on to the Bela or Rpi in order to multiplex more sensors into a single processor.
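The “filtration, not categorization” idea in items 4 and 5 can be sketched very simply: a one-pole low-pass follows the slow trend of a signal, and subtracting that trend out leaves — and effectively amplifies — the micro-variations, zooming in on the data rather than sorting it into bins. Again in Python as an illustration, not my working Max/MSP or Teensy code:

```python
def one_pole_lowpass(samples, alpha=0.1):
    # simple exponential smoothing: y[n] = y[n-1] + alpha * (x[n] - y[n-1]);
    # smaller alpha means a slower, smoother trend line
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def micro_detail(samples, alpha=0.1):
    # subtracting the smoothed trend leaves the small, fast fluctuations:
    # filtration as a way of zooming in on the data rather than categorizing it
    low = one_pole_lowpass(samples, alpha)
    return [x - l for x, l in zip(samples, low)]
```

The same structure translates directly to a memory-less microcontroller, since each output sample depends only on the previous filter state and the current input.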

Reading this back, I think it's more an explanation of how things are coming together in my head during the residency. I laugh a little at how repetitive it is (difference = good!), but it's thinking not only about algorithmic tools but about how the ideas that shape the tools got there in the first place...

 

{function: comment}

---
meta: true
author: EG
function: brainstorming
date: 180508
keywords: [categorization, optimisation, amplification, materialization, residency, planning, overview, emotions, physicality, BioSynth, ASMR, algorithms]
---