Mapping as Conceptual Framing in Parameter Mapping Sonification


 

Standard approaches to PMSon can work quite well when the data represented contains a few orthogonal dimensions (i.e. each data dimension varies independently) and those dimensions are mapped to a small number of orthogonal sonic parameters. Guillaume Potard’s “Iraq Body Count” sonification is frequently cited as an effective example (Barrass and Vickers 2011). It consists of three streams of data: military deaths, civilian deaths, and crude oil prices during the first year of the invasion of Iraq. The data for civilian and military deaths are mapped to short impulses with unique timbres, while the oil price modulates the pitch of a continuous tone. While the mapping strategy here seems quite straightforward, the interaction between these axes produces a richly discursive sonic structure. Denis Smalley's (1997) theory of spectromorphology might shed further light on the sonic dynamics at play in Potard's mapping strategy. Spectromorphology is a descriptive framework for electroacoustic music consisting of detailed categorization schemes derived from basic gestural shapes (called primal gestures), which are extended to add a meaningful low-level organizational structure to musical domains. In Potard’s piece, the mappings for the civilian and military deaths result in two streaming (i.e. perceptually segregated) sound shapes defined by discontinuous textural motion. This textural motion takes the form of a turbulent growth process driven by a movement between note, when individual data points can be heard, and noise, when the data values increase to create a cloud of sound. The oil price appears as a rich spectral contour whose internal texture contains multiple tonal centers that are not quite consonant with one another, giving this sound a sense of disharmony and instability.
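The basic shape of such a mapping strategy can be sketched as a pair of functions that translate data values into synthesis parameters. The following Python sketch is a hypothetical illustration only, not Potard's implementation: the function names, the per-day time window, and the price and frequency ranges are all assumptions chosen for clarity.

```python
def linear_map(value, in_lo, in_hi, out_lo, out_hi):
    """Scale a data value from its input range to a synthesis-parameter range."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)


def deaths_to_impulses(daily_deaths, seconds_per_day=1.0):
    """Map each day's death count to that many short impulse onsets (one
    perceptual stream per data series), spread evenly across the day's
    time window.  As counts grow, the onsets crowd together -- the
    note-to-noise motion described above."""
    events = []
    for day, count in enumerate(daily_deaths):
        for i in range(count):
            onset = day * seconds_per_day + (i / max(count, 1)) * seconds_per_day
            events.append(onset)
    return events


def oil_price_to_pitch(price, lo=20.0, hi=40.0, f_lo=110.0, f_hi=440.0):
    """Map a crude-oil price (USD/barrel; range is an assumption) to the
    frequency of a continuous tone."""
    return linear_map(price, lo, hi, f_lo, f_hi)
```

The point of the sketch is the independence of the two mappings: each data stream drives its own sonic parameter, yet, once rendered, the streams interact perceptually in ways the mapping functions themselves do not encode.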
Adopting Gary Kendall's (2010, 2014) thinking on meaning-making in electroacoustic music, the listener’s embodied experience and cultural context come into play when interpreting such sounds. In the case of the “Iraq Body Count” sonification, the transient sounds representing deaths may sound, to the contemporary listener, like digital “glitches.”

 

The practice of making creative use of glitching (transients introduced by errors in digital audio systems) emerged in the late 1980s and early 1990s in the works of electronic musicians (e.g. Oval and Pan Sonic) who, appropriating ideas from earlier experimental musicians such as John Cage, were exploring the concept of “failure” (Cascone 2000). We hear this exploration continued today in the work of artists like Christian Fennesz and Ryoji Ikeda. In terms of the materiality of the medium, glitch’s sudden discontinuities point to a rupture, a sometimes-violent transition, possibly indicating failure or, in more neutral terms, a “breaking” of continuity. The cultural framing of the affordances of a “cracked” or breaking medium (cf. Kelly 2009) underlines the role of discontinuities in sound as related to a foregrounded sound event which “breaks out of” a comfortable continuity. In this context, the departure from a stable structure becomes both perceptually and structurally salient (and, hence, a useful aesthetic strategy in a sonification context) and, via the eruption of the apparent materiality of the medium and its cultural associations, might also contribute to an emotional narrative: a sense of growing failure and/or brokenness. From this perspective, Potard's mapping strategy is richer and more nuanced than it might sound on the surface.

AudioObject 1: “Iraq Body Count”


In a medical care context, Tecumseh Fitch and Gregory Kramer (1994) developed an effective auditory display solution which allowed listeners to monitor and respond to indicators of complications across five continuous and three binary physiological variables. These medical complications can be identified on the basis of interactions between the streams of data. Rather than mapping this complex data to eight discrete sonic dimensions, the display instead featured two independent auditory streams with mappings that affected various aspects of their articulation. These streams were designed to act as realistic-sounding (and thus, familiar) sonic referents – 1) a heartbeat signal and 2) a breathing signal – and were mapped to heart rate and breathing rate respectively. The other variables were “piggy-backed” onto these base streams. Atrioventricular dissociation and fibrillation were mapped to modulate the heart signal in the same way these factors modulate a heartbeat in the real world. Four other mappings (body temperature to a filter applied to the heartbeat sound, blood pressure to the pitch of the heart sound, CO2 level to the brightness of the heart sound, and pupillary reflex to a high-pitched tone) were arbitrary and thus required learning on the part of the listener. Fitch and Kramer suggested that this approach allowed users to identify medical complications more effectively than a visual display, due to the parallel processing of the auditory streams. However, it might also be the case that having two streams of data based on such familiar (and associated) sounds, namely heartbeat and breath, made it easier for listeners to interpret the display. When there are no obvious sonic referents for the data that is to be represented in an auditory display, the mapping problem may become more pronounced, and the listener may find it increasingly difficult to interpret the system.
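The “piggy-backing” of secondary variables onto a base stream can be sketched as a single function that derives one stream's synthesis parameters from several data inputs at once. The following is a hypothetical Python illustration, not Fitch and Kramer's code: the physiological ranges, parameter names, and output ranges are all assumptions chosen to make the structure visible.

```python
def linear_map(value, in_lo, in_hi, out_lo, out_hi):
    """Scale a data value from its input range to a synthesis-parameter range."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)


def heart_stream_params(heart_rate, blood_pressure, body_temp, co2_level):
    """Derive synthesis parameters for the single heartbeat stream from
    four physiological variables (all ranges are illustrative assumptions)."""
    return {
        # heart rate (bpm) sets the pulse interval of the base stream
        "interval_s": 60.0 / heart_rate,
        # blood pressure (systolic mmHg) -> pitch of the heart sound
        "pitch_hz": linear_map(blood_pressure, 80.0, 180.0, 60.0, 120.0),
        # body temperature (deg C) -> low-pass filter cutoff on the heart sound
        "cutoff_hz": linear_map(body_temp, 34.0, 41.0, 400.0, 2000.0),
        # CO2 level (%) -> spectral brightness of the heart sound
        "brightness": linear_map(co2_level, 4.0, 8.0, 0.0, 1.0),
    }
```

The design point is that four data dimensions share one auditory stream: the listener tracks a single familiar sound object whose articulation carries the secondary variables, rather than four unrelated sonic dimensions.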
We suggest that incorporating and addressing concepts from embodiment in general, and embodied cognition in particular, can help provide such referents for auditory displays. The following section explores some of these concepts and their implications in greater detail.