Introduction: Sonic Information Design and the Mapping Problem
Sonic information design refers to the application of design research, as defined by Trygve Faste and Haakon Faste in 2012, to sonification, an auditory display technique in which data is mapped to non-speech sound for the purpose of representation or communication. In this context, design research is taken to involve “the process of knowledge production that occurs through the act of design” (Faste and Faste 2012). Where visual information design is generally concerned with the presentation of information in a manner that can be effectively and efficiently understood (Horn 1999), sonic information design “pays particular attention to user experience including physical, cognitive, emotional, and aesthetic issues; the relationship between form, function, and content; and emerging concepts such as fun, playfulness and design futures” (Barrass and Worrall 2016).
It is within this expanded context that a key challenge for effective parameter mapping sonification (PMSon) – the mapping problem – might be addressed. PMSon is a sonification technique in which data is mapped to auditory parameters such as pitch, amplitude, duration, or timbre in order to communicate the data to a listener (Grond and Berger 2011). The mapping problem was first identified by John Flowers (2005), who noted that, in his experience, meaningful information does not necessarily arise naturally when complex data sets are submitted to sonification. This point is echoed by Florian Grond and Thomas Hermann (2012), who observe that because a given sound can be interpreted in a number of ways, and because sonifications tend to represent phenomena that have no natural sonic referent, sonification often risks producing a perceptual experience too arbitrary in its form and dynamics to be clearly understood. From this point of view, the mapping problem is framed as a question of how a sonification or auditory display conveys an intended meaning to a listener. It is this particular aspect of the mapping problem (i.e. its conceptual framing) that concerns us in this article. Successfully addressing the problem may therefore involve reducing the arbitrariness of data-to-sound mapping strategies and considering whether certain parameter mapping approaches imply framing relationships that actually work against the structural dynamics of a given data set. In this regard, David Worrall (2013; 2014) argues that the mapping problem is further entrenched by some of the software tools used for sonification research and practice. Many of these tools (e.g.
SuperCollider, CSound, Max) are borrowed from the field of computer music and are designed to control sound in terms of the parameters of Western tonal music, which, as Trevor Wishart (1996) argues, reduces the rich multi-dimensional spectra of musical discourse to just three primary dimensions: pitch, duration, and timbre. These parameters, Worrall argues, fail to account for embodied aspects of sonic discourse that he sees as critical to meaning-making, such as the micro-level gestural inflections that instrumentalists employ to add layers of performative punctuation – and, hence, information about structural or dynamic features – to a musical performance. Much as a successful Western classical music performance is not simply a mechanistic rendering of the pitch and timing information in a musical score, successful sonic information design may involve considering how the data are framed by the performative punctuation of the mapping strategy.
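To make the idea of parameter mapping concrete, the following is a minimal sketch of a PMSon-style mapping in Python. It is illustrative only: the function names (`linmap`, `pmson`) and the chosen pitch and amplitude ranges are assumptions for the sketch, not part of any tool or study discussed above, and a linear mapping of this kind is precisely the sort of strategy whose arbitrariness the mapping problem calls into question.

```python
def linmap(value, lo, hi, out_lo, out_hi):
    """Linearly rescale value from the range [lo, hi] to [out_lo, out_hi]."""
    return out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo)

def pmson(data, pitch_range=(48, 84), amp_range=(0.2, 0.9)):
    """Map each datum to a (MIDI pitch, amplitude) pair.

    Pitch and amplitude ranges are arbitrary design choices: the data's
    minimum maps to the low end of each range, its maximum to the high end.
    """
    lo, hi = min(data), max(data)
    events = []
    for x in data:
        pitch = round(linmap(x, lo, hi, *pitch_range))   # quantize to MIDI note
        amp = round(linmap(x, lo, hi, *amp_range), 2)    # normalized loudness
        events.append((pitch, amp))
    return events

print(pmson([0.0, 2.5, 5.0, 10.0]))
# → [(48, 0.2), (57, 0.38), (66, 0.55), (84, 0.9)]
```

Note that nothing in the data dictates these choices: mapping the same series to duration instead of pitch, or inverting the amplitude range, would be equally "valid" mappings, which is one way of stating the mapping problem.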