Sonifying for Public Engagement: A Context-Based Model for Socially Relevant Data
In this paper we discuss the possibility of designing sonification as a tool for public engagement with socially relevant data. We do so through a case study of a specific sonification model and the results of participatory focus groups discussing our own and similar sonifications of “social” data. First, we report on a unique, contextually sensitive approach to sonifying a subset of climate data: urban air pollution for four Canadian cities. Like other data-driven models for sonification and auditory display, this model details an approach to data parameter mappings; however, we specifically consider the context of a public engagement initiative and reception by an “everyday” listener as core principles informing our design. Further, we present an innovative model for FM index-driven sonification that rests on the notion of “harmonic identities” for each air pollution parameter sonified, allowing us to sonify more datasets in a perceptually “economical” way. Finally, we discuss usability and design implications for sonifying socially relevant information, based on user evaluation of our design and open-ended discussion from two small-scale participatory focus groups.
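The general idea of FM index-driven sonification with per-parameter “harmonic identities” can be sketched as follows. This is a minimal illustration, not the authors’ implementation: the pollutant-to-ratio assignments, carrier frequency, and index range are hypothetical values chosen for the example. Each pollutant keeps a fixed carrier:modulator ratio (its harmonic identity, so its timbre stays recognizable), while the pollution level drives the FM index and hence spectral richness.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def fm_tone(carrier_hz, ratio, index, dur=1.0, sr=SR):
    """Simple FM oscillator: the carrier:modulator ratio fixes the
    harmonic spectrum (the 'harmonic identity'); the index sets brightness."""
    t = np.arange(int(dur * sr)) / sr
    mod = np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * mod)

# Hypothetical pollutant-to-ratio assignments (not the paper's values):
# integer ratios keep each pollutant's spectrum harmonic and distinct.
HARMONIC_IDENTITY = {"NO2": 1.0, "O3": 2.0, "PM2.5": 3.0, "SO2": 4.0}

def sonify_reading(pollutant, level, level_max, carrier_hz=220.0):
    """Map a pollutant level to an FM index in 0..8: higher pollution
    yields a richer, harsher spectrum for that pollutant's identity."""
    index = 8.0 * min(level / level_max, 1.0)
    return fm_tone(carrier_hz, HARMONIC_IDENTITY[pollutant], index)

tone = sonify_reading("PM2.5", level=35.0, level_max=100.0)
```

Because each pollutant’s ratio is fixed, several data streams can share one sonic scene while remaining separable by timbre, which is one way to read the abstract’s perceptually “economical” framing.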
The Planetorium: Sonic Information Design for Earthling Audiences
R. Michael Winters, Avrosh Kumar
For millennia, humans have looked into the night sky and wondered at what they saw. This curiosity led them to develop instruments that enabled them to look deeply into space, which revealed celestial bodies in ever-greater detail. They developed new mathematics and physics to better understand and predict the cosmos and began to teach each other, pointing to what they knew to be true. Sadly, as the objective facts and figures poured in with ever-increasing precision, the oral traditions of myths, stories, and gods began to fade from view.
In the past century, some humans have worked to cultivate a new technology that would allow humans to use their ears to hear the objects that, before, they could only see. By transforming numbers into sound, humans could listen again, which made those who could hear it very happy. However, others did not comprehend it, became confused, and began to ask one another: How can I make sense of something I cannot see? How do I know what I am hearing is true? Is this some kind of music, or something else altogether?
In this paper, we use “The Planetorium” as a metaphor and context for understanding how we might best design aural celestial experiences for earthling audiences. Audiences are groups of individual humans all listening to the same thing at the same place for some length of time. And while planetariums are places they go to learn about space through stunning views and new perspectives, planetoriums (like auditoriums) are places they go to listen.
Informative Sound Assists Timing in a Simple Visual Decision-Making Task
In this study, we examined the design of informative sound to assist timing in a simple, multimodal game task. The game was initially designed as a series of eight-second visual decision-making tasks. Players could respond quickly, with more risk, or wait longer for more information, thus reducing their risk of being incorrect. The player was scored by counting the number of correct attempts completed in a five-minute interval. Waiting longer made the task easier but reduced the number of opportunities to repeat the task within the time limits of a game. In general terms, the game was designed to examine how a player balances risk and reward. However, the visual version of the game introduced the unintended risk of a “safe” player waiting too long, timing out, and failing to respond at all, which unbalanced the risk-reward structure of the game. This study reports on the subsequent design and evaluation of a simple informative sound added to the game, intended to address this time-out issue without affecting other aspects of player performance. A within-subject experiment with 48 participants measured the response time, number of timeouts, and success rate of each player in a sequence of tasks. We measured player performance under three conditions: no sound (visual-only), constant (non-informative) sound, and increasing-amplitude (informative) sound. We found that the increasing sound display significantly reduced timeouts compared with the visual-only and constant-sound versions of the task. Importantly, this reduction in timeouts did not impair players’ success rates or response times.
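An increasing-amplitude informative cue of this kind can be sketched in a few lines. The frequency, cue duration, and linear ramp below are illustrative assumptions, not the study’s actual stimulus parameters; the only property taken from the abstract is that loudness grows as the eight-second decision window runs out.

```python
import numpy as np

SR = 44100
TASK_SECONDS = 8.0  # length of each decision window in the game

def urgency_cue(elapsed, task_len=TASK_SECONDS, freq=440.0, dur=0.1, sr=SR):
    """A short tone whose amplitude grows linearly with elapsed task time,
    signalling how close the player is to timing out. (Illustrative values:
    the study's actual frequency and ramp shape are not specified here.)"""
    gain = min(elapsed / task_len, 1.0)  # 0 at task start, 1 at timeout
    t = np.arange(int(dur * sr)) / sr
    return gain * np.sin(2 * np.pi * freq * t)

quiet = urgency_cue(elapsed=1.0)  # early in the task: low amplitude
loud = urgency_cue(elapsed=7.5)   # near timeout: almost full amplitude
```

A constant (non-informative) control condition corresponds to playing the same tone with a fixed gain regardless of elapsed time.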
Sonic Information Design for the Display of Proteomic Data
William L. Martens
A research project focusing on the sonification of proteomic data distributions provided the context for the current study of sonic information design, which was guided by multiple criteria emphasizing practical use as well as aesthetics. In this case, the sonifications would be judged useful if they enabled listeners to hear differences in proteomic data associated with three different types of cells, one of which exhibited the neuropathology associated with Amyotrophic Lateral Sclerosis (ALS).
A primary concern was to ensure that meaningful patterns in the data would not be lost as the data were transformed into sound, and so three different data sonifications were designed, each of which attempted to capitalize upon human auditory capacities that complement the visual capacities engaged by more conventional graphic representations. One of the data sonifications was based upon the hypothesis that auditory sensitivity to regularities and irregularities in spatio-temporal patterns in the data could be heard through spatial distribution of sonic components. The design of a second sonification was based upon the hypothesis that variation in timbre of non-spatialized components might create a distinguishable sound for each of three types of cells. A third sonification was based upon the hypothesis that redundant variation in both timbral and spatial features of sonic components would be even more powerful as a means for identifying spatio-temporal patterns in the dynamic, multidimensional data generated in modern proteomic studies of ALS. This paper will focus upon the sound processing underlying the alternative sonifications that were examined in this case study of sonic information design.
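The third design, redundant variation in both timbre and space, can be illustrated with a small sketch. This is not the study’s signal processing: the stereo panning law, base frequency, and brightness-by-harmonics mapping are assumptions chosen to show the idea of one data value driving two perceptual dimensions at once.

```python
import numpy as np

SR = 44100

def cell_component(value, dur=0.5, sr=SR):
    """Redundantly map one normalized data value (0..1) to both timbre
    (brightness via added harmonics) and space (stereo pan position).
    Mapping ranges are illustrative, not the study's actual design."""
    t = np.arange(int(dur * sr)) / sr
    f0 = 220.0
    # Timbre: higher values add stronger upper harmonics (brighter sound).
    mono = np.sin(2 * np.pi * f0 * t)
    for k in (2, 3, 4):
        mono += value * (1.0 / k) * np.sin(2 * np.pi * k * f0 * t)
    mono /= np.max(np.abs(mono))
    # Space: higher values pan further right (equal-power panning).
    pan = value * (np.pi / 2)  # 0 = hard left, pi/2 = hard right
    return np.stack([mono * np.cos(pan), mono * np.sin(pan)])

dull_left = cell_component(0.0)     # dull timbre, hard left
bright_right = cell_component(1.0)  # bright timbre, hard right
```

The first two designs in the abstract correspond to using only one of these two mappings (space alone, or timbre alone) rather than both redundantly.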
Sonic Information Design
The International Community for Auditory Display (ICAD) is a multidisciplinary community that includes researchers with backgrounds in music, computer science, psychology, engineering, neuroscience, and the sonic arts. Although this multidisciplinarity has been beneficial, it has also been the cause of clashes between scientific and artistic research cultures. This paper addresses this divide by proposing design research as a third and complementary approach that is particularly well aligned with the pragmatic and applied nature of the field. The proposal, called sonic information design, is explicitly founded on the design research paradigm. Like other fields of design, sonic information design aspires to make the world a better place, in this case through the use of sound. Design research takes a user-centered approach that includes participatory methods, rapid prototyping, iterative evaluation, situated context, aesthetic considerations, and cultural issues. The results are specific and situated rather than universal and general, and may be speculative or provocative, but should provide insights and heuristics that can be reused by others. The strengthening and development of design research in auditory display should pave the way for future commercial applications.
Editorial: On Sonic Information Design
Sonic Information Design refers to the design of sounds to provide useful information in applications that have impact on our daily lives. The articles in this special issue of the Journal of Sound Studies on Sonic Information Design had their origins as responses to the theme of the 22nd International Conference on Auditory Display, held in Canberra, Australia, in 2016.
Addressing the Mapping Problem in Sonic Information Design through Embodied Image Schemata, Conceptual Metaphors, and Conceptual Blending
This article explores the mapping problem in parameter mapping sonification: the problem of how to map data to sound in a way that conveys meaning to the listener. We contend that this problem can be addressed by considering the implied conceptual framing of data-to-sound mapping strategies, with a particular focus on how such frameworks may be informed by embodied cognition research and theories of conceptual metaphor. To this end, we discuss two examples of data-driven musical pieces that are informed by models from embodied cognition, followed by a more detailed case study of a sonic information design mapping strategy for a large-scale Internet of Things (IoT) network.
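The baseline that such conceptual framings refine is the simplest parameter mapping, where a familiar conceptual metaphor (“more is up”) assigns larger data values to higher pitch. The sketch below is illustrative only: the function names and the MIDI-note range are assumptions, not taken from the article.

```python
# A minimal parameter-mapping sketch: the "more is up" conceptual metaphor
# maps larger data values to higher pitch via MIDI note numbers.

def map_to_pitch(value, lo, hi, note_lo=48, note_hi=84):
    """Linearly map a data value in [lo, hi] to a MIDI note number."""
    frac = (value - lo) / (hi - lo)
    frac = max(0.0, min(1.0, frac))  # clamp out-of-range data
    return round(note_lo + frac * (note_hi - note_lo))

def midi_to_hz(note):
    """Standard equal-temperament conversion (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

readings = [0.1, 0.5, 0.9]
pitches = [map_to_pitch(v, 0.0, 1.0) for v in readings]
# Larger readings yield higher notes; e.g. 0.5 maps to note 66.
```

The mapping problem, as the article frames it, is precisely that such a mapping is arbitrary unless grounded in an embodied schema: nothing in the data itself dictates that “up” in value should mean “up” in pitch rather than, say, louder or brighter.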