A Sonic Segue


Order and Progress: a sonic segue across A Auriverde (Fraietta 2019b) is an abstract audiovisual sojourn across the Flag of Brazil, presented as a sonified and politicized astronomy show.


[A]bstraction in the creation of a work of art was generally defined as the determination of the essence of a thing. The artist was thought to arrive at this essence by a process of generalization whereby particulars were eliminated until only a subject’s generic, universal qualities—its essence—remained. (Morgan 1996: 319)


The Flag of Brazil is rich with political symbolism. The green and gold—originally representing the houses of Bragança and Habsburg—are often reinterpreted to signify the Amazon rainforest and the rich resources of the land (Smith 2018). Also, the skyscape represents both an astronomical phenomenon and a political event. Finally, at the center is a symbol of the Earth emblazoned with the words Ordem e Progresso, translated as “Order and Progress.” Creating a soundscape that synergizes with the visual astronomy show required various sonification techniques to symbolically express the essence of each attribute.


Sonification

Sound is not a tangible object but, rather, quantifiable energy transmitted through a host medium as mechanical vibrations in the form of pressure waves (Goldstein and Powis 1999; Rumsey and McCormick 2012). Sound generation occurs when an object becomes excited through physical stimulus, causing the surrounding body to vibrate in sympathy (Roads 1996). This process is defined as sonification (Webster 2006). Some researchers, particularly those in the field of Auditory Display, apply a narrower definition, stating that “sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation” (Kramer et al. 2010: 4). Sonification can be an effective mechanism for conveying information so it can be perceived aurally, and is often referred to as data sonification (Bjørnsten 2015; Flowers, Buhman, and Turnage 2005; Hermann and Ritter 1999; Worrall 2018). It is generally approached according to two different paradigms. The first uses mathematical functions applied to numerical or quantifiable data, producing a sonic output that is directly or functionally related to input values. The second is an aural expression of constructs that are not necessarily bound to numerical representation or western logic (Barrass and Kramer 1999; Madhyastha and Reed 1995; Polli 2012; Worrall 2018).


Some researchers argue that using the term sonification for both pragmatic scientific use and cultural or aesthetic use, even though both paradigms convert information from one or more sources into sound, “is somewhat unfortunate because it blurs purposeful distinctions” (Worrall 2018: 180). Moreover, some have restricted the definition of sonification to “a precise scientific method [....] that [...] delivers reproducible results and thus can be used and trusted as instrument to obtain insight into data under analysis” (Hermann 2008: 2). A definition of sonification presented at the 14th International Conference on Auditory Display is as follows:


Definition: A technique that uses data as input, and generates sound signals (eventually in response to optional additional excitation or triggering) may be called sonification, if and only if
(C1) The sound reflects objective properties or relations in the input data.


(C2) The transformation is systematic. This means that there is a precise definition provided of how the data (and optional interactions) cause the sound to change.


(C3) The sonification is reproducible: given the same data and identical interactions (or triggers) the resulting sound has to be structurally identical.


(C4) The system can intentionally be used with different data, and also be used in repetition with the same data. (Hermann 2008: 2, emphasis in original).


Other researchers argue against this narrow definition of sonification, which, although it may lend itself well to the auditory display of data, discounts the use of the term to denote sound that is designed to represent constructs that exist outside the domain of science because they are mental rather than merely physical realities (Gresham-Lancaster 2012; Supper 2012; Truax 1974).


The technique of producing a mental understanding of information through sound was originally called auralization (Vickers 2011), a term analogous to visualization (Schröder 2011; Summers 2008). Jason Summers notes that using the term sonification in place of auralization can be problematic. He states:


More recently, the aural analogy of this meaning of the term visualization has been termed ‘sonification’ [....] While there is much to be admired in this more nuanced delineation, the analogy with visualization is lost. The visual parallel of sonification is not termed ‘lumification.’ Likewise, ‘audification,’ which describes a specific subset of sonification, has no visual parallel in ‘vidification.’ [...] The most logical course now is to accept the broadening of the term auralization to match visualization and maintain symmetry between terms describing the senses of hearing and sight (Summers 2008).


Ironically, Paul Vickers states that the term auralization “has fallen out of common usage with authors tending to use the more general (and well-known) sonification, so auralisation is now a (mostly) deprecated term” (Vickers 2011: 464); while Summers states: “as usage of the term auralization grows, there is also now a pressing need for formal definition of the term and standardization of its technical usage” (Summers 2008, emphasis added). Consequently, the use of either term could be problematic, depending on the reader’s field of research.


An alternate analogous relationship could be with the term illumination, which can mean “to enlighten spiritually or intellectually” (Webster 2006). Sonification, therefore, could be used to indicate the use of sound to facilitate scientific or mathematical insight—as in the case of data sonification—or as a tool to facilitate research in other disciplines, such as anthropology, architecture, cultural studies and sensory history (Truax and Barrett 2011). For the purposes of distinction and usage within this research, two different paradigms of sonification will be examined and referred to as logical sonification and figurative sonification.


Logical Sonification

Although sound does not involve any transfer of matter from source to destination, valuable information can be inferred about an object based on the sound it makes. This phenomenon has been used by various cultures and civilizations since antiquity (Worrall 2018). For example, a person can detect how full a cistern is by tapping the side at various heights and listening for an echo. Similarly, although physicians have used auscultation since the time of Hippocrates (Bohadana, Izbicki, and Kraman 2014), the addition of percussion through tapping the body greatly improved its effectiveness as a diagnostic technique (Cummins 1945). The inferences obtained in these cases were based on the acoustic attributes of the bodies. Data sonification, however, can be used to reveal attributes of an object using techniques that are completely unrelated to the acoustic properties of the subject. One of the most successful examples of this is the Geiger counter, which clicks in response to invisible radiation levels, instantaneously and continually alerting the user to possibly dangerous levels of ionizing radiation (Kramer et al. 2010). This technique of directly mapping a data value through time is referred to as audification (Dombois and Eckel 2011). The increased use of radioactive materials in industry, medicine and research has resulted in a multitude of radiation detection monitors, most of which employ a visual display, with some providing an additional audible monitor. Interestingly, research indicates that searching for radiation using audio alone is far superior to using a visual display or a combination of visual and audio information, and that “operators should be instructed to perform the early stages of the search [for radiation levels] with the visual display shut off” (Tzelgov, Srebro, Henik and Kushelevsky 1987: 95). The pulse oximeter, which generates a tone whose pitch varies based on the oxygen levels in the patient’s blood (Kramer et al. 2010), is used in a somewhat similar way during surgery. This use of sonification exploits the ability of humans to immediately recognize changes through the sense of hearing (Walker and Nees 2011). This type of mathematical mapping can be modified to produce audible cues, such as notifications, alarms and warnings, that are not necessarily based on an immediate data value but on accumulated thresholds determined by functions or algorithms (Guillaume 2011).
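

By way of illustration, the short Python sketch below performs this kind of direct value-to-pitch mapping: a hypothetical series of oxygen-saturation readings is mapped linearly onto a frequency range and rendered as a sequence of tones in a WAV file. The readings, the mapping range and the constants are assumptions made for the example and do not reproduce the algorithm of any actual pulse oximeter.

```python
# Minimal value-to-pitch mapping sketch (illustrative assumptions throughout).
import numpy as np
import wave

SAMPLE_RATE = 44100
TONE_SECONDS = 0.25

# Hypothetical SpO2 readings (percent); the dip should be clearly audible.
readings = [98, 97, 98, 96, 93, 90, 88, 91, 95, 97, 98]

def value_to_pitch(value, lo=85.0, hi=100.0, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value onto a frequency range (Hz)."""
    t = (value - lo) / (hi - lo)
    return f_lo + t * (f_hi - f_lo)

tones = []
for value in readings:
    freq = value_to_pitch(value)
    t = np.linspace(0, TONE_SECONDS, int(SAMPLE_RATE * TONE_SECONDS), endpoint=False)
    tone = 0.4 * np.sin(2 * np.pi * freq * t)
    # Short fade in/out to avoid clicks between consecutive tones.
    fade = np.linspace(0, 1, 500)
    tone[:500] *= fade
    tone[-500:] *= fade[::-1]
    tones.append(tone)

signal = np.concatenate(tones)
with wave.open("oximeter_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```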


Logical sonification can be used to map multidimensional data to separate sonic parameters, such as pitch, volume, timbre, duration, reverberation and spatial orientation, a technique referred to as Parameter Mapping Sonification (PMSon) (Grond and Berger 2011). PMSon can be particularly useful for creating virtual 3D soundscapes or for data navigation by exploiting psychoacoustic spatial cues (Brazil and Fernström 2011b; Carlile 2011). For example, a sound can be made to appear more distant by filtering higher frequency content, reducing the amplitude, and adding reverberation (Spiousas et al. 2017).
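

A minimal sketch of this kind of parameter mapping is shown below: a single distance value is mapped simultaneously onto amplitude and onto the cutoff of a simple low-pass filter, so that a more distant source sounds both quieter and duller. The mapping constants are arbitrary assumptions, and reverberation is omitted for brevity.

```python
# Distance-cue parameter mapping sketch (constants are illustrative only).
import numpy as np

SAMPLE_RATE = 44100

def one_pole_lowpass(signal, cutoff_hz):
    """Simple one-pole low-pass filter."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / SAMPLE_RATE)
    out = np.zeros_like(signal)
    prev = 0.0
    for i, s in enumerate(signal):
        prev = (1 - a) * s + a * prev
        out[i] = prev
    return out

def render_at_distance(source, distance_m):
    """Map one distance value (metres) onto amplitude and brightness."""
    gain = 1.0 / max(distance_m, 1.0)                     # inverse-distance attenuation
    cutoff = max(20000.0 / max(distance_m, 1.0), 500.0)   # duller with distance
    return gain * one_pole_lowpass(source, cutoff)

# A bright test sound: a one-second white noise burst.
rng = np.random.default_rng(0)
noise = rng.uniform(-0.5, 0.5, SAMPLE_RATE)

near = render_at_distance(noise, 1.0)    # close: loud and bright
far = render_at_distance(noise, 20.0)    # distant: quiet and dull
```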


Logical sonification can be very effective for monitoring and debugging computer program execution, and has been used since the early days of computer science (Vickers 2011). Technicians could easily monitor the behavior of real-time systems by sonifying the digital data from various buses through a speaker. I witnessed this type of program execution monitoring on the 1967 Marconi Myriad computer as it was being used for military radar data tracking. An experienced technician could often walk into the computer room and determine that the Myriad was not running correctly based on the sound patterns he or she heard from the speaker (Phillips 1967; Scholz 1988; Vickers 2011).


Logical sonification has a functional or reflective relationship between data values and the generated sound. It is systematic, reproducible and can be used with different data sets. It is, therefore, the epitome of Hermann’s suggested definition of sonification. Mathematical mapping, however, can be ineffective when attempting to sonify metaphorical or figurative information that does not lend itself to numerical representation or physical abstraction. Sonification in these instances is better described as figurative (Goina and Polotti 2008).


Figurative Sonification

“Visualization is our culture’s default data representational process, but other senses, particularly our sense of hearing, have always provided rich unique paths to augment our understanding of reality in both scientific and spiritual ways” (Gresham-Lancaster 2012: 207).


Figurative sonification can be used as a technique to communicate information using analogies, metaphors and innuendos (Brazil and Fernström 2011a; Lockton, Ricketts, Aditya Chowdhury and Lee 2017; McGookin and Brewster 2011; Polli 2012; Roddy and Furlong 2015; Supper 2012). Instead of performing a literal translation of numerical or logical data into sonic parameters, information is conveyed to the listener using Gestalt principles that are humanistic or holistic rather than naturalistic (Ncube and Crispo 2007). The two main types of figurative sonification defined in the field of auditory display are auditory icons (Brazil and Fernström 2011a) and earcons (McGookin and Brewster 2011).


Auditory icons are sounds that are familiar to the listener and were originally used to augment graphical display elements or to reinforce an affordance available to the listener. This is effected by drawing on the listener’s previous experience, knowledge or association of sound sources and their causes (Dingler, Lindsay, and Walker 2008; Lucas 1994). They are the sonic equivalent of visual icons in that they are recognizable representations of objects or events. In the same way that visual icons can be cartoonified, emphasizing certain features to facilitate recognition, auditory icons can be filtered or synthesized to make them potentially more recognizable than real-life sounds (Brazil and Fernström 2011a). The distinguishing feature of auditory icons, however, is that they are easily recognizable by the listener as everyday sounds (Brazil and Fernström 2011a; Dingler et al. 2008).


Earcons differ from auditory icons in that the relationship between the sound generated in the earcon and the concept it represents is abstract and must be learned by the listener (McGookin and Brewster 2011). Contrary to auditory icons, where the relationship between concept and sound is semantic, earcons often have no acoustic correlation between the sounds generated and the entities they represent (Brewster, Wright, and Edwards 1994). For example, musical instruments such as drums, fifes, cymbals, and horns have been used by the military since antiquity to signal messages on the battlefield or parade ground using musical motifs or melodies (Ginsberg-Klar 1981; Howe 1999; McGookin and Brewster 2011). Similarly, leitmotifs have been used in film music and opera to sonically signify an identity, such as the shark in Jaws and Darth Vader in Star Wars (Sorensen 2011).


There have been several attempts to define the difference between earcons and auditory icons (Brazil and Fernström 2011a; Brewster et al. 1994; Dingler et al. 2008; Lucas 1994; McGookin and Brewster 2011). Effective communication for both requires that the listener be able to discern the meaning of the sound. Considering that both are learned either consciously or through environmental experience, at what point does an earcon become an auditory icon? Auditory icons require some cultural or pre-existing relationship before they can be effective at conveying information (McGookin and Brewster 2011). The same condition, however, applies to earcons—the difference being when the relationship was learned. Similarly, musical onomatopoeia blurs the boundary between auditory icons and earcons. Sergei Prokofiev’s Peter and the Wolf is an example of a work in which the sound quality of the instrument could be likened to the behavioral attributes of the character. Another example is the ‘Danse Macabre’, where xylophones represent dancing skeletons. “Saint-Saëns exploits this convention when he uses the xylophone to represent fossils in ‘The Carnival of Animals’” (Sorensen 2011: 241).


One significant difference between auditory icons and earcons, however, is that an earcon can be used to effectively hide the meaning of a message from those for whom the message is not intended. Passwords have been used since antiquity as a method of discriminating friend from foe (Eve 2016). Audible passwords and discrete sounds can function as secret earcons, whereby the insider who knows the code understands, while the outsider remains ignorant. Mystery as a genre of media entertainment is extremely popular and generally involves solving a crime (Knobloch-Westerwick and Keplinger 2006). Watching a mystery movie a second time invokes a different sensation in the viewer, who is then able to perceive clues that were previously enigmatic. The same principle can be applied sonically through the use of progressive leitmotifs.


In the original ‘Star Wars’ movie, John Williams established the Imperial March as the leitmotif for Darth Vader.... In the ‘Star Wars’ prequel, ‘The Phantom Menace’, the childish version of Darth Vader’s leitmotif suggests ‘Anakin Skywalker is Darth Vader’. As Anakin begins to show villainous predispositions, the leitmotifs bear a growing resemblance to the Imperial March (Sorensen 2011: 242).

 

During a subsequent viewing of the movie, viewers are able to hear the leitmotifs with “open ears” because they know the secret: Anakin is Vader!


Figurative sonification can affect human beings personally, culturally, and spiritually, with the ability to transform tangible objects into temporal relics based not only on the sound produced but also on the context in which the listener hears it (Fraietta 2020). Examples include recordings of deceased persons and playback of past news broadcasts, such as voicemail messages of 9/11 victims, which can trigger a profound emotional response (Dray 2015). Similarly, legislation and voluntary codes of ethics regarding the sharing of information or documentation of a deceased Indigenous person include warnings about voices, and such warnings are standard practice in the media (Abdul Ghani Azmi 2017; Bauman and Smyth 2007; Jacklin 2005). Figurative sonification can also be used to invoke a sense of history in theatre or film through the use of period music (Hurtgen 1969). In summary, figurative sonification is sound generation that conveys semantic, metaphorical, cultural, figurative, spiritual, or numerically unquantifiable information (Gresham-Lancaster 2012; Supper 2012).


Performing the Sojourn

Order and Progress: a sonic segue across A Auriverde was originally composed as a live work for planetarium software and samba ensemble (Fraietta 2019a). Rather than simply providing a visual stimulus for the musicians, the planetarium software functions as part of a musical instrument in its own right (Fraietta 2019c).


The Hornbostel-Sachs classification system for musical instruments originally provided four specific classes of instruments—Idiophone, Membranophone, Chordophone and Aerophone—defined primarily by the method of excitation and sonification (Von Hornbostel and Sachs 1961). Electrophonic instruments were later identified as a new class of instrument (Galpin 1937), now referred to as Electrophones (Knight 2015). One of the most significant features of this new class of instruments was the separation of physical excitation and sonification. Moreover, composers were no longer restricted to creating sound based solely on the laws of physical motion, as was the case for early musical automatons (Ord-Hume 1973). Instead, they were able to work abstractly using mathematics and algorithms to create, map and manipulate sound, which effectively sonifies algorithms (Hunt and Hermann 2011; Miranda and Wanderley 2006; Roads 1996).


The instrument for Order and Progress was created using the Stellarium planetarium software (Zotti and Wolf 2019), which was interfaced to the HappyBrackets creative coding environment (Fraietta, Bown, Ferguson, Gillespie and Bray 2020) and the VizieR database of astronomical catalogues (Ochsenbein, Bauer, and Marcout 2000). The Stellar Command library (Fraietta 2019c) provided the data communication interface between all the components. As the performer moves about in Stellarium, Stellar Command determines the field of view displayed—which is the right ascension (RA), declination (Dec.) and the view radius—and queries VizieR using this field of view. VizieR responds with the astronomical data recorded in the online database. This data is conditioned by Stellar Command before being input to HappyBrackets for algorithmic treatment, synthesis and sonification. Stellarium can also be controlled programmatically, which facilitates automation and sequencing during performance.
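

The Python sketch below illustrates the shape of this data path only conceptually; it does not use the actual Stellar Command or HappyBrackets APIs, and the names it introduces (FieldOfView, query_vizier, star_to_synth_params) and all mapping constants are hypothetical stand-ins rather than the system’s real interfaces.

```python
# Conceptual sketch of the field-of-view -> catalogue -> synthesis-parameter path.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FieldOfView:
    ra_deg: float       # right ascension of the view centre (degrees)
    dec_deg: float      # declination of the view centre (degrees)
    radius_deg: float   # angular radius of the displayed field (degrees)

def query_vizier(fov: FieldOfView) -> List[Dict[str, float]]:
    """Stand-in for a VizieR cone search over the displayed field of view."""
    # In the real system, Stellar Command sends the RA/Dec/radius to VizieR and
    # receives catalogue rows in return; two canned rows are returned here so
    # the sketch runs without a network connection.
    return [
        {"magnitude": 1.2, "b_v": 0.65},
        {"magnitude": 3.8, "b_v": 0.15},
    ]

def star_to_synth_params(star: Dict[str, float]) -> Dict[str, float]:
    """Condition catalogue values into synthesis parameters (illustrative only)."""
    amplitude = 10 ** (-0.4 * star["magnitude"])   # brighter star, louder tone
    frequency = 220.0 * (1.0 + star["b_v"])        # colour index shifts the pitch
    return {"freq": frequency, "amp": amplitude}

# Approximate coordinates near Crux (the Southern Cross); values are illustrative.
fov = FieldOfView(ra_deg=187.5, dec_deg=-60.0, radius_deg=5.0)
print([star_to_synth_params(s) for s in query_vizier(fov)])
```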


Although one could sonify this data using logical sonification techniques such as audification or PMSon, figurative methods offer more effective mechanisms for expressing the essence of features within the flag. Rather than attempting to conform the work to any representation or conveyance of empirical reality, it seeks to evoke emotions and responses through the use of fantasy, mystery, exaggeration, distortion, and parallelism that suggests rather than visualizes. The first movement, Amerindia, celebrates the Indigenous people of Brazil through soundscape and cultural astronomy. The color green is sonically expressed through the sound of the Amazon rainforest, while the political nature of each star is expressed through an auditory icon mapping of regional language to state, described in greater detail in the section titled Soundmarks.


The second movement, Cães Celestes (Celestial Dogs), symbolically expresses shimmering gold through the use of pure sine waves, while the motto, Ordem e Progresso, is symbolized through a harmonic progression from chaos to order.
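

One way such a progression could be sketched is shown below: a cluster of sine waves starts at randomly detuned, inharmonic frequencies and glides over the course of the sound onto integer harmonics of a fundamental. This is an illustrative assumption about the technique rather than the synthesis actually used in the movement; all constants are arbitrary.

```python
# Chaos-to-order sketch: detuned partials glide onto a harmonic series.
import numpy as np

SAMPLE_RATE = 44100
DURATION = 8.0
FUNDAMENTAL = 110.0
N_PARTIALS = 8

n = int(SAMPLE_RATE * DURATION)
t = np.linspace(0, DURATION, n, endpoint=False)
progress = t / DURATION                     # 0 = chaos, 1 = order

rng = np.random.default_rng(42)
mix = np.zeros(n)
for k in range(1, N_PARTIALS + 1):
    target = FUNDAMENTAL * k                            # harmonic frequency
    start = target * rng.uniform(0.7, 1.4)              # randomly detuned start
    freq = start + (target - start) * progress          # glide to the harmonic
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE   # integrate instantaneous frequency
    mix += np.sin(phase) / N_PARTIALS

mix *= 0.8  # leave some headroom before writing or playing the result
```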


The final movement, Celestial Samba, is a suite of nine short traditional Brazilian dances for percussion ensemble. In addition to acting as quasi-earcons, symbolizing that Brazil has been populated by peoples from around the globe, the suite functions as a celebration of Brazilian cultures.


