3.2 Éliane Radigue and the Sonic Medium


 

Radigue’s music shares this translocative quality with Ikeda’s track. The additive effects of reflected waveforms are pronounced and involve the listener in the creation of their patterns. We continually make small, unconscious adjustments to tune into what we are listening to (Xie 2013). As in the Ikeda track, all it takes is one of these barely conscious movements, made in an effort of attunement, to make us aware of the discontinuous sounds in the music. This might lead us to make larger movements, to move about the listening space, to explore further how our actions affect the music we hear. 

 

The sound wave, a pulse of energy rather than a substance in itself, requires a material through which to move.[12] Typically, that material is air; air molecules constitute the sum total of sound’s materiality. Yet this material – the air and its molecules – is not detectable by the unaided senses. We are conditioned, therefore, to consider air as empty space. Sound is perceived like wind: a series of pressure gradients, small ones sensed by our ears and larger ones sensed by our skin (and possibly our entire bodies, once the vestibular system is engaged and our balance is affected). Yet the material character of this seemingly atmospheric medium can, in fact, be revealed to us – perhaps not as molecules, but as a substance nonetheless – through its very malleability.

 

Such effects can be subtle, but they are distinctive, and they contradict aspects of the environmental model Gibson proposes. According to Gibson, an enclosed medium can be “filled” with light, with sound, and even with odor. He then states that “any point in the medium is a possible point of observation for any observer who can look, listen, or sniff. And these points of observation are continuously connected to one another by paths of possible locomotion” (1986: 13). When continuity is broken, however, as in these interference patterns, and locomotion results in discretely revealed objects, we break with the world described by Gibson. His “medium” cannot account for discontinuities of the sort encountered in Radigue’s music. The sounds do not behave as we would expect the sound from a fixed source to behave. The medium is not filled with sound in the same sense as an environment illuminated by the “unlimited scattering” of radiated light (1986: 44).

 

The physiology of sound localization extends beyond the placement of ears on opposite sides of the head. Our entire bodies are involved, and audio technologists have been occupied with finding models for replicating sounds based on how they are heard by individual listeners with different body shapes.[13] Our outer ears are shaped to deflect sound waves and redirect them into the ear canal. Bosun Xie also notes the active role movement plays in localizing sound sources: how we aim the ear so that the pinnae can lay hold of the sound (Xie 2013: 17). And though our ears are not independently movable, as they are in other species, they are attached to our heads and thus to our bodies, and their position relative to incoming wave fronts can be calibrated to enhance particular sounds over others. Thus, there is an openness to the interpretation of any type of reproduced sound.

 

Because Radigue’s music is composed of waveforms that interact with one another on an audible level, these waveforms reveal something about the character of the sound itself and how it inhabits the spaces in which we find ourselves. More complex waveforms, say those produced by orchestral music, may contain more timbral information, yet they exhibit, perhaps paradoxically, increased uniformity in their audible patterns of dispersal and reflection. Our head movements reveal less about the types of vibrating objects that produced these complex waveforms than they do about the location of those sound sources. What we notice when we move is not a change in the type of sound we hear but a change in the sound’s distance and direction from where we are located.

 

Gibson observed this phenomenon in one of his vision experiments. He found that a series of alternating discs, when sufficiently dense, coalesced to form a continuous surface. He showed subjects a series of thin, plastic panels, each with a cut-out circle. The panels were placed one behind the other, alternating black and white, with the diameters of the circles becoming progressively smaller as the series continued away from the viewer (Figure 2). When viewing thirty-six panels aligned at equal distances, the alternating black and white circles outlined by the panels coalesced into a smooth surface, forming what looked like a striped tunnel. When there were substantially fewer of these panels, no more than a dozen or so, most subjects saw individual rings – with “air” in between them (1986: 155).

 

Complex waveforms manage to suffuse a space much more densely than simpler ones, and the additive interference of individual frequencies is not nearly as apparent. In his experiment, Gibson found that density led to a “surfaciness” that obscured the composite structure of the tunnel. In regard to sound, density similarly obscures the composite waveforms that constitute its structure. Electronic music of the variety composed by Radigue, however, can reveal this composite wave structure and turn it toward productive aesthetic purposes.
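This masking effect can be sketched numerically. The short Python sketch below compares how loudness varies across listener positions for a near-sine tone versus a tone dense with inharmonic partials, each summed with a single reflection off a rigid wall. All values (frequencies, distances, the single-reflection model) are illustrative assumptions, not a model of any actual listening room.

```python
import math

C = 343.0      # speed of sound in air, m/s
WALL = 4.0     # distance from source to reflecting wall, metres (illustrative)

def rms_at(x, freqs):
    """Analytic RMS pressure at listener position x for a tone made of
    equal-amplitude partials `freqs`, each summed with its reflection off a
    rigid wall at WALL metres. Direct + reflected copies of partial i merge
    into a sinusoid of amplitude 2*cos(k_i*(WALL - x))."""
    total = 0.0
    for f in freqs:
        k = 2 * math.pi * f / C                 # wavenumber of this partial
        amp = 2 * math.cos(k * (WALL - x))      # position-dependent amplitude
        total += (amp / math.sqrt(2)) ** 2      # RMS of a sinusoid is A/sqrt(2)
    return math.sqrt(total)

# Scan 60 cm of listener positions in 5 mm steps, as if moving the head.
positions = [2.0 + 0.005 * i for i in range(120)]

simple = [rms_at(x, [860.0]) for x in positions]    # a single, near-sine tone
complex_ = [rms_at(x, [860.0 * r for r in
                       (1.0, 1.37, 1.93, 2.51, 3.17, 4.43)])
            for x in positions]                     # dense inharmonic partials

# The single tone swings between near-silence and strong reinforcement along
# the scan; the dense tone's nulls fall in different places for each partial,
# so its overall level varies far less - Gibson's "surfaciness" in sound.
```

The design point is that each partial has its own pattern of cancellation nodes; with many incommensurate partials these patterns average out, which is why orchestral density hides the interference structure that Radigue’s sparse tones expose.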

 

Timbre, as the distribution and relative strength of partial frequencies, is a key component in the auditory system’s ability to identify where a sound is coming from. With less timbral complexity, there are fewer clues for identifying a sound’s distance and location. Radigue makes use of only a carefully chosen subset of the available controls on her ARP 2500 to create her sounds. The result is a collection of tones with little of the “noise” or inharmonic partials that accompany acoustic instruments. Consequently, there is less of the detail carried by higher frequencies – information that tends to dissipate rapidly with distance. The resultant waves, then, with their reduced timbre, are not that far removed in shape from sine waves. Radigue’s tones contain more harmonics than Ikeda’s sine waves, yet they remain simple enough to merge with their reflections to form waves of similar shape but varying intensity. These tones can be considered sounds without reverb: reflected waves combine with the direct signal to either attenuate it or reinforce it, creating what are typically referred to as standing waves. Such sounds convey no information about their material sources or their location, nor about the characteristics of the space (actual or engineered) in which they were recorded. Shorn of these characteristics, the sounds cannot but be perceived as occurring precisely where they are heard.
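The claim that a simple tone merges with its reflection into a wave of the same shape but different intensity follows from a trigonometric identity: a sine plus a delayed copy of itself is still a sine of the same frequency, sin(wt) + sin(wt − p) = 2·cos(p/2)·sin(wt − p/2). The Python sketch below verifies this numerically; the frequency and phase delay chosen are illustrative only.

```python
import math

def combined(t, freq=860.0, phase_shift=1.2):
    """Direct sine wave plus a phase-delayed reflected copy."""
    w = 2 * math.pi * freq
    return math.sin(w * t) + math.sin(w * t - phase_shift)

def predicted(t, freq=860.0, phase_shift=1.2):
    """The same signal rewritten as a single sine: identical shape and
    frequency, only the amplitude (2*cos(p/2)) and phase have changed."""
    w = 2 * math.pi * freq
    return 2 * math.cos(phase_shift / 2) * math.sin(w * t - phase_shift / 2)

# Sample one full period of the 860 Hz tone at 64 points.
samples = [n / (860.0 * 64) for n in range(64)]
residual = max(abs(combined(t) - predicted(t)) for t in samples)
# residual is at machine precision: the merged wave keeps its sine shape,
# carrying no trace of the reflection except a change in intensity.
```

This is why such tones read as “sounds without reverb”: the reflection leaves no audible fingerprint of the room, only a new loudness at each listening position.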

 

Matthew Nudds explains this important distinction between the location of sounds and the locations of their sources, and our ability to recognize the difference. To determine the location of a sound source, the auditory system groups together frequency components that likely belong to a single source. For example, a set of frequencies that belong to the same harmonic series, or that start and end at the same time, would be attributed to the same vibrating object. By contrast, Nudds explains that “the sounds that we hear are instantiated where we are” (Nudds 2009: 77). By this he means that even though we perceive sounds as being connected with their presumed source, the physical sound waves we hear, the changes in pressure gradients at the eardrum, exist in the same place that we happen to be. This separability between the sound and its source allows us to hear sources as remaining constant even as the qualities of the sound may change. To illustrate, Nudds uses the example of pulling a hat down over the ears, muffling any incoming sound: we don’t believe that the source of the sound has changed, even though the sound we hear has been altered (Nudds 2009: 93). When “Kailasha” is played, its tones do not disperse uniformly throughout the environment. Therefore, we cannot attribute the sound we hear to a fixed source. The ambient aural array that reaches our ears is so drastically distorted that the medium itself is revealed as irregular, as having its own texture. 
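The harmonic-series grouping Nudds describes can be caricatured in a few lines: frequencies that sit near integer multiples of a common fundamental get attributed to one vibrating object. The function name, tolerance, and example frequencies below are our own illustrative assumptions, a toy stand-in for the auditory system’s far richer heuristics.

```python
def same_harmonic_series(freqs, fundamental, tolerance=0.02):
    """Return True if every frequency lies within `tolerance` (relative to
    the fundamental) of an integer multiple of `fundamental` - a crude
    proxy for grouping components to a single vibrating source."""
    for f in freqs:
        n = round(f / fundamental)                       # nearest harmonic number
        if n < 1 or abs(f - n * fundamental) > tolerance * fundamental:
            return False
    return True

# Slightly mistuned harmonics of 220 Hz still group as one source:
one_source = same_harmonic_series([220.0, 440.0, 661.0, 881.5], 220.0)   # True
# Components unrelated to 220 Hz would be heard as belonging elsewhere:
two_sources = same_harmonic_series([220.0, 317.0, 540.0], 220.0)         # False
```

The sketch also makes the limits of the heuristic plain: it presumes a stable source radiating a coherent spectrum, precisely the presumption that “Kailasha,” with its position-dependent interference, refuses to satisfy.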

 

Marratto writes that “textures cannot show themselves otherwise than to a body capable of exploratory movements” (2012: 28). Just as with the texture of a fabric, the texturing of space with sound requires a gesture in order to be perceived. We don’t simply touch a fabric’s texture; we feel it, which means running our fingers over it. And just as the things we touch lie where we touch them, the sounds we hear are located right where we are, not at a distance beyond our peripersonal space. Likewise, with Radigue’s music, its sonic texture is not located at a distance, as in Gibson’s description of viewing a landscape, but is where we hear it. We thus have an awareness of space not in terms of direction, but as a depth revealed through the texturing provided by the music. As “Kailasha” is distributed through the environment unevenly, it carries with it its own palpable substance. Its texture does not reveal the materiality of the surfaces that reflect it, but instead reveals the music’s own peculiar materiality. It is a dramatic expression of music’s ability to have presence.

 

Note that this differs from the concept of acousmatic sound, a sound whose source remains hidden from view. Acousmatic sound entails a multimodal conception of listening in which both hearing and vision take part. Its distinctive character is a result of severing the visible connection between sound and source. Acousmatic sound does have a source; it is just that for the listener that source is not reconcilable with corroborative visual perceptions. Sound phenomena that result from the interference patterns of simple waveforms, however, do not have a hidden source: they appear to us as having no source at all. They are atmospheric.

Éliane Radigue, “Kailasha,” 41:15 – 42:15

Figure 2: James Gibson’s diagrams of his optical tunnel (1986: 146).