Reconfiguring the Landscape (public version)

Reconfiguring the Landscape investigates how 3-D electroacoustic composition and sound-art can evoke and provoke a new awareness of our outdoor environment. The work addresses both urban and natural settings.

Outdoor sound environments have penetrated the materials and structures of electroacoustic composition and sound-art: we have witnessed a growing number of installations and compositions, and a growing awareness of sound in architectural and environmental design. More recently, our technology for capturing, analysing and synthesising 3-D sound spaces with tools such as higher-order ambisonics has aligned with the precision of our spatial hearing, such that electroacoustic composition has the potential to address, and be enriched by, the full dimensionality of the sonic landscape. This affordance and challenge, unthinkable only a few years ago, is the project's point of departure.

The project comprises four cross-disciplinary modules exploring the 3-D soundscapes and landscapes of local sites:

  1. Layering: Compositions that draw on site-specific 3-D sound and human behaviour will be layered back into the existing environment. What are the implications of this combined 3-D soundscape? What is the impact of the resulting multi-modal experience? What happens in the counterpoint between the real and the added? What is the dialogue between reality, hyper-reality and experience?

  2. Enhancing: How can we encourage the visitor to uncover details of sound and space that are too fleeting to be fully comprehended in the real-time flow, that our other senses distract us from, or that occur in our absence? We address what is present, to inspire and tempt the wish to linger and to experience space and place. Enhancing reveals the living ‘soul’ of the original space, as interpreted by the artist.

  3. Relocating: What happens when we relocate 3-D sound from one site to another? In Relocating, high-speed internet will transmit 3-D audio over vast distances and merge one scene into another in real time. It will connect locations around the world that share features such as climate, terrain and social function.

  4. Modifying: Society has become increasingly aware of the quality of our acoustic environment, and the normal way to address noise is abatement. Is it possible instead to modify sound through acoustical intervention, transforming the ordinary into the remarkable and exploring noise in a playful way?

The integration of art and technology is central to the investigation. Sound, space, human behaviour and temporal variations will be captured, analysed and embraced as sources for creative work.

A few of the art-technology approaches are outlined below.

The use of technologies will align with spatial scene perception and compositional needs by maintaining a distinction between unique, unrepeatable personal experiences and the temporal sound-space as a neutral subject. To this end, core technologies will be deployed over a variety of time-spans and spatial dimensions. Methods will include the EigenMike (a 32-channel spherical microphone providing 4th-order ambisonic capture with high spatial precision and possibilities for sound-field decomposition); distributed microphone arrays capturing near and far; LIDAR and photogrammetry offering 3-D virtual models and data sources for spatial-audio synthesis and composition; and non-intrusive motion tracking yielding data for compositional use, for design, and for understanding human behaviour and responsive environments. Motion tracking will also be used to understand how the visitor behaves before and after ‘added sound’.
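
To illustrate what an ambisonic representation of a 3-D scene involves, the sketch below encodes a mono source into first-order ambisonics (B-format) for an assumed source direction. The function name, signal and ACN/SN3D convention shown here are illustrative assumptions, not the project's actual toolchain.

```python
import numpy as np

def encode_first_order(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order ambisonics (ACN order, SN3D norm).

    Illustrative sketch: returns the four B-format channels W, Y, Z, X
    for a single, static source direction.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)

    # First-order spherical-harmonic gains for the source direction
    w = 1.0                       # omnidirectional component
    y = np.sin(az) * np.cos(el)   # left-right
    z = np.sin(el)                # up-down
    x = np.cos(az) * np.cos(el)   # front-back

    return np.vstack([w * mono, y * mono, z * mono, x * mono])

# Example: one second of a 440 Hz tone placed 45 degrees to the left, slightly raised
fs = 48_000
t = np.arange(fs) / fs
mono = 0.5 * np.sin(2 * np.pi * 440 * t)
bformat = encode_first_order(mono, azimuth_deg=45, elevation_deg=10)
print(bformat.shape)  # (4, 48000): channels W, Y, Z, X
```

A 4th-order scene of the kind the EigenMike delivers occupies (4 + 1)² = 25 such channels rather than four, but the representation is the same in principle for both captured and synthesised material.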

Lab, studio and concert spaces will be set up with higher-order ambisonics (HOA) loudspeaker arrays. HOA will be a core spatialisation approach for creative work and for emulating in-situ ideas in lab settings. Work will consider the spatial, spectral and visual impact of loudspeakers. Technologies for adding spatialised sound will be explored for their artistic affordances. Besides ambisonics, solutions will include hyper-directional loudspeakers, transducers and actuators. Of central interest are beam-forming loudspeakers that use directivity patterns and surface reflections. Professor Zotter is working specifically with the artistic group to build custom prototypes for our work.
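
To suggest how an ambisonic signal meets a physical loudspeaker array, the sketch below derives a basic mode-matching (pseudo-inverse) decoder for a first-order stream and a hypothetical cube of eight loudspeakers. The array geometry and function names are assumptions for illustration, not the project's studio configuration.

```python
import numpy as np

def sh_first_order(azimuth, elevation):
    """First-order spherical-harmonic gains (ACN/SN3D) for a direction in radians."""
    return np.array([
        1.0,
        np.sin(azimuth) * np.cos(elevation),
        np.sin(elevation),
        np.cos(azimuth) * np.cos(elevation),
    ])

# Hypothetical cube array: eight loudspeakers at the corners of a cube
az = np.radians([45, 135, -135, -45, 45, 135, -135, -45])
el = np.radians([35.26] * 4 + [-35.26] * 4)

# Encoding matrix: one row of spherical-harmonic gains per loudspeaker direction
Y = np.array([sh_first_order(a, e) for a, e in zip(az, el)])   # shape (8, 4)

# Mode-matching decoder via pseudo-inverse: loudspeaker feeds = D @ B-format
D = np.linalg.pinv(Y.T)                                        # shape (8, 4)

# Decode one B-format block (4 x N samples) to eight loudspeaker feeds (8 x N)
bformat = np.random.randn(4, 1024)   # placeholder B-format block
feeds = D @ bformat
print(feeds.shape)  # (8, 1024)
```

Irregular concert arrays in practice call for more robust designs (such as energy-preserving or AllRAD decoders), but the sketch shows the core relationship between the ambisonic channels and the loudspeaker feeds.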

Streamed spatial audio, particularly useful for both basic and advanced approaches to the Relocating module, will range from real-time natural-scene capture with EigenMike and Soundfield microphones to performed interventions.
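
As an indication of what real-time transmission of such material can involve, the sketch below packs blocks of a four-channel (first-order) stream into UDP datagrams. The host, port and frame size are hypothetical placeholders; a production link would add compression, clock synchronisation and loss handling.

```python
import socket
import struct
import numpy as np

HOST, PORT = "127.0.0.1", 9000    # hypothetical receiver at the remote site
CHANNELS, FRAME = 4, 480          # first-order B-format, 10 ms blocks at 48 kHz

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame(frame_index, block):
    """Send one interleaved float32 audio block preceded by a frame counter."""
    assert block.shape == (CHANNELS, FRAME)
    payload = struct.pack("!I", frame_index) + block.astype(np.float32).T.tobytes()
    sock.sendto(payload, (HOST, PORT))

# Example: stream 100 frames of silence (a real sender would read from the microphone)
for i in range(100):
    send_frame(i, np.zeros((CHANNELS, FRAME), dtype=np.float32))
```

An uncompressed 4th-order stream at 48 kHz amounts to 25 × 48,000 × 4 bytes, roughly 4.8 MB per second, which is why high-speed connections and careful buffering matter when merging distant sites in real time.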

The project’s official start date will be 1st October 2019.

The team