Problems related to the experiment: mixing the real into the virtual
In Harvesting the Rare Sounds, we are dealing with four non-conflicting pieces of sound art, which blend rather naturally as distantly located sound sources in our virtual space. This is mostly how game engines function, applying acoustic rules to virtually located sounds and playing the result at the listener’s position.
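The engine behaviour described above — located sources whose output is shaped by an acoustic rule and summed at the listener — can be sketched minimally. This is an illustrative toy (inverse-distance attenuation only, with hypothetical names), not the actual engine used in the piece:

```python
import math

def attenuated_gain(source_pos, listener_pos, ref_dist=1.0):
    """Gain of a located virtual sound source heard at the listener.

    A minimal stand-in for a game engine's acoustic rules: level
    falls off as 1/distance, clamped at a reference distance.
    """
    d = math.dist(source_pos, listener_pos)
    return ref_dist / max(d, ref_dist)

# Four distant, non-conflicting sources simply sum at the
# listener's position, each softened by its own distance:
sources = [(-20, 0, 0), (20, 0, 0), (0, -20, 0), (0, 20, 0)]
gains = [attenuated_gain(s, (0, 0, 0)) for s in sources]
```

Because the four pieces sit far apart, each reaches the listener at a low gain, which is why they blend without conflict.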
In Hybrid Hall, by contrast, there is no located sound source: all positions are phantom, meaning our brain estimates the position of each sound object from how the many speakers perform it together. This is, for instance, how surround sound works in film. The novelty here lies in the virtual engine's surprisingly convincing performance, as the blend of 16 constantly playing virtual loudspeakers creates a realistic surround experience.
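A phantom position is produced purely by the ratio of gains across speakers: no emitter exists where the sound is heard. A standard two-speaker case, constant-power amplitude panning, sketches the principle (the names are illustrative; the installation distributes gains across 16 virtual loudspeakers rather than two):

```python
import math

def constant_power_pan(pos):
    """Gains placing a phantom image between two loudspeakers.

    pos in [0, 1]: 0 = fully left speaker, 1 = fully right.
    The sin/cos law keeps perceived loudness constant as the
    phantom image moves; the 'position' is an illusion created
    by the gain ratio, not a located source.
    """
    theta = pos * math.pi / 2
    return math.cos(theta), math.sin(theta)

gL, gR = constant_power_pan(0.5)  # centred phantom image
# gL equals gR, and gL**2 + gR**2 stays 1 (power preserved)
```

Extending this to 16 speakers means each sound object is rendered as a weighted mixture over the whole array, which is what lets the constantly playing virtual loudspeakers fuse into one surround field.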
A question arises, following our previous work with invisible choreography: what if we transposed a recorded space into a virtual one, not from an anthropocentric point of view (typical of ambisonic recordings and already widely integrated in virtual reality) but from the room’s point of view? There, the original omnidirectional microphone set-up (the ‘module’) would have to become a virtual loudspeaker set-up: again without located sound sources, but with virtual speakers performing the sound captured by the real microphones, transposing the real world, its rolling marbles, and its acoustics, into the virtual one. A window on the real, on a one-to-one scale.
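The proposed transposition can be stated as a simple mapping: each capsule of the recording module becomes a virtual loudspeaker at the same coordinates, performing the channel it captured. The sketch below is hypothetical (the class and function names are ours, and the capsule layout is invented for illustration); it only makes the one-to-one idea concrete:

```python
from dataclasses import dataclass

@dataclass
class MicCapsule:
    position: tuple   # metres, in the real room's frame of reference
    channel: int      # audio channel this capsule recorded

@dataclass
class VirtualSpeaker:
    position: tuple   # identical coordinates, now in the virtual space
    channel: int      # channel it performs

def transpose(module):
    """Turn the microphone set-up into a virtual loudspeaker set-up,
    keeping every position and channel assignment unchanged."""
    return [VirtualSpeaker(m.position, m.channel) for m in module]

# An invented four-capsule module at 1.5 m height:
module = [MicCapsule((x, y, 1.5), i)
          for i, (x, y) in enumerate([(0, 0), (4, 0), (4, 6), (0, 6)])]
speakers = transpose(module)
```

Because positions and channels pass through untouched, the virtual array re-performs the room at the scale it was captured: the window on the real stays one-to-one.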