The amphitheatre located between NMH and MF(UiO) is a case study for how we can explore outdoor sounds and outdoor acoustics, then use this information as a baseline for transformation and artistic work.

The Amphitheatre

The recordings revealed an interesting flutter echo that you only hear when shouting, or when a noisy seagull or crow is nearby. I will use these IRs in the composition that will later be placed in this space. There are two considerations:

  • Each ambisonic IR captures the acoustics only at the single location where it was recorded, excited by the loudspeaker as a sound source at a single position. IRs were recorded from three different locations before a noisy road-cleaning crew of men and machines began work in the area.
  • The loudspeaker is not omnidirectional. The direction it faces has a significant influence on the recorded IR.

These recordings were made during the COVID-19 shutdown, which limited how long we could remain on site.


Example 1: Single-location first-order ambisonics IR convolution, decoded to binaural (listen on headphones). This is technically incorrect, as I simply convolve the channels one-to-one, but we do at least hear the flutter echo between the buildings.
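The one-to-one approach above can be sketched as follows. This is a minimal illustration with synthetic stand-in arrays rather than the actual recordings, and it deliberately reproduces the technical shortcoming described: each B-format channel of the source is convolved only with the matching IR channel, ignoring the cross-terms between spherical-harmonic channels.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_channelwise(src_bformat, ir_bformat):
    """Convolve each B-format channel of the source with the matching
    IR channel (W*W, X*X, Y*Y, Z*Z). Technically incorrect -- it ignores
    cross-terms between the spherical-harmonic channels -- but cheap."""
    return np.stack([fftconvolve(s, h)
                     for s, h in zip(src_bformat, ir_bformat)])

# Synthetic stand-in data (not the real recordings):
# 4 first-order channels (W, X, Y, Z).
rng = np.random.default_rng(0)
src = rng.standard_normal((4, 48000))   # 1 s of noise at 48 kHz
ir = rng.standard_normal((4, 24000))    # 0.5 s "IR"
wet = convolve_channelwise(src, ir)
print(wet.shape)  # (4, 71999): full convolution length N + L - 1
```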

Example 2: Here I attempt to combine a number of first-order ambisonics IRs into one file for a matrix convolution, decoded to binaural (listen on headphones). Again it is strictly incorrect from a technical standpoint, but it begins to reflect the real scene more accurately.
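A matrix convolution of this kind can be sketched as below. The 4x4 IR matrix here is a synthetic placeholder for the combined file described above; the point of the sketch is only the structure, in which every input channel can contribute to every output channel, unlike the one-to-one case.

```python
import numpy as np
from scipy.signal import fftconvolve

def matrix_convolve(src, ir_matrix):
    """Full matrix convolution: out[i] = sum over j of src[j] * ir_matrix[i, j].
    Unlike one-to-one channel convolution, every input channel can
    contribute to every output channel."""
    n_out, n_in, ir_len = ir_matrix.shape
    out_len = src.shape[1] + ir_len - 1
    out = np.zeros((n_out, out_len))
    for i in range(n_out):
        for j in range(n_in):
            out[i] += fftconvolve(src[j], ir_matrix[i, j])
    return out

# Synthetic stand-in for a combined 4x4 first-order IR matrix.
rng = np.random.default_rng(0)
src = rng.standard_normal((4, 48000))
ir_matrix = rng.standard_normal((4, 4, 24000))
wet = matrix_convolve(src, ir_matrix)
print(wet.shape)  # (4, 71999)
```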


Example 3: Here I decode both the IR and the source to a virtual loudspeaker array, convolve the decoded channels one-to-one, and then re-encode the results in HOA. The binaural example shows that this is by far the best option. There are two features to note:

  • When source directions are placed below the horizontal there is a spatial distortion. This is because I chose to decode both the IR and the source sound to a virtual hemisphere (to reduce the number of convolution channels for this test). A full sphere would be more appropriate if the source being convolved is also spherical.
  • When the source direction is frontal we hear a gain increase. This is expected from a single 3D IR, as the bias reflects the original spatial relationship between the microphone and the loudspeaker in the IR recording. Although it may also represent an increase in clarity of the direct sound, for two of the IRs the speaker was pointing towards the wall (so that the microphone would not record a direct wavefront). Yet even when convolving with these IRs there is a frontal bias. I'm not yet sure how to solve this other than by manually adjusting the gains of the decoded channels prior to convolution.

Next stage:

Multiple mono IRs taken from locations in the outdoor space. These locations are where we might place loudspeakers in an ambisonics decoding. Each mono IR is then convolved with the corresponding decoded ambisonics channel. In theory this should allow a fully composed ambisonic scene to be heard as if it had been played outdoors. This is the inverse of a BRIR approach.
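The core of this next stage reduces to convolving each decoded loudspeaker feed with the mono IR measured at that feed's intended outdoor position. A minimal sketch, again with synthetic placeholder arrays in place of the decoded scene and the measured IRs:

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_outdoor_irs(speaker_feeds, mono_irs):
    """Convolve each decoded loudspeaker feed with the mono IR measured
    at that feed's intended outdoor loudspeaker position."""
    return np.stack([fftconvolve(feed, ir)
                     for feed, ir in zip(speaker_feeds, mono_irs)])

# Synthetic stand-ins: 8 decoded speaker feeds and 8 location IRs.
rng = np.random.default_rng(0)
feeds = rng.standard_normal((8, 48000))   # decoded ambisonic scene
irs = rng.standard_normal((8, 12000))     # one mono IR per location
outdoor = apply_outdoor_irs(feeds, irs)
print(outdoor.shape)  # (8, 59999)
```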


The temperature and humidity data logger that I deployed in early January is, as of today (26th March), still hanging in place. I will leave it there until after Easter and then download the data. The logger has taken a temperature and humidity reading once an hour since the day it was deployed.

A-format 3D IR recordings, assisted by Ulf Holand. Equipment:


  • Genelec 1031 loudspeaker
  • SPS200 microphone
  • Sounddevices 788T recorder (for mic preamps)
  • MOTU828MkII audio interface
  • MacBook Pro
  • MaxMSP8 and the HISSTools Impulse Response Toolbox (MLS and Sinetone sweep methods)