1. A feedback loop between movement, light and sound draws its signal from the intensity of the projected light and sensor data from another piece on the opposite side of the room. The remote sensors control the animation. Subsequent changes in the projection are captured by a photocell and translated into the sounds you hear.


    2. {origin: program notes, keywords: [feedback, loop, movement, light, sound, sensor]}
  1. Project proposal


    My current research concerns the embodiment of interactive relationships between people and their environment. I believe that, together, interpersonal and environmental interactions describe a live interface that brings agency to built systems and can create new models of understanding for our own human connections and our definitions of communication. My work has recently focused on these interactions, deploying sensors in public (or at least populated) environments and using the generative output to create sonic interactions in the same environment. I have used basic sensors through simple microcontroller interfaces, as well as more complex systems such as biofeedback from EEG (electroencephalography). Typically, I use MaxMSP for multi-channel sound generation and for controlling other formal outputs such as video, light, or messaging for physical actuation.

    I try to approach these tools with the aim of reducing complexity, using algorithms to abstract the fundamentals of the interactions I am trying to model. Aesthetically, this is motivated less by a need for minimalism than by the hypothesis that this distilling process creates something that can demonstrate, rather than simply describe, relationships. I aim to create work that actualizes and embodies its objectives rather than making metaphors.

    My proposal for working with ALMAT in the impuls context is in two parts. First, I propose to re-use some of the MaxMSP code I already have, extending it to support additional sensors and sound-generation strategies. For example, a recent piece of mine uses a primitive neural network to monitor a space via webcam and triggers events based on changes at the pixel level, according to adjustable sensitivity and recovery settings. Other previous projects make use of microcontroller data via serial and OSC streams over USB or wifi, and could easily be adapted for use here.
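
    To give a concrete sense of that patch's logic, here is a minimal sketch of the same idea in Python with OpenCV rather than MaxMSP, with plain frame-differencing standing in for the neural network; the `SENSITIVITY` and `RECOVERY` values and the `trigger_event` stub are illustrative placeholders, not part of the existing code.

    ```python
    import time
    import cv2

    SENSITIVITY = 25.0  # mean pixel-difference threshold that counts as a change (placeholder value)
    RECOVERY = 2.0      # seconds to wait after a trigger before re-arming (placeholder value)

    def trigger_event(amount):
        print(f"change detected: {amount:.1f}")  # stand-in for e.g. an OSC message to the sound engine

    def watch(camera_index=0):
        cap = cv2.VideoCapture(camera_index)
        previous = None
        last_trigger = 0.0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
            if previous is not None and time.time() - last_trigger > RECOVERY:
                change = cv2.absdiff(previous, gray).mean()  # average change at the pixel level
                if change > SENSITIVITY:
                    trigger_event(change)
                    last_trigger = time.time()  # start the recovery period
            previous = gray
        cap.release()

    if __name__ == "__main__":
        watch()
    ```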

    Second, I propose to begin by experimenting with a photoresistor ‘microphone’ that creates unprocessed sound signals directly from light. Many of these could be cheaply and easily built to create an expansive (but extremely simple and elegant) sensor array that informs and activates a multichannel speaker system such as the one described in the ALMAT project plan. In this model, a single photocell is soldered directly onto the raw end of an XLR cable, which can then be mixed 1:1 into a multi-channel installation, with or without processing. The photoresistor generates a sound signal that varies in pitch, timbre, and volume depending on the quality, consistency, and intensity of the light it can ‘see.’
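
    As a sketch of how one of those photocell channels could also feed control data back into a patch (my assumption, not something the cable itself requires), the same signal can be read as an ordinary audio input and reduced to an amplitude envelope, for instance with the Python sounddevice library; the channel index and block size below are placeholders.

    ```python
    import numpy as np
    import sounddevice as sd

    CHANNEL = 0       # interface input where the photocell cable lands (placeholder)
    BLOCKSIZE = 1024  # samples per analysis block

    def on_audio(indata, frames, time_info, status):
        # RMS of the light-derived signal; stronger flicker/modulation reads as a higher level
        level = float(np.sqrt(np.mean(indata[:, CHANNEL] ** 2)))
        print(f"photocell level: {level:.4f}")

    with sd.InputStream(channels=CHANNEL + 1, blocksize=BLOCKSIZE, callback=on_audio):
        sd.sleep(10_000)  # monitor for ten seconds
    ```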

    Of course, the use of any particular sensor strategy will depend heavily on the space, the flow of human traffic through that space, and appropriate responses to the site-specific and collaborative nature of the project. I have a small collection of sensors and a prototyping kit I can bring along. Additionally, I have two very small, versatile portable projectors that could add a visual component to the installation, possibly by creating a feedback loop with the photocell sensors.

    As an overall strategy, I would like to work toward using sensor data to interpret the modes of interaction that people have with the space, and then activating that data in ways that offer people alternative modes of interaction. By allowing this ‘rule’ to guide the creative process, we would be certain to end up with an interactive environment that evolves through the exercise of agency, or ‘embodied access.’

    I do have questions about what gear I should plan on bringing. I was thinking of bringing along:

    • at least one pico projector 
    • a USB webcam
    • a MOTU UltraLite
    • 6 photocell cables, each ~2 m, terminated with a 1/4" (male) jack to fit my MOTU
    • Depending on deployment, I may need extension cables/adaptors, or to adapt to a different interface.

    I would like to at least try to work with MaxMSP for sound processing, and I believe it is possible to create an abstraction that will run on the Pi, but of course Pd may be a simple enough alternative. The part I am most curious about is whether we will be able to support multiple audio inputs, or whether I should focus on just one and skip the MOTU altogether. (I do know that historically my version of the MOTU (mk3) does not play well with Linux, and as such probably won't work with the Pi.) I am currently looking for other multichannel input solutions for the Pi.
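
    As a quick way to answer that question on the Pi itself (a minimal sketch, assuming the Python sounddevice bindings for PortAudio are installed there), listing the input channel count of whatever devices ALSA exposes would show what is actually usable:

    ```python
    import sounddevice as sd

    # List every audio device PortAudio/ALSA exposes and its input channel count,
    # to check whether a class-compliant multichannel interface is usable on the Pi.
    for index, device in enumerate(sd.query_devices()):
        if device["max_input_channels"] > 0:
            print(f"{index}: {device['name']} ({device['max_input_channels']} inputs)")
    ```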


  2. {origin: project proposal}

Alicia Champlin

---
meta: true
date: Feb-2019
event: impuls academy
place: muwa

keywords: [exhibition, sound art]
author: Alicia Champlin

---