An interactive installation for two or more people
Created and premiered at the Montreal Contemporary Music Lab, June 2015
Sponsored in part by Signal Culture
(MIDI Sprout biorhythmic sensor, conductive headband, tactile sensors, four-channel sound,
video, custom software, Ableton Live)
How do our interactions with other people resonate in our minds? Can strangers or loved ones accurately understand the intent of our actions towards them when outside circumstances or unknown factors often warp their meaning? How can we be sure that we are helping someone if we do not know how they interpret the concept of “help”? This installation provides two stations of interaction: one passive, one active. Each role you play yields different reactions from the generative software, and when working in tandem with someone else, each of your actions affects the other’s, resulting in an ever-changing audiovisual environment.
A wall, allowing no visual contact between the participants, separates the stations. At one station, a participant is presented with a looping audiovisual environment and a headband, which is connected to the MIDI Sprout biorhythmic sensor. When the headband is worn, the participant’s galvanic skin response is measured, converted into a stream of MIDI data, and analyzed by the custom Max/MSP software. The results of the live MIDI stream analysis cause the software to shuffle, loop, or stutter through a collection of video clips, and to control audio modulations on the backing drone. Once the headband is removed, the environment returns to its “null” state.
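The analysis stage described above can be sketched in miniature: a window of incoming MIDI control values is reduced to an activity level, which then selects a video playback mode. This is a hypothetical Python reconstruction, not the installation's actual Max/MSP patch; the threshold values and mode names are illustrative assumptions.

```python
import random

# Illustrative thresholds for classifying galvanic-skin-response activity
# (the real Max/MSP analysis is not published; these values are assumptions).
CALM_MAX = 40
ACTIVE_MAX = 90

def classify_activity(cc_values):
    """Map a window of MIDI CC values (0-127) to a video playback mode."""
    avg = sum(cc_values) / len(cc_values)
    if avg <= CALM_MAX:
        return "loop"      # low arousal: let the current clip loop
    elif avg <= ACTIVE_MAX:
        return "shuffle"   # moderate arousal: jump to a random clip
    return "stutter"       # high arousal: rapidly retrigger the clip

def next_clip(mode, clips, current):
    """Pick the next clip to play given the current mode."""
    if mode == "shuffle":
        return random.choice(clips)
    return current  # "loop" and "stutter" both stay on the current clip
```

In a patch like this, the same activity level that selects the playback mode could also be mapped onto modulation depth for the backing drone, so one sensor stream drives both image and sound.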
On the opposite side of the wall, a participant is presented with a stark environment, populated only by a tactile-sensor control box. Running a finger along the circular sensor and pressing the buttons results in a Markov-based performance of percussive tone colors (non-pitched sounds as well as pitched chimes). Each touch or press allows the custom software to alter the probability tables that determine the performance of these percussive sounds, and this constantly changing set of controls presents a beguiling and somewhat frustrating experience to participants on this side of the installation. Currently, these sounds are generated by a sampler, but a future iteration will instead incorporate a robotic percussion instrument, providing an acoustic sound source for this side of the installation.
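The Markov mechanism above can be sketched as a transition table between sound types that drifts with every touch. This is a hedged sketch only: the sound names, the uniform starting table, and the perturbation rule are all assumptions standing in for the installation's unpublished Max/MSP logic.

```python
import random

# Hypothetical sound vocabulary for the percussion side of the installation.
SOUNDS = ["wood", "metal", "skin", "chime"]

def make_table():
    """Start with uniform transition probabilities between all sounds."""
    return {s: {t: 1.0 / len(SOUNDS) for t in SOUNDS} for s in SOUNDS}

def perturb(table, touched, amount=0.1):
    """Each touch nudges every row toward the touched sound, then renormalizes,
    so the performance's tendencies shift under the participant's hands."""
    for row in table.values():
        row[touched] += amount
        total = sum(row.values())
        for t in row:
            row[t] /= total

def step(table, current):
    """Choose the next sound by sampling the current row of the table."""
    row = table[current]
    return random.choices(list(row), weights=row.values())[0]
```

Because every interaction rewrites the table rather than triggering a fixed sound, the mapping between gesture and result keeps sliding, which is the source of the "beguiling and somewhat frustrating" quality described above.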
When both stations of the installation are occupied, the actions of each participant directly cause changes in the environments on both sides of the wall. Engaging with the sensor box now distorts the video projection and creates drastic pitch modulations on the headband side of the wall; wearing the headband switches the routing of the sensor box’s controls, resulting in the inclusion of more chime pitches and sending the percussive tone colors through a heavy granular delay. Eventually, audio from each side of the wall bleeds into the other side’s channels, as if the wall between them were beginning to crack.
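The dual-occupancy behavior amounts to state-dependent routing: which effects are active on each side depends on whether the other station is occupied. A minimal sketch, assuming hypothetical effect names for the routings described above:

```python
def route(headband_on, sensor_active):
    """Return the effect chains applied to each side of the wall,
    given which stations are currently occupied (names are illustrative)."""
    effects = {"video": [], "percussion": []}
    if sensor_active:
        # Sensor-box activity reaches across the wall to the headband side.
        effects["video"] += ["distort", "pitch_mod"]
    if headband_on:
        # Wearing the headband reroutes the sensor box's controls.
        effects["percussion"] += ["extra_chimes", "granular_delay"]
    return effects
```

With neither station occupied, both chains are empty and each environment sits in its baseline state; with both occupied, every gesture on one side is audible and visible on the other.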