In this piece I conduct audience members who run Control on their mobile devices; instructions for downloading the app are projected before the performance begins.
When the piece begins, I push a simple interface consisting of a single slider to each participant's phone. In the role of the Conductor, I raise my arms up and down; participants are asked to match this gesture on their phones, and their gestures control parameters of an ambient soundscape.
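The pushed interface might look roughly like the sketch below, written in the style of Control's JavaScript interface definitions. This is a hypothetical reconstruction, not the piece's actual interface file: the widget name, layout values, and extra fields are assumptions, and the exact schema may differ between versions of the app.

```javascript
// Hypothetical one-slider interface in the style of a Control
// interface definition file. All names and values here are
// illustrative assumptions, not the actual interface from the piece.
loadedInterfaceName = "conductor";       // assumed interface name
interfaceOrientation = "portrait";

pages = [[
  {
    "name":  "gestureSlider",            // assumed widget name
    "type":  "Slider",
    "x": 0.1,  "y": 0.1,                 // position, normalized 0..1
    "width": 0.8, "height": 0.8,         // size, normalized 0..1
    "min": 0, "max": 1                   // slider range sent over OSC
  }
]];
```

In this scheme, moving the slider would send its normalized value over OSC, which the performance software can then map onto soundscape parameters.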
The concept of the piece can be summed up in a question: will anonymous audience participants obey a Conductor?
If audience members don't obey me, I can banish them to the queue. Up to eight people can participate at once, while everyone else waits in the queue to be activated. New participants are activated at the onset of different musical scenes; they can also replace participants who have been cut.
The visualization, networking, and composition logic were done in LuaAV; the audio comes from Ableton Live. This video was edited rather hastily a few months back... I'll try to put something nicer up soon.
The human visual system constantly receives a large amount of information that has to be filtered if we want it to be meaningful. A key element in knowing where to direct our attention is the classification between figure and ground. However, that separation is not always evident. Examples such as Rubin's vase show that it is possible to create images with more than one stable interpretation. But can the same type of ambiguity be created in a sequence of moving images? Are we able to track what is happening at both the local and the global scale?
"Background singer" is a short animation that explores these questions and the possibilities of parallel story telling using the duality of local and global.
This is an approximation / recreation of a performance from early August 2012. The first half makes heavy use of the new three-oscillator monosynth in Gibber, for both the bass and the lead line. The second half has some granulation and minimal bleeping.