“Voice of Sisyphus” is a time-based study of a single photograph, realized as a continuously performing audio-visual composition. It is presented as a multimedia installation with a large cinematic projection and four-channel audio, spatializing sound through speakers positioned in each of the four corners of the exhibition space. The photographic image, taken at a formal ball sometime in the past, was chosen as emblematic of the theatricality of a formal social event. Although it is a documentary still photograph, it nonetheless suggests the staging and narrative qualities of cinematic construction.
A single photograph is transformed through software in nine distinct ways over time. The sound composition is generated from an analysis of visual regions in the photograph: the software samples pixel clusters, “reading” and translating areas of the image into sound at 30 frames per second.
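The installation's actual sonification engine is documented in the ICAD paper linked below; as a rough illustration of the underlying idea, the following sketch (hypothetical, not the installation's code) maps the brightness values of a sampled pixel region directly to audio sample amplitudes:

```python
# Illustrative sketch only: convert a grayscale pixel region into a short
# audio waveform by scanning its pixels row by row and mapping brightness
# to sample amplitude. Pixel values are assumed to be 0-255 integers;
# output samples lie in [-1.0, 1.0].

def region_to_waveform(region):
    """Flatten a 2D pixel region (rows of 0-255 values) into samples."""
    samples = []
    for row in region:
        for pixel in row:
            # Center brightness around zero: 0 -> -1.0, 255 -> 1.0
            samples.append((pixel - 127.5) / 127.5)
    return samples

# Example: a tiny 2x3 region sampled from the image
region = [
    [0, 128, 255],
    [64, 192, 32],
]
wave = region_to_waveform(region)
```

In the installation, a comparable translation happens for every sampled region at 30 frames per second, so the waveform changes continuously as the regions move over the image.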
Movement mode, the most distinctive visual parameter, determines how each region reads the image to produce sound. Four main types of movement have been defined:

1) Stationary: usually assigned to the “large” background region, which remains still while the “small” foreground region moves over it. In some cases both the background and foreground regions remain stationary, each generating its own sounds based on the filtering process at work.

2) Smooth scanning: the foreground region scans over the image either horizontally or vertically, and either smoothly or in a randomized back-and-forth manner.

3) Rectangular divisions: a region jumps over grid divisions of the image, cycling either randomly or in sequential patterns in various directions.

4) Regions of interest: regions are selected according to coordinates manually set to feature areas of semantic interest in the image, such as faces, clusters of people, windows, glasses, lines, mirrors, plants, and decorations.
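The movement modes above can be pictured as generators that emit a region position each frame. The following sketch is purely illustrative (the function names and parameters are hypothetical, not taken from the installation's software) and covers three of the four modes:

```python
# Illustrative sketch of movement modes: each generator yields the
# top-left (x, y) position of a scanning region, one position per frame.
import random

def smooth_scan(img_w, region_w, step=4):
    """Smooth horizontal scanning: sweep left to right, then wrap."""
    x = 0
    while True:
        yield (x, 0)
        x = (x + step) % (img_w - region_w + 1)

def grid_jump(img_w, img_h, cols, rows, sequential=True):
    """Rectangular divisions: jump across a grid of image cells,
    either in sequence or at random."""
    cells = [(c * img_w // cols, r * img_h // rows)
             for r in range(rows) for c in range(cols)]
    i = 0
    while True:
        if sequential:
            yield cells[i % len(cells)]
            i += 1
        else:
            yield random.choice(cells)

def regions_of_interest(coords):
    """Cycle through manually chosen semantic regions
    (faces, windows, mirrors, etc.)."""
    i = 0
    while True:
        yield coords[i % len(coords)]
        i += 1

scan = smooth_scan(img_w=100, region_w=20, step=10)
positions = [next(scan) for _ in range(3)]
```

The stationary mode is the trivial case: the region simply yields the same coordinates every frame.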
--
Voice of Sisyphus is a multimedia installation by George Legrady with image analysis, audio and spatialization software development by Ryan McGee and audio composition software development by Joshua Dickinson.
The novel image sonification software scans and translates regions of the image into audio waveforms. A detailed technical description of the sonification can be found at: lifeorange.com/writing/McGee_ICAD_2012.pdf
The installation premiered at the Edward Cella Gallery in Los Angeles from November 5th, 2011 to February 4th, 2012.