Wind[1] creates sound from motion in the viewer's visual environment.
Each presentation of Wind takes the form of a temporary intervention, constructed around a site-specific visual environment of objects moving in the wind, such as a tree or a field of tall grass. A video camera records images of this visual environment. The images are delivered to a vision processing system that extracts fields of motion and mathematically transforms these into fields of sound. The fields of sound are in turn reproduced through loudspeakers placed within the original environment.
Heard and seen together, the sound fields and visual environment form a stimulus feedback loop. The sound fields guide and enhance the viewer’s visual experience, focusing their visual attention toward particular kinds of visual motion. This strengthened visual experience in turn guides and enhances the viewer’s sonic experience, focusing their sonic attention toward particular kinds of sonic motion. This feedback loop creates a strong interconnection between the visual and the sonic in the mind of the viewer, leading to momentary states of complete attention within each sense and an overall heightened awareness of the beauty that is present.
Built using openFrameworks, Pure Data, and Linux on a PandaBoard.
[1] frey.co.nz/wind