Wind creates sound from motion in the viewer's visual environment.
Each presentation of Wind takes the form of a temporary intervention, constructed around a site-specific visual environment of objects moving in the wind, such as a tree or a field of tall grass. A video camera records images of this environment. The images are delivered to a vision processing system that extracts fields of motion and mathematically transforms them into fields of sound. The fields of sound are in turn reproduced through loudspeakers placed within the original environment.
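The chain described above can be sketched in a few lines. The example below is a hypothetical illustration only: the installation itself runs on openFrameworks and Pure Data, while here simple frame differencing stands in for the vision system's motion extraction, and each image region drives the amplitude of one sine oscillator (the grid size and base frequency are assumed, not taken from the work).

```python
# Hypothetical sketch of the Wind pipeline: motion fields -> sound fields.
# Frame differencing stands in for the real motion extraction; each of
# GRID x GRID image regions controls the amplitude of one oscillator.

import math

GRID = 4          # split the frame into GRID x GRID regions (assumed)
BASE_HZ = 220.0   # lowest oscillator frequency (assumed, illustrative)

def motion_field(prev, curr):
    """Per-pixel absolute frame difference: a crude motion estimate."""
    return [[abs(c - p) for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def region_energies(field):
    """Average motion per region, normalised to [0, 1] for 8-bit frames."""
    h, w = len(field), len(field[0])
    rh, rw = h // GRID, w // GRID
    energies = []
    for gy in range(GRID):
        for gx in range(GRID):
            total = sum(field[y][x]
                        for y in range(gy * rh, (gy + 1) * rh)
                        for x in range(gx * rw, (gx + 1) * rw))
            energies.append(total / (rh * rw * 255.0))
    return energies

def sound_field(energies, t):
    """One sample of the sound field: a sum of sines, one per region,
    each oscillator's amplitude set by that region's motion energy."""
    return sum(a * math.sin(2 * math.pi * BASE_HZ * (i + 1) * t)
               for i, a in enumerate(energies))

# Two 8x8 grayscale frames with motion only in the top-left region.
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
curr[0][0] = curr[0][1] = curr[1][0] = curr[1][1] = 255

e = region_energies(motion_field(prev, curr))
print(e[0] > 0 and all(v == 0 for v in e[1:]))  # only region 0 sounds
```

Because each region maps to its own oscillator, visual motion in one part of the scene is heard as energy at one place in the spectrum, which is the spatial coupling the feedback loop below depends on.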
Heard and seen together, the sound fields and visual environment form a stimulus feedback loop. The sound fields guide and enhance the viewer's visual experience, focusing their visual attention toward particular kinds of visual motion. This strengthened visual experience guides and enhances the viewer's sonic experience, focusing their sonic attention toward particular kinds of sonic motion. This feedback loop creates a strong interconnection between the visual and the sonic within the mind of the viewer, leading to momentary states of complete attention to the different senses, and an overall heightened awareness of the beauty that is present.
Built using openFrameworks, Pure Data, and Linux on a PandaBoard.