This video was made for the first term of the Adaptive Architecture and Computation course, as part of the Design Studio module.

The purpose of this project was to investigate ways of translating an image into sound based on how the user looks at it. To determine where the user is looking, an eye tracker had to be developed following the standards of the EyeWriter. Certain characteristics of the image, such as local hue and brightness, were used to define the sounds, while the complexity of these parameters defined the melodic patterns. Finally, the speed of the eye provided another layer of control.
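As a rough illustration of that mapping, here is a minimal sketch in plain Python. This is not the project's actual code: the gaze input, the patch size, and every mapping range (hue to pitch, brightness to volume, patch complexity to pattern length, eye speed to note rate) are assumptions made for the example.

```python
import colorsys
import math
import statistics

def patch_pixels(image, x, y, radius=8):
    """Collect the (r, g, b) pixels in a square patch around the gaze point.

    `image` is assumed to be a 2-D list of (r, g, b) tuples in [0, 255].
    """
    h, w = len(image), len(image[0])
    return [
        image[j][i]
        for j in range(max(0, y - radius), min(h, y + radius + 1))
        for i in range(max(0, x - radius), min(w, x + radius + 1))
    ]

def gaze_to_sound(image, gaze, prev_gaze, dt):
    """Map local image properties at the gaze point to sound parameters.

    Hue -> pitch, brightness -> volume, local complexity -> melodic
    pattern length, eye speed -> note rate. All ranges are illustrative
    guesses, not the values used in the project.
    """
    x, y = gaze
    pixels = patch_pixels(image, x, y)
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]

    hue = statistics.fmean(p[0] for p in hsv)
    brightness = statistics.fmean(p[2] for p in hsv)
    # "Complexity" approximated here as brightness variation in the patch.
    complexity = statistics.pstdev(p[2] for p in hsv)

    pitch_hz = 220.0 * 2 ** (hue * 2)        # hue spans two octaves above A3
    volume = brightness                       # brightness drives amplitude
    pattern_len = 1 + int(complexity * 16)    # busier patches -> longer motifs

    # Faster eye movement -> faster notes, capped at 8 per second.
    speed = math.dist(gaze, prev_gaze) / dt
    note_rate = min(8.0, 1.0 + speed / 100.0)
    return pitch_hz, volume, pattern_len, note_rate

# Example: a flat gray test image and two gaze samples 50 ms apart.
img = [[(128, 128, 128)] * 64 for _ in range(64)]
print(gaze_to_sound(img, (32, 32), (20, 20), dt=0.05))
```

In a real openFrameworks/EyeWriter setup, the gaze coordinates would come from the tracker each frame and the returned parameters would feed a synthesizer rather than being printed.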
