The Blobatron combines input from a Microsoft Kinect with Processing in order to generate a visual representation of the 3-dimensional movement inside a constrained physical space.
Using the OpenKinect and OpenCV libraries, the system reads the Kinect's depth image to map each disk's spatial coordinates, both its position and its height. OpenCV detects the contours of the disks, or "blobs", and stores their coordinates.
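As a rough illustration of this step, the sketch below finds blob centroids in a raw depth image by thresholding and flood fill. This is a minimal stand-in, not the project's actual code: the real Blobatron uses the OpenKinect and OpenCV libraries, and all names here (`BlobFinder`, `findBlobs`, the depth threshold) are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: detect "blobs" (the disks) in a Kinect depth image
// by thresholding and 4-connected flood fill, recording each blob's
// centroid and its closest depth value. The actual project delegates
// contour detection to OpenCV.
public class BlobFinder {
    public static class Blob {
        public final double cx, cy;  // centroid in pixel coordinates
        public final int minDepth;   // closest depth value inside the blob
        Blob(double cx, double cy, int minDepth) {
            this.cx = cx; this.cy = cy; this.minDepth = minDepth;
        }
    }

    // depth[y][x]: raw depth values; pixels below `threshold` count as a disk
    public static List<Blob> findBlobs(int[][] depth, int threshold) {
        int h = depth.length, w = depth[0].length;
        boolean[][] seen = new boolean[h][w];
        List<Blob> blobs = new ArrayList<>();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (seen[y][x] || depth[y][x] >= threshold) continue;
                // iterative flood fill to collect one connected region
                long sx = 0, sy = 0;
                int n = 0, minD = Integer.MAX_VALUE;
                java.util.ArrayDeque<int[]> stack = new java.util.ArrayDeque<>();
                stack.push(new int[]{x, y});
                seen[y][x] = true;
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    int px = p[0], py = p[1];
                    sx += px; sy += py; n++;
                    minD = Math.min(minD, depth[py][px]);
                    int[][] nbrs = {{px+1,py},{px-1,py},{px,py+1},{px,py-1}};
                    for (int[] q : nbrs) {
                        int qx = q[0], qy = q[1];
                        if (qx >= 0 && qx < w && qy >= 0 && qy < h
                                && !seen[qy][qx] && depth[qy][qx] < threshold) {
                            seen[qy][qx] = true;
                            stack.push(new int[]{qx, qy});
                        }
                    }
                }
                blobs.add(new Blob((double) sx / n, (double) sy / n, minD));
            }
        }
        return blobs;
    }
}
```

Each blob's centroid gives the disk's position in the image plane, while its minimum depth value stands in for the disk's height above the table.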
Based on the coordinates read from the Kinect, cubes are drawn in a 3-dimensional space in Processing. By pulling the strings of the Blobatron, the user manipulates where and how the volumes are generated in space. The weight of the disks and the friction of the strings in the pulley system provide inherent haptic feedback, tying the user's movement to the visual output.
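The mapping from blob coordinates to cube positions could look something like the following. This is a sketch under assumed calibration values, not the project's actual numbers: the 640x480 image size, the 400-1200 mm depth range, and the 400-unit scene are all illustrative, and `CubeMapper` is a hypothetical name.

```java
// Hypothetical mapping from a blob's pixel centroid and Kinect depth
// reading to a cube position in the Processing scene. All ranges here
// are illustrative assumptions, not the project's actual calibration.
public class CubeMapper {
    // linear interpolation, equivalent to Processing's map()
    static float map(float v, float a0, float a1, float b0, float b1) {
        return b0 + (v - a0) * (b1 - b0) / (a1 - a0);
    }

    // returns {x, y, z} scene coordinates for one blob
    public static float[] cubePosition(float cx, float cy, float depthMm) {
        float x = map(cx, 0, 640, -200, 200);         // image x -> scene left-right
        float z = map(cy, 0, 480, -200, 200);         // image y -> scene front-back
        float y = map(depthMm, 400, 1200, 200, -200); // closer disk = higher cube
        return new float[]{x, y, z};
    }
}
```

Inside a Processing draw loop, each mapped triple would feed a `translate()` followed by a `box()` call to render the cube.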
This clear feedback to the user's manipulation of the strings lets the user play the Blobatron as a visual instrument.
Made during the Generative Design week 2012 at Copenhagen Institute of Interaction Design.
Project by Kenneth A. Robertsen and Kat Zorina
http://www.ciid.dk / http://katzorina.com / http://www.kennethaleksander.no