Kinect is not only about entertainment and gaming. We can also use its motion tracking and detection in in-store installations, so we have been experimenting with a couple of ideas. The first demo we did is an installation meant for fashion stores, mixing the in-store shopping experience with the online store. Here's a short movie of it. We've since built on these ideas, combining Kinect with social media and other experiences as well.
Magrathea uses the Kinect camera to dynamically generate a landscape out of any structure or object. The Kinect takes a depth reading of whatever is built on the table in front of it, which is then rendered live onscreen as terrain using openFrameworks and OpenGL.
The depth reading is mapped to a polygonal mesh, which then has textures dynamically applied to it based on the height and slope of the structure. For example, steep slopes are given a rocky texture, and flatter areas a grassy one. As the user builds and removes material, the landscape correspondingly rises out of and sinks back into the ocean, shifting into a new configuration.
A landscape can be made from anything: blocks, boxes, the human body, even a giant mound of dough.
Thank you for watching, and we hope you enjoyed Magrathea.
Made by Timothy Sherman and Paul Miller
For Golan Levin's Interactive Art & Computational Design course at Carnegie Mellon University in Spring 2011
This example demonstrates how to track the average location of the points that fall within a specific depth threshold. It's the most basic form of hand tracking (though, of course, it knows nothing about whether the object is actually a hand).