Magrathea uses the Kinect camera to dynamically generate a landscape from any structure or object. The Kinect takes a depth reading of what's built on the table in front of it, which is then rendered live onscreen as terrain using openFrameworks and OpenGL.
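
That depth-in, terrain-out loop could be sketched roughly as below using the ofxKinect addon for openFrameworks. This is not the project's actual source; the sampling step, depth range, and height scaling are assumptions made for illustration.

```cpp
#include "ofMain.h"
#include "ofxKinect.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofMesh terrain;
    ofEasyCam cam;

    void setup() {
        kinect.init();
        kinect.open();
        terrain.setMode(OF_PRIMITIVE_TRIANGLES);
    }

    void update() {
        kinect.update();
        if (!kinect.isFrameNew()) return;

        terrain.clear();
        const int step = 4;              // sample every 4th depth pixel (assumed)
        const int kW = 640, kH = 480;    // Kinect depth image size
        for (int y = 0; y < kH; y += step) {
            for (int x = 0; x < kW; x += step) {
                float d = kinect.getDistanceAt(x, y);   // depth in millimeters
                // Objects closer to the camera become higher terrain;
                // the 500-1200 mm range is an illustrative guess.
                float h = (d > 0) ? ofMap(d, 500, 1200, 200, 0, true) : 0;
                terrain.addVertex(glm::vec3(x, h, y));
            }
        }
        // Stitch the grid of samples into triangles.
        const int cols = kW / step, rows = kH / step;
        for (int y = 0; y + 1 < rows; y++) {
            for (int x = 0; x + 1 < cols; x++) {
                int i = y * cols + x;
                terrain.addIndex(i);     terrain.addIndex(i + 1);        terrain.addIndex(i + cols);
                terrain.addIndex(i + 1); terrain.addIndex(i + cols + 1); terrain.addIndex(i + cols);
            }
        }
    }

    void draw() {
        ofBackground(0);
        ofEnableDepthTest();
        cam.begin();
        terrain.drawWireframe();
        cam.end();
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```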

The depth reading is mapped onto a polygonal mesh, which then has textures dynamically applied to it based on the height and slope of the structure: steep slopes are given a rocky texture, for example, and flatter areas a grassy one. As the user builds up or removes material, the landscape correspondingly rises out of the ocean and sinks back into it, shifting into a new configuration.
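
A rough illustration of that height-and-slope rule is sketched below. The thresholds, names, and the gradient-based slope estimate are invented for this sketch, not taken from the project, which may blend textures rather than pick one per vertex.

```cpp
#include <cmath>
#include <vector>

enum class Terrain { Ocean, Grass, Rock };

// height: vertex height above sea level; slope: magnitude of the local
// height gradient. Thresholds below are illustrative assumptions.
Terrain classify(float height, float slope) {
    const float seaLevel  = 0.0f;
    const float rockSlope = 0.7f;   // roughly a 35-degree incline
    if (height <= seaLevel) return Terrain::Ocean;
    if (slope > rockSlope)  return Terrain::Rock;
    return Terrain::Grass;
}

// Slope at an interior grid point via central differences,
// assuming unit spacing between samples.
float slopeAt(const std::vector<float>& h, int cols, int x, int z) {
    float dx = (h[z * cols + (x + 1)] - h[z * cols + (x - 1)]) * 0.5f;
    float dz = (h[(z + 1) * cols + x] - h[(z - 1) * cols + x]) * 0.5f;
    return std::sqrt(dx * dx + dz * dz);
}
```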

A landscape can be made from anything: blocks, boxes, the human body, even a giant mound of dough.

Thank you for watching, and we hope you enjoyed Magrathea.

Made by Timothy Sherman and Paul Miller
For Golan Levin's Interactive Art & Computational Design course at Carnegie Mellon University in Spring 2011
