In this experiment, the objective was to port the ZCam's code into an addon for openFrameworks and see what I ended up with. The ZCam, which almost made it into commercial production before Microsoft purchased its developer, 3DV Systems, uses real-time range imaging information in place of traditional full-color video. Using a near-infrared pulse, it captures a variety of image data that can be separated and massaged in ways that let the developer detect how far pixels are from the ZCam lens.

For this experiment, I output grayscale depth-map images and generated MEL code on the fly for Maya to run. Once Maya received the code, it created a mesh, loaded the latest depth-map image, applied it as a displacement map to the mesh, baked it in, took a range of vertices from the mesh, and placed spheres at their locations. Using this approach, any 3D object could stand in for the spheres, so rather than calling it a point cloud, I think "poly-cloud" is more apropos.
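Since the post only describes the generated MEL at a high level, here is a minimal sketch of what such a script might look like. The mesh dimensions, the depth-map path (depth_latest.png), the vertex stride of 10, the sphere radius, and all node names are hypothetical placeholders rather than values from the original project:

```mel
// Hypothetical sketch of the generated MEL. Plane size, subdivisions,
// file path, stride, and radius are illustrative assumptions.
string $plane[] = `polyPlane -w 10 -h 10 -sx 64 -sy 48 -n "depthMesh"`;

// File texture that loads the latest grayscale depth map.
string $file = `shadingNode -asTexture file`;
setAttr -type "string" ($file + ".fileTextureName") "depth_latest.png";

// Displacement shader driven by the depth map's luminance (outAlpha).
string $disp = `shadingNode -asShader displacementShader`;
connectAttr -f ($file + ".outAlpha") ($disp + ".displacement");

// Assign a material and wire the displacement into its shading group.
string $shader = `shadingNode -asShader lambert`;
string $sg = `sets -renderable true -noSurfaceShader true -empty -name "polyCloudSG"`;
connectAttr -f ($shader + ".outColor") ($sg + ".surfaceShader");
connectAttr -f ($disp + ".displacement") ($sg + ".displacementShader");
sets -e -forceElement $sg $plane[0];

// Bake the displacement into real geometry
// (assumes displacementToPoly returns the new mesh's name).
select -r $plane[0];
string $baked[] = `displacementToPoly`;

// Place a small sphere at every 10th vertex of the baked mesh.
int $nVerts[] = `polyEvaluate -vertex $baked[0]`;
int $i;
for ($i = 0; $i < $nVerts[0]; $i += 10) {
    float $p[] = `pointPosition ($baked[0] + ".vtx[" + $i + "]")`;
    string $s[] = `polySphere -r 0.05`;
    move -a $p[0] $p[1] $p[2] $s[0];
}
```

Swapping `polySphere` for any other primitive (or an instanced custom mesh) is what makes the "poly-cloud" more flexible than a plain point cloud.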
