Made with Cinder and a Kinect sensor. Runs in real time.

Seven copies of myself trail my movements. The depth map stays in play for all of the copies, so there can be self-occlusion.

The process is fairly straightforward. I maintain an array of 120 depth textures and 120 video textures. Each frame, I grab the current depth and video textures, add them to the arrays (using a cycling index), and send them plus 7 older copies to a shader. For each fragment, I use the video RGB value that corresponds to the smallest depth value (in this case, the smaller the depth, the closer the fragment is to the Kinect).

I am using the infrared video feed because it aligns with the depth map without any extra calibration. Makes me look odd. As if I needed any extra help looking odd in this video.
