This should explain pretty well how Occlusion Video Chat works behind the scenes.
"Side A" is a bare-bones server. When I move the mouse around you'll see a few blurry dots follow. Those dots and the very faded image have transparency applied to them. BUT I'm not using the transparency as transparency. I'm exploiting it to store depth information.
"Side B" is a bare-bones client. It receives the transparent image from "Side A". With a texture of its own and the over-the-wire texture combined, I can send them both to a GLSL shader that compares the transparencies per-pixel, renders the closest pixel in front, and sets the output transparency back to 100%.
This is a simulated "perfect" scenario though. With a Kinect and real people in front of a camera, the color and depth images had to be carefully aligned, and all the "bad data" had to be filled in to make "pretty" depth channels.
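For the curious, here's one way that kind of hole-filling could work in a shader, assuming invalid Kinect pixels come through as zero depth. Again, a sketch of the general approach, not the project's actual cleanup code:

```glsl
// Sketch of a single-pass hole fill for bad depth pixels,
// assuming invalid Kinect readings arrive as depth == 0.0.
#version 120
uniform sampler2D depthTex;
uniform vec2 texelSize;  // 1.0 / texture resolution, passed in by the app

void main() {
    vec2  uv = gl_TexCoord[0].st;
    float d  = texture2D(depthTex, uv).r;
    if (d == 0.0) {
        // Borrow the largest neighboring depth so holes inherit
        // a plausible value instead of reading as "infinitely far".
        float n = 0.0;
        n = max(n, texture2D(depthTex, uv + vec2( texelSize.x, 0.0)).r);
        n = max(n, texture2D(depthTex, uv + vec2(-texelSize.x, 0.0)).r);
        n = max(n, texture2D(depthTex, uv + vec2(0.0,  texelSize.y)).r);
        n = max(n, texture2D(depthTex, uv + vec2(0.0, -texelSize.y)).r);
        d = n;
    }
    gl_FragColor = vec4(vec3(d), 1.0);
}
```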
Also, as of this version, the transparent image that comes in from the server loses some opacity and brightness in its pixels; I had to boost both back up by 25%. Even though the test "A" and "B" images were rendered with the exact same lighting, they look different. I think it's a color-conversion issue when I'm using getPixels(), but I'm not sure yet.
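That correction is simple enough to show. Here's the 25% boost written as a shader pass over the incoming texture (the real fix might just as easily happen on the CPU after getPixels(); `remoteTex` is a hypothetical name):

```glsl
// Sketch of the 25% correction applied to incoming pixels,
// compensating for the opacity/brightness loss described above.
#version 120
uniform sampler2D remoteTex;

void main() {
    vec4 px = texture2D(remoteTex, gl_TexCoord[0].st);
    // Boost both color and alpha (our depth channel) by 25%, clamped to valid range.
    gl_FragColor = clamp(px * 1.25, 0.0, 1.0);
}
```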
Occlusion Video Chat is premiering at SPARKcon 2012 on September 14th in the Digital Motion Showcase.