Three Microsoft Kinect depth cameras are calibrated so that their motion-tracked skeletons can be transformed into a unified 3D scene. As a skeleton moves out of one camera's field of view, it is picked up by the next camera. In this demo, all "versions" of a skeleton are shown for clarity (left camera (2): blue, center camera (0): green, right camera (1): red -- best viewed in HD), but only the best representation should be shown or used for further processing. Each Kinect camera's host PC is connected to the output device via Ethernet, allowing for a larger setup, both in the space covered and in the number of Kinect cameras in use.
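The core idea can be sketched in a few lines: each camera's calibration yields a rigid transform into the shared world frame, and one "best" skeleton is selected among the per-camera versions. The function names, the 4x4 extrinsic matrices, and the tracked-joint-count heuristic below are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def to_world(joints_cam, extrinsic):
    """Transform Nx3 camera-space joint positions into the shared world
    frame using a 4x4 homogeneous extrinsic matrix (camera -> world),
    as obtained from calibrating each Kinect."""
    homog = np.hstack([joints_cam, np.ones((len(joints_cam), 1))])
    return (extrinsic @ homog.T).T[:, :3]

def best_skeleton(candidates):
    """Pick the 'best' version of a skeleton among the per-camera
    candidates; here, simply the one with the most tracked joints
    (a stand-in heuristic, not the project's actual criterion)."""
    return max(candidates, key=lambda s: s["tracked_joints"])

# Hypothetical example: a camera offset 1 m to the right of the world origin.
extrinsic = np.eye(4)
extrinsic[:3, 3] = [1.0, 0.0, 0.0]
joints = np.array([[0.0, 0.0, 2.0]])       # one joint, 2 m in front of camera
world_joints = to_world(joints, extrinsic)  # -> [[1.0, 0.0, 2.0]]

candidates = [
    {"camera": 0, "tracked_joints": 18},
    {"camera": 1, "tracked_joints": 12},
]
best = best_skeleton(candidates)            # camera 0 wins
```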
The project shown here was created between April and July of 2011 by Minh Tuan Nguyen, Jonas Witt and Armin Zamani and was supervised by Prof. Patrick Baudisch at the Human Computer Interaction Lab at Hasso Plattner Institute, Potsdam, Germany.