1. See ravenkwok.com/perspective-tracking-in-triple-screens-cave/

    As a continuation of a series of single-screen perspective-tracking experiments (Portal 2.0, Contra Base 1, Doge Chorus, etc.) I did in late 2014, Shawn Lawson (shawnlawson.com/) and I converted the lab into a three-screen box CAVE environment late last month. This video documents several new demos I developed for the new environment over the past couple of weeks.

    Each screen is mounted perpendicular to its neighbor(s) and aligned with its corresponding rear projector. Four Vicon Bonita cameras mounted at the top corners cover the central area enclosed by the screens. The origin of the tracking coordinate system sits at the center of this area.
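
    For reference, that layout can be described as corner points in the tracking coordinate system. The Processing snippet below is only an illustrative sketch: the dimensions, axis conventions, and corner ordering are placeholders, not the measured geometry of the actual lab setup.

    // Hypothetical screen geometry in meters, origin at the center of the tracked area.
    // Axes assumed: x to the right, y up, z toward the open side of the box.
    float SCREEN_W = 2.0;  // assumed screen width
    float SCREEN_H = 1.5;  // assumed screen height
    float HALF = SCREEN_W / 2.0;

    // Each screen stored as { bottom-left, bottom-right, top-left } corners.
    PVector[] frontScreen = {
      new PVector(-HALF, 0, -HALF), new PVector( HALF, 0, -HALF), new PVector(-HALF, SCREEN_H, -HALF)
    };
    PVector[] leftScreen = {
      new PVector(-HALF, 0,  HALF), new PVector(-HALF, 0, -HALF), new PVector(-HALF, SCREEN_H,  HALF)
    };
    PVector[] rightScreen = {
      new PVector( HALF, 0, -HALF), new PVector( HALF, 0,  HALF), new PVector( HALF, SCREEN_H, -HALF)
    };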

    Tracking data collected by the Vicon system is sent via VRPN (Virtual-Reality Peripheral Network) and then via the OSC (Open Sound Control) protocol to Processing, where it is mapped to frustum parameters to compute three projection matrices that coincide with the screens' physical positions.
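
    As a rough illustration of that last step, here is a minimal Processing sketch for a single, front-facing screen. It assumes the head position arrives over OSC (via the oscP5 library) at a made-up address /head on a made-up port, and the screen extents and axis orientation below are placeholders rather than the actual lab configuration; the two side screens would be handled the same way after transforming the eye position into each screen's local frame.

    import oscP5.*;

    OscP5 osc;
    PVector head = new PVector(0, 1.2, 0);   // tracked head position, meters

    // Hypothetical front-screen extents in the tracking coordinate system.
    float SCREEN_Z = -1.0;                   // screen plane
    float S_LEFT = -1.0, S_RIGHT = 1.0;      // horizontal extents
    float S_BOTTOM = 0.0, S_TOP = 1.5;       // vertical extents
    float NEAR = 0.1, FAR = 100.0;

    void setup() {
      size(1280, 720, P3D);
      osc = new OscP5(this, 12000);          // listen on an assumed port
    }

    // Assumed OSC address and argument layout for the head marker.
    void oscEvent(OscMessage msg) {
      if (msg.checkAddrPattern("/head")) {
        head.set(msg.get(0).floatValue(), msg.get(1).floatValue(), msg.get(2).floatValue());
      }
    }

    void draw() {
      background(0);

      // Distance from the eye to the screen plane along its normal.
      float dist = head.z - SCREEN_Z;

      // Project the screen extents (relative to the eye) onto the near plane
      // to obtain an off-axis frustum that matches the physical screen.
      float l = (S_LEFT   - head.x) * NEAR / dist;
      float r = (S_RIGHT  - head.x) * NEAR / dist;
      float b = (S_BOTTOM - head.y) * NEAR / dist;
      float t = (S_TOP    - head.y) * NEAR / dist;

      // Look straight down the screen normal; the up vector depends on how the
      // Vicon axes are mapped into Processing and may need flipping.
      camera(head.x, head.y, head.z, head.x, head.y, SCREEN_Z, 0, 1, 0);
      frustum(l, r, b, t, NEAR, FAR);

      // ... draw the scene in world coordinates here ...
      box(0.5);
    }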

    Demo 2: Pikachu Quadtree is a rehash of Pikaworm (openprocessing.org/sketch/111470), a quick sketch I made in 2013.
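
    The internals of that sketch aren't reproduced here, but the general idea of an image quadtree in Processing can be sketched roughly as follows (the image name, variance threshold, and minimum cell size are placeholder values): regions with too much color variation keep splitting into four children, and uniform regions are drawn as flat rectangles of their average color.

    PImage img;

    void setup() {
      size(512, 512);
      img = loadImage("pikachu.png");   // placeholder image name
      img.resize(width, height);
      noStroke();
      noLoop();
    }

    void draw() {
      img.loadPixels();
      subdivide(0, 0, width, height);
    }

    void subdivide(int x, int y, int w, int h) {
      // Average color of the region.
      float rAvg = 0, gAvg = 0, bAvg = 0;
      for (int j = y; j < y + h; j++) {
        for (int i = x; i < x + w; i++) {
          color c = img.pixels[j * img.width + i];
          rAvg += red(c); gAvg += green(c); bAvg += blue(c);
        }
      }
      int n = w * h;
      rAvg /= n; gAvg /= n; bAvg /= n;

      // Mean absolute deviation from the average color as a crude detail measure.
      float dev = 0;
      for (int j = y; j < y + h; j++) {
        for (int i = x; i < x + w; i++) {
          color c = img.pixels[j * img.width + i];
          dev += abs(red(c) - rAvg) + abs(green(c) - gAvg) + abs(blue(c) - bAvg);
        }
      }
      dev /= n;

      if (dev > 20 && w > 8 && h > 8) {   // arbitrary threshold and minimum cell size
        int hw = w / 2, hh = h / 2;
        subdivide(x,      y,      hw,     hh);
        subdivide(x + hw, y,      w - hw, hh);
        subdivide(x,      y + hh, hw,     h - hh);
        subdivide(x + hw, y + hh, w - hw, h - hh);
      } else {
        fill(rAvg, gAvg, bAvg);
        rect(x, y, w, h);
      }
    }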

    The real-time sound effect in Demo 4: Lightsaber is created by layering a sine wave and a sawtooth wave generated with Beads (beadsproject.net/).
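
    Layering the two oscillators in Beads could look roughly like the sketch below; the base frequency and gain values are arbitrary placeholders, and the actual demo presumably modulates them from the tracked motion.

    import beads.*;

    AudioContext ac;

    void setup() {
      size(200, 200);
      ac = new AudioContext();

      // Two oscillators at an arbitrary base frequency.
      WavePlayer sine = new WavePlayer(ac, 220.0f, Buffer.SINE);
      WavePlayer saw  = new WavePlayer(ac, 220.0f, Buffer.SAW);

      // Sum the two waveforms through a single gain stage.
      Gain g = new Gain(ac, 1, 0.2f);
      g.addInput(sine);
      g.addInput(saw);

      ac.out.addInput(g);
      ac.start();
    }

    void draw() {
      background(0);
    }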

    The floating face in Demo 5: Zordon is a 3D reconstruction based on .oni file data recorded with a Kinect and SimpleOpenNI (code.google.com/p/simple-openni/).
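
    A minimal SimpleOpenNI playback sketch along those lines might look like the following; the .oni filename, sampling stride, and display scale are placeholder values, and the actual demo builds a fuller reconstruction on top of this kind of depth data.

    import SimpleOpenNI.*;

    SimpleOpenNI context;

    void setup() {
      size(1024, 768, P3D);
      context = new SimpleOpenNI(this);
      // Play back a pre-recorded .oni file instead of a live sensor;
      // the filename here is a placeholder, not the actual recording.
      context.openFileRecording("face.oni");
      context.enableDepth();
    }

    void draw() {
      background(0);
      context.update();

      translate(width / 2, height / 2, 0);
      rotateX(PI);     // flip so the point cloud appears upright
      scale(0.3);      // arbitrary scale for display

      stroke(255);
      // Real-world 3D points reconstructed from the depth map.
      PVector[] points = context.depthMapRealWorld();
      for (int i = 0; i < points.length; i += 4) {   // skip points for speed
        point(points[i].x, points[i].y, points[i].z);
      }
    }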

    vimeo.com/126332070
