Augmenting an Object with Face-Tracking and Reactive Content
Recently, we’ve been experimenting with technology capable of tracking faces and changing what the viewer sees on a display, but we wanted to push the experience into uncharted territory. What if you could augment a physical object behind a display (say, a clothespin) with digital labels, and re-calibrate the content to line up with the object depending on your perspective?
We used Planar’s LookThru Display Box (bit.ly/SQUmM3), a transparent LCD screen, and a Kinect for Windows sensor to build this experiment. The depth-sensing camera detects when you’re in front of the display and locates your face in 3D space. Our software can then respond interactively based on your face’s position and orientation.
The effect elicits a dramatic reaction. Visitors approach what appears to be a static display, but as they move closer, the sensor registers their presence and content pops up, aligned with the object inside. We found that once people realized the display was “smart” and recognized them, there was a moment of delight and they couldn’t tear themselves away. People wanted to linger and explore.
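The core of the alignment is simple geometry: the content stays locked to the object if each label is drawn where the line of sight from the viewer’s head to the object crosses the glass. Here is a minimal sketch of that calculation, assuming the screen lies in the z = 0 plane of a shared coordinate system, the tracked head sits at positive z, and the object sits at negative z behind the display; the function name and coordinate convention are illustrative assumptions, not taken from the project’s actual code.

```python
def screen_point(eye, obj):
    """Intersect the line from eye (tracked head position, z > 0) to obj
    (object behind the glass, z < 0) with the screen plane z = 0, and
    return the (x, y) position at which to draw the label."""
    ex, ey, ez = eye
    ox, oy, oz = obj
    t = ez / (ez - oz)  # fraction of the way from eye to object at z = 0
    return (ex + t * (ox - ex), ey + t * (oy - ey))

# Head dead-on: the label draws directly over the object.
print(screen_point((0.0, 0.0, 500.0), (0.0, 0.0, -100.0)))    # (0.0, 0.0)

# Head shifted 300 mm to the right: the label shifts to stay aligned.
print(screen_point((300.0, 0.0, 600.0), (0.0, 0.0, -200.0)))  # (75.0, 0.0)
```

Re-running this each frame with the latest head position from the depth camera is what keeps the digital annotations visually pinned to the physical object as the viewer moves.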
As an homage to the studio’s first interactive (bit.ly/SQUr2g), we used clothespins, but you can imagine how this would work with any physical object, from a valuable artifact in a museum to a new item in a retail environment.
With this technology, we can augment objects in ways that are not only novel or surprising, but truly educational and interesting. We’re able to strengthen the engagement with a tangible object, bringing new stories to life and creating experiences that people are unlikely to forget.
Music by Solander, “Flod”