This is a project designed to give a person the ability to choose whether what they see in front of them is added to the overall video content. Using their hands, the person selects which part of the live webcam feed is recorded into the playback video. As soon as two blobs are detected, the video goes into recording mode and records the area on the left. When the hands leave the space, the video is created and then played back.
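
The behaviour described above is essentially a two-state machine driven by the blob count. Below is a minimal openFrameworks-style sketch of that idea, assuming an ofxOpenCv contour finder stands in for the Kinect hand tracker; names like `recordedFrames` and `playbackIndex` are hypothetical, and the real project may write an actual video file rather than buffering frames in memory.

```cpp
// Sketch of the record/playback state machine: two blobs => record
// the left half of the webcam feed; blobs gone => play it back.
#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxCvColorImage colorImg;
    ofxCvGrayscaleImage grayImg;
    ofxCvContourFinder contourFinder;

    vector<ofImage> recordedFrames; // frames captured while hands are present
    size_t playbackIndex = 0;
    bool recording = false;

    void setup() {
        cam.setup(640, 480);
        colorImg.allocate(640, 480);
        grayImg.allocate(640, 480);
    }

    void update() {
        cam.update();
        if (!cam.isFrameNew()) return;

        colorImg.setFromPixels(cam.getPixels());
        grayImg = colorImg;
        grayImg.threshold(80); // crude stand-in for the Kinect-based hand tracker
        contourFinder.findContours(grayImg, 400, 640 * 480 / 3, 2, false);

        bool handsPresent = contourFinder.nBlobs >= 2;

        if (handsPresent) {
            if (!recording) {
                // A new recording begins: discard the previous take.
                recording = true;
                recordedFrames.clear();
            }
            // Record only the left half of the live feed.
            ofPixels left;
            cam.getPixels().cropTo(left, 0, 0, 320, 480);
            ofImage frame;
            frame.setFromPixels(left);
            recordedFrames.push_back(frame);
        } else if (recording) {
            // Hands left the space: stop recording, start playback.
            recording = false;
            playbackIndex = 0;
        }
    }

    void draw() {
        cam.draw(0, 0);
        if (!recording && !recordedFrames.empty()) {
            recordedFrames[playbackIndex].draw(0, 0);
            playbackIndex = (playbackIndex + 1) % recordedFrames.size();
        }
    }
};
```

Buffering frames in memory keeps the sketch self-contained; it also explains why this kind of setup gets slow quickly, since every recorded frame stays resident until the next take.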

There are still some issues that need to be ironed out, and the slowness comes down to how much processing my MacBook can handle. The hand tracking code is a slight modification of KinectCoreVision, which can be found at github.com/patriciogonzalezvivo/KinectCoreVision. The rest is modified from the bundled openFrameworks examples.
