Realtime capture. Feedback and procedural processes. No video samples, all synthesis on the GPU: just webcam textures and five pics compiled into an extra-short QuickTime file to make a really cheap face-shift effect.
A second test I did on interactivity: the video controls the sound, and the sound controls the video.
I think the results are better than those of the first test: better sound feedback, and better video glitch effects.
The idea is to use Max/MSP/Jitter and FaceOSC to control Reaktor, with a webcam (face/color tracking) and a cheap gamepad. I use a custom version of FaceOSC, and the "oscroute.mxe" external from the CNMAT library.
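FaceOSC streams face-tracking data as OSC messages (addresses like "/pose/position" and "/pose/scale"), which the Max patch routes and rescales before sending them on to Reaktor. As a rough sketch of that mapping stage, here is a minimal Python version; the pixel ranges and the target parameter names ("looper_rate", "grain_pitch") are assumptions for illustration, not the actual patch:

```python
# Sketch of the FaceOSC -> Reaktor mapping done inside the Max patch.
# "/pose/position" and "/pose/scale" are real FaceOSC addresses; the input
# ranges and target parameter names below are illustrative assumptions.

def scale(value, in_min, in_max, out_min=0.0, out_max=1.0):
    """Linearly map value from [in_min, in_max] to [out_min, out_max], clamped."""
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))
    return out_min + t * (out_max - out_min)

# e.g. map the face's horizontal position (0..640 px, assumed webcam width)
# and apparent size onto hypothetical 0..1 Reaktor controls
ROUTES = {
    "/pose/position": lambda x, y: {"looper_rate": scale(x, 0, 640)},
    "/pose/scale":    lambda s:    {"grain_pitch": scale(s, 2.0, 8.0)},
}

def handle(address, *args):
    """Dispatch one incoming OSC message to parameter updates."""
    if address in ROUTES:
        return ROUTES[address](*args)
    return {}
```

In the real setup this dispatch is what "oscroute.mxe" does graphically in Max; the point here is just the route-then-rescale shape of the data flow.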
The Reaktor ensemble is a modification of Skrewell I made, adding a glitchy looper, a granular voice pitcher, and a kick oscillator.
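A basic kick oscillator of this kind is usually a sine wave whose pitch sweeps down quickly while its amplitude decays. This little Python sketch renders one hit that way; all the constants (frequencies, decay times, duration) are assumptions, not values taken from the actual ensemble:

```python
import math

# Hedged sketch of a simple kick oscillator: a sine whose frequency falls
# exponentially from f_start to f_end while the amplitude envelope decays.
# Every numeric constant here is an illustrative assumption.

def kick(sr=44100, dur=0.25, f_start=180.0, f_end=45.0,
         pitch_decay=0.03, amp_decay=0.12):
    """Render one kick hit as a list of float samples in [-1, 1]."""
    out = []
    phase = 0.0
    n = int(sr * dur)
    for i in range(n):
        t = i / sr
        # exponential pitch sweep from f_start down toward f_end
        f = f_end + (f_start - f_end) * math.exp(-t / pitch_decay)
        phase += 2.0 * math.pi * f / sr
        # exponential amplitude envelope
        out.append(math.sin(phase) * math.exp(-t / amp_decay))
    return out
```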
The visual part is a test of how to combine 3D CG content, low-res pics, and webcam video, with the webcam's alpha channel controlled by a simple threshold. Glitch effects are triggered and controlled by sound events (peak and pitch) detected in a Max/MSP/Jitter patch.
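The threshold trick can be stated simply: a webcam pixel brighter than some cutoff becomes fully opaque, everything else fully transparent, so the webcam layer only shows through where the image is bright enough. The actual patch does this per-pixel on the GPU in Jitter; here is the same idea as a plain Python sketch over 0-255 luminance values:

```python
# Sketch of "alpha channel controlled by a simple threshold":
# luminance >= cutoff -> opaque (255), otherwise transparent (0).
# The real patch computes this on the GPU in Jitter; this is CPU pseudocode.

def threshold_alpha(luma_pixels, cutoff=128):
    """Return a binary alpha mask (255 opaque / 0 transparent) for a row of pixels."""
    return [255 if p >= cutoff else 0 for p in luma_pixels]

def composite(webcam_row, background_row, alpha_row):
    """Show the webcam pixel where the mask is opaque, the background elsewhere."""
    return [w if a == 255 else b
            for w, b, a in zip(webcam_row, background_row, alpha_row)]
```

The cutoff value is the single control mentioned above; sweeping it live changes how much of the webcam image cuts through the 3D CG and low-res layers.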
Thanks to Jean-Marc Pelletier for his precious "cv.jit" toolbox, to Andrew Benson for all his "Jitter Recipes", to CNMAT for their easy-to-use Max externals, to Masato Tsutsui for his Jitter tutorials, and to Kyle McDonald for sharing his great FaceOSC...
I haven't uploaded my patches, but if you want to use one, you can send me an email at email@example.com; I'll be happy to share.
This test was recorded with "Fraps".