I am trying to use the Leap Motion to control modulation and processing of my voice while singing live.
As a singer, I frequently use my hands to articulate concepts, add emphasis, or remind myself how to physically reach a note. I would like to use the Leap Motion to translate these automatic movements into ways of processing my singing live, in real time rather than pre-recorded.
Here is the video of the simple example I did today, with enormous help from Justin Lang. A Processing sketch takes in information from the Leap and sends it out as OSC messages, which are picked up by another Processing sketch (the one on the right). This is a proof of concept; in the next iteration, a Max/MSP patch will receive the Leap data as OSC, use it to modulate my singing, and send the result out to the speakers.
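For anyone curious what "sending it as OSC" actually involves: an OSC message is just a small binary packet with a padded address string, a type-tag string, and big-endian arguments. Here is a rough stdlib-only sketch of building such a packet by hand (the address `/leap/palm` and the three position values are my own illustration, not the actual addresses or data used in the sketches above; in practice libraries like oscP5 in Processing handle this encoding for you):

```python
import struct

def pad4(b: bytes) -> bytes:
    # OSC strings are NUL-terminated and padded to a multiple of 4 bytes.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Build a minimal OSC message carrying float32 arguments."""
    packet = pad4(address.encode())                   # padded address
    packet += pad4(("," + "f" * len(args)).encode())  # type tags, e.g. ",fff"
    for a in args:
        packet += struct.pack(">f", a)                # big-endian float32
    return packet

# e.g. a (hypothetical) palm position from the Leap, ready to send over UDP
# to the receiving sketch or a Max/MSP [udpreceive] object:
msg = osc_message("/leap/palm", 120.0, 85.5, -30.0)
```

The receiving side parses the same layout back out, which is why two sketches (or a sketch and a Max patch) can swap control data this way with no shared code.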