First approach to rendering an audio-reactive sketch with Processing. To do this I first had to create two separate sketches in which the audio is analysed and the results are saved to text files.
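A minimal sketch of what such an analysis pass could look like, assuming the Beads beat-detection chain (segmenter, FFT, power spectrum, spectral difference, peak detector) and one line of output per drawn frame; the file name "beats.txt", the song path, the detector settings and the frame rate are assumptions, not the original code:

import beads.*;

AudioContext ac;
PrintWriter output;
boolean beat = false;  // set by the beat listener, consumed once per frame

void setup() {
  size(100, 100);
  frameRate(30);                         // must match the render sketch
  output = createWriter("beats.txt");

  ac = new AudioContext();
  SamplePlayer player = new SamplePlayer(ac, SampleManager.sample(dataPath("song.mp3")));
  Gain g = new Gain(ac, 2, 0.2f);
  g.addInput(player);
  ac.out.addInput(g);

  // analysis chain: segmenter -> FFT -> power spectrum -> spectral difference -> peak detector
  ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
  sfs.addInput(ac.out);
  FFT fft = new FFT();
  PowerSpectrum ps = new PowerSpectrum();
  SpectralDifference sd = new SpectralDifference(ac.getSampleRate());
  PeakDetector beatDetector = new PeakDetector();
  sfs.addListener(fft);
  fft.addListener(ps);
  ps.addListener(sd);
  sd.addListener(beatDetector);
  beatDetector.setThreshold(0.2f);
  beatDetector.setAlpha(0.9f);
  beatDetector.addMessageListener(new Bead() {
    protected void messageReceived(Bead b) {
      beat = true;                       // a beat happened since the last frame
    }
  });
  ac.out.addDependent(sfs);
  ac.start();
}

void draw() {
  // one line per frame: frame number and whether a beat occurred
  output.println(frameCount + " " + (beat ? 1 : 0));
  beat = false;
}

void keyPressed() {
  // press any key when the song is over to save the file
  output.flush();
  output.close();
  exit();
}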

It still seems far from finished. The movement and the change of form are too fast, I think. And yes, the audio and the video aren't synchronized yet.

Used libraries:
PeasyCam for camera control
Beads for audio analysis and beat detection
processing.opengl for display
processing.video to render the video file
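A rough skeleton of how these libraries could fit together on the render side, assuming the per-frame beat data saved above is read back with loadStrings() and the movie is written with the Processing 1.x MovieMaker class from processing.video; the reaction to a beat here is a placeholder, not the form from the original sketch (see the openprocessing link below):

import peasy.*;
import processing.opengl.*;
import processing.video.*;

PeasyCam cam;
MovieMaker mm;
int[] beats;           // one entry per frame: 1 = beat, 0 = no beat
float radius = 50;

void setup() {
  size(1280, 720, OPENGL);
  frameRate(30);                         // must match the analysis sketch
  cam = new PeasyCam(this, 400);
  mm = new MovieMaker(this, width, height, "render.mov",
                      30, MovieMaker.H263, MovieMaker.HIGH);

  // parse the "frameNumber beatFlag" lines written by the analysis sketch
  String[] lines = loadStrings("beats.txt");
  beats = new int[lines.length];
  for (int i = 0; i < lines.length; i++) {
    String[] parts = split(trim(lines[i]), ' ');
    beats[i] = int(parts[1]);
  }
}

void draw() {
  background(0);
  int f = min(frameCount - 1, beats.length - 1);
  if (beats[f] == 1) radius = 150;       // jump on a beat...
  radius = lerp(radius, 50, 0.1);        // ...and ease back down
  noFill();
  stroke(255);
  sphere(radius);

  mm.addFrame();                         // append the current frame to the movie
  if (frameCount >= beats.length) {      // stop when the data runs out
    mm.finish();
    exit();
  }
}

Because both sketches run at the same fixed frame rate, frame N of the render can simply read line N of the text file; drift between the two is what still makes the result look out of sync.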

Audio and video were combined in After Effects.

Music:
CocoRosie - The Moon Asked the Crow

Basic Code:
openprocessing.org/visuals/?visualID=16658

Screenshots:
flickr.com/photos/dianalange/sets/72157627438208050/
