The animations seen above are screen captures from our homebrewed audio-reactive system: three separate applications working together on a host PC. The first application, JellyEars, performs real-time amplitude analysis across seven frequency bands using a fast Fourier transform (FFT) and transmits the results over UDP to the second application, JellyBrain. JellyBrain, which supports a wide variety of animation types, uses this data to produce sound-reactive animations. On the Amazing Jellyfish itself, JellyBrain sends its animation data through an AVR microcontroller directly to the hardware and LEDs on the physical 8-foot dome. For these animations, JellyBrain instead sends its data to the third application, JellyView, which interprets and displays it on a 3D model of the dome.
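To make the JellyEars stage concrete, here is a minimal sketch of the same idea: take one frame of audio, run an FFT, collapse the magnitude spectrum into seven bands, and send the band amplitudes over UDP. The band edges, frame size, port number, and payload layout are all assumptions for illustration, not the actual JellyEars implementation.

```python
import socket
import struct

import numpy as np

SAMPLE_RATE = 44100
FRAME_SIZE = 1024
NUM_BANDS = 7  # one amplitude per frequency band, as in JellyEars


def band_amplitudes(frame, sample_rate=SAMPLE_RATE, num_bands=NUM_BANDS):
    """Return one amplitude per band: window the frame, take the FFT
    magnitude spectrum, and average it within log-spaced bands."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Hypothetical log-spaced band edges from 60 Hz up to Nyquist.
    edges = np.geomspace(60.0, sample_rate / 2, num_bands + 1)
    amps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        amps.append(float(spectrum[mask].mean()) if mask.any() else 0.0)
    return amps


# Example frame: a pure 440 Hz tone, which should dominate one band.
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
frame = np.sin(2 * np.pi * 440.0 * t)
amps = band_amplitudes(frame)

# Pack the seven floats into a small UDP datagram for the next stage.
payload = struct.pack("<7f", *amps)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("127.0.0.1", 9000))  # hypothetical JellyBrain port
sock.close()
```

A datagram protocol suits this kind of pipeline well: each frame's analysis is independent, so a lost packet only costs one animation update rather than stalling the stream.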
No soundtrack was added to this movie clip: all the audio was captured along with the animations as they were generated in real time in response to it.