I've been thinking about new ways of making music and working with sound. I'm especially excited about machine learning augmenting our selection of sounds, analyzing and decomposing existing recordings, and making automatic suggestions for compositions.
This shows around 30k drum samples from a few different sample packs, laid out in 2D (position) and 3D (color). All sounds are less than 4 seconds long, but I only analyze and play the first second while scrolling through. I used librosa to extract the constant-Q transform of each sound with 84 bins and 11 time steps, then used t-SNE with perplexity 100 to lay out the sounds from those 924-dimensional vectors.
The 4th Choreographic Coding Lab was an outcome of MotionBank, a four-year research project of the Forsythe Company. It brought together technologists, visual artists, and dancers for five days of exploration and creation. Sergio Mora-Diaz and I took the opportunity to create a series of programs that visualize different trackable qualities of a dancer.
For this workshop we used an Asus Xtion Pro camera with the Cinder library.