I made this at Rethink Music Hacker's Weekend in Boston last weekend. This example uses a 200 ms sample of laughter.

A simple genetic algorithm learns the short-time Fourier transform (STFT) of a target static sound texture. The approximation gradually acquires information about the target via repeated semi-random modifications to the spectrogram; phase and magnitude are learned separately. The learning process is sonified by inverting the estimated spectrogram at each iteration of the algorithm, and visualized by taking the inverse two-dimensional Fourier transform of the spectrogram at each iteration. I pass only the real values of the spectrogram to the 2D IFFT, which produces symmetric images. This sonification and visualization lets the gradual evolution of the sound, from silence to target approximation, be seen and heard. The goal is not necessarily to model the target sound accurately, but rather to hear and see the learning process. Some target textures are easier to approximate than others; personally, I find the ones that are difficult to approximate more interesting.
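Concretely, the loop amounts to a mutate-and-keep-if-better search over spectrogram bins. Here is a minimal sketch in Python with NumPy and SciPy; the single-bin Gaussian mutation, the 50/50 choice between magnitude and phase, and the per-bin acceptance rule are my assumptions for illustration, not the original code:

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(0)

def learn_texture(target, fs, n_iters=20000, nperseg=256):
    """Yield (audio, image) estimates as the spectrogram evolves from silence."""
    # Target STFT, with magnitude and phase treated as separate layers.
    _, _, Z = stft(target, fs=fs, nperseg=nperseg)
    goal = {"mag": np.abs(Z), "phase": np.angle(Z)}
    est = {"mag": np.zeros_like(goal["mag"]),
           "phase": np.zeros_like(goal["phase"])}

    for _ in range(n_iters):
        # Semi-random modification: nudge one random bin of one layer.
        which = "mag" if rng.random() < 0.5 else "phase"
        f = rng.integers(est[which].shape[0])
        t = rng.integers(est[which].shape[1])
        new_val = est[which][f, t] + rng.normal(scale=0.1)

        # Keep the mutation only if it moves that bin closer to the target.
        if abs(new_val - goal[which][f, t]) < abs(est[which][f, t] - goal[which][f, t]):
            est[which][f, t] = new_val

        # Current complex spectrogram estimate.
        S = est["mag"] * np.exp(1j * est["phase"])

        # Sonification: invert the estimated STFT back to audio.
        _, audio = istft(S, fs=fs, nperseg=nperseg)

        # Visualization: 2D IFFT of the real values only -> symmetric image.
        image = np.fft.ifft2(S.real).real

        yield audio, image
```

In practice you would probably invert and render only every few hundred iterations, since the `istft` and `ifft2` calls dominate the cost of a single-bin mutation.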
