This video demonstrates a new Jamoma module for the sonification of motiongrams. From a live video input, the program generates a motion image, which is then transformed into a motiongram. The motiongram is used as the source for the sound synthesis: it is "read" as if it were a spectrogram. The result is a sonification of the original motion, together with its visualisation in the motiongram.
Developed by Alexander Refsum Jensenius, fourMs lab, University of Oslo.
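The pipeline described above (frame differencing → motiongram → spectrogram-style resynthesis) can be sketched in plain NumPy. This is not the actual Jamoma/Max implementation — it is a minimal illustration of the idea, with assumed parameters (FFT size, threshold, random phases):

```python
import numpy as np

def motion_image(prev, curr, threshold=10):
    """Absolute difference of consecutive frames, thresholded to suppress noise."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return np.where(diff > threshold, diff, 0).astype(np.uint8)

def motiongram_column(motion):
    """Collapse a motion image to a single column: mean motion per image row."""
    return motion.mean(axis=1)

def sonify(motiongram, n_fft=512):
    """Read the motiongram as a spectrogram: each column supplies the
    magnitudes of one short-time spectrum, resynthesized by inverse FFT
    with random phases and overlap-add."""
    hop = n_fft // 2
    n_frames = motiongram.shape[1]
    out = np.zeros(hop * (n_frames - 1) + n_fft)
    window = np.hanning(n_fft)
    for t in range(n_frames):
        # Interpolate the column (image rows) onto the FFT magnitude bins.
        mags = np.interp(np.linspace(0, 1, n_fft // 2 + 1),
                         np.linspace(0, 1, motiongram.shape[0]),
                         motiongram[:, t])
        phases = np.random.uniform(-np.pi, np.pi, n_fft // 2 + 1)
        frame = np.fft.irfft(mags * np.exp(1j * phases), n_fft) * window
        out[t * hop : t * hop + n_fft] += frame
    return out

# Usage with synthetic grayscale frames (stand-in for live video input):
frames = [np.random.randint(0, 255, (64, 48), dtype=np.uint8) for _ in range(10)]
cols = [motiongram_column(motion_image(frames[i], frames[i + 1]))
        for i in range(len(frames) - 1)]
mg = np.stack(cols, axis=1)   # motiongram: rows = image rows, columns = time
audio = sonify(mg)            # one audio buffer sonifying the motion
```

Mapping image rows to frequency bins means vertical position in the frame becomes pitch, which matches how a spectrogram image would normally be read.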