Transcoding an illustration into an audio signal, inspired by Yasunao Tone's "Wounded Man'yo". The lines on the screen are drawn manually, but when I release the mouse, a neural network continues the stroke based on what it has seen me draw so far. The strokes are saved to an audio buffer that is continually read non-linearly: the value of the current sample is used as the index for selecting the next sample.
More on Wounded Man'yo medienkunstnetz.de/werke/wounded-many-o-2-2000/
The original (visual) idea, with Lisp + Processing rpi.edu/~mcdonk/imitativeart/
More on the code openframeworks.cc/forum/viewtopic.php?t=1188