Transcoding an illustration as an audio signal, inspired by Yasunao Tone's "Wounded Man'yo". The lines on the screen are drawn manually, but when I release the mouse, a neural network continues the stroke based on what it has seen me draw so far. The strokes are saved to an audio buffer that is read continuously and non-linearly: the value of the current sample is used as an index to select the next sample.
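The non-linear playback can be sketched roughly like this: each sample's value, mapped into the buffer's index range, decides where to read next. This is a minimal guess at the idea, not the project's actual code; the value-to-index mapping here is my own assumption.

```python
import numpy as np

def chaotic_read(buffer, n_steps, start=0):
    """Read n_steps samples, using each sample's value as the next index."""
    n = len(buffer)
    out = np.empty(n_steps)
    idx = start
    for i in range(n_steps):
        sample = buffer[idx]
        out[i] = sample
        # Assumed mapping: rescale the sample from [-1, 1] to [0, n - 1].
        idx = int((sample + 1.0) / 2.0 * (n - 1)) % n
    return out

# Usage: read a short sine-wave buffer non-linearly.
buf = np.sin(np.linspace(0, 2 * np.pi, 1024))
signal = chaotic_read(buf, 256)
```

Because the next read position depends only on the current sample's value, playback tends to fall into loops or jumps rather than sweeping the buffer in order, which is what gives the technique its broken, stuttering character.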

More on Wounded Man'yo
The original (visual) idea, with Lisp + Processing
More on the code
