This project sets out to show the differences between color and sound. Starting with a single black pixel and expanding to gradients and even more complex images, I explored how a computer reads pixels as audio. In doing so, a few clearly defined "rules" became apparent.
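As a rough sketch of the underlying mechanism (not necessarily the exact tool chain used here; Pillow, the file names, and the sample rate are my assumptions), Python can reinterpret an image's raw pixel bytes as 8-bit audio samples:

```python
import wave

from PIL import Image

SAMPLE_RATE = 44100  # arbitrary choice; raw pixel data has no inherent rate

img = Image.open("input.png").convert("L")   # one byte per pixel
pixel_bytes = img.tobytes()                  # row-major pixel values, 0-255

with wave.open("output.wav", "wb") as wav:
    wav.setnchannels(1)           # mono
    wav.setsampwidth(1)           # 8-bit unsigned samples, same range as pixels
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(pixel_bytes)  # each pixel becomes one audio sample
```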

#1. The audio is static if the picture is static, so you must "break" an image with layers - e.g. .psd or .raw files.
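A quick way to see why a flat picture gives flat audio (a hypothetical illustration, not the project's own workflow):

```python
from PIL import Image

flat = Image.new("L", (256, 256), color=128)  # a uniform gray square
samples = flat.tobytes()
print(set(samples))  # {128}: every "sample" is identical, so no waveform
```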

#2. The complexity of the tones seems to be related to the size of the picture, but the size of the picture does not dictate the length of the audio.
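One plausible explanation, assuming a byte-for-sample import like the sketch above: the pixel count fixes the number of samples, but the sample rate chosen at import fixes the duration.

```python
# Rough arithmetic, not a measurement from the project itself.
width, height, channels = 1920, 1080, 3
sample_count = width * height * channels      # 6,220,800 samples

for rate in (8000, 44100, 96000):
    print(f"{rate} Hz -> {sample_count / rate:.1f} s")
# The same picture can last seconds or minutes depending on the rate,
# while a bigger picture packs more variation (tonal complexity) into it.
```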

#3. Although new tones and sounds can be produced by breaking and re-combining pictures with color, the color itself is not transferable through audio. RGB in, grayscale out.
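The loss makes sense if the round trip is sample-per-pixel: an audio stream is one flat sequence of values, so mapping it back onto a grid can only fill a single channel. A hedged sketch (placeholder file names and an arbitrary grid width):

```python
import wave

from PIL import Image

with wave.open("output.wav", "rb") as wav:
    frames = wav.readframes(wav.getnframes())  # flat bytes, no RGB triplets

width = 256                                    # arbitrary grid shape
height = len(frames) // width
img = Image.frombytes("L", (width, height), frames[: width * height])
img.save("roundtrip.png")                      # RGB in, grayscale out
```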
