I know everybody hates #Deepdream, but I really love it. However, it’s not the aesthetics that I love, but the poetry of what’s happening inside the algorithm.
An artificial neural network that has previously been trained to recognise images from over 20,000 categories is presented with new images that it doesn’t recognise, such as a video of my face. The #Deepdream algorithm runs this network backwards to generate new images, such that the generated images amplify the neural activation patterns triggered by the originals – i.e. whatever the network thinks it recognises in my face. This is analogous to looking at a cloud, thinking we see a rabbit, and then drawing the rabbit we think we see. Then we look at our drawing, draw what we think we see in that, and so on.
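The core mechanic of “running the network backwards” is gradient ascent on the input image rather than on the weights: whatever activations the image already triggers get amplified. Here is a minimal toy sketch of that idea, using numpy and a single random linear+ReLU layer as a stand-in for a trained network; all names and sizes are illustrative, not the actual #Deepdream implementation:

```python
import numpy as np

# Toy stand-in for a pre-trained network: one fixed linear layer + ReLU.
# The weights W stay frozen; only the "image" is updated.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))          # 8 "neurons", a 16-pixel "image"

def activations(img):
    return np.maximum(W @ img, 0.0)   # ReLU activations

def dream_step(img, lr=0.01):
    a = activations(img)
    # Objective: amplify whatever already fires, i.e. increase ||a||^2 / 2.
    # Gradient w.r.t. the image: W^T (a * ReLU'(W img)).
    grad = W.T @ (a * (W @ img > 0))
    return img + lr * grad            # gradient *ascent* on the input

img = rng.normal(size=16)             # e.g. one frame of a face video
before = np.linalg.norm(activations(img))
for _ in range(50):
    img = dream_step(img)
after = np.linalg.norm(activations(img))
# after > before: the patterns the network "thought it saw" are now stronger.
```

The real algorithm does the same thing through a deep convolutional network, typically maximising the activations of one chosen layer, with extra tricks (image jitter, multi-scale “octaves”) to keep the result coherent.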
This, in itself, is an interesting metaphor for confirmation bias, filter bubbles, etc. But there’s more. People look at these images and say “oh look, it’s a puppy-slug”, or “a bird-lizard”, etc. But actually, there’s no such thing as a “puppy-slug” or a “bird-lizard”. These images are just noise with particular distributions, such that when we look at them, we can’t help but project what we know onto them. Certain patterns in the original image cause corresponding neurons in the artificial neural network to fire weakly, and the #Deepdream algorithm amplifies those features. Then, when we look at the generated images, activity in our own brains registers those same amplified features, whether puppy-like or slug-like. And we complete the recognition process by projecting those meanings back onto the image. This is quite literally a duet, a shared quest for meaning, an entanglement between an artificial and a biological neural network.