Google Research has been playing with artificial neural networks. Recently they birthed a trippy tool codenamed DeepDream, and many of your fave Vimeans are having fun with it.
What it is
Do you know what an artificial neural network is? I'm sorry, dumb question, of course you do! Just for my benefit, though, let me explain so that I know I'm getting it right (and feel free to correct me in the comments if I'm actually getting it wrong). An artificial neural network is a series of… well, artificial neurons that are… errr, networked together. Huh. I guess what it is isn't as important as what it does.
What it does
An important tool in machine learning, and integral to software speech and image recognition, these networks take an input and pass it through layers of neurons, running functions on it at every level to refine or extract information until reaching an output. This output is the “answer.” In the case of image recognition networks, researchers first feed the network millions of images to build up a baseline of knowledge. Then, when the network is shown a new image (the input), each layer is activated in sequence, asking at each step, “based on what you know, what is it you see here?”
The idea is that the gradual refinement of the input being passed between neurons yields a pretty good answer by the time it gets to the output, like: That is a dog. This is a spoon. This is your favorite episode of CBS' Early Edition.
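To make that "pass it through each layer" idea concrete, here's a toy sketch (emphatically not Google's actual code; every function name here is made up for illustration): each layer is just a function that refines the input, and the last one produces the "answer."

```python
# Toy layers -- hypothetical stand-ins for what a real network learns.
def edges(x):
    # An early layer might pick out simple features.
    return [abs(v) for v in x]

def shapes(x):
    # A middle layer might combine those features into bigger ones.
    return [v * 2 for v in x]

def label(x):
    # The final layer turns everything into an "answer."
    return "dog" if sum(x) > 5 else "spoon"

def forward(image, layers):
    # The whole network: pass the input through each layer in sequence,
    # refining it at every step until we reach the output.
    for layer in layers:
        image = layer(image)
    return image

print(forward([1, -2, 3], [edges, shapes, label]))  # -> dog
```

A real network has millions of learned parameters instead of these hand-written toys, but the shape of the computation is the same: input goes in one end, functions refine it layer by layer, an answer comes out the other.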
Why it gets weird
But what if you short-circuit the process, and rather than see the results of a network working in harmony, you look at what's happening at the individual neuron level? That's what members of the Google Research team did. As first described in a blog post last month, researchers would take an image that had been analyzed at a particular neuron level, pull it out, and re-enter it as a new input. You end up with a feedback loop that can be pretty trippy, where software simply “incepts” things into existence. Ergo, puppy slugs.
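Here's a toy sketch of that feedback loop (again, hypothetical stand-ins, not the released DeepDream code): measure what a layer "sees" in the image, exaggerate it slightly, and feed the result back in as the new input, over and over.

```python
def layer_response(image):
    # Hypothetical stand-in for "what this neuron level sees":
    # here it just measures how far each pixel sits from the average.
    avg = sum(image) / len(image)
    return [v - avg for v in image]

def dream_step(image, strength=0.1):
    # Amplify the layer's response and mix it back into the image.
    response = layer_response(image)
    return [v + strength * r for v, r in zip(image, response)]

image = [0.2, 0.5, 0.9]
for _ in range(10):
    # The feedback loop: each output becomes the next input.
    image = dream_step(image)
```

Whatever the layer faintly detected in the original gets exaggerated a little more on every pass, which is the toy version of how puppy slugs "incept" themselves into an ordinary photo.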
The June blog post simply highlighted the concept, but it was only on July 1st that the research team ended up open sourcing the code used to produce these images. The floodgates have opened, and now lots of artists are playing around with the tool. We first learned about the concept and the resulting community of tinkerers after seeing this piece from Memo Akten:
Memo's written very thoughtfully about DeepDream, and we definitely encourage you to see what he has to say. He explains the concept more elegantly than I do, and has some interesting takeaways. Others have iterated even further with video. Rather than creating a progressively abstracted feedback loop presented as an infinite zoom, these creators have been deepdreaming individual frames of video and then sticking the frames back together to create footage that looks… deranged. In the very best way possible. Check out some examples below, and, if you decide to create your own deepdream experiments, please be sure to tag them so we can all check them out!
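The frame-by-frame approach can be sketched in a few lines (the `deepdream` function here is a hypothetical placeholder for a real pass over one frame): dream each frame independently, then stitch the results back together in order.

```python
def deepdream(frame):
    # Hypothetical stand-in for running one frame through DeepDream.
    return f"dreamed({frame})"

def dream_video(frames):
    # Process every frame independently -- no feedback between frames,
    # which is part of why the results jitter so deliciously.
    return [deepdream(f) for f in frames]

print(dream_video(["frame1", "frame2"]))
# -> ['dreamed(frame1)', 'dreamed(frame2)']
```

Note the design choice: unlike the infinite-zoom loop, where each output feeds the next input, each video frame here starts fresh from the original footage.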