Submitted to the workshop on machine learning for creativity at NIPS 2017
This video explores and expands upon the technique popularly known as “deep dream,” an iterative process that optimizes the pixels of an image so as to obtain a desired activation state in a trained convolutional network. It primarily experiments with the dynamics of feedback-generated deep dream videos, in which each frame is initialized from the previous one. By gating (or “masking”) the pixel gradients of multiple channels and mixing them via predetermined masking patterns, while simultaneously distorting the input canvas, one can achieve novel aesthetics.
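The loop described above can be sketched as follows. This is a minimal, self-contained illustration, not the author's actual pipeline: the `channel_grad` function is a hypothetical stand-in for backpropagating a channel activation through a trained convnet such as Inception, and the zoom factor, learning rate, and mask shapes are arbitrary choices for demonstration.

```python
import numpy as np

def channel_grad(img, freq):
    # Hypothetical stand-in for the gradient of one channel's mean
    # activation w.r.t. the pixels; a real implementation would
    # backpropagate through a trained convnet (e.g. Inception).
    return np.sin(freq * img)

def zoom(img, scale=1.02):
    # Crude center zoom (canvas distortion) via nearest-neighbor
    # coordinate scaling, applied once per output frame.
    h, w = img.shape
    ys = np.clip(((np.arange(h) - h / 2) / scale + h / 2).astype(int), 0, h - 1)
    xs = np.clip(((np.arange(w) - w / 2) / scale + w / 2).astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]

def dream_video(frame0, masks, freqs, n_frames=5, n_steps=10, lr=0.1):
    frames, img = [], frame0.copy()
    for _ in range(n_frames):
        img = zoom(img)                       # distort the canvas
        for _ in range(n_steps):              # gradient ascent on pixels
            g = sum(m * channel_grad(img, f)  # gate each channel's gradient
                    for m, f in zip(masks, freqs))
            img += lr * g / (np.abs(g).mean() + 1e-8)  # normalized step
        frames.append(img.copy())             # feedback: next frame starts here
    return frames
```

For example, with two complementary half-image masks, the left half of every frame is driven by one channel's gradient and the right half by another's, while the per-frame zoom produces the characteristic tunneling motion of feedback videos.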
A number of strategies are directly inspired by Google’s original Deep Dream implementation, particularly the work of Mike Tyka, who first experimented with feedback, canvas distortion (zooming), and mixing the gradients of multiple channels. The trained network used is Google’s Inception network, the one featured in the original “Inceptionism” post. The workflow for generating these videos is under continuous development; planned improvements include a more generalized canvas distortion function and improved masking from source images.
For a Jupyter notebook with instructions on how to generate these kinds of images, see: github.com/ml4a/ml4a-guides/blob/master/notebooks/neural-synth.ipynb
Thanks to the NIPS creativity workshop for accepting this submission. nips2017creativity.github.io