How would machines understand a movie?
New advances in machine learning make real-time image analysis possible and move us closer to
those long-anticipated philosophical sci-fi scenarios. All the captions for these clips are generated in real time by machine learning algorithms. They are not always right: sometimes they are funny, sometimes flat-out or understandably wrong. When they are right, though, they are uncanny.
This research experiment uses recurrent neural networks (RNNs) via Andrej Karpathy's NeuralTalk (https://github.com/karpathy/neuraltalk2) and was inspired by Kyle McDonald's NeuralTalk and Walk (vimeo.com/146492001).
Scenes from Blade Runner, Inception and Ex Machina.