My first test of using a neural network (pix2pixHD) to recreate a scene.
The network is trained on street-view images, and the test data consists of frames captured from a game I made in Unity, where each object corresponds to a specific RGB label color.
Then I made an openFrameworks application that converts the captured Unity images to grayscale label maps with 8 bits per channel, so I can feed them to the network for testing.
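The conversion step above (RGB label colors to an 8-bit grayscale label map) could be sketched in Python roughly like this. The color-to-ID table here is an assumption following the Cityscapes palette; the actual colors used in my Unity scene aren't listed in this description:

```python
import numpy as np

# Hypothetical mapping from RGB label colors to Cityscapes label IDs.
# These colors/IDs follow the Cityscapes convention; the exact palette
# used in the Unity project may differ.
COLOR_TO_ID = {
    (128, 64, 128): 7,   # road
    (70, 70, 70): 11,    # building
    (70, 130, 180): 23,  # sky
    (0, 0, 142): 26,     # car
}

def rgb_to_label_map(rgb):
    """Convert an HxWx3 uint8 RGB label image to an HxW uint8 label map."""
    out = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for color, label_id in COLOR_TO_ID.items():
        # Boolean mask of pixels matching this exact color
        mask = np.all(rgb == np.array(color, dtype=np.uint8), axis=-1)
        out[mask] = label_id
    return out

# Tiny 1x2 "frame": one road pixel, one car pixel
frame = np.array([[[128, 64, 128], [0, 0, 142]]], dtype=np.uint8)
print(rgb_to_label_map(frame))  # [[ 7 26]]
```

In the video this step is done in an openFrameworks (C++) app, but the idea is the same: each unique object color collapses to a single-channel class ID, which is the label-map format pix2pixHD expects.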
(The final bump at the end of the video is the main camera being hit by another car.)
Thanks to NVIDIA for a great paper and awesome work!
Link to NVIDIA's git repo & paper:
tcwang0509.github.io/pix2pixHD/
Link to dataset the network is trained on:
cityscapes-dataset.com/