This is part of a final project for a class; it involves compressing video that carries depth information. I might repeat the work with a stereo camera later (or, better, a lidar), but for now I went ahead with a simulated source that uses OpenGL to produce depth information from the z-buffer.
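One wrinkle with pulling depth from the z-buffer is that the stored values are nonlinear in eye-space distance, so they have to be unwarped before they're useful as real depth. A minimal sketch of that inversion, assuming the default OpenGL depth range of [0, 1] and hypothetical `near`/`far` clip-plane values (not the project's actual code):

```python
import numpy as np

def linearize_depth(zbuf, near, far):
    # zbuf: values read back from the OpenGL depth buffer in [0, 1]
    # (the default glDepthRange). Invert the perspective projection's
    # nonlinear mapping to recover eye-space depth.
    z_ndc = 2.0 * zbuf - 1.0  # [0, 1] -> NDC [-1, 1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

# A fragment on the near plane reads back as 0.0, on the far plane as 1.0:
print(linearize_depth(np.array([0.0, 1.0]), near=0.1, far=100.0))
# → [  0.1 100. ]
```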
What's shown here is how the depth information is used to construct a 3D grid of voxels that represents the original 3D scene at low fidelity. I cheated and used position and attitude information generated by the source; I might eventually get a registration method working that automatically derives changes in position and attitude, but for now I can claim there was an IMU strapped to the camera.
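The voxelization step above amounts to back-projecting each depth pixel into world space using the camera pose, then quantizing the resulting points to a coarse grid. A sketch of that idea, assuming standard pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) and a camera-to-world pose `(R, t)` taken from the simulated source; these names are illustrative, not the project's actual interface:

```python
import numpy as np

def voxelize(depth, fx, fy, cx, cy, R, t, voxel_size):
    """Back-project a depth image into world space using the camera
    pose (R, t), then quantize the points to integer voxel indices.
    Returns the deduplicated set of occupied voxels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Unproject each pixel to a 3D point in the camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts_cam = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # Camera-to-world transform from the source's position/attitude.
    pts_world = pts_cam @ R.T + t
    # Quantize to the grid; unique rows give the occupied voxel set.
    idx = np.floor(pts_world / voxel_size).astype(np.int64)
    return np.unique(idx, axis=0)
```

The fidelity knob is `voxel_size`: a coarser grid collapses more points into each cell, which is exactly what makes the representation compressible.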
Next week I'll upload another video showing the later steps in the process, which involve image compression.