A short demonstration of my third-year project on autonomous landing site selection, adapted from an algorithm outlined by Park and Kim [1].
The top-left frame shows a depth map being streamed from an ASUS Xtion PRO LIVE 3D scanner.
The bottom-right frame shows the result of applying the Canny operator to the depth map, followed by a distance transform. The overlaid circles mark possible landing sites: green circles are safe sites (with the thicker circle being the best), purple circles are sites which are too steep, and orange circles are sites which are too small.
The top-left image is called a depth map. There's a sensor on my ceiling which produces it (very similar to an Xbox Kinect). Think of it as a 3D camera: instead of recording a colour for each pixel, it records a distance. I've turned those values into shades of grey to display the depth map. (Darker parts are closer to the camera.)
The project objective is to find a safe place to land using this data. My system uses an edge detector to find "cliffs". Then it makes an image which is black next to a cliff, but gets lighter the further away you get. That's what you see under the circles in the bottom-right, and it's called a distance transform.
Park and Kim (who wrote the original algorithm) thought that the lightest part of the distance transform would be the flattest area, and therefore the best landing site. Turns out that's not really true, because you can have a jolly steep hill without any cliffs. My version can detect that, and make better use of small sites. (It doesn't really matter if bits of the aircraft hang over a cliff, so long as the landing gear is secure.)
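One way to catch the "steep hill without cliffs" case is to fit a plane to the depth values inside a candidate circle and reject the site if the plane tilts too much. This is a sketch of that idea under my own naming and a square patch instead of a circle, not the project's exact code:

```python
import numpy as np

def patch_slope(depth, cy, cx, r):
    """Least-squares plane fit over a (2r+1)-square patch centred on (cy, cx).
    Returns the gradient magnitude (depth change per pixel)."""
    ys, xs = np.mgrid[cy - r:cy + r + 1, cx - r:cx + r + 1]
    z = depth[cy - r:cy + r + 1, cx - r:cx + r + 1]
    # Solve z ≈ a*x + b*y + c in the least-squares sense.
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    (a, b, c), *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    return float(np.hypot(a, b))  # 0 for a flat patch, large for a steep one

flat = np.full((50, 50), 2000.0)
ramp = flat + np.arange(50)[None, :] * 20.0  # 20 mm of depth change per pixel
```

A flat patch gives a slope of 0 and the ramp gives 20; a site is rejected when the value exceeds some threshold. (In the real system the pixel spacing would need converting to metric units before comparing against a physical slope limit.)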
The green circles are the safe landing sites that the system's found. They turn purple if they slope too much (as I show at the start by tilting one).
[1] J. Park and Y. Kim, "Landing site searching algorithm of a quadrotor using depth map of stereo vision on unknown terrain," in AIAA Infotech at Aerospace Conference and Exhibit 2012, Garden Grove, CA, United States, pp. 2245–2259, June 2012.