Stereo Vision based Simultaneous Localization and Mapping for Flying Robots
Stereo vision makes it possible to estimate the structure of a scene using only two cameras (left and right) separated by a known baseline. The projection of a 3D point onto the image plane of the right camera is a shifted version of its projection onto the image plane of the left camera. This shift is called the disparity and is inversely proportional to the depth of the point. Unlike active sensors based on the projection of structured light, such as the Microsoft Kinect, stereo vision yields accurate depth maps in outdoor environments under strong sunlight. Furthermore, stereo vision offers a larger maximum sensing range than structured-light sensors.
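The depth–disparity relation described above can be sketched as follows, assuming a rectified stereo pair and a pinhole camera model; the function name and the numeric values are illustrative, not taken from the paper.

```python
# Depth from disparity for a rectified stereo pair (pinhole model).
# focal_px: focal length in pixels, baseline_m: baseline in meters.
# Illustrative sketch; parameter values are hypothetical examples.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Z = f * B / d: depth is inversely proportional to disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 500 px, baseline B = 0.1 m, disparity d = 10 px.
print(depth_from_disparity(10.0, 500.0, 0.1))  # -> 5.0 m
```

Note how halving the disparity doubles the estimated depth, which is why depth resolution degrades quadratically with distance and why a larger baseline extends the useful range.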
Fig. 1 The block diagram of the proposed Stereo Vision based SLAM.
In this research project we use a quadrocopter based on the Mikrokopter QuadroXL. We have equipped our flying robot with an Odroid XU quad-core ARM onboard computer, a forward-looking stereo camera pair, and a downward-looking monocular camera.
Fig. 2 The quadrocopter and the stereo rig used in this research.