Multi-Camera Visual SLAM for Autonomous Navigation of MAVs
Cameras are passive sensors with superior potential for environment perception, while remaining lightweight, relatively low-cost and energy-efficient. They can facilitate autonomous navigation of micro aerial vehicles (MAVs) in environments where GPS is unavailable or unreliable, such as indoors or in outdoor urban areas. This is normally achieved with a visual simultaneous localization and mapping (SLAM) system, which provides pose estimates of the MAV and maps the environment with abstracted information.
Micro aerial vehicle platform
Our MAV is a quadrotor controlled by a pxIMU autopilot. The onboard computer is a Kontron microETXexpress computer-on-module (COM) featuring an Intel Core 2 Duo 1.86 GHz CPU, 2 GB DDR3 RAM and a 32 GB SSD. The two cameras mounted on our MAV are PointGrey Firefly MV monochrome cameras.
Fig. 1 The MAV platform with dual cameras marked in ellipses.
Autonomous Navigation of MAVs using Visual SLAM
We use a keyframe-based visual SLAM system to enable autonomous navigation of our MAV. It consists of two fundamental components: pose tracking, which provides the pose of the camera in real time based on a set of detected map points of an existing map, and mapping, which updates and refines the map to facilitate pose tracking and maintain the environment information.
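Concretely, pose tracking estimates the camera pose by minimizing the reprojection error of the currently visible map points. The following is only an illustrative sketch of that cost in Python/NumPy, assuming a simple pinhole model; the function names and toy data are our assumptions, not the system's actual code:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D map points X (N, 3) into the image under pose (R, t)."""
    Xc = (R @ X.T).T + t            # world frame -> camera frame
    uv = (K @ Xc.T).T               # pinhole projection
    return uv[:, :2] / uv[:, 2:3]   # perspective division

def reprojection_cost(K, R, t, X, obs):
    """Sum of squared reprojection errors that pose tracking minimizes."""
    residuals = project(K, R, t, X) - obs
    return float(np.sum(residuals ** 2))

# Toy example: with perfect observations the cost is zero.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.0, 0.0, 2.0], [0.5, 0.2, 4.0]])
obs = project(K, R, t, X)
print(reprojection_cost(K, R, t, X, obs))  # 0.0
```

In the real system such a cost would be minimized over the pose, e.g. with an iterative solver on SE(3), and robustified against outlier matches.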
Furthermore, we integrated a local-feature-based method into the SLAM framework for landing site detection, locating the landing site within the map of the SLAM system by taking advantage of the map points associated with the detected landing site. This enables autonomous landing of a MAV on an arbitrarily textured landing site.
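The detection step above relies on matching local feature descriptors between a template of the landing site and the current frame. A minimal sketch of such a matching stage, assuming plain nearest-neighbour matching with a ratio test (the descriptor choice, threshold and toy data are illustrative assumptions, not the system's actual pipeline):

```python
import numpy as np

def match_descriptors(desc_site, desc_frame, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test (illustrative).

    Returns (site_index, frame_index) pairs whose best match is clearly
    better than the second-best, filtering out ambiguous matches.
    """
    matches = []
    for i, d in enumerate(desc_site):
        dist = np.linalg.norm(desc_frame - d, axis=1)
        best, second = np.argsort(dist)[:2]
        if dist[best] < ratio * dist[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors: each landing-site descriptor has one clear counterpart.
desc_site = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_frame = np.array([[0.0, 1.0], [1.0, 0.0], [10.0, 10.0]])
print(match_descriptors(desc_site, desc_frame))  # [(0, 1), (1, 0)]
```

Matched features that already correspond to SLAM map points then localize the landing site directly within the map, as described above.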
Visual SLAM with Multiple Cameras
Monocular vision systems normally have a rather limited field of view (FOV). Increasing the FOV of a single camera provides better environmental awareness and can enable more robust tracking in complex environments, but at the cost of environmental detail. Our solution is to integrate multiple cameras with little or no overlap in their respective FOVs into a single SLAM system for MAV autonomous navigation. We do this by, first, analyzing the cost functions of the optimization problems in pose tracking and mapping of the visual SLAM system, and second, utilizing image features from all cameras in these optimization steps to improve robustness and accuracy. We chose a dual-camera setup as a compromise between tracking robustness and onboard computation capability.
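Extending the cost function across cameras can be sketched as summing each camera's reprojection errors after composing the MAV body pose with that camera's fixed extrinsics. This is a simplified NumPy sketch, not the paper's implementation; the rig representation and the shared intrinsics K are our simplifying assumptions:

```python
import numpy as np

def multi_cam_cost(R_b, t_b, rig, K):
    """Joint cost: reprojection errors summed over all cameras.

    rig is a list of (R_cb, t_cb, X, obs) tuples, one per camera, where
    (R_cb, t_cb) are the fixed body-to-camera extrinsics, X the 3D map
    points seen by that camera and obs their 2D image observations.
    """
    total = 0.0
    for R_cb, t_cb, X, obs in rig:
        R = R_cb @ R_b                  # compose world -> body -> camera
        t = R_cb @ t_b + t_cb
        Xc = (R @ X.T).T + t            # map points in this camera's frame
        uv = (K @ Xc.T).T               # pinhole projection
        residuals = uv[:, :2] / uv[:, 2:3] - obs
        total += float(np.sum(residuals ** 2))
    return total
```

Optimizing this single cost over the body pose lets features from both cameras constrain the same estimate, which is what makes tracking more resistant to failure when one camera's view degenerates.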
Our proposed visual SLAM system can enable a MAV to navigate autonomously along a predefined path. It is more resistant to tracking failure in complex environments than conventional monocular vision systems.
Fig. 3 The experiment environment (left) and the built map and flight trajectory during the manual flight.
Contact: shaowu.yang at uni-tuebingen.de, Tel.: +49 7071 29 78987
Publications
Shaowu Yang, Sebastian A. Scherer, and Andreas Zell. Robust onboard visual SLAM for autonomous MAVs. In 2014 International Conference on Intelligent Autonomous Systems (IAS-13), Padova, Italy, July 2014.
Shaowu Yang, Sebastian A. Scherer, and Andreas Zell. Visual SLAM for autonomous MAVs with dual cameras. In 2014 International Conference on Robotics and Automation (ICRA'14), Hong Kong, China, June 2014.
Shaowu Yang, Sebastian A. Scherer, Konstantin Schauwecker, and Andreas Zell. Autonomous landing of MAVs on arbitrarily textured landing sites using onboard monocular vision. Journal of Intelligent & Robotic Systems, 74(1-2):27-43, 2014.
Shaowu Yang, Sebastian A. Scherer, and Andreas Zell. An onboard monocular vision system for autonomous takeoff, hovering and landing of a micro aerial vehicle. Journal of Intelligent & Robotic Systems, 69(1-4):499-515, January 2013.