Building Local Terrain Maps for Autonomous Navigation

Stefan Laible


In order for a mobile robot to navigate safely and efficiently in an outdoor environment, it has to recognize its surrounding terrain. Our robot is equipped with a low-resolution 3D LIDAR and a color camera (see Fig. 2). The data from both sensors are fused to classify the terrain in front of the robot: the ground plane is divided into a grid, and each cell is classified as asphalt, cobblestones, grass, or gravel. We use height and intensity features for the LIDAR data and local ternary patterns (LTP) for the image data. By additionally taking the context-sensitive nature of terrain into account, the classification results improve significantly.
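As an illustration of the image features, the following is a minimal sketch of how local ternary patterns can be computed. The threshold value, neighborhood ordering, and the histogram scheme built on top of these codes are assumptions for this sketch and may differ from the exact setup used in our papers.

```python
import numpy as np

def ltp_codes(gray, t=5):
    """Compute upper/lower local ternary pattern codes for interior pixels.

    Each of the 8 neighbors is compared against the center with tolerance t:
    +1 if neighbor >= center + t, -1 if neighbor <= center - t, else 0.
    The ternary pattern is split into two binary codes (upper: the +1 bits,
    lower: the -1 bits), the usual trick for building LTP histograms.
    """
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]  # centers (interior pixels only)
    # 8-neighborhood offsets in a fixed clockwise order (one bit each)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the image aligned with the centers
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        upper |= (n >= c + t).astype(np.int32) << bit
        lower |= (n <= c - t).astype(np.int32) << bit
    return upper, lower
```

Per grid cell, histograms of the upper and lower codes over the cell's image region would then serve as the visual feature vector.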

Context-sensitive classification

A key insight for improving the classification results is that terrain appears in contiguous areas, a fact that is ignored when the grid cells are classified independently of each other. Only very rarely does terrain vary greatly within a small range. To exploit this, a suitable mathematical model is needed, and the Conditional Random Field (CRF) provides one (see Fig. 1). A CRF directly models the conditional probability of the labels given the features; such a model is called discriminative.

Fig. 1 The terrain label y of a grid cell depends on the measured features x, but also on the labels of its neighboring grid cells.
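To make the neighborhood dependency concrete, here is a simple sketch of approximate MAP inference on such a grid using iterated conditional modes with a Potts-style pairwise term. The per-cell scores, the weight `beta`, and the inference scheme itself are assumptions for illustration; the papers use proper CRF training and inference, not this exact procedure.

```python
import numpy as np

def smooth_labels_icm(unary, beta=1.0, iters=5):
    """Context-sensitive relabeling of a grid via iterated conditional modes.

    unary: (H, W, K) per-cell log-scores for K terrain classes (e.g. from
           the independent LIDAR/camera classifiers).
    beta:  Potts pairwise weight rewarding agreement with the 4 neighbors.
    """
    H, W, K = unary.shape
    labels = unary.argmax(axis=2)  # independent per-cell start
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                score = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        score[labels[ny, nx]] += beta  # favor neighbor's label
                labels[y, x] = score.argmax()
    return labels
```

A lone cell that weakly prefers a different class than its surroundings gets relabeled to match its neighbors, which is exactly the smoothing effect described above.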

Terrain Maps

By taking several consecutive frames into account, we obtain a spatio-temporal terrain classification. By additionally detecting obstacles with the LIDAR, the robot can build a local terrain and elevation map of its environment as it drives (see Fig. 2). These maps can be used for robot localization and autonomous navigation.

Fig. 2 Left: A terrain and elevation map built by our robot (gray: asphalt, blue: cobblestones, red: obstacles). Right: Outdoor robot Thorin with a Marlin F-046 C color camera and an FX6 3D laser scanner by Nippon Signal.
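One simple way to fuse per-frame classifications into a persistent map is to accumulate class votes per cell and report the majority. This accumulator is a hypothetical sketch; the spatio-temporal scheme in the papers may weight frames or propagate beliefs differently.

```python
import numpy as np

class TerrainMap:
    """Accumulate per-frame cell classifications into a local terrain map."""

    def __init__(self, height, width, num_classes):
        # vote count per cell and class
        self.votes = np.zeros((height, width, num_classes), dtype=np.int64)

    def add_frame(self, labels, mask):
        """labels: (H, W) class index per cell for this frame.
        mask: (H, W) bool, True where the cell was actually observed
        (i.e. inside the sensors' field of view)."""
        ys, xs = np.nonzero(mask)
        self.votes[ys, xs, labels[ys, xs]] += 1

    def current_map(self):
        """Majority-vote label per cell; -1 marks never-observed cells."""
        out = self.votes.argmax(axis=2)
        out[self.votes.sum(axis=2) == 0] = -1
        return out
```

Cells outside every frame's field of view stay marked as unknown, which matters when the map is later used for localization and path planning.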



Contact

Stefan Laible
Tel.: +49 7071 29 78983
stefan.laible at uni-tuebingen.de

Publications

[1] Stefan Laible and Andreas Zell. Building local terrain maps using spatio-temporal classification for semantic robot localization. In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pages 4591-4597, Chicago, IL, September 2014. IEEE.
[2] Stefan Laible, Yasir Niaz Khan, and Andreas Zell. Terrain classification with Conditional Random Fields on fused 3D LIDAR and camera data. In European Conference on Mobile Robots (ECMR 2013), pages 172-177, Barcelona, Catalonia, Spain, September 2013. IEEE.
[3] Stefan Laible, Yasir Niaz Khan, Karsten Bohlmann, and Andreas Zell. 3D LIDAR- and camera-based terrain classification under different lighting conditions. In Autonomous Mobile Systems 2012, Informatik aktuell, pages 21-29. Springer Berlin Heidelberg, 2012.
[4] Yasir Niaz Khan, Philippe Komma, and Andreas Zell. High resolution visual terrain classification for outdoor robots. In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pages 1014-1021, Barcelona, Spain, November 2011.
