
Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks


Sage Publications
Copyright © 2002 by SAGE Publications
A key capability of a mobile robot system is to localize itself accurately and, simultaneously, to build a map of its environment. Most existing algorithms are based on laser range finders, sonar sensors, or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm that uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling, and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized in three dimensions, and robot ego-motion is estimated by least-squares minimization over the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, the robot pose is estimated, and a consistent three-dimensional map is built. As image features are not noise-free, we carry out an error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty.
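The per-landmark Kalman filtering described above can be illustrated with a minimal sketch. The paper's actual filter formulation is not given in the abstract; the code below assumes a static 3D landmark, an identity observation model, and Gaussian measurement noise, so the update reduces to fusing each new stereo observation into the landmark's mean and covariance. All names and the noise values are illustrative, not taken from the paper.

```python
import numpy as np

def kalman_update(mu, Sigma, z, R):
    """One Kalman filter measurement update for a static 3D landmark.

    mu    : (3,)  current landmark position estimate
    Sigma : (3,3) current positional covariance
    z     : (3,)  new 3D observation of the landmark (e.g. from stereo)
    R     : (3,3) measurement noise covariance

    With a static landmark and identity observation model, the
    prediction step is trivial and the update is:
    """
    S = Sigma + R                         # innovation covariance
    K = Sigma @ np.linalg.inv(S)          # Kalman gain
    mu_new = mu + K @ (z - mu)            # corrected position estimate
    Sigma_new = (np.eye(3) - K) @ Sigma   # covariance shrinks with each fusion
    return mu_new, Sigma_new

# Fuse two noisy observations of a landmark near (1, 2, 3);
# the positional uncertainty decreases as observations accumulate.
mu = np.array([1.1, 2.0, 2.9])
Sigma = 0.04 * np.eye(3)
R = 0.04 * np.eye(3)
for z in (np.array([0.95, 2.05, 3.1]), np.array([1.0, 1.95, 3.0])):
    mu, Sigma = kalman_update(mu, Sigma, z, R)
```

Running one such filter per landmark yields exactly the kind of database map with per-landmark positional uncertainty that the abstract describes: landmarks that are re-observed often become well localized, while rarely seen ones retain large covariance.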