Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks

Abstract

A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most existing algorithms are based on laser range finders, sonar sensors, or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm that uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling, and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized, and robot ego-motion is estimated by least-squares minimization over the matched landmarks. Feature viewpoint variation and occlusion are handled by maintaining a view direction for each landmark. Experiments show that the visual landmarks are robustly matched, the robot pose is reliably estimated, and a consistent three-dimensional map is built. Because image features are not noise-free, we carry out an error analysis of the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map that records each landmark's positional uncertainty.
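The least-squares ego-motion step described above can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal example, assuming the standard SVD-based (Kabsch) solution for the rigid motion that best aligns two sets of matched 3D landmark positions, as one would obtain from stereo triangulation in consecutive frames. The function name and array shapes are illustrative assumptions.

```python
import numpy as np

def estimate_ego_motion(prev_pts, curr_pts):
    """Least-squares rigid motion (R, t) mapping prev_pts onto curr_pts.

    prev_pts, curr_pts: (N, 3) arrays of matched 3D landmark positions
    from two robot poses. Solves argmin_{R,t} sum_i ||R p_i + t - q_i||^2
    in closed form via the SVD of the cross-covariance matrix (Kabsch).
    """
    # Center both point sets on their centroids.
    p_mean = prev_pts.mean(axis=0)
    q_mean = curr_pts.mean(axis=0)
    P = prev_pts - p_mean
    Q = curr_pts - q_mean

    # 3x3 cross-covariance of the centered correspondences.
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)

    # Guard against a reflection (det = -1) in the optimal orthogonal matrix.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])

    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

In practice this solve would be wrapped in an outlier-rejection loop (e.g. RANSAC over the landmark matches), since a single mismatched landmark can bias the least-squares estimate.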