A robust vision-based sensor fusion approach for real-time pose estimation.


IEEE transactions on cybernetics , Volume 44 (2): 11 – Nov 13, 2014


Abstract

Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
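The abstract describes fusing pose measurements from multiple cameras with a Kalman filter. As a minimal, hypothetical sketch of that idea (not the paper's actual algorithm), the snippet below runs a constant-pose Kalman filter that sequentially updates a 3-DoF position estimate with measurements from two cameras; the state model, noise covariances, and measurement values are all illustrative assumptions.

```python
import numpy as np

def kalman_fuse(x, P, measurements, R_list, Q):
    """Fuse several camera pose measurements into one estimate.

    x: current state estimate (pose vector)
    P: state covariance
    measurements / R_list: per-camera measurement and its noise covariance
    Q: process noise (constant-pose motion model, an assumption here)
    """
    # Predict step: identity motion model, covariance inflated by Q.
    P = P + Q
    # Sequential update: each camera's measurement refines the estimate.
    for z, R in zip(measurements, R_list):
        K = P @ np.linalg.inv(P + R)          # Kalman gain
        x = x + K @ (z - x)                   # correct toward measurement
        P = (np.eye(len(x)) - K) @ P          # shrink covariance
    return x, P

# Illustrative use: two cameras observe a target near the origin.
x0 = np.array([1.0, 1.0, 1.0])                # stale prior estimate
P0 = np.eye(3)
Q = 0.01 * np.eye(3)
z_cam1 = np.array([0.1, 0.0, 0.0])            # slightly noisy observations
z_cam2 = np.array([-0.1, 0.0, 0.0])
R = 0.04 * np.eye(3)

x1, P1 = kalman_fuse(x0, P0, [z_cam1, z_cam2], [R, R], Q)
```

A sequential update like this also degrades gracefully if one sensor drops out: skipping an occluded camera's measurement simply leaves the estimate driven by the remaining views, which matches the robustness motivation stated in the abstract.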


ISSN: 2168-2267
DOI: 10.1109/TCYB.2013.2252339
PMID: 23757545


Journal

IEEE Transactions on Cybernetics

Published: Nov 13, 2014
