Abstract. We present a visual tracking method with feature fusion via joint sparse representation. The proposed method describes each target candidate by combining multiple features through a joint sparse representation, which improves robustness in coefficient estimation. We then build a probabilistic observation model based on the approximation error between the recovered candidate image and the observed sample. Finally, this observation model is integrated with a stochastic affine motion model to form a particle filter framework for visual tracking. Furthermore, a dynamic and robust template update strategy is applied to adapt to appearance variations of the target and reduce the possibility of drifting. Quantitative evaluations on challenging benchmark video sequences demonstrate that the proposed method is effective and performs favorably against several state-of-the-art methods.
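The pipeline in the abstract — sparse-code each candidate over a template dictionary, score it by reconstruction error, and use that score as a particle weight — can be sketched as follows. This is a minimal illustration, not the paper's method: the solver (a plain ISTA loop for a single-feature L1 problem, standing in for the paper's joint sparse solver over fused features), the Gaussian likelihood form, and all parameter values (`lam`, `sigma`) are assumptions.

```python
import numpy as np

def sparse_code(D, y, lam=0.1, n_iter=200):
    """Estimate coefficients c minimizing ||y - D c||^2 + lam*||c||_1 via ISTA
    (a simple stand-in for the paper's joint sparse representation solver)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = c - D.T @ (D @ c - y) / L      # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

def observation_likelihood(D, y, sigma=0.1, lam=0.1):
    """Likelihood of candidate y under template dictionary D, driven by the
    approximation error between y and its sparse reconstruction D c."""
    c = sparse_code(D, y, lam)
    err = np.linalg.norm(y - D @ c) ** 2
    return np.exp(-err / (2 * sigma ** 2))

# Toy usage: weight two candidates against a set of 8 unit-norm templates.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 8))
D /= np.linalg.norm(D, axis=0)
good = D @ np.array([1.0, 0, 0, 0, 0.5, 0, 0, 0])  # lies in the template span
bad = rng.standard_normal(32)                      # generic clutter candidate
w = np.array([observation_likelihood(D, good), observation_likelihood(D, bad)])
w /= w.sum()                                       # normalized particle weights
```

In a full tracker these weights would be computed for every particle drawn from the affine motion model, and the best-weighted candidate would feed the template update step described in the abstract.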
Journal of Electronic Imaging – SPIE
Published: Jan 1, 2015