Moving object detection and tracking technology is widely deployed in visual surveillance for security; however, achieving real-time performance is extremely challenging owing to environmental noise, background complexity and illumination variation. This paper proposes a novel data fusion approach to address this problem, which combines an entropy-based Canny (EC) operator with the local and global optical flow (LGOF) method, namely EC-LGOF. Its operation comprises four steps. First, the EC operator computes the contours of moving objects in a video sequence, and the LGOF method then establishes the motion vector field. Third, the minimum error threshold selection (METS) method is employed to distinguish the moving objects from the background. Finally, edge information is fused with temporal information from the optical flow to label the moving objects. Experiments are conducted and the results demonstrate the feasibility and effectiveness of the proposed method.
Transactions of the Institute of Measurement and Control – SAGE
Published: Feb 1, 2019