In this study, human activity with a finite and specific ordering is modeled with a finite state machine, and an application for human–robot interaction is realized. A robot arm that performs specific movements was designed. The purpose of this paper is to create a language associated with a complex task, which the robot that knows the language then uses to teach individuals.

Design/methodology/approach
Although the complex task is known by the robot, it is not known by the human. When the application starts, the robot continuously checks the specific task performed by the human. To carry out this check, the human hand is tracked using image processing techniques and a particle filter (PF), a Bayesian tracking method. To determine the complex task performed by the human, the task is divided into a series of sub-tasks. To identify the sequence of the sub-tasks, a push-down automaton that uses a context-free grammar structure is developed. Depending on the correctness of the sequence of sub-tasks performed by the human, the robot produces different outputs.

Findings
The application was carried out with 15 individuals. In total, 11 of the 15 individuals completed the complex task correctly by following the robot's different outputs.

Originality/value
This type of study is suitable for applications that aim to improve human intelligence and enable people to learn quickly. In addition, the risky tasks of a person working on a production or assembly line can be checked by robots using such applications.
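The hand-tracking step described in the abstract relies on particle-filter-based Bayesian tracking of the detected hand position. Below is a minimal sketch of a bootstrap (SIR) particle filter with a constant-velocity motion model and a Gaussian observation likelihood; the class name, state layout and noise parameters are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a bootstrap (SIR) particle filter for 2-D hand tracking.
# Assumptions (not from the paper): constant-velocity dynamics, Gaussian
# likelihood around the hand position returned by the image-processing stage.
import numpy as np

class HandParticleFilter:
    def __init__(self, n_particles=500, process_noise=2.0, meas_noise=5.0):
        self.n = n_particles
        self.q = process_noise            # std dev of process noise (pixels)
        self.r = meas_noise               # std dev of detection noise (pixels)
        self.particles = np.zeros((self.n, 4))   # state per particle: [x, y, vx, vy]
        self.weights = np.full(self.n, 1.0 / self.n)

    def initialize(self, detection):
        # Spread particles around the first hand detection.
        self.particles[:, :2] = detection + np.random.randn(self.n, 2) * self.r
        self.particles[:, 2:] = 0.0

    def predict(self, dt=1.0):
        # Constant-velocity propagation plus additive Gaussian process noise.
        self.particles[:, :2] += self.particles[:, 2:] * dt
        self.particles += np.random.randn(self.n, 4) * self.q

    def update(self, detection):
        # Re-weight particles by the Gaussian likelihood of the new detection.
        d2 = np.sum((self.particles[:, :2] - detection) ** 2, axis=1)
        self.weights = np.exp(-0.5 * d2 / self.r ** 2) + 1e-300
        self.weights /= self.weights.sum()

    def resample(self):
        # Systematic resampling to counter weight degeneracy.
        positions = (np.arange(self.n) + np.random.rand()) / self.n
        idx = np.searchsorted(np.cumsum(self.weights), positions)
        self.particles = self.particles[idx]
        self.weights.fill(1.0 / self.n)

    def estimate(self):
        # Weighted mean of the particle cloud as the hand position estimate.
        return np.average(self.particles[:, :2], weights=self.weights, axis=0)

# Usage per frame: pf.predict(); pf.update(detected_xy); pf.resample(); xy = pf.estimate()
```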
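The sub-task sequencing step can be illustrated with a small context-free grammar and a push-down-automaton-style checker. The grammar and the sub-task labels below (REACH, GRASP, MOVE, RELEASE) are hypothetical placeholders rather than the grammar used in the study; the sketch only shows how a stack-based parser can accept a correctly ordered sub-task sequence and reject a wrong one.

```python
# Minimal sketch: checking an observed sub-task sequence against a
# context-free grammar with a push-down automaton (stack of expected symbols).
# Grammar and sub-task names are illustrative placeholders.
GRAMMAR = {
    # Non-terminal -> list of productions (each production is a tuple of symbols)
    "TASK":  [("PICK", "PLACE")],
    "PICK":  [("REACH", "GRASP")],
    "PLACE": [("MOVE", "RELEASE")],
}
TERMINALS = {"REACH", "GRASP", "MOVE", "RELEASE"}

def accepts(observed):
    """Return True if the observed sub-task sequence is derivable from TASK."""
    def step(stack, pos):
        if not stack:
            return pos == len(observed)           # accept iff all input consumed
        top, rest = stack[0], stack[1:]
        if top in TERMINALS:
            # Terminal on the stack must match the next observed sub-task.
            return pos < len(observed) and observed[pos] == top and step(rest, pos + 1)
        # Non-terminal: try each production, backtracking on failure.
        return any(step(list(prod) + rest, pos) for prod in GRAMMAR[top])
    return step(["TASK"], 0)

print(accepts(["REACH", "GRASP", "MOVE", "RELEASE"]))   # True  (correct ordering)
print(accepts(["GRASP", "REACH", "MOVE", "RELEASE"]))   # False (wrong ordering)
```

In such a scheme, the robot's different outputs would be driven by whether the recognized sub-task stream is still consistent with the grammar or has already violated it.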
Industrial Robot: An International Journal – Emerald Publishing
Published: Aug 5, 2019
Keywords: Finite state machine; Human–Robot interaction; Particle filter; Context-free grammar; Push-down automata