Gesture‐based human‐robot interaction using a knowledge‐based software platform



Publisher: Emerald Publishing
Copyright: © 2006 Emerald Group Publishing Limited. All rights reserved.
ISSN: 0143-991X
DOI: 10.1108/01439910610638216

Abstract

Purpose – Achieving natural interaction between humans and robots through vision and speech is a major goal for many researchers. This paper describes a gesture‐based human‐robot interaction (HRI) system built on a knowledge‐based software platform.

Design/methodology/approach – A frame‐based knowledge model is defined for gesture interpretation and HRI. In this model, frames are defined for the known users, robots, poses, gestures and robot behaviors. The system first identifies the user with the eigenface method. Face and hand poses are then segmented from the camera frame buffer using that person's specific skin‐color information and classified by the subspace method.

Findings – The system recognizes static gestures composed of face and hand poses, as well as dynamic gestures of the face in motion. It combines computer‐vision and knowledge‐based approaches to improve adaptability to different people.

Originality/value – Describes an experimental HRI system implemented in the frame‐based software platform for agent and knowledge management on the AIBO entertainment robot, which has been shown to be useful and efficient within a limited setting.
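As a rough illustration only, the Python sketch below shows the two recognition steps named in the abstract: eigenface‐style user identification (PCA over flattened face images, nearest neighbour in the reduced space) and subspace‐method pose classification (pick the class whose principal subspace preserves most of the input's energy). It is not the authors' implementation; the function names, parameters and synthetic data are hypothetical, and the frame‐based knowledge model and skin‐color segmentation stages are omitted.

# Hypothetical sketch (NumPy only, synthetic data) of the two recognition
# steps mentioned in the abstract. Not the authors' code.
import numpy as np

def eigenface_identify(gallery, labels, probe, n_components=8):
    """Identify a user by nearest neighbour in eigenface (PCA) space.
    gallery: (n_images, n_pixels) flattened training faces; labels: one
    user label per gallery image; probe: flattened face to identify."""
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # Principal components of the face set ("eigenfaces") via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                    # (k, n_pixels)
    gallery_proj = centered @ basis.T            # project gallery faces
    probe_proj = (probe - mean) @ basis.T        # project the probe face
    dists = np.linalg.norm(gallery_proj - probe_proj, axis=1)
    return labels[int(np.argmin(dists))]

def subspace_classify(class_samples, x, n_components=4):
    """Subspace-method (CLAFIC-style) pose classification: choose the
    class whose principal subspace preserves most of x's energy.
    class_samples: dict mapping pose name -> (n_images, n_pixels) array."""
    best_name, best_score = None, -np.inf
    for name, samples in class_samples.items():
        _, _, vt = np.linalg.svd(samples, full_matrices=False)
        basis = vt[:n_components]
        score = np.linalg.norm(basis @ x)        # projection energy onto subspace
        if score > best_score:
            best_name, best_score = name, score
    return best_name

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for segmented face and hand-pose images.
    faces = rng.random((20, 64 * 64))
    users = np.array(["user_a"] * 10 + ["user_b"] * 10)
    print(eigenface_identify(faces, users, faces[3]))   # expected: "user_a"
    poses = {"open_hand": rng.random((10, 32 * 32)),
             "fist": rng.random((10, 32 * 32))}
    print(subspace_classify(poses, poses["fist"][0]))   # typically "fist" here

In the paper's pipeline these classifiers would operate on the face and hand regions segmented with the user's skin‐color model, and their outputs would be matched against gesture frames in the knowledge base to trigger robot behaviors.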

Journal

Industrial Robot: An International Journal (Emerald Publishing)

Published: Jan 1, 2006

Keywords: Robotics; Man‐machine interface
