A Multi-View Hand Gesture RGB-D Dataset for Human-Robot Interaction Scenarios


Shukla D., Erkent Ö., Piater J.

25th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN), New York, United States of America, 26 - 31 August 2016, pp. 1084-1091

  • Publication Type: Conference Paper / Full Text
  • DOI: 10.1109/roman.2016.7745243
  • City: New York
  • Country: United States of America
  • Page Numbers: 1084-1091
  • Hacettepe University Affiliated: No

Abstract

Understanding the semantic meaning of hand gestures is a challenging but essential task in human-robot interaction scenarios. In this paper, we present a baseline evaluation of the Innsbruck Multi-View Hand Gesture (IMHG) dataset [1], recorded with two RGB-D cameras (Kinect). As a baseline, we adopt a probabilistic appearance-based framework [2] to detect a hand gesture and estimate its pose using the two cameras. The dataset consists of two types of deictic gestures with the ground-truth location of the target, two symbolic gestures, two manipulative gestures, and two interactional gestures. We discuss the parallax effect caused by the offset between the head and the hand while performing deictic gestures. Furthermore, we evaluate the proposed framework on the Innsbruck Pointing at Objects (IPO) dataset [2] to estimate the potential referents.
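
The parallax effect mentioned in the abstract arises because a deictic gesture can be interpreted along different rays: the line from the head through the fingertip generally meets the table at a different point than the line along the forearm. The sketch below illustrates this geometry only; it is not the paper's probabilistic appearance-based framework, and all keypoint coordinates, the table-plane height, and the function names are hypothetical.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect a ray with a plane; returns the hit point or None if parallel."""
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    return origin + t * direction if t >= 0 else None

# Illustrative 3D keypoints in a world frame (metres); values are hypothetical.
head      = np.array([0.00, 1.60, 0.00])
elbow     = np.array([0.25, 1.20, 0.15])
fingertip = np.array([0.35, 1.10, 0.45])

# Assumed table plane at height y = 0.75 m.
plane_point  = np.array([0.0, 0.75, 0.0])
plane_normal = np.array([0.0, 1.0, 0.0])

# Two common pointing models: head-fingertip ray vs. forearm (elbow-fingertip) ray.
head_ray = (fingertip - head) / np.linalg.norm(fingertip - head)
arm_ray  = (fingertip - elbow) / np.linalg.norm(fingertip - elbow)

target_head = ray_plane_intersection(fingertip, head_ray, plane_point, plane_normal)
target_arm  = ray_plane_intersection(fingertip, arm_ray,  plane_point, plane_normal)

# The gap between the two target estimates is the parallax offset under discussion.
print("head-fingertip target:", target_head)
print("forearm target:       ", target_arm)
print("parallax offset (m):  ", np.linalg.norm(target_head - target_arm))
```

With the sample keypoints above, the two rays land several centimetres apart on the table plane, which is why ground-truth target locations for the deictic gestures are valuable for evaluating either pointing model.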