Learning Semantics of Gestural Instructions for Human-Robot Collaboration



Shukla D., Erkent Ö., Piater J.

FRONTIERS IN NEUROROBOTICS, vol. 12, 2018 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 12
  • Publication Date: 2018
  • DOI: 10.3389/fnbot.2018.00007
  • Journal Name: FRONTIERS IN NEUROROBOTICS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Keywords: human-robot collaboration, proactive learning, gesture understanding, intention prediction, user study
  • Hacettepe University Affiliated: No

Abstract

Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context, we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot can predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically driven approach. As a proof of concept, we focus on a table assembly task in which the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive robot with a reactive one that waits for instructions.
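
The abstract does not spell out the PIL update rule, so the following is only a rough, hypothetical Python sketch of the idea it describes: probabilistic gesture-to-action associations learned incrementally from interaction, with a confidence threshold that decides when the robot acts proactively instead of waiting for an instruction. All class, gesture, and action names here are invented for illustration and are not taken from the paper.

    from collections import defaultdict

    class GestureActionLearner:
        """Illustrative sketch (not the authors' PIL implementation):
        keeps co-occurrence counts between observed gestures and the
        actions the human confirmed, and acts proactively once the
        estimated P(action | gesture) exceeds a threshold."""

        def __init__(self, proactive_threshold=0.8):
            # counts[gesture][action] = number of times this pairing was confirmed
            self.counts = defaultdict(lambda: defaultdict(int))
            self.proactive_threshold = proactive_threshold

        def update(self, gesture, action):
            """Incremental step: record that `gesture` led to `action`."""
            self.counts[gesture][action] += 1

        def predict(self, gesture):
            """Return (action, probability) with the highest estimated
            P(action | gesture), or (None, 0.0) for an unseen gesture."""
            actions = self.counts[gesture]
            total = sum(actions.values())
            if total == 0:
                return None, 0.0
            action, count = max(actions.items(), key=lambda kv: kv[1])
            return action, count / total

        def act(self, gesture):
            """Proactive behavior: execute the predicted action without an
            explicit instruction when confidence is high enough; otherwise
            fall back to reactive behavior and ask the human."""
            action, prob = self.predict(gesture)
            if action is not None and prob >= self.proactive_threshold:
                return action
            return "ask_for_instruction"

    # Example interaction loop: associations are learned on the fly.
    learner = GestureActionLearner()
    learner.update("point_at_leg", "hand_over_leg")
    learner.update("point_at_leg", "hand_over_leg")
    print(learner.act("point_at_leg"))  # "hand_over_leg" once confident

The key design point this sketch tries to mirror is that a higher gesture-detection accuracy yields sharper conditional estimates sooner, which in turn reduces the number of clarification interactions needed before the robot can act proactively.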