Visual Task Outcome Verification Using Deep Learning

Erkent Ö., Shukla D., Piater J.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics, Vancouver, Canada, 24 - 28 September 2017, pp. 4821-4827

  • Publication Type: Conference Paper / Full Text
  • DOI: 10.1109/iros.2017.8206357
  • City: Vancouver
  • Country: Canada
  • Page Numbers: pp. 4821-4827
  • Hacettepe University Affiliated: No


Manipulation tasks requiring high precision are difficult for reasons such as imprecise calibration and perceptual inaccuracies. We present a method for visual task outcome verification that provides an assessment of the task status as well as information the robot can use to improve this status. The status of the task is assessed as success, failure, or in progress. We propose a deep learning strategy that learns the task from a small number of training episodes and without requiring the robot. A probabilistic, appearance-based pose estimation method is used to learn the demonstrated task. For real-data efficiency, synthetic training images are created around the trajectory of the demonstrated task. We show that our method estimates the task status with high accuracy across several instances of different tasks, and demonstrate it on a high-precision task with a real robot.
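To make the verification idea concrete, here is a minimal, hypothetical sketch (not the paper's architecture, which uses a deep network and appearance-based pose estimation): a three-class softmax classifier over pose-feature vectors, trained on synthetic samples jittered around nominal poses, analogous to generating synthetic training images around a demonstrated trajectory. All feature values and class centers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
STATUSES = ["success", "failure", "in_progress"]

def synthesize(center, n=200, noise=0.05):
    """Create synthetic training features by perturbing a nominal pose
    (stand-in for rendering synthetic images around the trajectory)."""
    return center + noise * rng.standard_normal((n, center.size))

# Hypothetical nominal pose features for each task outcome.
centers = {
    "success":     np.array([1.0, 0.0, 0.0]),
    "failure":     np.array([0.0, 1.0, 0.0]),
    "in_progress": np.array([0.0, 0.0, 1.0]),
}

X = np.vstack([synthesize(centers[s]) for s in STATUSES])
y = np.repeat(np.arange(3), 200)

# Train a multinomial logistic (softmax) classifier by gradient descent.
W = np.zeros((3, 3))
b = np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(300):
    logits = X @ W.T + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    grad = p - onehot                              # softmax cross-entropy gradient
    W -= 0.5 * (grad.T @ X) / len(X)
    b -= 0.5 * grad.mean(axis=0)

def predict(x):
    """Classify a feature vector as success / failure / in_progress."""
    return STATUSES[int(np.argmax(x @ W.T + b))]

acc = np.mean([predict(xi) == STATUSES[yi] for xi, yi in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
print(predict(np.array([0.95, 0.02, 0.03])))
```

The same train-on-synthetic, classify-into-three-statuses structure carries over when the features are replaced by CNN embeddings of camera images.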