Published on Mon Mar 07 2016

Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection

Sergey Levine, Peter Pastor, Alex Krizhevsky, Deirdre Quillen


Abstract

We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
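The servoing described in the abstract can be sketched as follows: a trained network scores candidate task-space gripper motions from the current camera image, and at each control step the robot picks a motion that maximizes the predicted success probability. The sketch below is illustrative only, assuming a simple cross-entropy-method search over 3-D motion vectors; `grasp_success_probability` is a hypothetical stand-in for the trained CNN, not the paper's actual model.

```python
import numpy as np

def grasp_success_probability(image, motion):
    # Hypothetical stand-in for the trained CNN g(image, motion).
    # Here: a dummy score peaked at one fixed "good" motion, for illustration.
    target = np.array([0.05, -0.02, -0.10])
    return float(np.exp(-np.sum((motion - target) ** 2) / 0.01))

def servo_step(image, n_samples=64, n_elite=6, iters=3, seed=0):
    """Choose a task-space gripper motion by iteratively sampling candidates,
    keeping the highest-scoring ones, and refitting the sampling distribution
    (a cross-entropy-method style search over the network's predictions)."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(3), 0.1 * np.ones(3)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(n_samples, 3))
        scores = np.array([grasp_success_probability(image, v) for v in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]       # best candidates
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

motion = servo_step(image=None)
```

Running this repeatedly, one step per camera frame, is what lets the controller correct mistakes by continuous servoing: each new image re-scores the candidate motions before the gripper commits.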

Thu Nov 24 2016
Computer Vision
Robotic Grasp Detection using Deep Convolutional Neural Networks
Deep learning has significantly advanced computer vision and natural language processing. While there have been some successes in robotics using deep learning, it has not been widely adopted.
Sat Jun 09 2018
Machine Learning
Learning to Grasp from a Single Demonstration
Learning-based approaches to robotic grasping with visual sensors typically require collecting a large dataset. We propose a simpler learning-from-demonstration approach that can detect the object to grasp from merely a single demonstration using a convolutional network.
Tue Dec 09 2014
Computer Vision
Real-Time Grasp Detection Using Convolutional Neural Networks
We present an accurate, real-time approach to robotic grasp detection based on convolutional neural networks. The model outperforms state-of-the-art approaches by 14 percent.
Thu Nov 05 2020
Computer Vision
Improving Robotic Grasping on Monocular Images Via Multi-Task Learning and Positional Loss
Sun Sep 16 2018
Computer Vision
Real-Time, Highly Accurate Robotic Grasp Detection using Fully Convolutional Neural Networks with High-Resolution Images
Robotic grasp detection for novel objects is a challenging task. For the last few years, deep learning based approaches have achieved remarkable performance improvements.
Mon May 28 2018
Machine Learning
More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch
For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work has been based only on visual input. We propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data.