Published on Sun May 20 2018

Learning Real-World Robot Policies by Dreaming

AJ Piergiovanni, Alan Wu, Michael S. Ryoo

We focus on learning a realistic world model capturing the dynamics of scene changes conditioned on robot actions. Our dreaming model can emulate samples equivalent to a sequence of images from the real world. This allows the agent to learn action policies by interacting with the dreaming model rather than the real world.

Abstract

Learning to control robots directly from images is a primary challenge in robotics. However, many existing reinforcement learning approaches require iteratively obtaining millions of robot samples to learn a policy, which can take significant time. In this paper, we focus on learning a realistic world model capturing the dynamics of scene changes conditioned on robot actions. Our dreaming model can emulate samples equivalent to a sequence of images from the actual environment, technically by learning an action-conditioned future representation/scene regressor. This allows the agent to learn action policies (i.e., visuomotor policies) by interacting with the dreaming model rather than the real world. We experimentally confirm that our dreaming model enables robot learning of policies that transfer to the real world.
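For illustration only, the sketch below is a minimal, hypothetical rendering (not the authors' code) of the idea the abstract describes: an action-conditioned regressor predicts the next latent scene representation, so a policy can be trained on "dreamed" rollouts instead of real robot interaction. The encoder that would map camera images to latents, all layer sizes, and all names are assumptions, and the policy-learning objective is omitted.

```python
# Hypothetical sketch of an action-conditioned dreaming model and a policy
# trained by rolling out inside it. All names and dimensions are illustrative.
import torch
import torch.nn as nn

LATENT_DIM, ACTION_DIM = 64, 4

class DreamingModel(nn.Module):
    """Action-conditioned future-representation regressor: (z_t, a_t) -> z_{t+1}."""
    def __init__(self):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(LATENT_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, z, a):
        return self.dynamics(torch.cat([z, a], dim=-1))

class Policy(nn.Module):
    """Visuomotor policy acting on the latent scene representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def dream_rollout(model, policy, z0, horizon=10):
    """Roll the policy forward inside the learned model, with no real-world samples."""
    z, trajectory = z0, []
    for _ in range(horizon):
        a = policy(z)
        z = model(z, a)      # predicted next latent scene representation
        trajectory.append((z, a))
    return trajectory

if __name__ == "__main__":
    model, policy = DreamingModel(), Policy()
    z0 = torch.randn(1, LATENT_DIM)  # stands in for an encoded camera image
    print(len(dream_rollout(model, policy, z0)))
```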

Mon Dec 03 2018
Artificial Intelligence
Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control
Deep reinforcement learning (RL) algorithms can learn complex robotic skills from raw sensory inputs. We present a deep RL method that is practical for robotic manipulation, and generalizes effectively to never-before-seen tasks and objects.
Wed Jul 29 2020
Artificial Intelligence
Dreaming: Model-based Reinforcement Learning by Latent Imagination without Reconstruction
Dreamer is a sample- and cost-efficient solution to robot learning that trains latent state-space models based on a variational autoencoder. This autoencoding approach often causes object vanishing, in which the autoencoder fails to perceive key objects for solving control tasks.
Tue Jul 14 2020
Artificial Intelligence
Goal-Aware Prediction: Learning to Model What Matters
Learned models combined with both planning and policy learning have shown promise in enabling artificial agents to learn to perform diverse tasks with limited supervision. One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
Tue Dec 03 2019
Artificial Intelligence
Dream to Control: Learning Behaviors by Latent Imagination
Mon Dec 21 2020
Artificial Intelligence
Offline Reinforcement Learning from Images with Latent Space Models
The ability to learn directly from rich observation spaces like images is critical for real-world applications such as robotics. We propose to learn a latent-state dynamics model, and represent the uncertainty in the latent space. Our approach is both tractable in practice and corresponds to maximizing a lower bound of the ELBO.
Fri Dec 04 2020
Artificial Intelligence
Planning from Pixels using Inverse Dynamics Models
We propose a novel way to learn task-agnostic dynamics models in high-dimensional observation spaces. These models adaptively focus on task-relevant dynamics, while simultaneously serving as an effective heuristic for planning with sparse rewards.