Published on Wed Mar 08 2017

Robust Adversarial Reinforcement Learning

Lerrel Pinto, James Davidson, Rahul Sukthankar, Abhinav Gupta

Deep neural networks and fast simulation have led to recent successes in the field of reinforcement learning. However, most current RL-based approaches fail to generalize because the gap between simulation and the real world is so large. This paper proposes robust adversarial reinforcement learning (RARL), in which an agent is trained in the presence of a destabilizing adversary that applies disturbance forces to the system.

Abstract

Deep neural networks coupled with fast simulation and improved computation have led to recent successes in the field of reinforcement learning (RL). However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and the real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning is done in the real world, the data scarcity leads to failed generalization from training to test scenarios (e.g., due to different friction or object masses). Inspired by H-infinity control methods, we note that both modeling errors and differences in training and test scenarios can be viewed as extra forces/disturbances in the system. This paper proposes the idea of robust adversarial reinforcement learning (RARL), where we train an agent to operate in the presence of a destabilizing adversary that applies disturbance forces to the system. The jointly trained adversary is reinforced -- that is, it learns an optimal destabilization policy. We formulate the policy learning as a zero-sum, minimax objective function. Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah, Swimmer, Hopper and Walker2d) conclusively demonstrate that our method (a) improves training stability; (b) is robust to differences in training/test conditions; and (c) outperforms the baseline even in the absence of the adversary.
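The core of the method is this zero-sum, two-player formulation. As a minimal sketch (with mu denoting the protagonist's policy, nu the adversary's policy, and r the protagonist's reward; this notation is assumed here rather than quoted from the paper), the protagonist maximizes and the adversary minimizes the same cumulative reward, so the adversary's reward is simply the negation of the protagonist's:

\max_{\mu} \; \min_{\nu} \; \mathbb{E}\left[ \sum_{t=0}^{T-1} r\big(s_t, a_t^{\mu}, a_t^{\nu}\big) \right], \qquad a_t^{\mu} \sim \mu(\cdot \mid s_t), \quad a_t^{\nu} \sim \nu(\cdot \mid s_t)

In practice the two policies are optimized in alternation: the protagonist is updated while the adversary is held fixed, and then the adversary is updated while the protagonist is held fixed, with each update performed by a standard policy-gradient method.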

Tue Sep 03 2019
Artificial Intelligence
Generalization in Transfer Learning
Generalization and overfitting in deep reinforcement learning are not commonly addressed in current transfer learning research. Conducting a comparative analysis without an intermediate regularization step results in underperforming benchmarks and inaccurate comparisons. We propose regularization techniques for continuous control through sample elimination, early stopping, and maximum entropy regularized adversarial learning.
Mon Dec 11 2017
Artificial Intelligence
Robust Deep Reinforcement Learning with Adversarial Attacks
This paper proposes adversarial attacks for Reinforcement Learning (RL). These attacks are then leveraged during training to improve the robustness of RL within a robust control framework.
Sat May 25 2019
Artificial Intelligence
Adversarial Policies: Attacking Deep Reinforcement Learning
Deep reinforcement learning (RL) policies are known to be vulnerable to perturbations of their observations. However, an attacker is usually not able to directly modify another agent's observations. We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots.
Sat Jan 26 2019
Machine Learning
Action Robust Reinforcement Learning and Applications in Continuous Control
A policy is said to be robust if it maximizes the reward while considering a bad, or even adversarial, model. We show that our criteria are related to common forms of uncertainty in robotics domains, such as the occurrence of abrupt forces.
Tue Aug 08 2017
Artificial Intelligence
Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning
Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but they typically require a very large number of samples to achieve good performance. This work shows that medium-sized neural network models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm.
Fri Aug 21 2020
Machine Learning
Adversarial Imitation Learning via Random Search
Imitation learning learns a policy directly from data on the behavior of experts, without the explicit reward signal provided by the environment. The proposed method performs simple random search in the parameter space of policies and is computationally efficient.