We propose a new sample-efficient methodology, called Supervised Policy
Update (SPU), for deep reinforcement learning. Starting with data generated by
the current policy, SPU formulates and solves a constrained optimization
problem in the non-parameterized proximal policy space. Using supervised
regression, it then converts the optimal non-parameterized policy to a
parameterized policy, from which it draws new samples. The methodology is
general in that it applies to both discrete and continuous action spaces, and
can handle a wide variety of proximity constraints for the non-parameterized
optimization problem. We show how the Natural Policy Gradient and Trust Region
Policy Optimization (NPG/TRPO) problems, and the Proximal Policy Optimization
(PPO) problem, can be addressed by this methodology. The SPU implementation is
much simpler than that of TRPO. In terms of sample efficiency, our extensive
experiments show that SPU outperforms TRPO in MuJoCo simulated robotic tasks
and outperforms PPO in Atari video game tasks.
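To make the two-step structure concrete, below is a minimal sketch for a
discrete action space. It assumes one common form of the non-parameterized
step, the exponentiated-advantage solution of a KL-constrained objective,
followed by supervised regression onto the target; the network architecture,
the temperature lam, and the helper spu_update are illustrative assumptions,
not the paper's implementation.

```python
# Illustrative SPU-style update: (1) form the optimal non-parameterized policy
# in the proximal region around the current policy, (2) fit the parameterized
# policy to it by supervised regression. Hyperparameters are placeholders.
import numpy as np
import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4
lam = 1.0  # temperature implied by the KL proximity constraint (assumed)

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                       nn.Linear(64, n_actions))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def spu_update(states, advantages, n_fit_epochs=10):
    """states: (N, obs_dim); advantages: (N, n_actions) estimates from
    data generated by the current policy."""
    states = torch.as_tensor(states, dtype=torch.float32)
    adv = torch.as_tensor(advantages, dtype=torch.float32)
    with torch.no_grad():
        old_probs = torch.softmax(policy(states), dim=-1)
        # Step 1: non-parameterized target policy for a KL-constrained
        # objective: target(a|s) proportional to old(a|s) * exp(A(s,a)/lam).
        target = old_probs * torch.exp(adv / lam)
        target = target / target.sum(dim=-1, keepdim=True)
    # Step 2: supervised regression -- minimize KL(target || pi_theta),
    # i.e. cross-entropy against the fixed per-state target distribution.
    for _ in range(n_fit_epochs):
        log_probs = torch.log_softmax(policy(states), dim=-1)
        loss = -(target * log_probs).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Toy usage: random arrays stand in for rollout states and advantage
# estimates; in practice both come from samples drawn by the current policy.
spu_update(np.random.randn(256, obs_dim).astype(np.float32),
           np.random.randn(256, n_actions).astype(np.float32))
```

After the regression step, new samples are drawn from the updated
parameterized policy and the loop repeats; different proximity constraints
change only how the Step 1 target is computed.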