Published on Sun May 10 2020

Accelerating Deep Neuroevolution on Distributed FPGAs for Reinforcement Learning Problems

Alexis Asseman, Nicolas Antoine, Ahmet S. Ozcan

We report record training times for Atari 2600 games using deep neuroevolution implemented on distributed FPGAs. The acceleration was enabled by a combined hardware implementation of the game console, image pre-processing, and the neural network in an optimized pipeline, multiplied by system-level parallelism.

Abstract

Reinforcement learning, augmented by the representational power of deep neural networks, has shown promising results on high-dimensional problems, such as game playing and robotic control. However, the sequential nature of these problems poses a fundamental challenge for computational efficiency. Recently, alternative approaches such as evolutionary strategies and deep neuroevolution demonstrated competitive results with faster training time on distributed CPU cores. Here, we report record training times (running at about 1 million frames per second) for Atari 2600 games using deep neuroevolution implemented on distributed FPGAs. The acceleration was enabled by a combined hardware implementation of the game console, image pre-processing, and the neural network in an optimized pipeline, multiplied by system-level parallelism. These results are the first application demonstration on the IBM Neural Computer, which is a custom-designed system that consists of 432 Xilinx FPGAs interconnected in a 3D mesh network topology. In addition to high performance, experiments also showed improvement in accuracy for all games compared to the CPU implementation of the same algorithm.
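For readers unfamiliar with deep neuroevolution, the sketch below illustrates the core idea behind the gradient-free, population-based training the abstract refers to: a population of policy networks is evaluated on episodes, the fittest individuals are kept, and new candidates are produced by adding Gaussian noise to elite parameters. This is a minimal, hypothetical illustration only; the paper's actual pipeline evaluates Atari 2600 games, pre-processing, and the network directly in FPGA hardware, whereas this sketch assumes a small NumPy policy and a CartPole stand-in environment from Gymnasium. All names and hyperparameters here (POP_SIZE, SIGMA, etc.) are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of simple-GA deep neuroevolution (not the paper's FPGA code).
import numpy as np
import gymnasium as gym

ENV_ID = "CartPole-v1"                      # lightweight stand-in for an Atari game
POP_SIZE, N_ELITE, SIGMA, N_GENS = 64, 8, 0.05, 20

def make_policy(obs_dim, n_actions, rng):
    # One hidden layer; all parameters stored as a single flat vector.
    w1 = rng.standard_normal((obs_dim, 32)) * 0.1
    w2 = rng.standard_normal((32, n_actions)) * 0.1
    return np.concatenate([w1.ravel(), w2.ravel()])

def act(theta, obs, obs_dim, n_actions):
    w1 = theta[:obs_dim * 32].reshape(obs_dim, 32)
    w2 = theta[obs_dim * 32:].reshape(32, n_actions)
    return int(np.argmax(np.tanh(obs @ w1) @ w2))

def evaluate(theta, obs_dim, n_actions, seed=0):
    # Fitness = total episode reward for a fixed seed.
    env = gym.make(ENV_ID)
    obs, _ = env.reset(seed=seed)
    total, done = 0.0, False
    while not done:
        obs, r, terminated, truncated, _ = env.step(act(theta, obs, obs_dim, n_actions))
        total += r
        done = terminated or truncated
    env.close()
    return total

def main():
    rng = np.random.default_rng(0)
    probe = gym.make(ENV_ID)
    obs_dim = probe.observation_space.shape[0]
    n_actions = probe.action_space.n
    probe.close()

    pop = [make_policy(obs_dim, n_actions, rng) for _ in range(POP_SIZE)]
    for gen in range(N_GENS):
        fitness = np.array([evaluate(t, obs_dim, n_actions) for t in pop])
        elites = [pop[i] for i in np.argsort(fitness)[-N_ELITE:]]
        print(f"gen {gen}: best={fitness.max():.1f} mean={fitness.mean():.1f}")
        # Elitism: carry over the best individual, fill the rest with mutated elites.
        pop = [elites[-1]]
        while len(pop) < POP_SIZE:
            parent = elites[rng.integers(N_ELITE)]
            pop.append(parent + SIGMA * rng.standard_normal(parent.shape))

if __name__ == "__main__":
    main()
```

In the paper's setting, the per-individual `evaluate` step is the expensive part, which is why implementing the console, pre-processing, and network as a hardware pipeline, replicated across 432 FPGAs, yields the reported throughput.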

Tue Jul 03 2018
Artificial Intelligence
Human-level performance in first-person multiplayer games with population-based deep reinforcement learning
The real world contains multiple agents, each learning and acting independently. We demonstrate for the first time that an agent can achieve human-level performance in a popular 3D multiplayer first-person video game.
Sun Sep 18 2016
Artificial Intelligence
Playing FPS Games with Deep Reinforcement Learning
This is the first architecture to tackle 3D environments in first-person shooter games. We show that the proposed architecture substantially outperforms built-in AI as well as humans.
Wed Aug 16 2017
Artificial Intelligence
StarCraft II: A New Challenge for Reinforcement Learning
This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the StarCraft II game. This domain poses a new grand challenge for reinforcement learning. We describe the observation, action, and reward specification.
Wed Jun 06 2018
Neural Networks
Deep Reinforcement Learning for General Video Game AI
The General Video Game AI (GVGAI) competition and its associated software framework provide a way of benchmarking AI algorithms. Using this interface, we characterize how widely used implementations of several deep reinforcement learning algorithms fare on a number of GVGAI games.
Sun Apr 26 2020
Neural Networks
Warm-Start AlphaZero Self-Play Search Enhancements
AlphaZero is a large and complicated system with many parameters, and success requires much compute power and fine-tuning. We propose a novel approach to deal with this cold-start problem by employing simple search enhancements.
Sat Jan 23 2021
Artificial Intelligence
Deep Learning for General Game Playing with Ludii and Polygames
This paper describes the implementation of a bridge between Ludii and Polygames. Ludii is a general game system that already contains over 500 different games. Polygames is a framework with training and search algorithms, which has already produced superhuman players for several board games.