Published on Thu Apr 12 2018

Combating catastrophic forgetting with developmental compression

Shawn L. E. Beaulieu, Sam Kriegman, Josh C. Bongard

Abstract

Generally intelligent agents exhibit successful behavior across problems in several settings. Endemic in approaches to realize such intelligence in machines is catastrophic forgetting: sequential learning corrupts knowledge obtained earlier in the sequence, or tasks antagonistically compete for system resources. Methods for obviating catastrophic forgetting have sought to identify and preserve features of the system necessary to solve one problem when learning to solve another, or to enforce modularity such that minimally overlapping sub-functions contain task specific knowledge. While successful, both approaches scale poorly because they require larger architectures as the number of training instances grows, causing different parts of the system to specialize for separate subsets of the data. Here we present a method for addressing catastrophic forgetting called developmental compression. It exploits the mild impacts of developmental mutations to lessen adverse changes to previously-evolved capabilities and `compresses' specialized neural networks into a generalized one. In the absence of domain knowledge, developmental compression produces systems that avoid overt specialization, alleviating the need to engineer a bespoke system for every task permutation and suggesting better scalability than existing approaches. We validate this method on a robot control problem and hope to extend this approach to other machine learning domains in the future.
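The abstract names the two ingredients of developmental compression only at a high level: weights that change gradually over an agent's lifetime, and a pressure that merges task-specialized parameters into one generalized set. The Python sketch below is an assumption-laden toy meant only to make those ideas concrete; the class name, the linear start-to-end weight schedule, and the averaging-based compress step are illustrative choices, not the authors' published operators.

```python
import numpy as np

# Toy sketch of the two ideas named in the abstract (illustrative only):
# (1) weights that develop from a start value to an end value over a lifetime,
# (2) a compression pressure that merges task-specialized weights into one set.

class DevelopmentalController:
    """Controller whose weights interpolate from w_start to w_end over a trial."""

    def __init__(self, w_start, w_end):
        self.w_start = np.asarray(w_start, dtype=float)
        self.w_end = np.asarray(w_end, dtype=float)

    def weights_at(self, t, lifetime):
        """Weights in effect at time step t of a trial lasting `lifetime` steps."""
        alpha = t / max(lifetime - 1, 1)
        return (1.0 - alpha) * self.w_start + alpha * self.w_end

    def mutate_end(self, sigma=0.1, rng=None):
        """Perturb one end-of-development weight; early-life behavior is
        barely affected, so previously acquired competence is largely spared."""
        rng = rng or np.random.default_rng()
        child = DevelopmentalController(self.w_start.copy(), self.w_end.copy())
        i = rng.integers(child.w_end.size)
        child.w_end[i] += rng.normal(0.0, sigma)
        return child


def compress(w_task_a, w_task_b, pressure=0.05):
    """Nudge two task-specialized weight vectors toward their mean; repeated
    application collapses them into a single generalized weight set."""
    mean = 0.5 * (w_task_a + w_task_b)
    return (w_task_a + pressure * (mean - w_task_a),
            w_task_b + pressure * (mean - w_task_b))
```

In this toy, mutating only the end-of-development weights leaves early-life behavior nearly untouched, which is the intuition behind using the mild impact of developmental mutations to protect previously evolved capabilities while the compression pressure folds specialists into a single generalist.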

Related papers

Mon Mar 29 2021
Neural Networks
Self-Constructing Neural Networks Through Random Mutation
Wed May 16 2018
Machine Learning
Progress & Compress: A scalable framework for continual learning
We introduce a conceptually simple and scalable framework for continual learning domains where tasks are learned sequentially. We demonstrate the progress & compress approach on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
Sun Nov 28 2010
Neural Networks
DXNN Platform: The Shedding of Biological Inefficiencies
This paper introduces a novel type of memetic-algorithm-based Topology and Weight Evolving Artificial Neural Network (TWEANN) system called DX Neural Network (DXNN). DXNN implements a number of interesting features, amongst which is a simple and database-friendly, tuple-based encoding method. The paper discusses DXNN's architecture.
Mon Jan 30 2017
Neural Networks
PathNet: Evolution Channels Gradient Descent in Super Neural Networks
For artificial general intelligence (AGI), it would be efficient if multiple users trained the same giant neural network. PathNet is a first step in this direction. It uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks.
Mon May 17 2021
Neural Networks
Evolutionary Training and Abstraction Yields Algorithmic Generalization of Neural Computers
A key feature of intelligent behaviour is the ability to learn abstract strategies that scale and transfer to unfamiliar problems. We present the Neural Harvard Computer (NHC), a memory-augmented, network-based architecture.
Sat Aug 14 2010
Neural Networks
Discover & eXplore Neural Network (DXNN) Platform, a Modular TWEANN
Modular Discover & eXplore Neural Network (DXNN) is a novel type of Topology and Weight Evolving Artificial Neural Network (TWEANN) system. DXNN utilizes a hierarchical/modular topology that allows highly scalable and dynamically granular systems to evolve.