Published on Thu Sep 24 2020

Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs

Hung Le, Svetha Venkatesh
Abstract

Artificial Neural Networks are uniquely adroit at machine learning by processing data through a network of artificial neurons. The inter-neuronal connection weights represent the learnt Neural Program that instructs the network on how to process the data. However, without an external memory to store Neural Programs, they are restricted to only one program, overwriting learnt programs when trained on new data. This is functionally equivalent to a special-purpose computer. Here we design Neurocoder, an entirely new class of general-purpose conditional computational machines in which the neural network "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs. This can be considered analogous to building Lego structures from simple Lego bricks. Notably, our bricks change their shape through learning. External memory is used to create, store and retrieve modular programs. Like today's stored-program computers, Neurocoder can now access diverse programs to process different data. Unlike manually crafted computer programs, Neurocoder creates programs through training. Integrating Neurocoder into current neural architectures, we demonstrate new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt, and show substantial performance improvements in solving object recognition, playing video games and continual learning tasks. Such integration increases the computational capability of any current neural network and endows it with entirely new capacity to reuse simple programs to build complex ones. For the first time a Neural Program is treated as a datum in memory, paving the way for modular, recursive and procedural neural programming.
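
The abstract describes the core mechanism: rather than fixing one weight matrix per layer, the network attends over a bank of stored, shareable program matrices and composes a data-dependent program for each input. Below is a minimal PyTorch sketch of that idea; the class name, shapes, and the simple softmax-weighted composition are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch of program composition: a layer attends over a bank of
# stored "program" weight matrices and builds a per-input weight matrix.
# All names and dimensions here are illustrative, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgramComposingLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_programs):
        super().__init__()
        # Bank of modular programs: each slot stores a full weight matrix.
        self.programs = nn.Parameter(
            torch.randn(num_programs, out_dim, in_dim) * 0.02)
        # Query network: maps the input to attention scores over program slots.
        self.query = nn.Linear(in_dim, num_programs)

    def forward(self, x):                        # x: (batch, in_dim)
        attn = F.softmax(self.query(x), dim=-1)  # (batch, num_programs)
        # Compose a per-example weight matrix as a mixture of stored programs.
        w = torch.einsum('bp,poi->boi', attn, self.programs)
        return torch.einsum('boi,bi->bo', w, x)  # (batch, out_dim)

layer = ProgramComposingLayer(in_dim=16, out_dim=8, num_programs=4)
y = layer(torch.randn(32, 16))  # -> (32, 8)
```

Because the attention weights depend on the input, different data retrieve different program mixtures, which is the sense in which the network "codes" itself in a data-responsive way.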

Sat May 25 2019
Neural Networks
Neural Stored-program Memory
Neural networks augmented with external memory can simulate computer behaviors. These models, which use the memory to store data for a neural controller, can learn complex algorithms.
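
As a hedged sketch of the basic operation such memory-augmented models rely on, here is content-based memory reading in PyTorch: a controller-produced key is matched against memory slots, and the read result is an attention-weighted sum over the memory. Names and shapes are illustrative assumptions.

```python
# Content-based memory read: score each slot against a key, soft-address
# via softmax, and return the weighted sum of slot contents.
import torch
import torch.nn.functional as F

def content_read(memory, key):
    # memory: (slots, width); key: (width,)
    scores = F.cosine_similarity(memory, key.unsqueeze(0), dim=-1)  # (slots,)
    weights = F.softmax(scores, dim=-1)  # soft address over slots
    return weights @ memory              # (width,)

memory = torch.randn(64, 32)  # 64 slots, each 32 wide
key = torch.randn(32)
read_vector = content_read(memory, key)
```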
Wed May 06 2020
Neural Networks
Do we know the operating principles of our computers better than those of our brain?
Mon Sep 17 2018
Neural Networks
Self Configuration in Machine Learning
The algorithm is based on the fact that for any layer to be trained, the effect of a direct connection to an optimized linear output layer can be computed without the connection being made. The simplicity of this training arrangement allows the activation function and step size in weight adjustment to be self-adjusting.
Sat Nov 11 2017
Artificial Intelligence
Building machines that adapt and compute like brains
Building machines that learn and think like humans is essential not only for cognitive science, but also for computational neuroscience. A new computational neuroscience should build cognitive-level and neural-level models, understand their relationships, and test both types of models.
Mon May 17 2021
Neural Networks
Evolutionary Training and Abstraction Yields Algorithmic Generalization of Neural Computers
A key feature of intelligent behaviour is the ability to learn abstract strategies that scale and transfer to unfamiliar problems. We present the Neural Harvard Computer (NHC), a memory-augmented network architecture.
Wed Aug 05 2015
Neural Networks
INsight: A Neuromorphic Computing System for Evaluation of Large Neural Networks
Deep neural networks have demonstrated impressive results in various cognitive tasks. In order to execute large networks, von Neumann computers store the large number of weight parameters in external memories. This leads to power-hungry I/O operations and processing bottlenecks.