Published on Sun Feb 28 2021

Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks

Shikuang Deng, Shi Gu

Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs) that comprise spiking neurons to process asynchronous signals. SNNs are more efficient in power consumption and inference speed on neuromorphic hardware, but they are difficult to train from scratch.

Abstract

Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs) that comprise spiking neurons to process asynchronous discrete signals. While more efficient in power consumption and inference speed on neuromorphic hardware, SNNs are usually difficult to train directly from scratch with spikes due to their discreteness. As an alternative, many efforts have been devoted to converting conventional ANNs into SNNs by copying the weights from the ANNs and adjusting the spiking threshold potential of neurons in the SNNs. Researchers have designed new SNN architectures and conversion algorithms to diminish the conversion error. However, an effective conversion should address the difference between the SNN and ANN architectures with an efficient approximation of the loss function, which is missing in the field. In this work, we analyze the conversion error by recursive reduction to layer-wise summation and propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balance and soft-reset mechanisms. This pipeline enables almost no accuracy loss between the converted SNNs and conventional ANNs with only a fraction of the typical SNN simulation time. Our method is promising for deployment on embedded platforms that support SNNs under limited energy and memory budgets.
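To make the conversion concrete, here is a minimal sketch in Python/NumPy of the two mechanisms the pipeline combines (the function names and calibration scheme are illustrative assumptions, not the authors' released code): a spiking threshold balanced to the source ANN's activations, and an integrate-and-fire layer with a soft reset that subtracts the threshold instead of zeroing the membrane potential.

```python
import numpy as np

def balance_threshold(ann_activations):
    """Threshold balancing (illustrative): set the spiking threshold to the
    maximum activation the ANN layer produced on calibration data, so spike
    rates can cover the full activation range without saturating."""
    return float(np.max(ann_activations))

def if_layer_soft_reset(inputs, weight, threshold, timesteps):
    """Integrate-and-fire layer with soft reset.

    inputs:  (timesteps, in_dim) input spikes/currents per step
    weight:  (in_dim, out_dim) weights copied from the trained ANN
    Returns per-step binary spikes of shape (timesteps, out_dim).
    """
    v = np.zeros(weight.shape[1])           # membrane potentials
    spikes = np.zeros((timesteps, weight.shape[1]))
    for t in range(timesteps):
        v += inputs[t] @ weight             # integrate weighted input
        fired = v >= threshold              # fire where threshold is crossed
        spikes[t] = fired.astype(float)
        v -= threshold * fired              # soft reset: subtract, don't zero,
                                            # so residual potential is kept
    return spikes
```

Under rate coding, the spike rate multiplied by the threshold approximates the ANN layer's ReLU output; keeping the residual potential across timesteps, rather than discarding it on each spike, is intuitively what keeps the layer-wise conversion error small.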

Sun Mar 22 2020
Neural Networks
An Efficient Software-Hardware Design Framework for Spiking Neural Network Systems
Spiking Neural Networks (SNNs) are the third generation of neural networks. By processing binary inputs and outputs, SNNs offer lower complexity, higher density, and lower power consumption.
Wed Jun 03 2020
Neural Networks
You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy
Spiking Neural Networks (SNNs) have gained significant interest as power-efficient alternatives to Artificial Neural Networks. The vast majority of neuromorphic hardware designs support rate-encoded SNNs, whereas Time-To-First-Spike (TTFS) encoding carries information in the relative arrival times of spikes.
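For contrast with rate coding, TTFS encoding can be sketched in a few lines (a toy illustration, not that paper's implementation; `ttfs_encode` and its normalization to [0, 1] are assumptions): each neuron emits a single spike whose timing carries the value, with larger values firing earlier.

```python
import numpy as np

def ttfs_encode(values, t_max):
    """Toy Time-To-First-Spike encoding: map each normalized value in [0, 1]
    to a single spike time; larger values spike earlier."""
    values = np.clip(values, 0.0, 1.0)
    return np.round((1.0 - values) * t_max).astype(int)  # value 1.0 -> time 0

print(ttfs_encode(np.array([0.0, 0.5, 1.0]), t_max=10))  # [10  5  0]
```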
Sun Sep 16 2018
Neural Networks
Direct Training for Spiking Neural Networks: Faster, Larger, Better
Spiking neural networks (SNNs) that enable energy-efficient implementation on emerging neuromorphic hardware are gaining more attention, but they have not shown performance competitive with artificial neural networks, owing to the lack of effective learning algorithms and efficient programming frameworks.
Tue Sep 24 2019
Neural Networks
Temporal-Coded Deep Spiking Neural Network with Easy Training and Robust Performance
Spiking neural networks (SNNs) are interesting both theoretically and practically because of their strongly bio-inspired nature. Unfortunately, their development has fallen far behind that of conventional deep neural networks. This paper demonstrates that a deep temporal-coded SNN is feasible for high-performance applications.
Sun Jul 25 2021
Neural Networks
H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks
H2Learn is a novel architecture that achieves high efficiency for BPTT-based SNN learning. Compared with a modern NVIDIA V100 GPU, H2Learn achieves 7.38x area saving, 5.74-10.20x speedup, and 5.25-7.12x energy saving.
Fri Jul 17 2020
Neural Networks
FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks
Spiking Neural Networks (SNNs) are gaining interest due to their event-driven processing. FSpiNN is an optimization framework for obtaining memory- and energy-efficient SNNs; it lowers computational requirements by reducing the number of neuronal operations.