Published on Sat Feb 27 2016

Significance Driven Hybrid 8T-6T SRAM for Energy-Efficient Synaptic Storage in Artificial Neural Networks

Gopalakrishnan Srinivasan, Parami Wijesinghe, Syed Shakib Sarwar, Akhilesh Jaiswal, Kaushik Roy

Abstract

Multilayered artificial neural networks (ANNs) have found widespread utility in classification and recognition applications. The scale and complexity of such networks, together with the inadequacies of general-purpose computing platforms, have led to significant interest in the development of efficient hardware implementations. In this work, we focus on designing energy-efficient on-chip storage for the synaptic weights. In order to minimize the power consumption of typical digital CMOS implementations of such large-scale networks, the digital neurons can be operated reliably at scaled voltages by reducing the clock frequency. On-chip synaptic storage designed using a conventional 6T SRAM, however, is susceptible to bitcell failures at reduced voltages. The intrinsic error resiliency of neural networks to small synaptic weight perturbations nevertheless enables us to scale the operating voltage of the 6T SRAM. Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled by 200 mV from the nominal operating voltage (950 mV) for practically no loss (less than 0.5%) in accuracy (22 nm predictive technology). Scaling beyond that causes substantial performance degradation owing to the increased probability of failures in the MSBs of the synaptic weights. We therefore propose a significance-driven hybrid 8T-6T SRAM, wherein the sensitive MSBs are stored in 8T bitcells that are robust at scaled voltages due to decoupled read and write paths. To further minimize the area penalty, we present a synaptic-sensitivity-driven hybrid memory architecture consisting of multiple 8T-6T SRAM banks. Our circuit-to-system-level simulation framework shows that the proposed synaptic-sensitivity-driven architecture provides a 30.91% reduction in memory access power with a 10.41% area overhead, for less than 1% loss in classification accuracy.
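For intuition about why the MSBs dominate, the minimal Python sketch below (illustrative only, not the paper's simulation framework) injects random bit flips into a fixed-point synaptic weight and compares the resulting error when the upper bits are protected, as they would be in robust 8T bitcells, versus fully unprotected. The 8-bit unsigned word length and the per-bitcell failure probability are assumptions made for illustration.

# Sketch (not from the paper): compare weight perturbation when the MSBs of a
# fixed-point synaptic weight are protected (8T cells) versus when every bit
# can fail (6T cells at scaled voltage). Word length and failure rate are
# illustrative assumptions.
import random

WORD_BITS = 8          # assumed fixed-point word length
P_FAIL = 0.01          # hypothetical 6T bitcell failure probability at scaled voltage

def inject_failures(weight, protected_msbs):
    # Bits [WORD_BITS-1 .. WORD_BITS-protected_msbs] are treated as 8T cells
    # (never fail); the remaining LSBs are 6T cells that may flip.
    for bit in range(WORD_BITS - protected_msbs):
        if random.random() < P_FAIL:
            weight ^= (1 << bit)
    return weight

random.seed(0)
w = 0b10110101  # example stored weight (181)
for protected in (0, 4):  # no protection vs. 4 MSBs stored in 8T cells
    errs = [abs(inject_failures(w, protected) - w) for _ in range(10000)]
    print(f"protected MSBs = {protected}: mean |error| = {sum(errs) / len(errs):.2f}")

Protecting even the top few bits bounds the worst-case perturbation to the LSB range, which is the intuition behind the significance-driven bit partitioning described above.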

Thu Aug 17 2017
Neural Networks
Power Optimizations in MTJ-based Neural Networks through Stochastic Computing
Stochastic Computing (SC) is an emerging paradigm that replaces conventional arithmetic units with simple logic circuits. Spintronic devices, such as Magnetic Tunnel Junctions (MTJs), are capable of replacing CMOS. We propose approximating the synaptic weights in our MTJ-based NN implementation, exploiting the properties of our stochastic number generator (SNG).
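As a concrete illustration of the general stochastic-computing idea mentioned above (not the paper's MTJ-based SNG circuit), the sketch below multiplies two probabilities encoded as unipolar bitstreams using nothing more than a per-bit AND. The stream length and operand values are arbitrary assumptions.

# Sketch of unipolar stochastic-computing multiplication: two probabilities
# encoded as random bitstreams are multiplied by a single AND gate per bit.
import random

def to_bitstream(p, length, rng):
    # Encode probability p in [0, 1] as a unipolar stochastic bitstream.
    return [1 if rng.random() < p else 0 for _ in range(length)]

rng = random.Random(42)
length = 4096           # assumed bitstream length
a, b = 0.75, 0.5        # example operands
product_stream = [x & y for x, y in zip(to_bitstream(a, length, rng),
                                        to_bitstream(b, length, rng))]
estimate = sum(product_stream) / length
print(f"exact product = {a * b}, stochastic estimate = {estimate:.3f}")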
Thu Oct 12 2017
Neural Networks
STDP Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy Efficient Recognition
Spiking Neural Networks (SNNs) with a large number of weights can be difficult to implement in emerging in-memory computing hardware. We present a sparse SNN topology where non-critical connections are pruned to reduce the network size. The remaining critical synapses are weight-quantized to accommodate the limited conductance levels.
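The paper identifies critical connections via STDP; as a loose, purely illustrative stand-in, the sketch below prunes a weight matrix by magnitude and quantizes the surviving synapses to a small number of conductance levels. The keep fraction, level count, and matrix size are assumptions, not values from the paper.

# Sketch of prune-then-quantize on a synaptic weight matrix. Magnitude-based
# pruning and uniform quantization stand in for the paper's STDP-based
# criticality measure; all parameters below are illustrative.
import numpy as np

def prune_and_quantize(weights, keep_fraction=0.3, levels=8):
    # Zero out the smallest-magnitude weights, then quantize the survivors
    # to a limited number of conductance levels.
    threshold = np.quantile(np.abs(weights).ravel(), 1.0 - keep_fraction)
    mask = np.abs(weights) >= threshold
    pruned = weights * mask
    step = (np.abs(pruned).max() or 1.0) / (levels - 1)
    quantized = np.round(pruned / step) * step
    return quantized, mask

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64))
wq, mask = prune_and_quantize(w)
print(f"kept {mask.mean():.1%} of synapses, {np.unique(np.abs(wq)).size} magnitude levels")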
Wed Jun 13 2018
Neural Networks
Exploiting Inherent Error-Resiliency of Neuromorphic Computing to achieve Extreme Energy-Efficiency through Mixed-Signal Neurons
Neuromorphic computing, inspired by the brain, promises extreme energy efficiency. Digital neurons are conventionally accurate and efficient at high speed, whereas analog/mixed-signal neurons are prone to variability and mismatch. The proposed mixed-signal neuron (MS-N) is implemented in 65 nm CMOS technology.
Sat Nov 21 2020
Neural Networks
On-Chip Error-triggered Learning of Multi-layer Memristive Spiking Neural Networks
Local forms of gradient descent learning are compatible with Spiking Neural Networks (SNNs). The proposed algorithm enables online training of multi-layer SNNs with memristive hardware.
Tue Apr 13 2021
Neural Networks
An Adaptive Synaptic Array using Fowler-Nordheim Dynamic Analog Memory
Thu Aug 29 2019
Neural Networks
An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM
Memristor-based weight pruning and weight quantization have been separately investigated and proven effective in reducing area and power consumption compared to the original DNN model. We consider hardware constraints such as crossbar block pruning, conductance range, and the mismatch between weight values and real device values, to achieve high accuracy and low power.