Published on Sat Jan 23 2021

Autoregressive Belief Propagation for Decoding Block Codes

Eliya Nachmani, Lior Wolf
Abstract

We revisit recent methods that employ graph neural networks for decoding error-correcting codes, and compute the messages in an autoregressive manner. The outgoing messages of the variable nodes are conditioned not only on the incoming messages, but also on an estimate of the SNR, on the inferred codeword, and on two downstream computations: (i) an extended vector of parity-check outcomes, and (ii) the mismatch between the inferred codeword and the re-encoding of its information bits. Unlike most learned methods in the field, our method violates the symmetry conditions that allow other methods to train exclusively on the zero-word. Despite not having the luxury of training on a single word, and despite being able to train on only a small fraction of the relevant sample space, we demonstrate effective training. The new method obtains a bit error rate that outperforms the latest methods by a sizable margin.
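The two downstream conditioning signals (i) and (ii) can be illustrated concretely. The sketch below is an assumption-laden toy, not the paper's implementation: it uses a systematic (7,4) Hamming code, and the matrices `H`, `G` and the function name `conditioning_features` are chosen here for illustration only.

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I].
# These matrices and names are illustrative assumptions, not from the paper.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def conditioning_features(hard_decision):
    """Return the two extra signals for a hard-decided codeword (0/1 vector)."""
    # (i) parity-check outcomes: the syndrome H x^T mod 2
    # (all-zero iff the hard decision is a valid codeword).
    syndrome = H.dot(hard_decision) % 2
    # (ii) re-encode the information bits (systematic positions 0..3)
    # and flag positions where re-encoding disagrees with the inferred word.
    re_encoded = hard_decision[:4].dot(G) % 2
    mismatch = (re_encoded != hard_decision).astype(int)
    return syndrome, mismatch
```

For a valid codeword both signals are all-zero; a single flipped bit makes the syndrome nonzero, which is exactly the kind of feedback the decoder can condition its next round of messages on.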

Sat Jul 16 2016
Neural Networks
Learning to Decode Linear Codes Using Deep Learning
The method generalizes the standard belief propagation algorithm. It assigns weights to the edges of the Tanner graph. These edges are then trained using deep learning techniques.
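The weighted-BP idea described above can be sketched in a few lines. The sketch assumes log-likelihood-ratio (LLR) messages on a Tanner graph; the function and parameter names are illustrative, and the weight vector is the part that would be learned.

```python
import numpy as np

# Minimal sketch of one weighted variable-to-check update (assumed form):
# the message to check node `exclude` is the channel LLR plus a weighted
# sum of all other incoming check-to-variable messages. The weights `w`
# replace the implicit all-ones weights of standard belief propagation.
def variable_to_check(llr, check_msgs, w, exclude):
    mask = np.ones(len(check_msgs), dtype=bool)
    mask[exclude] = False  # leave out the edge we are sending on
    return llr + np.sum(w[mask] * check_msgs[mask])
```

With all weights set to 1.0 this reduces to the standard BP update, which is the sense in which the learned decoder generalizes belief propagation.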
Tue Jan 21 2020
Machine Learning
Pruning Neural Belief Propagation Decoders
We consider near maximum-likelihood (ML) decoding of short linear block codes based on neural belief propagation (BP) decoding. While this method significantly outperforms conventional BP decoding, the underlying parity-check matrix may still limit the overall performance.
Thu Jan 24 2019
Artificial Intelligence
Learned Belief-Propagation Decoding with Simple Scaling and SNR Adaptation
We consider the weighted belief-propagation (WBP) decoder recently proposed by Nachmani et al. We also investigate parameter adapter networks (PANs) that learn the relation between the signal-to-noise ratio and WBP parameters.
Mon Jan 08 2018
Neural Networks
Near Maximum Likelihood Decoding with Deep Learning
A novel and efficient neural decoder algorithm is proposed. The proposed decoder is based on the neural Belief Propagation algorithm. We demonstrate the decoding algorithm for various linear block codes of length up to 63 bits.
Wed Mar 04 2020
Machine Learning
Neural Enhanced Belief Propagation on Factor Graphs
A graphical model is a structured representation of locally dependent random variables. A traditional way to reason over these random variables is to perform inference using belief propagation. We propose a new hybrid model that runs a factor-graph graph neural network (FG-GNN) conjointly with belief propagation.
Thu Sep 05 2019
Machine Learning
Hyper-Graph-Network Decoders for Block Codes
Neural decoders were shown to outperform classical message passing techniques for short BCH codes. In this work, we extend these results to much larger families of algebraic block codes, by performing message passing with graph neural networks.