Published on Wed Jun 13 2018

Polynomial Regression As an Alternative to Neural Nets

Xi Cheng, Bohdan Khomtchouk, Norman Matloff, Pete Mohanty

Abstract

Despite the success of neural networks (NNs), there is still a concern among many over their "black box" nature. Why do they work? Here we present a simple analytic argument that NNs are in fact essentially polynomial regression models. This view will have various implications for NNs, e.g. providing an explanation for why convergence problems arise in NNs, and it gives rough guidance on avoiding overfitting. In addition, we use this phenomenon to predict and confirm a multicollinearity property of NNs not previously reported in the literature. Most importantly, given this loose correspondence, one may choose to routinely use polynomial models instead of NNs, thus avoiding some major problems of the latter, such as having to set many tuning parameters and dealing with convergence issues. We present a number of empirical results; in each case, the accuracy of the polynomial approach matches or exceeds that of NN approaches. A many-featured, open-source software package, polyreg, is available.
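
As a rough illustration of the proposal (routinely using polynomial models in place of NNs), the sketch below fits an ordinary polynomial regression in Python with scikit-learn; the paper's own polyreg package plays this role in R, and the data, degree, and variable names here are made up purely for the example.

    # Minimal sketch (not the authors' polyreg package): fit a degree-2
    # polynomial regression with scikit-learn as a drop-in baseline for a NN.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(500, 3))             # toy data, 3 features
    y = X[:, 0] * X[:, 1] + X[:, 2] ** 2 + 0.1 * rng.normal(size=500)

    model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                          LinearRegression())
    model.fit(X, y)
    print(model.score(X, y))                          # in-sample R^2

In this setting the polynomial degree is essentially the only tuning choice, which is the practical contrast the abstract draws with NN hyperparameter search and convergence issues.
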

Sun Feb 07 2021
Machine Learning
Towards a mathematical framework to inform Neural Network modelling via Polynomial Regression
Neural networks are widely used in a large number of applications, but are still considered black boxes. This has led to an increasing interest in the overlapping area between neural networks and more traditional statistical methods. In this article, a mathematical framework relating neural networks and polynomial regression is explored.
Wed Dec 18 2019
Machine Learning
Benchmarking the Neural Linear Model for Regression
The neural linear model is a simple adaptive Bayesian linear regression method. It has been used in a number of problems ranging from Bayesian optimization to reinforcement learning. To the best of our knowledge there has been no systematic exploration of its capabilities.
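
For context, the Bayesian linear regression step that the neural linear model applies on top of learned features can be sketched in a few lines; here the feature matrix is random stand-in data rather than the output of a trained network's last hidden layer, and the precisions are arbitrary.

    # Sketch of Bayesian linear regression on learned features (the
    # "neural linear" step); Phi is a placeholder for last-layer features.
    import numpy as np

    rng = np.random.default_rng(0)
    Phi = rng.normal(size=(200, 16))       # N x D feature matrix (placeholder)
    y = rng.normal(size=200)               # targets (placeholder)
    alpha, beta = 1.0, 25.0                # prior precision, noise precision

    S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
    m = beta * S @ Phi.T @ y               # posterior mean of the weights

    phi_new = rng.normal(size=16)          # features of a new input
    pred_mean = phi_new @ m
    pred_var = 1.0 / beta + phi_new @ S @ phi_new   # predictive variance
    print(pred_mean, pred_var)
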
Tue Nov 29 2016
Machine Learning
The empirical size of trained neural networks
ReLU neural networks define piecewise linear functions of their inputs. Initializing and training a neural network is very different from fitting a linear spline. Standard network initialization and training produce networks vastly simpler than a naïve parameter count would suggest.
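
A minimal sketch of that first claim: a one-hidden-layer ReLU network with a scalar input is a continuous piecewise linear function, with at most one kink per hidden unit. The weights below are arbitrary, chosen only for illustration.

    # A one-hidden-layer ReLU network with scalar input is piecewise linear:
    # each hidden unit contributes a kink at x = -b_i / w_i (weights made up).
    import numpy as np

    w1 = np.array([1.0, -2.0, 0.5])     # input-to-hidden weights
    b1 = np.array([0.0, 1.0, -0.5])     # hidden biases
    w2 = np.array([1.0, 0.5, -1.0])     # hidden-to-output weights

    def relu_net(x):
        h = np.maximum(0.0, np.outer(x, w1) + b1)   # hidden activations
        return h @ w2

    xs = np.linspace(-3, 3, 7)
    print(relu_net(xs))   # values lie on straight segments between the kinks
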
Wed Feb 26 2020
Machine Learning
NeuralSens: Sensitivity Analysis of Neural Networks
Neural networks are important tools for data-intensive analysis. They are usually seen as "black boxes" that offer minimal information about how the input variables are used. This article describes the package that can be used to perform sensitivity analysis of neural networks using the partial derivatives method.
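
The partial-derivatives idea itself is easy to sketch with automatic differentiation; the toy PyTorch code below is a generic illustration of input sensitivities and does not use the NeuralSens package or its API.

    # Generic partial-derivative sensitivity sketch (not the NeuralSens API):
    # average |d output / d input_j| over the data for a small PyTorch net.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))
    X = torch.randn(100, 4, requires_grad=True)

    out = net(X).sum()               # summing lets one backward call give each
    out.backward()                   # row's gradient w.r.t. its own inputs
    sensitivity = X.grad.abs().mean(dim=0)
    print(sensitivity)               # one sensitivity score per input variable
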
Mon Mar 26 2012
Machine Learning
Polynomial expansion of the binary classification function
This paper describes a novel method to approximate the polynomial coefficients of regression functions. The derivation is simple, and offers a fast, robust classification technique that is resistant to overfitting.
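
In the same spirit (though not the paper's specific coefficient-approximation scheme), a binary classifier built on a polynomial expansion of the inputs can be sketched as logistic regression on polynomial features; the dataset and degree here are arbitrary.

    # Sketch of a binary classifier on a polynomial expansion of the inputs.
    from sklearn.datasets import make_moons
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LogisticRegression

    X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
    clf = make_pipeline(PolynomialFeatures(degree=3),
                        LogisticRegression(max_iter=1000))
    clf.fit(X, y)
    print(clf.score(X, y))
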
Tue Jun 02 2020
Machine Learning
Bayesian Neural Networks
Neural networks have become a powerful tool for the analysis of complex and abstract data models. However, their introduction intrinsically increases our uncertainty about which features of the analysis are model-related and which are due to the network. This means that predictions by neural networks have biases which cannot be trivially…
Sat Apr 04 2020
Neural Networks
Rational neural networks
Rational neural networks approximate smooth functions more efficiently than ReLU networks, requiring exponentially smaller depth. The flexibility and smoothness of rational activation functions make them an attractive alternative to ReLU.
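
A rational activation is simply a ratio of two low-degree polynomials, r(x) = P(x) / Q(x). The coefficients in the sketch below are placeholders chosen so the denominator never vanishes; in the paper they are treated as trainable parameters.

    # Minimal rational activation sketch: r(x) = P(x) / Q(x).
    import numpy as np

    p = np.array([0.0, 1.0, 0.5, 0.02])   # numerator coeffs, ascending degree
    q = np.array([1.0, 0.0, 0.3])         # denominator coeffs, ascending degree

    def rational(x):
        num = np.polyval(p[::-1], x)       # np.polyval expects highest degree first
        den = np.polyval(q[::-1], x)       # 1 + 0.3 x^2 > 0, so no division by zero
        return num / den

    print(rational(np.linspace(-2, 2, 5)))
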
Mon Jun 24 2019
Artificial Intelligence
Variations on the Chebyshev-Lagrange Activation Function
We show results for different methods of handling inputs outside [-1, 1] on synthetic datasets. We demonstrate competitive or state-of-the-art performance on image classification (MNIST and CIFAR-10).
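
One simple way to keep inputs inside the Chebyshev domain is to squash them into [-1, 1] before evaluating the basis; the tanh squashing in this sketch is just one such choice and is not necessarily the method used in the paper, and the mixing coefficients are placeholders.

    # Sketch of a Chebyshev-basis activation with inputs squashed into (-1, 1).
    import numpy as np
    from numpy.polynomial import chebyshev as C

    coeffs = np.array([0.0, 1.0, 0.3, -0.1])   # placeholder mixing coefficients

    def cheb_activation(x):
        z = np.tanh(x)                  # maps any real input into (-1, 1)
        return C.chebval(z, coeffs)     # sum_k coeffs[k] * T_k(z)

    print(cheb_activation(np.array([-5.0, -0.5, 0.0, 0.5, 5.0])))
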
Fri Oct 12 2018
Machine Learning
Uncertainty in Neural Networks: Approximately Bayesian Ensembling
Ensembling NNs provides an easily implementable, scalable method for uncertainty quantification. However, it has been criticised for not being Bayesian. This work proposes one modification to the usual process that we argue does result in approximate Bayesian inference.
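
Plain (non-Bayesian) ensembling, the starting point being criticised, can be sketched directly: train several independently initialized networks and read uncertainty off the spread of their predictions. The paper's Bayesian modification is not reproduced here, and the scikit-learn models and toy data below are only illustrative.

    # Plain NN ensembling sketch: prediction spread as an uncertainty estimate.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)

    ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=seed).fit(X, y)
                for seed in range(5)]

    X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
    preds = np.stack([m.predict(X_test) for m in ensemble])
    print(preds.mean(axis=0), preds.std(axis=0))    # mean and spread per point
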
Sun Jan 03 2021
Artificial Intelligence
Recoding latent sentence representations -- Dynamic gradient-based activation modification in RNNs
In Recurrent Neural Networks (RNNs), encoding information in a suboptimal or erroneous way can impact the quality of representations. For humans, challenging cases like garden-path sentences can lead language understanding astray. Inspired by this, I propose an augmentation to standard RNNs in the form of a gradient-based correction mechanism.
Wed Apr 08 2020
Machine Learning
Dendrite Net: A White-Box Module for Classification, Regression, and System Identification
Dendrite Net is a basic machine learning algorithm, just like Support Vector Machine (SVM) or Multilayer Perceptron (MLP). The algorithm can recognize a class after learning if the output's logical expression contains that class's logical relationship among the inputs.
Thu Jun 04 2020
Machine Learning
A Polynomial Neural network with Controllable Precision and Human-Readable Topology II: Accelerated Approach Based on Expanded Layer
CR-PNN is a Taylor expansion in the form of a network. It is ten times more efficient than a typical BPNN for forward propagation. As the network depth increases, the computational complexity increases.