Published on Mon Jan 22 2018

Extreme Learning Machine with Local Connections

Feng Li, Sibo Yang, Huanhuan Huang, Wei Wu

This paper is concerned with the sparsification of the input-hidden weights of ELM (Extreme Learning Machine). As in the usual ELM, the input-hidden weights are randomly given. The new ELM-LC behaves better than the traditional ELM.

Abstract

This paper is concerned with the sparsification of the input-hidden weights of ELM (Extreme Learning Machine). For ordinary feedforward neural networks, sparsification is usually done by introducing a certain regularization technique into the learning process of the network. But this strategy cannot be applied to ELM, since the input-hidden weights of ELM are supposed to be randomly chosen rather than learned. To this end, we propose a modified ELM, called ELM-LC (ELM with local connections), which is designed for the sparsification of the input-hidden weights as follows: The hidden nodes and the input nodes are divided respectively into several corresponding groups, and an input node group is fully connected with its corresponding hidden node group, but is not connected with any other hidden node group. As in the usual ELM, the input-hidden weights are randomly given, and the hidden-output weights are obtained through a least square learning. In the numerical simulations on some benchmark problems, the new ELM-LC behaves better than the traditional ELM.
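The grouped connectivity described in the abstract can be sketched in a few lines of NumPy: the input-hidden weight matrix becomes block-diagonal (each input group connects only to its own hidden group), while the output weights are still obtained by least squares via the pseudo-inverse, as in standard ELM. This is a minimal illustration under assumed details (tanh activation, uniform random weights, equal-size groups via `array_split`); function names are ours, not the paper's.

```python
import numpy as np

def elm_lc_train(X, T, n_hidden, n_groups, seed=None):
    """Sketch of ELM-LC training.

    Input and hidden nodes are split into corresponding groups; each
    input group is fully connected to its own hidden group only, so
    the input-hidden weight matrix W is block-diagonal (sparse).
    Hidden-output weights are found by least squares, as in usual ELM.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_in = X.shape
    W = np.zeros((n_in, n_hidden))               # block-diagonal weights
    b = rng.uniform(-1.0, 1.0, n_hidden)         # random hidden biases
    in_groups = np.array_split(np.arange(n_in), n_groups)
    hid_groups = np.array_split(np.arange(n_hidden), n_groups)
    for ig, hg in zip(in_groups, hid_groups):
        # only the diagonal blocks receive random weights
        W[np.ix_(ig, hg)] = rng.uniform(-1.0, 1.0, (len(ig), len(hg)))
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                 # least-squares output weights
    return W, b, beta

def elm_lc_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Compared with standard ELM, the only change is the zero off-diagonal blocks in `W`; training cost is still dominated by the single pseudo-inverse.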

Sat Sep 13 2014
Neural Networks
A study on effectiveness of extreme learning machine
Extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes makes the hidden layer output …
Fri Jan 24 2014
Machine Learning
Is Extreme Learning Machine Feasible? A Theoretical Assessment (Part II)
Extreme learning machine (ELM) can be regarded as a two-stage feed-forward neural network (FNN) learning system. ELM randomly assigns the connections with and within hidden neurons in the first stage and tunes the connections with output neurons in the second stage. The randomness of ELM also leads to certain negative consequences.
Sat Apr 30 2016
Machine Learning
Constructive neural network learning
In this paper, we aim at developing scalable neural network-type learning systems. We focus on "constructing" rather than "training" feed-forward neural networks (FNNs) for learning. We prove that the proposed method overcomes the classical saturation problem for FNN approximation.
Wed Jul 24 2019
Machine Learning
Backward-Forward Algorithm: An Improvement towards Extreme Learning Machine
The extreme learning machine needs a large number of hidden nodes to generalize a single hidden layer neural network for a given training data-set. The proposed technique has an advantage over the back-propagation method in terms of iterations required.
Thu Jun 12 2014
Neural Networks
Learning ELM network weights using linear discriminant analysis
We present an alternative to the pseudo-inverse method for determining the output weights of the network. The method is based on linear discriminant analysis and provides Bayes optimal single point estimates.
Mon Apr 27 2020
Machine Learning
Efficient Inverse-Free Incremental and Decremental Algorithms for Multiple Hidden Nodes in Extreme Learning Machine
The inverse-free extreme learning machine (ELM) algorithm proposed in [4] was based on an inverse-free algorithm to compute the regularized pseudo-inverse. In this paper, we propose two inverse-free algorithms for ELM with Tikhonov regularization. We …