Published on Wed Oct 23 2019

Event-scheduling algorithms with Kalikow decomposition for simulating potentially infinite neuronal networks

Tien Cuong Phi, Alexandre Muzy, Patricia Reynaud-Bouret

Event-scheduling algorithms can be adapted to simulate the activity of finite neuronal networks in continuous time. Discrete-time algorithms based on Kalikow decomposition can perfectly simulate one neuron embedded in an infinite network, but they are not computationally tractable in continuous time. To solve this problem, a new event-scheduling algorithm with Kalikow decomposition is proposed.

Abstract

Event-scheduling algorithms can compute in continuous time the next occurrence of points (as events) of a counting process based on their current conditional intensity. In particular, event-scheduling algorithms can be adapted to simulate the activity of finite neuronal networks. These algorithms are based on Ogata's thinning strategy \cite{Oga81}, which always needs to simulate the whole network to access the behaviour of one particular neuron of the network. On the other hand, for discrete-time models, theoretical algorithms based on Kalikow decomposition can pick influencing neurons at random and perform a perfect simulation (meaning without approximations) of the behaviour of one given neuron embedded in an infinite network, at every time step. These algorithms are currently not computationally tractable in continuous time. To solve this problem, an event-scheduling algorithm with Kalikow decomposition is proposed here for the sequential simulation of point-process neuronal models satisfying this decomposition. This new algorithm is applied to infinite neuronal networks whose finite-time simulation is a prerequisite to realistic brain modeling.
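To make the thinning strategy mentioned above concrete, here is a minimal Python sketch of Ogata-style thinning for a single self-exciting point process. The `hawkes_intensity` function and its parameters are illustrative assumptions, not the paper's model; the local-bound trick assumes the conditional intensity is non-increasing between events (true for an exponential excitation kernel).

```python
import math
import random

def ogata_thinning(intensity, t_max, seed=0):
    """Simulate one point process on [0, t_max] by Ogata's thinning.

    `intensity(t, history)` is the conditional intensity given past points.
    Assumption: the intensity is non-increasing between events, so its value
    at the current time is a valid local upper bound.
    """
    rng = random.Random(seed)
    t, points = 0.0, []
    while True:
        lam_bar = intensity(t, points)       # local dominating rate
        t += rng.expovariate(lam_bar)        # candidate event time
        if t > t_max:
            return points
        if rng.random() * lam_bar <= intensity(t, points):
            points.append(t)                 # accept with prob lambda(t)/lam_bar

# Illustrative self-exciting (Hawkes-like) intensity with hypothetical parameters.
def hawkes_intensity(t, history, mu=1.0, alpha=0.5, beta=2.0):
    return mu + alpha * sum(math.exp(-beta * (t - s)) for s in history)

pts = ogata_thinning(hawkes_intensity, t_max=10.0)
```

Note that the accept/reject step is what forces the simulation of every candidate event: for a network, the intensity of one neuron depends on the whole past of the network, which is exactly the limitation the Kalikow-based algorithm of the paper avoids by picking influencing neurons at random.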

Wed Oct 22 2008
Neural Networks
Introducing numerical bounds to improve event-based neural network simulation
The spike-trains in neural networks are mainly constrained by the neurological dynamics themselves. Global temporal constraints also have to be taken into account. These ideas are applied to the simulation of punctual conductance-based generalized integrate-and-fire neural networks.
Fri Nov 02 2018
Machine Learning
Data-driven Perception of Neuron Point Process with Unknown Unknowns
Previous research has focused on effects from the spiking history of the target neuron and its interaction with other neurons in the system. We propose to use unknown unknowns, which describe the effect of unknown stimuli, undetected neuron activities, and all other hidden sources of error.
Sat May 25 2019
Neural Networks
Application and Computation of Probabilistic Neural Plasticity
The discovery of neural plasticity has proved that throughout the life of a human being, the brain reorganizes itself through forming new neural connections. There is a lack of understanding of the probability of this reorganization occurring. We show how an additive short-term memory (STM) equation…
Fri May 30 2014
Neural Networks
ELM Solutions for Event-Based Systems
Most engineered systems use signals that are continuous in time. Events, like Dirac delta functions, have no meaningful time duration. In this domain, signal processing requires responses to spatio-temporal patterns of events.
Tue Apr 14 2009
Machine Learning
Inferring Dynamic Bayesian Networks using Frequent Episode Mining
This paper shows how dynamic (discrete) Bayesian networks can be inferred from the results of frequent episode mining. We show how, under reasonable assumptions on data characteristics, the optimal DBN structure can be computed using a greedy, local algorithm. We connect the optimality of the DBN structure with the notion of fixed-delay episodes and their counts of…
Wed Sep 16 2020
Neural Networks
Neuroinformatic tool to study high dimensional dynamics with distributed delays in Neural Mass Models
The new time-delay NMM can simulate several types of EEG activities. Three structural levels (cortical unit, population, and system) were defined and assumed.