Published on Thu Mar 14 2019

A New Approach for Distributed Hypothesis Testing with Extensions to Byzantine-Resilience

Aritra Mitra, John A. Richards, Shreyas Sundaram

Abstract

We study a setting where a group of agents, each receiving partially informative private observations, seek to collaboratively learn the true state (among a set of hypotheses) that explains their joint observation profiles over time. To solve this problem, we propose a distributed learning rule that differs fundamentally from existing approaches in that it does not employ any form of "belief-averaging". Specifically, every agent maintains a local belief (on each hypothesis) that is updated in a Bayesian manner without any network influence, and an actual belief that is updated (up to normalization) as the minimum of its own local belief and the actual beliefs of its neighbors. Under minimal requirements on the signal structures of the agents and the underlying communication graph, we establish consistency of the proposed belief update rule, i.e., we show that the actual beliefs of the agents asymptotically concentrate on the true state almost surely. As a key benefit of our approach, we show that our learning rule can be extended to scenarios that capture misbehavior on the part of certain agents in the network, modeled via the Byzantine adversary model. In particular, we prove that each non-adversarial agent can asymptotically learn the true state of the world almost surely, under appropriate conditions on the observation model and the network topology.
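The two-belief update described in the abstract can be sketched as follows. This is an illustrative reconstruction from the abstract only, not the paper's code: the function names, and the representation of beliefs and likelihoods as NumPy arrays over a finite hypothesis set, are assumptions.

```python
import numpy as np

def bayesian_local_update(local_belief, likelihoods):
    """Bayesian update of an agent's local belief from its private signal.

    local_belief: probability vector over the finite hypothesis set.
    likelihoods: P(observed signal | hypothesis), one entry per hypothesis.
    This update uses only the agent's own observation -- no network influence.
    """
    updated = local_belief * likelihoods
    return updated / updated.sum()

def min_rule_update(own_local_belief, neighbor_actual_beliefs):
    """Actual-belief update: element-wise minimum of the agent's own local
    belief and its neighbors' actual beliefs, then normalization.
    Note there is no averaging of beliefs anywhere in this rule.
    """
    stacked = np.vstack([own_local_belief] + list(neighbor_actual_beliefs))
    minimum = stacked.min(axis=0)
    return minimum / minimum.sum()
```

One way to read the design choice: under an element-wise minimum, any single source of strong evidence against a false hypothesis (the agent's own Bayesian local belief, or one neighbor's actual belief) is enough to drive that hypothesis's belief down, whereas averaging can dilute such evidence across the network.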

Fri Jul 05 2019
Machine Learning
A New Approach to Distributed Hypothesis Testing and Non-Bayesian Learning: Improved Learning Rate and Byzantine-Resilience
We study a setting where a group of agents, each receiving partially informative private signals, seek to collaboratively learn the true underlying state of the world. To solve this problem, we propose a distributed learning rule that differs fundamentally from existing approaches.
Thu Apr 02 2020
Machine Learning
Distributed Hypothesis Testing and Social Learning in Finite Time with a Finite Amount of Communication
A network of agents seeks to identify the true state of the world from a finite set of hypotheses, based on a series of stochastic signals. We provide a simple algorithm for finite-time learning which only requires the agents to exchange a binary vector with their neighbors.
Thu Apr 02 2020
Machine Learning
Distributed Inference with Sparse and Quantized Communication
We consider the problem of distributed inference where agents in a network aim to uniquely identify the true state of the world from a finite set of hypotheses. We focus on scenarios where communication between agents is costly and takes place over finite bandwidth. We prove that our rule guarantees convergence to the true state.
Fri May 06 2016
Machine Learning
Distributed Learning with Infinitely Many Hypotheses
We consider a distributed learning setup where a network of agents access realizations of a set of random variables. The network objective is to find a parametrized distribution that best describes their joint observations. We provide non-asymptotic bounds for the concentration rate of the agents' beliefs.
Wed Sep 04 2019
Machine Learning
A Communication-Efficient Algorithm for Exponentially Fast Non-Bayesian Learning in Networks
We introduce a simple time-triggered protocol to achieve communication-efficient non-Bayesian learning over a network. Despite such sparse communication, we show that each agent is still able to rule out every false hypothesis exponentially fast. For the special case when $a=1$, we prove that the asymptotic learning rates resulting from our
Tue Oct 20 2020
Machine Learning
Robust Asynchronous and Network-Independent Cooperative Learning
We consider the model of cooperative learning via distributed non-Bayesian learning. We show that our proposed learning dynamics guarantee that all agents in the network will have an asymptotic exponential decay of their beliefs on the wrong hypothesis.