Published on Sat Sep 03 2016

Graph-Based Active Learning: A New Look at Expected Error Minimization

Kwang-Sung Jun, Robert Nowak

Abstract

In graph-based active learning, algorithms based on expected error minimization (EEM) have been popular and yield good empirical performance. The exact computation of EEM optimally balances exploration and exploitation. In practice, however, EEM-based algorithms employ various approximations due to the computational hardness of exact EEM. This can result in a lack of either exploration or exploitation, which can negatively impact the effectiveness of active learning. We propose a new algorithm, TSA (Two-Step Approximation), that balances exploration and exploitation efficiently while enjoying the same computational complexity as existing approximations. Finally, we empirically demonstrate the value of balancing exploration and exploitation on both toy and real-world datasets, where our method outperforms several state-of-the-art methods.
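To make the exploration/exploitation trade-off in the abstract concrete, here is a minimal sketch of one-step-lookahead EEM query selection on a graph. It uses harmonic-function label propagation as the underlying predictor; the graph, the `predict` helper, and the `eem_query` function are illustrative assumptions for exposition, not the paper's TSA algorithm or its approximation.

```python
def predict(graph, labels, n_iter=100):
    """Harmonic-function label propagation for binary labels in {0, 1}:
    each unlabeled node's score is repeatedly replaced by the mean of its
    neighbors' scores, while labeled nodes stay clamped to their labels."""
    f = {v: labels.get(v, 0.5) for v in graph}
    for _ in range(n_iter):
        for v in graph:
            if v not in labels:
                f[v] = sum(f[u] for u in graph[v]) / len(graph[v])
    return f

def eem_query(graph, labels):
    """Expected error minimization (one-step lookahead): pick the unlabeled
    node whose revealed label minimizes the expected error on the rest."""
    unlabeled = [v for v in graph if v not in labels]
    f = predict(graph, labels)
    best, best_risk = None, float("inf")
    for q in unlabeled:
        risk = 0.0
        # Average over both possible answers, weighted by predicted probability.
        for y, p in ((1, f[q]), (0, 1.0 - f[q])):
            f_new = predict(graph, {**labels, q: y})
            # Expected 0/1 risk of thresholding on the remaining unlabeled nodes.
            err = sum(min(f_new[v], 1.0 - f_new[v])
                      for v in unlabeled if v != q)
            risk += p * err
        if risk < best_risk:
            best, best_risk = q, risk
    return best

# Toy star graph: hub 0 touches leaves 1-4; leaves 1 and 2 are labeled.
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(eem_query(graph, {1: 1, 2: 0}))  # EEM queries the hub: 0
```

Note the cost: every candidate query requires retraining the predictor for both label outcomes, which is precisely the computational burden that motivates the approximations the abstract discusses.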

Tue Jul 21 2020
Machine Learning
Efficient Graph-Based Active Learning with Probit Likelihood via Gaussian Approximations
We present a novel adaptation of active learning to graph-based semi-supervised learning (SSL) under non-Gaussian Bayesian models. We also introduce a novel "model change" acquisition function based on these approximations.
Wed May 18 2016
Machine Learning
Active Learning On Weighted Graphs Using Adaptive And Non-adaptive Approaches
The goal is to construct a binary signal defined on the nodes of a weighted graph. A new sampling algorithm is proposed, which sequentially selects the graph nodes to be sampled. The algorithm generalizes a recent method for sampling nodes in unweighted graphs.
Sat Dec 19 2020
Machine Learning
An Information-Theoretic Framework for Unifying Active Learning Problems
This paper presents an information-theoretic framework for unifying active learning problems: level set estimation (LSE), Bayesian optimization (BO), and their generalized variant. We introduce a novel active learning criterion that subsumes an existing LSE algorithm and achieves state-of-the-art performance.
Thu Mar 18 2021
Artificial Intelligence
Data driven algorithms for limited labeled data learning
Thu May 30 2019
Machine Learning
Understanding Goal-Oriented Active Learning via Influence Functions
Active learning (AL) concerns itself with learning a model from as few labelled data as possible through actively and iteratively querying an oracle with selected unlabelled samples. We present an effective approximation that bypasses model retraining altogether.
Sun Nov 25 2018
Machine Learning
$HS^2$: Active Learning over Hypergraphs
We propose a hypergraph-based active learning scheme which we term $HS^2$. The scheme generalizes the previously reported algorithm $S^2$ originally proposed by Dasarathy et al.