Published on Fri Feb 08 2019

Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces

Johannes Kirschner, Mojmír Mutný, Nicole Hiller, Rasmus Ischebeck, Andreas Krause

Bayesian optimization is known to be difficult to scale to high dimensions. We show that our algorithm, LineBO, converges globally and obtains a fast local rate when the function is strongly convex, and we obtain the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings.

Abstract

Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex optimization problem in the same search space. In order to scale the method and keep its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems that can be solved efficiently. We show that our algorithm converges globally and obtains a fast local rate when the function is strongly convex. Further, if the objective has an invariant subspace, our method automatically adapts to the effective dimension without changing the algorithm. When combined with the SafeOpt algorithm to solve the sub-problems, we obtain the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings. We evaluate our method on multiple synthetic benchmarks, where we obtain competitive performance. Further, we deploy our algorithm to optimize the beam intensity of the Swiss Free Electron Laser with up to 40 parameters while satisfying safe operation constraints.
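At a high level, the LineBO loop alternates between choosing a one-dimensional subspace through the current iterate and solving the resulting 1D sub-problem with a standard (or safe) Bayesian optimization routine. The sketch below only illustrates this outer loop under simplifying assumptions: the direction rule is the random-direction variant, the inner 1D solver is a plain grid search standing in for a GP-based optimizer, and the objective `f` and the bounds are hypothetical placeholders rather than the authors' implementation.

```python
import numpy as np

def line_bo(f, x0, n_outer=50, lower=-1.0, upper=1.0, rng=None):
    """Minimal sketch of line-based Bayesian optimization.

    f           : objective to maximize, f(x) -> float
    x0          : starting point, shape (d,)
    n_outer     : number of one-dimensional sub-problems
    lower/upper : box bounds of the search space (per coordinate)
    """
    rng = np.random.default_rng(rng)
    x, y = np.asarray(x0, dtype=float), f(x0)
    for _ in range(n_outer):
        d = rng.normal(size=x.shape)          # random direction variant
        d /= np.linalg.norm(d)
        # Solve the 1D sub-problem over x + t * d; here a grid search
        # stands in for a GP-based (safe) 1D optimizer.
        ts = np.linspace(-1.0, 1.0, 101)
        cand = np.clip(x + ts[:, None] * d, lower, upper)
        vals = np.array([f(c) for c in cand])
        if vals.max() > y:
            x, y = cand[vals.argmax()], vals.max()
    return x, y

# Usage: maximize a toy quadratic in 40 dimensions from the origin.
if __name__ == "__main__":
    f = lambda x: -np.sum((x - 0.3) ** 2)
    x_best, y_best = line_bo(f, np.zeros(40), n_outer=200, rng=0)
    print(y_best)
```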

Mon Oct 05 2020
Machine Learning
Parameter Optimization using high-dimensional Bayesian Optimization
In this thesis, I explore the possibilities of applying Bayesian optimization techniques in high-dimensional domains. High-dimensional domains can be defined as those with hundreds to thousands of dimensions. We focus on solutions to practical problems such as tuning the parameters of an electron accelerator.
Sat Feb 27 2021
Machine Learning
High-Dimensional Bayesian Optimization with Sparse Axis-Aligned Subspaces
High-dimensional Bayesian optimization is difficult in part because it is hard to define a suitable class of surrogate models. We show that a new approach that relies on Hamiltonian Monte Carlo can rapidly identify sparse subspaces relevant to modeling the objective function.
Mon Jan 20 2020
Machine Learning
Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications
The problem of finding the sparsest vector (direction) in a low dimensional subspace can be considered as a homogeneous variant of the sparse recovery problem. In contrast to the classical sparse recovery problem, the most natural formulation for finding the sparsest vector in a subspace…
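To make the contrast concrete, here is one standard way to write the two problems (a sketch in generic notation, not taken from the paper's abstract): classical sparse recovery looks for a sparse solution of an inhomogeneous linear system, while the sparsest-vector problem is its homogeneous analogue, where a norm constraint excludes the trivial solution.

```latex
% Classical sparse recovery: sparsest x consistent with measurements b
\min_{x}\ \|x\|_0 \quad \text{s.t.} \quad A x = b

% Homogeneous variant: sparsest nonzero vector in the subspace \mathrm{null}(A)
\min_{x}\ \|x\|_0 \quad \text{s.t.} \quad A x = 0,\ \ \|x\|_2 = 1
```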
Sat Feb 18 2012
Machine Learning
Robust computation of linear models by convex relaxation
The paper describes a convex optimization problem that can reliably fit a low-dimensional model to data consisting of noisy inliers together with outliers. This approach parameterizes linear subspaces using orthogonal projectors. The paper provides an efficient algorithm for solving the problem.
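As a rough illustration of the projector parameterization (a hedged sketch in standard notation, not quoted from the paper): a d-dimensional subspace is identified with its orthogonal projector P, and the nonconvex constraint that P be a rank-d projector is relaxed to a convex set, yielding a problem of the form

```latex
\min_{P}\ \sum_{i=1}^{n} \left\| x_i - P x_i \right\|_2
\quad \text{s.t.} \quad 0 \preceq P \preceq I,\ \ \operatorname{tr}(P) = d
```

whose solution can then be rounded to a nearby rank-d orthogonal projector.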
Mon Nov 11 2013
Machine Learning
Toward a unified theory of sparse dimensionality reduction in Euclidean space
Let Φ be a sparse Johnson-Lindenstrauss transform with s non-zeroes per column. We study settings of m and s required to preserve the norm of every vector in a given subset T of the unit sphere simultaneously and multiplicatively up to 1 ± ε. We introduce a new complexity parameter, which depends on the geometry of T…
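The norm-preservation requirement mentioned here can be stated as a uniform condition over T (standard notation, assumed rather than quoted from the abstract):

```latex
\sup_{x \in T}\ \Bigl|\, \|\Phi x\|_2^2 - 1 \,\Bigr| \ \le\ \varepsilon,
\qquad \Phi \in \mathbb{R}^{m \times n},\quad T \subseteq S^{n-1}
```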
Sat Dec 06 2014
Machine Learning
Relations among Some Low Rank Subspace Recovery Models
Recovering intrinsic low dimensional subspaces from data distributed on them is a key preprocessing step in many applications. In recent years, there has been a lot of work that models subspace recovery as low rank minimization problems. We find that some representative models are actually deeply connected.
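Two representative models of this kind (standard formulations given here for context, not taken from the paper) are Robust PCA, which splits the data matrix into low rank plus sparse parts, and Low-Rank Representation, which represents the data in terms of itself:

```latex
% Robust PCA
\min_{L,\,S}\ \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad X = L + S

% Low-Rank Representation (LRR)
\min_{Z,\,E}\ \|Z\|_* + \lambda \|E\|_{2,1} \quad \text{s.t.} \quad X = X Z + E
```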