Published on Thu Jun 10 2021

Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions

Yin Tat Lee, Ruoqi Shen, Kevin Tian
0
0
0
Abstract

We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results up to logarithmic factors and answering an open question of Chewi et. al. We also show that a polynomial dependence on dimension is necessary for the relaxation time of HMC under any number of leapfrog steps, and bound the gains achievable by changing the step count. Our HMC analysis draws upon a novel connection between leapfrog integration and Chebyshev polynomials, which may be of independent interest.

Mon Jan 08 2018
Machine Learning
Log-concave sampling: Metropolis-Hastings algorithms are fast
We prove a non-asymptotic upper bound on the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) The method draws samples by simulating a Markov chain obtained from the discretization of an appropriate Langevin diffusion, combined with an accept-reject step.
0
0
0
Wed Dec 23 2020
Machine Learning
Optimal dimension dependence of the Metropolis-Adjusted Langevin Algorithm
The mixing time of the Metropolis-Adjusted Langevin Algorithm (MALA) scales as , where is the dimension. The diffusion scaling limit requires stringent assumptions on the target distribution and is asymptotic in nature.
0
0
0
Wed May 29 2019
Machine Learning
Fast mixing of Metropolized Hamiltonian Monte Carlo: Benefits of multi-step gradients
Hamiltonian Monte Carlo (HMC) is a state-of-the-art Markov chain Monte Carlo sampling algorithm. We study the variant most widely used in practice, the Metropolized HMC. We provide a non-asymptotic upper bound on the mixing
0
0
0
Tue Dec 23 2014
Machine Learning
Particle Metropolis-adjusted Langevin algorithms
A new sampling scheme based on Langevin dynamics is proposed. The algorithm's behaviour depends on how accurately one can estimate the gradient of the log target density.
0
0
0
Fri Feb 22 2019
Machine Learning
Nonconvex sampling with the Metropolis-adjusted Langevin algorithm
Langevin Markov chain algorithms are widely deployed methods to sample from distributions in challenging high-dimensional and non-convex statistics and machine learning applications. Despite this, current bounds for the Langevin algorithms are slower than those of competing algorithms. In this paper, we show that the Metropolis-adjusted Langevin algorithm (MALA) is faster than the best bounds for its competitor
0
0
0
Fri Sep 29 2017
Machine Learning
User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
In this paper, we study the problem of sampling from a given probability density function that is known to be smooth and strongly log-concave. We provide an upper bound on the error of the first-order Langevin Monte Carlo (LMC) algorithm with varying step-size.
0
0
0