Explores the convergence of Langevin Monte Carlo algorithms under different growth rates and smoothness conditions, emphasizing fast convergence for a wide class of potentials.
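As a concrete illustration of the Langevin Monte Carlo family discussed here, the following is a minimal sketch of the unadjusted Langevin algorithm (ULA) for a target with known gradient of the log-density; the function names, step size, and Gaussian target are illustrative assumptions, not the paper's specific setting.

```python
import numpy as np

def ula_sample(grad_log_density, x0, step_size=0.01, n_steps=5000, rng=None):
    """Unadjusted Langevin Algorithm (illustrative sketch):
    x_{k+1} = x_k + h * grad log pi(x_k) + sqrt(2h) * xi_k, xi_k ~ N(0, I)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.atleast_1d(np.array(x0, dtype=float))
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.size)
        x = x + step_size * grad_log_density(x) + np.sqrt(2 * step_size) * noise
        samples[k] = x
    return samples

# Toy target: standard Gaussian, so grad log pi(x) = -x.
samples = ula_sample(lambda x: -x, x0=[3.0])
post_burn = samples[1000:, 0]  # discard burn-in from the far-away start
```

Smoothness and growth conditions on the potential (here, the quadratic behind `-x`) are exactly what governs how fast such a chain forgets its initialization.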
Examines computing the density of states and performing Bayesian inference via importance sampling, showcasing the lower variance and parallelizability of the proposed method.
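A minimal self-normalized importance sampling sketch, assuming a toy setting (standard-normal target, wider Gaussian proposal) rather than the paper's actual density-of-states application; all weights are computed from draws of the proposal, which is what makes the method embarrassingly parallel.

```python
import numpy as np

rng = np.random.default_rng(42)

# Goal: estimate E_p[x^2] under p = N(0, 1), sampling instead from q = N(0, 2^2).
n = 100_000
x = rng.normal(0.0, 2.0, size=n)  # draws from the proposal q

# Log-densities of target and proposal (up to shared constants is enough).
log_p = -0.5 * x**2
log_q = -0.5 * (x / 2.0) ** 2 - np.log(2.0)

w = np.exp(log_p - log_q)                 # importance weights p(x)/q(x)
estimate = np.sum(w * x**2) / np.sum(w)   # self-normalized estimator, ~ 1.0
```

The weight computation for each draw is independent of every other draw, so the loop-free vectorized form above maps directly onto parallel workers.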
Covers the theory of Markov Chain Monte Carlo (MCMC) sampling, discussing conditions for convergence, the choice of transition matrix, and the evolution of the chain toward the target distribution.
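To make the transition-kernel discussion concrete, here is a minimal random-walk Metropolis-Hastings sketch; the Gaussian proposal, step scale, and one-dimensional target are illustrative choices, not prescribed by the text.

```python
import numpy as np

def metropolis_hastings(log_target, x0, proposal_scale=1.0, n_steps=20000, rng=None):
    """Random-walk Metropolis-Hastings (illustrative sketch).

    The symmetric Gaussian proposal plus the accept/reject step defines a
    transition kernel whose stationary distribution is the target."""
    rng = np.random.default_rng(1) if rng is None else rng
    x = float(x0)
    samples = np.empty(n_steps)
    for k in range(n_steps):
        y = x + proposal_scale * rng.standard_normal()  # symmetric proposal
        # Accept with probability min(1, pi(y)/pi(x)); compare in log space.
        if np.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        samples[k] = x
    return samples

# Toy target: N(2, 1), log-density up to an additive constant.
chain = metropolis_hastings(lambda x: -0.5 * (x - 2.0) ** 2, x0=0.0)
```

Because the proposal is symmetric, the Hastings correction cancels and only the ratio of target densities enters the acceptance test.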