Hamiltonian Monte Carlo: Efficient Posterior Sampling in High Dimensions

Research Team
1 Research Platform
Abstract —

Hamiltonian Monte Carlo (HMC) uses gradient information to explore posterior distributions efficiently, making it a practical default for high-dimensional Bayesian inference.

Keywords: bayesian statistics, markov chain monte carlo, hamiltonian monte carlo, posterior inference

Motivation

Random-walk Metropolis methods can struggle in high dimensions because they take small, inefficient steps. HMC improves efficiency by simulating a Hamiltonian system that proposes distant, high-acceptance moves.

Core Idea

HMC introduces an auxiliary momentum variable p and uses a Hamiltonian:

H(q, p) = U(q) + K(p)

where U(q) is the negative log posterior and K(p) is typically a quadratic kinetic term. Leapfrog integration produces proposals that preserve volume and approximately conserve energy.
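A single HMC transition can be sketched as follows. This is a minimal illustration, not a production sampler: it assumes an identity mass matrix, so K(p) = pᵀp/2, and the function names (`leapfrog`, `hmc_step`) are our own.

```python
import numpy as np

def leapfrog(q, p, grad_U, step_size, n_steps):
    """Leapfrog integration: volume-preserving, approximately energy-conserving."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * step_size * grad_U(q)          # initial half step for momentum
    for _ in range(n_steps - 1):
        q += step_size * p                    # full step for position
        p -= step_size * grad_U(q)            # full step for momentum
    q += step_size * p                        # final full step for position
    p -= 0.5 * step_size * grad_U(q)          # final half step for momentum
    return q, p

def hmc_step(q, U, grad_U, step_size, n_steps, rng):
    """One HMC transition: resample momentum, integrate, Metropolis accept/reject."""
    p0 = rng.standard_normal(q.shape)         # momentum ~ N(0, I), i.e. K(p) = p.p/2
    q_new, p_new = leapfrog(q, p0, grad_U, step_size, n_steps)
    h0 = U(q) + 0.5 * p0 @ p0                 # energy at start of trajectory
    h1 = U(q_new) + 0.5 * p_new @ p_new       # energy at end of trajectory
    if np.log(rng.uniform()) < h0 - h1:       # accept with prob min(1, exp(-dH))
        return q_new
    return q
```

For a standard-normal target, U(q) = qᵀq/2 and grad_U(q) = q, so the chain can be run as `q = hmc_step(q, lambda x: 0.5 * x @ x, lambda x: x, 0.2, 10, rng)`. Because leapfrog only approximately conserves H, the Metropolis correction at the end makes the sampler exact.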

Practical Considerations

  • Step size and trajectory length strongly affect performance.
  • Adaptive variants like NUTS select path length automatically.
  • Diagnostics such as effective sample size and R-hat are essential.
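As an illustration of the last point, the (non-split) potential scale reduction factor R-hat can be computed from multiple chains; values near 1 suggest the chains are sampling the same distribution. This is a minimal sketch assuming chains stored as a 2-D array, and the function name `r_hat` is our own; libraries such as ArviZ implement the more robust split and rank-normalized variants.

```python
import numpy as np

def r_hat(chains):
    """Potential scale reduction factor for an array of shape (n_chains, n_draws)."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)           # between-chain variance estimate
    w = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * w + b / n         # pooled posterior variance estimate
    return np.sqrt(var_hat / w)
```

If one chain is stuck in a different region, the between-chain variance inflates `var_hat` relative to `w`, pushing R-hat well above 1.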

Reference

Hoffman, Matthew D., and Gelman, Andrew (2014). The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(47):1593-1623.
@article{hoffman2014nuts,
title={The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo},
author={Hoffman, Matthew D. and Gelman, Andrew},
journal={Journal of Machine Learning Research},
year={2014},
volume={15},
number={47},
pages={1593-1623}
}