Honest exploration of intractable probability distributions via Markov chain Monte Carlo

Galin L. Jones, James P. Hobert

Research output: Contribution to journal › Article › peer-review

176 Scopus citations


Two important questions that must be answered whenever a Markov chain Monte Carlo (MCMC) algorithm is used are (Q1) What is an appropriate burn-in? and (Q2) How long should the sampling continue after burn-in? Developing rigorous answers to these questions presently requires a detailed study of the convergence properties of the underlying Markov chain. Consequently, in most practical applications of MCMC, exact answers to (Q1) and (Q2) are not sought. The goal of this paper is to demystify the analysis that leads to honest answers to (Q1) and (Q2). The authors hope that this article will serve as a bridge between those developing Markov chain theory and practitioners using MCMC to solve practical problems.

The ability to address (Q1) and (Q2) formally comes from establishing a drift condition and an associated minorization condition, which together imply that the underlying Markov chain is geometrically ergodic. In this article, we explain exactly what drift and minorization are as well as how and why these conditions can be used to form rigorous answers to (Q1) and (Q2).

The basic ideas are as follows. The results of Rosenthal (1995) and Roberts and Tweedie (1999) allow one to use drift and minorization conditions to construct a formula giving an analytic upper bound on the distance to stationarity. A rigorous answer to (Q1) can be calculated using this formula. The desired characteristics of the target distribution are typically estimated using ergodic averages. Geometric ergodicity of the underlying Markov chain implies that there are central limit theorems available for ergodic averages (Chan and Geyer, 1994). The regenerative simulation technique (Mykland, Tierney and Yu, 1995; Robert, 1995) can be used to get a consistent estimate of the variance of the asymptotic normal distribution. Hence, an asymptotic standard error can be calculated, which provides an answer to (Q2) in the sense that an appropriate time to stop sampling can be determined.
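To make the (Q1) computation concrete, the following is a minimal sketch of how a drift-and-minorization bound of the Rosenthal (1995) type can be evaluated numerically. It assumes a bound of the form (1 - eps)^(r*n) + (alpha^(-(1-r)) * A^r)^n * (1 + b/(1 - lam) + V(x)), with alpha^(-1) = (1 + 2b + lam*d)/(1 + d) and A = 1 + 2(lam*d + b); the numeric constants used below are illustrative placeholders, not values from the paper.

```python
def rosenthal_bound(n, lam, b, d, eps, v_x, r):
    """Upper bound on the total variation distance to stationarity after n steps,
    given a drift condition PV <= lam*V + b, a minorization constant eps on the
    set {V <= d} (with d > 2b/(1 - lam)), a starting value V(x) = v_x, and a
    free tuning parameter r in (0, 1)."""
    alpha_inv = (1 + 2 * b + lam * d) / (1 + d)   # 1/alpha
    big_a = 1 + 2 * (lam * d + b)                 # A
    term1 = (1 - eps) ** (r * n)
    term2 = (alpha_inv ** (1 - r) * big_a ** r) ** n * (1 + b / (1 - lam) + v_x)
    return term1 + term2

# Illustrative constants only; r must be small enough that the geometric
# factor alpha^(-(1-r)) * A^r is below 1, otherwise the bound diverges.
lam, b, d, eps, v_x, r = 0.5, 1.0, 10.0, 0.1, 1.0, 0.05

# A rigorous burn-in for (Q1): the first n at which the bound drops below 0.01.
burn_in = next(n for n in range(1, 200_000)
               if rosenthal_bound(n, lam, b, d, eps, v_x, r) < 0.01)
```

In practice r would also be optimized, since the two terms trade off against each other; here it is simply fixed at a value for which the bound shrinks.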
The methods are illustrated using a Gibbs sampler for a Bayesian version of the one-way random effects model and a data set concerning styrene exposure.
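For (Q2), the stopping decision rests on an asymptotic standard error for the ergodic average. The sketch below uses a toy random-walk Metropolis chain targeting a standard normal and the simple batch-means variance estimator as a stand-in for the paper's regenerative simulation estimator; the chain, target, and batch count are all illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_normal(n, step=1.0):
    """Toy random-walk Metropolis chain targeting a standard normal."""
    x = np.empty(n)
    x[0] = 0.0
    for i in range(1, n):
        prop = x[i - 1] + step * rng.normal()
        # Accept with probability min(1, pi(prop)/pi(x)) for pi = N(0, 1).
        if np.log(rng.uniform()) < 0.5 * (x[i - 1] ** 2 - prop ** 2):
            x[i] = prop
        else:
            x[i] = x[i - 1]
    return x

def batch_means_se(chain, n_batches=30):
    """Asymptotic standard error of the ergodic average via batch means."""
    n = len(chain) // n_batches * n_batches
    batches = chain[:n].reshape(n_batches, -1).mean(axis=1)
    return batches.std(ddof=1) / np.sqrt(n_batches)

chain = metropolis_normal(50_000)
est, se = chain.mean(), batch_means_se(chain)
```

Sampling would continue until `se` falls below the precision required for the estimate; regenerative simulation plays the same role in the paper but yields a provably consistent variance estimate under the conditions established there.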

Original language: English (US)
Pages (from-to): 312-334
Number of pages: 23
Journal: Statistical Science
Issue number: 4
State: Published - 2001


  • Central limit theorem
  • Convergence rate
  • Coupling inequality
  • Drift condition
  • General state space
  • Geometric ergodicity
  • Gibbs sampler
  • Hierarchical random effects model
  • Metropolis algorithm
  • Minorization condition
  • Regeneration
  • Splitting
  • Uniform ergodicity

