Markov chain Monte Carlo in practice: A roundtable discussion

Robert E. Kass, Bradley P. Carlin, Andrew Gelman, Radford M. Neal

Research output: Contribution to journal › Article › peer-review

470 Scopus citations


Markov chain Monte Carlo (MCMC) methods make possible the use of flexible Bayesian models that would otherwise be computationally infeasible. In recent years, a great variety of such applications have been described in the literature. Applied statisticians who are new to these methods may have several questions and concerns, however: How much effort and expertise are needed to design and use a Markov chain sampler? How much confidence can one have in the answers that MCMC produces? How does the use of MCMC affect the rest of the model-building process? At the Joint Statistical Meetings in August 1996, a panel of experienced MCMC users discussed these and other issues, as well as various “tricks of the trade.” This article is an edited recreation of that discussion. Its purpose is to offer advice and guidance to novice users of MCMC—and to not-so-novice users as well. Topics include building confidence in simulation results, methods for speeding and assessing convergence, estimating standard errors, identification of models for which good MCMC algorithms exist, and the current state of software development.
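Two of the topics the abstract lists—the Metropolis–Hastings algorithm and estimating Monte Carlo standard errors—can be illustrated with a minimal sketch. The target density (a standard normal), the Gaussian random-walk proposal, and all function names below are illustrative assumptions, not taken from the article itself:

```python
import math
import random


def metropolis_normal(n_samples, proposal_sd=1.0, seed=0):
    """Random-walk Metropolis sampler for a standard normal target.

    Illustrative sketch only: the N(0, 1) target and Gaussian proposal
    are assumptions chosen for a self-contained example.
    """
    rng = random.Random(seed)
    x = 0.0
    log_p = -0.5 * x * x  # log target density, up to an additive constant
    samples = []
    accepted = 0
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, proposal_sd)  # propose a random-walk step
        log_p_y = -0.5 * y * y
        # Metropolis acceptance: accept with probability min(1, p(y)/p(x))
        if math.log(rng.random()) < log_p_y - log_p:
            x, log_p = y, log_p_y
            accepted += 1
        samples.append(x)
    return samples, accepted / n_samples


def batch_means_se(samples, n_batches=20):
    """Monte Carlo standard error of the chain mean via non-overlapping
    batch means, one simple way to account for autocorrelation."""
    b = len(samples) // n_batches
    means = [sum(samples[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    return math.sqrt(var / n_batches)
```

In use, one would run the chain, check the acceptance rate is moderate, and report the posterior mean together with its batch-means standard error—because MCMC draws are autocorrelated, the naive i.i.d. formula would understate the uncertainty.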

Original language: English (US)
Pages (from-to): 93-100
Number of pages: 8
Journal: American Statistician
Issue number: 2
State: Published - May 1998


Keywords

  • Bayesian software
  • Convergence assessment
  • Gibbs sampler
  • Metropolis-Hastings algorithm


