On MCMC sampling in hierarchical longitudinal models

Siddhartha Chib, Bradley P. Carlin

Research output: Contribution to journal › Article › peer-review

119 Scopus citations

Abstract

Markov chain Monte Carlo (MCMC) algorithms have revolutionized Bayesian practice. In their simplest form (i.e., when parameters are updated one at a time) they are, however, often slow to converge when applied to high-dimensional statistical models. A remedy for this problem is to block the parameters into groups, which are then updated simultaneously using either a Gibbs or Metropolis-Hastings step. In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples.
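To make the blocking strategy the abstract describes concrete, here is a minimal, hypothetical Python sketch (not the paper's code) of one blocked Gibbs cycle for a simple random-intercept model y_ij = x_ij'β + b_i + ε_ij, with the variance components σ² and τ² held fixed for clarity; the paper treats the general linear mixed model and samples the variance components as well. The key move is that β is drawn with the random effects integrated out of the likelihood, after which each b_i is drawn given β, so (β, b) is updated as a single block.

```python
import numpy as np

rng = np.random.default_rng(0)

def blocked_gibbs(y, X, groups, n_iter=2000, sigma2=1.0, tau2=1.0):
    """Blocked Gibbs sketch for y_ij = x_ij' beta + b_i + eps_ij,
    b_i ~ N(0, tau2), eps_ij ~ N(0, sigma2), flat prior on beta.
    sigma2 and tau2 are held fixed here (an assumption for brevity)."""
    n, p = X.shape
    labels = np.unique(groups)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # Step 1: draw beta with b integrated out. Marginally,
        # y_i ~ N(X_i beta, sigma2*I + tau2*11'), and Sherman-Morrison
        # gives the per-group inverse of that covariance in closed form.
        A = np.zeros((p, p))   # accumulates X' V^{-1} X
        c = np.zeros(p)        # accumulates X' V^{-1} y
        for g in labels:
            idx = groups == g
            Xg, yg = X[idx], y[idx]
            ng = Xg.shape[0]
            w = tau2 / (sigma2 + ng * tau2)
            xs, ys = Xg.sum(axis=0), yg.sum()
            A += (Xg.T @ Xg - w * np.outer(xs, xs)) / sigma2
            c += (Xg.T @ yg - w * xs * ys) / sigma2
        cov = np.linalg.inv(A)
        beta = rng.multivariate_normal(cov @ c, cov)
        # Step 2: draw each b_i | beta, y, which completes the joint
        # draw of the block (beta, b) from its posterior.
        for g in labels:
            idx = groups == g
            resid = y[idx] - X[idx] @ beta
            ng = resid.size
            var = sigma2 * tau2 / (sigma2 + ng * tau2)
            b_g = rng.normal(tau2 * resid.sum() / (sigma2 + ng * tau2),
                             np.sqrt(var))
        draws[t] = beta
    return draws

if __name__ == "__main__":
    # Simulated data (assumed setup, for illustration only).
    groups = np.repeat(np.arange(20), 5)
    X = np.column_stack([np.ones(100), rng.normal(size=100)])
    b_true = rng.normal(size=20)
    y = X @ np.array([1.0, 2.0]) + b_true[groups] + rng.normal(size=100)
    print(blocked_gibbs(y, X, groups).mean(axis=0))
```

Because β is sampled from its marginal posterior rather than conditionally on b, the draws of the block (β, b) are essentially independent in this conjugate setting, which is the convergence gain the abstract refers to.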

Original language: English (US)
Pages (from-to): 17-26
Number of pages: 10
Journal: Statistics and Computing
Volume: 9
Issue number: 1
State: Published - 1999

Bibliographical note

Funding Information:
The research of the second author was supported in part by a National Institute of Allergy and Infectious Diseases (NIAID) Grant 1-R01-AI41966.

Keywords

  • Blocking
  • Convergence acceleration
  • Correlated binary data
  • Gibbs sampler
  • Linear mixed model
  • Metropolis-Hastings algorithm
  • Panel data
  • Random effects

