On computation using Gibbs sampling for multilevel models

Alan E. Gelfand, Brad Carlin, Matilde Trevisani

Research output: Contribution to journal · Article · peer-review



Multilevel models incorporating random effects at the various levels are enjoying increased popularity. An implicit problem with such models is identifiability. From a Bayesian perspective, formal identifiability is not an issue. Rather, when implementing iterative simulation-based model fitting, a poorly behaved Gibbs sampler frequently arises. The objective of this paper is to shed light on two computational issues in this regard. The first concerns autocorrelation in the sequence of iterates of the Markov chain. For estimable functions we clarify when, after convergence, autocorrelation will drop off to zero rapidly, enabling high effective sample size. The second concerns immediate convergence, i.e., when, at an arbitrary iteration, the simulated value of a variable is in fact an observation from the posterior distribution of the variable. Again, for estimable functions, we clarify when the chain will produce at each iteration a sample drawn essentially from the true posterior of the function. We provide both analytical and computational support for our conclusions, including exemplification for three multilevel models having normal, Poisson, and binary responses, respectively.
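The abstract's two points can be illustrated with a deliberately overparameterized normal model, y_i = mu + alpha + eps_i, in which mu and alpha are individually unidentified by the likelihood but theta = mu + alpha is estimable. The sketch below is not the paper's own code; it is a minimal illustration assuming independent N(0, tau2) priors on mu and alpha and alternating draws from their normal full conditionals. The lag-1 autocorrelation of each unidentified parameter stays near 1, while that of the estimable function theta is essentially zero, so the chain delivers an effectively exact draw of theta at each iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y_i = mu + alpha + eps_i; only theta = mu + alpha
# is identified by the likelihood (mu, alpha confounded).
n, tau2 = 50, 100.0                 # tau2: assumed prior variance for mu, alpha
y = rng.normal(3.0, 1.0, n)
ybar = y.mean()

def gibbs(n_iter=5000):
    """Alternate draws from the normal full conditionals of mu and alpha."""
    mu, alpha = 0.0, 0.0
    draws = np.empty((n_iter, 2))
    prec = n + 1.0 / tau2           # full-conditional precision for each parameter
    for t in range(n_iter):
        mu = rng.normal(n * (ybar - alpha) / prec, np.sqrt(1.0 / prec))
        alpha = rng.normal(n * (ybar - mu) / prec, np.sqrt(1.0 / prec))
        draws[t] = mu, alpha
    return draws

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] * x[1:]).mean() / x.var()

draws = gibbs()
mu_s, alpha_s = draws[:, 0], draws[:, 1]
theta_s = mu_s + alpha_s            # the estimable function
print("lag-1 autocorr, mu:   ", lag1_autocorr(mu_s))     # near 1: slow mixing
print("lag-1 autocorr, theta:", lag1_autocorr(theta_s))  # near 0: fast mixing
```

With a diffuse prior (large tau2), mu and alpha are nearly perfectly negatively correlated a posteriori, which is exactly the pathology the paper studies: the chain crawls along the ridge in (mu, alpha) while the identified direction mu + alpha decorrelates almost instantly.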

Original language: English (US)
Pages (from-to): 981-1003
Number of pages: 23
Journal: Statistica Sinica
Issue number: 4
State: Published - Oct 1 2001


Keywords:
  • Autocorrelation
  • Estimable function
  • Exact sampling
  • Identifiability

