Convergence complexity analysis of Albert and Chib's algorithm for Bayesian probit regression

Qian Qin, James P. Hobert

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

The use of MCMC algorithms in high dimensional Bayesian problems has become routine. This has spurred so-called convergence complexity analysis, the goal of which is to ascertain how the convergence rate of a Monte Carlo Markov chain scales with sample size, n, and/or number of covariates, p. This article provides a thorough convergence complexity analysis of Albert and Chib's [J. Amer. Statist. Assoc. 88 (1993) 669-679] data augmentation algorithm for the Bayesian probit regression model. The main tools used in this analysis are drift and minorization conditions. The usual pitfalls associated with this type of analysis are avoided by utilizing centered drift functions, which are minimized in high posterior probability regions, and by using a new technique to suppress high-dimensionality in the construction of minorization conditions. The main result is that the geometric convergence rate of the underlying Markov chain is bounded below 1 both as n → ∞ (with p fixed), and as p → ∞ (with n fixed). Furthermore, the first computable bounds on the total variation distance to stationarity are byproducts of the asymptotic analysis.
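For context, the algorithm under analysis is Albert and Chib's (1993) data augmentation Gibbs sampler for probit regression: latent variables z_i are drawn from truncated normals given the current β, and β is then drawn from a multivariate normal given z. The sketch below is illustrative only (it is not from the paper); it assumes a flat prior on β, and the function name `albert_chib_gibbs` is hypothetical.

```python
import numpy as np
from scipy.stats import truncnorm

def albert_chib_gibbs(X, y, n_iter=1000, seed=0):
    """Albert-Chib data augmentation sampler for probit regression
    (flat prior on beta, for illustration)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # z_i | beta, y_i ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1,
        # and to (-inf, 0) if y_i = 0; bounds are standardized around mu_i.
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior
        mean = XtX_inv @ (X.T @ z)
        beta = rng.multivariate_normal(mean, XtX_inv)
        draws[t] = beta
    return draws
```

The paper's results concern how fast this two-block chain forgets its starting point as n or p grows, so the sampler itself is the object of study rather than a contribution of the article.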

Original language: English (US)
Pages (from-to): 2320-2347
Number of pages: 28
Journal: Annals of Statistics
Volume: 47
Issue number: 4
DOIs: 10.1214/18-AOS1749
State: Published - Jan 1 2019

Keywords

  • Drift condition
  • Geometric ergodicity
  • High dimensional inference
  • Large p-small n
  • Markov chain Monte Carlo
  • Minorization condition

Cite this

Convergence complexity analysis of Albert and Chib's algorithm for Bayesian probit regression. / Qin, Qian; Hobert, James P.

In: Annals of Statistics, Vol. 47, No. 4, 01.01.2019, p. 2320-2347.

Research output: Contribution to journal › Article

@article{1e36fa69b46843fe867f6f0b29c6289a,
title = "Convergence complexity analysis of Albert and Chib's algorithm for Bayesian probit regression",
abstract = "The use of MCMC algorithms in high dimensional Bayesian problems has become routine. This has spurred so-called convergence complexity analysis, the goal of which is to ascertain how the convergence rate of a Monte Carlo Markov chain scales with sample size, n, and/or number of covariates, p. This article provides a thorough convergence complexity analysis of Albert and Chib's [J. Amer. Statist. Assoc. 88 (1993) 669-679] data augmentation algorithm for the Bayesian probit regression model. The main tools used in this analysis are drift and minorization conditions. The usual pitfalls associated with this type of analysis are avoided by utilizing centered drift functions, which are minimized in high posterior probability regions, and by using a new technique to suppress high-dimensionality in the construction of minorization conditions. The main result is that the geometric convergence rate of the underlying Markov chain is bounded below 1 both as n → ∞ (with p fixed), and as p → ∞ (with n fixed). Furthermore, the first computable bounds on the total variation distance to stationarity are byproducts of the asymptotic analysis.",
keywords = "Drift condition, Geometric ergodicity, High dimensional inference, Large p-small n, Markov chain Monte Carlo, Minorization condition",
author = "Qin, Qian and Hobert, {James P.}",
year = "2019",
month = "1",
day = "1",
doi = "10.1214/18-AOS1749",
language = "English (US)",
volume = "47",
pages = "2320--2347",
journal = "Annals of Statistics",
issn = "0090-5364",
publisher = "Institute of Mathematical Statistics",
number = "4",

}

TY - JOUR

T1 - Convergence complexity analysis of Albert and Chib's algorithm for Bayesian probit regression

AU - Qin, Qian

AU - Hobert, James P.

PY - 2019/1/1

Y1 - 2019/1/1

N2 - The use of MCMC algorithms in high dimensional Bayesian problems has become routine. This has spurred so-called convergence complexity analysis, the goal of which is to ascertain how the convergence rate of a Monte Carlo Markov chain scales with sample size, n, and/or number of covariates, p. This article provides a thorough convergence complexity analysis of Albert and Chib's [J. Amer. Statist. Assoc. 88 (1993) 669-679] data augmentation algorithm for the Bayesian probit regression model. The main tools used in this analysis are drift and minorization conditions. The usual pitfalls associated with this type of analysis are avoided by utilizing centered drift functions, which are minimized in high posterior probability regions, and by using a new technique to suppress high-dimensionality in the construction of minorization conditions. The main result is that the geometric convergence rate of the underlying Markov chain is bounded below 1 both as n → ∞ (with p fixed), and as p → ∞ (with n fixed). Furthermore, the first computable bounds on the total variation distance to stationarity are byproducts of the asymptotic analysis.

AB - The use of MCMC algorithms in high dimensional Bayesian problems has become routine. This has spurred so-called convergence complexity analysis, the goal of which is to ascertain how the convergence rate of a Monte Carlo Markov chain scales with sample size, n, and/or number of covariates, p. This article provides a thorough convergence complexity analysis of Albert and Chib's [J. Amer. Statist. Assoc. 88 (1993) 669-679] data augmentation algorithm for the Bayesian probit regression model. The main tools used in this analysis are drift and minorization conditions. The usual pitfalls associated with this type of analysis are avoided by utilizing centered drift functions, which are minimized in high posterior probability regions, and by using a new technique to suppress high-dimensionality in the construction of minorization conditions. The main result is that the geometric convergence rate of the underlying Markov chain is bounded below 1 both as n → ∞ (with p fixed), and as p → ∞ (with n fixed). Furthermore, the first computable bounds on the total variation distance to stationarity are byproducts of the asymptotic analysis.

KW - Drift condition

KW - Geometric ergodicity

KW - High dimensional inference

KW - Large p-small n

KW - Markov chain Monte Carlo

KW - Minorization condition

UR - http://www.scopus.com/inward/record.url?scp=85072247959&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85072247959&partnerID=8YFLogxK

U2 - 10.1214/18-AOS1749

DO - 10.1214/18-AOS1749

M3 - Article

AN - SCOPUS:85072247959

VL - 47

SP - 2320

EP - 2347

JO - Annals of Statistics

JF - Annals of Statistics

SN - 0090-5364

IS - 4

ER -