### Abstract

Let π denote the intractable posterior density that results when the likelihood from a multivariate linear regression model with errors from a scale mixture of normals is combined with the standard non-informative prior. There is a simple data augmentation algorithm (based on latent data from the mixing density) that can be used to explore π. Let h and d denote the mixing density and the dimension of the regression model, respectively. Hobert et al. (2018) have recently shown that, if h converges to 0 at the origin at an appropriate rate, and ∫_0^∞ u^{d/2} h(u) du < ∞, then the Markov chains underlying the data augmentation (DA) algorithm and an alternative Haar parameter expanded DA (PX-DA) algorithm are both geometrically ergodic. Their results are established using probabilistic techniques based on drift and minorization conditions. In this paper, spectral analytic techniques are used to establish that something much stronger than geometric ergodicity often holds. In particular, it is shown that, under simple conditions on h, the Markov operators defined by the DA and Haar PX-DA Markov chains are trace-class, i.e., compact with summable eigenvalues. Many standard mixing densities satisfy the conditions developed in this paper. Indeed, the new results imply that the DA and Haar PX-DA Markov operators are trace-class whenever the mixing density is generalized inverse Gaussian, log-normal, Fréchet (with shape parameter larger than d/2), or inverted Gamma (with shape parameter larger than d/2).
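The paper itself is theoretical, but the DA algorithm it analyzes is easy to sketch. Below is a minimal, hypothetical illustration for the special case of a univariate regression (so the scale matrix reduces to a scalar σ²) with Student-t errors: the t_ν distribution is a scale mixture of normals whose mixing density h is Gamma(ν/2, ν/2), which satisfies the moment condition above. Each iteration alternates an I-step (draw the latent mixing variables u_i given the parameters) with a P-step (draw (σ², β) given u under the flat prior π(β, σ²) ∝ 1/σ²). All variable names and the simulated data are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: univariate regression with t_5 errors.
n, p, nu = 200, 2, 5.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, -2.0])
y = X @ beta_true + rng.standard_t(nu, size=n)

beta, sigma2 = np.zeros(p), 1.0
samples = []
for it in range(3000):
    # I-step: u_i | beta, sigma2, y  ~  Gamma((nu+1)/2, rate=(nu + r_i^2/sigma2)/2)
    r = y - X @ beta
    u = rng.gamma((nu + 1.0) / 2.0, 2.0 / (nu + r**2 / sigma2))
    # P-step: (sigma2, beta) | u, y under the flat prior pi(beta, sigma2) ∝ 1/sigma2
    W = X * u[:, None]                     # weighted design matrix
    XtWX = X.T @ W
    beta_hat = np.linalg.solve(XtWX, W.T @ y)
    ss = np.sum(u * (y - X @ beta_hat) ** 2)
    sigma2 = (ss / 2.0) / rng.gamma((n - p) / 2.0)   # inverse-gamma draw
    L = np.linalg.cholesky(XtWX / sigma2)            # Cholesky of the precision
    beta = beta_hat + np.linalg.solve(L.T, rng.normal(size=p))
    if it >= 500:                                    # discard burn-in
        samples.append(beta.copy())
samples = np.asarray(samples)
```

The paper's trace-class result for the inverted Gamma / Gamma-type mixing families implies this chain's Markov operator is compact with summable eigenvalues, so the sampler mixes rapidly; the Haar PX-DA variant adds an extra scale move between the two steps.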

| Original language | English (US) |
|---|---|
| Pages (from-to) | 335-345 |
| Number of pages | 11 |
| Journal | Journal of Multivariate Analysis |
| Volume | 166 |
| DOIs | |
| State | Published - Jul 2018 |
| Externally published | Yes |

### Keywords

- Compact operator
- Data augmentation algorithm
- Haar PX-DA algorithm
- Heavy-tailed distribution
- Markov operator
- Scale mixture
- Trace-class operator