Abstract
A common problem in statistics is to compute sample vectors from a multivariate Gaussian distribution with zero mean and a given covariance matrix A. A canonical approach to the problem is to compute vectors of the form y = Sz, where S is the Cholesky factor or square root of A, and z is a standard normal vector. When A is large, such an approach becomes computationally expensive. This paper considers preconditioned Krylov subspace methods to perform this task. The Lanczos process provides a means to approximate A^{1/2}z for any vector z from an m-dimensional Krylov subspace. The main contribution of this paper is to show how to enhance the convergence of the process via preconditioning. Both incomplete Cholesky preconditioners and approximate inverse preconditioners are discussed. It is argued that the latter class of preconditioners has an advantage in the context of sampling. Numerical tests, performed with stationary covariance matrices used to model Gaussian processes, illustrate the dramatic improvement in computation time that can result from preconditioning.
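To make the two approaches concrete, here is a minimal NumPy sketch (not code from the paper): `sample_cholesky` draws y = Sz via a dense Cholesky factorization, while `lanczos_sqrt_apply` approximates A^{1/2}z from an m-dimensional Krylov subspace using the standard Lanczos relation A^{1/2}z ≈ ||z|| V_m T_m^{1/2} e_1. Both function names are illustrative, and the sketch shows only the unpreconditioned baseline; the paper's contribution is accelerating the Lanczos approach with incomplete Cholesky and approximate inverse preconditioners.

```python
import numpy as np

def sample_cholesky(A, rng):
    """Draw y ~ N(0, A) as y = S z, where A = S S^T is the Cholesky factorization."""
    S = np.linalg.cholesky(A)            # dense factorization, O(n^3): costly for large A
    z = rng.standard_normal(A.shape[0])
    return S @ z

def lanczos_sqrt_apply(A, z, m):
    """Approximate A^{1/2} z from the m-dimensional Krylov subspace K_m(A, z).

    Runs m Lanczos steps to build an orthonormal basis V and a tridiagonal T,
    then returns ||z|| * V @ T^{1/2} @ e_1 (unpreconditioned baseline).
    """
    n = z.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    V[:, 0] = z / np.linalg.norm(z)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0.0:           # invariant subspace reached; truncate and stop
                V, alpha, beta = V[:, : j + 1], alpha[: j + 1], beta[:j]
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # T^{1/2} e_1 via the eigendecomposition of the small tridiagonal matrix;
    # eigenvalues are clipped at zero to guard against round-off.
    evals, evecs = np.linalg.eigh(T)
    sqrtT_e1 = evecs @ (np.sqrt(np.maximum(evals, 0.0)) * evecs[0, :])
    return np.linalg.norm(z) * (V @ sqrtT_e1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stationary (exponential) covariance on a 1-D grid: a small stand-in for
    # the Gaussian-process covariance matrices used in the paper's tests.
    x = np.linspace(0.0, 1.0, 200)
    A = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)
    z = rng.standard_normal(x.size)
    # Reference A^{1/2} z via a full eigendecomposition, for comparison only.
    evals, evecs = np.linalg.eigh(A)
    exact = evecs @ (np.sqrt(evals) * (evecs.T @ z))
    approx = lanczos_sqrt_apply(A, z, m=30)
    print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

Note that y = Sz and y = A^{1/2}z are generally different vectors, but both have covariance A, which is why the Lanczos approximation of the matrix square root is a valid sampler.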
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | A588-A608 |
| Journal | SIAM Journal on Scientific Computing |
| Volume | 36 |
| Issue number | 2 |
| DOIs | |
| State | Published - 2014 |
Keywords
- Covariance matrix
- Gaussian processes
- Krylov subspace methods
- Lanczos process
- Matrix square root
- Preconditioning
- Sampling
- Sparse approximate inverse