Abstract
Given a function f, various methods have been proposed for the important problem of computing the matrix-vector product f(A)b without explicitly forming the matrix f(A). Such methods were typically developed for a specific function f, a common case being the exponential. This paper discusses a procedure based on least squares polynomials that can, in principle, be applied to any continuous function f. The idea is to first approximate the function by a spline of the desired accuracy. A particular definition of the inner product of functions is then invoked that facilitates the computation of the least squares polynomial approximation to this spline. Since the function is approximated by a polynomial, the matrix A is referenced only through matrix-vector multiplications. In addition, the choice of the inner product makes it possible to avoid numerical integration. As an important application, we consider the case f(t) = √t with A a sparse, symmetric positive-definite matrix, which arises in sampling from a Gaussian process distribution. The covariance matrix of the distribution is defined using a compactly supported covariance function evaluated at a very large number of sites on a regular or irregular grid. We derive error bounds and present extensive numerical results illustrating the effectiveness of the proposed technique.
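To illustrate the general idea of computing f(A)b through a polynomial approximation of f, here is a minimal sketch in Python. It is not the paper's method: instead of the spline-based inner product construction described in the abstract, it fits a Chebyshev least-squares polynomial to f on an interval enclosing the spectrum of A (via NumPy's `chebfit`), and the function name `f_of_A_times_b`, the dense spectral-interval estimate, and the node count are illustrative choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def f_of_A_times_b(f, A, b, degree=30, num_nodes=200):
    """Approximate f(A) @ b for a symmetric matrix A, referencing A
    only through matrix-vector products.

    Illustrative sketch: fits a Chebyshev least-squares polynomial to f
    on an interval enclosing the spectrum of A (not the paper's
    spline-based inner product), then applies the polynomial to b via
    the Clenshaw recurrence.
    """
    # Enclose the spectrum of A in [lo, hi]. A dense eigensolve is used
    # here for brevity; for a large sparse A one would instead estimate
    # the extremal eigenvalues with a few Lanczos steps.
    eigs = np.linalg.eigvalsh(A)
    lo, hi = eigs.min(), eigs.max()

    # Chebyshev nodes t in [-1, 1] and their images x in [lo, hi];
    # fit f(x) as a Chebyshev series in t by linear least squares.
    t = np.cos(np.pi * (np.arange(num_nodes) + 0.5) / num_nodes)
    x = (lo + hi) / 2 + (hi - lo) / 2 * t
    coeffs = C.chebfit(t, f(x), degree)

    # Clenshaw recurrence for p(B) @ b, where B = (2A - (lo+hi)I)/(hi-lo)
    # maps the spectrum of A into [-1, 1]; only mat-vecs with A are used.
    Bv = lambda v: (2.0 * (A @ v) - (lo + hi) * v) / (hi - lo)
    y0 = np.zeros(b.shape, dtype=float)
    y1 = np.zeros(b.shape, dtype=float)
    for k in range(degree, 0, -1):
        y0, y1 = coeffs[k] * b + 2.0 * Bv(y0) - y1, y0
    return coeffs[0] * b + Bv(y0) - y1
```

For the sampling application mentioned in the abstract, with a symmetric positive-definite covariance matrix A, a draw from N(0, A) can be approximated as `f_of_A_times_b(np.sqrt, A, z)` where `z` is a standard normal vector, since A^{1/2}z has covariance A.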
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 195-222 |
| Number of pages | 28 |
| Journal | SIAM Journal on Scientific Computing |
| Volume | 33 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2011 |
Keywords
- Gaussian process
- Least squares polynomials
- Matrix function
- Sampling