Highly scalable parallel algorithms for sparse matrix factorization

Research output: Contribution to journal › Article › peer-review

131 Scopus citations


In this paper, we describe scalable parallel algorithms for symmetric sparse matrix factorization, analyze their performance and scalability, and present experimental results for up to 1,024 processors on a Cray T3D parallel computer. Through our analysis and experimental results, we demonstrate that our algorithms substantially improve the state of the art in the parallel direct solution of sparse linear systems, both in terms of scalability and overall performance. It is well known that dense matrix factorization scales well and can be implemented efficiently on parallel computers. In this paper, we present the first algorithms to factor a wide class of sparse matrices (including those arising from two- and three-dimensional finite element problems) that are asymptotically as scalable as dense matrix factorization algorithms on a variety of parallel architectures. Our algorithms incur less communication overhead and are more scalable than any previously known parallel formulation of sparse matrix factorization. Although this paper discusses Cholesky factorization of symmetric positive definite matrices, the algorithms can be adapted for solving sparse linear least squares problems and for Gaussian elimination of diagonally dominant matrices that are almost symmetric in structure. An implementation of one of our sparse Cholesky factorization algorithms delivers up to 20 GFlops on a Cray T3D for medium-size structural engineering and linear programming problems. To the best of our knowledge, this is the highest performance ever obtained for sparse Cholesky factorization on any supercomputer.
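As a point of reference for the kernel being parallelized, a serial dense Cholesky factorization can be written in a few lines. This is an illustrative sketch only, not the paper's parallel sparse algorithm; the function name and layout are assumptions for the example.

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L @ L^T.

    Serial, dense, right-looking sketch for a symmetric positive
    definite matrix A given as a list of row lists. The paper's
    contribution is a scalable parallel formulation of this kernel
    exploiting sparsity; this example only shows the arithmetic.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: subtract contributions of prior columns.
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            # Off-diagonal entries of column j.
            L[i][j] = (A[i][j]
                       - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L
```

In the sparse case, most entries of A (and of L, after fill-in) are zero, so both the arithmetic and the interprocessor communication can be restricted to the nonzero structure, which is what makes the scalability results above possible.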

Original language: English (US)
Pages (from-to): 502-520
Number of pages: 19
Journal: IEEE Transactions on Parallel and Distributed Systems
Issue number: 5
State: Published - 1997

Bibliographical note

Funding Information:
This work was supported by the U.S. National Science Foundation under grant number CCR-9423082, by the U.S. Army Research Office under contract DA/DAAH04-95-1-0538, and by the U.S. Army High Performance Computing Research Center under the auspices of the Department of the Army, Army Research Laboratory cooperative agreement number DAAH04-95-2-0003/contract number DAAH04-95-C-0008. The content does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred. Access to computing facilities was provided by the Minnesota Supercomputer Institute, Cray Research Inc., and the Pittsburgh Supercomputing Center. Related papers are available via WWW at URL: http://www.cs.umn.edu.


Keywords:

  • Cholesky factorization
  • High performance computing
  • Parallel processing
  • Parallel scientific computing
  • Scalability analysis
  • Sparse linear systems
  • Sparse matrices

