Parallel matrix-vector product using approximate hierarchical methods

Ananth Grama, Vipin Kumar, Ahmed Sameh

Research output: Contribution to journal › Conference article › Peer-review



Matrix-vector products (mat-vecs) form the core of iterative methods used for solving dense linear systems. Often, these systems arise in the solution of integral equations used in electromagnetics, heat transfer, and wave propagation. In this paper, we present a parallel approximate method for computing mat-vecs used in the solution of integral equations. We use this method to compute dense mat-vecs involving hundreds of thousands of elements. The combined speedups obtained from the use of approximate methods and parallel processing represent an improvement of several orders of magnitude over exact mat-vecs on uniprocessors. We demonstrate that our parallel formulation incurs minimal parallel processing overhead and scales up to a large number of processors. We study the impact of varying the accuracy of the approximate mat-vec on overall time and on parallel efficiency. Experimental results are presented for 256-processor Cray T3D and Thinking Machines CM-5 parallel computers. We have achieved computation rates in excess of 5 GFLOPS on the T3D.
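The abstract does not spell out the hierarchical scheme, but the general idea behind approximate hierarchical mat-vecs can be illustrated with a Barnes-Hut-style sketch: points are organized into a cluster tree, and the contribution of a well-separated cluster is approximated by a single kernel evaluation against its aggregate "charge" instead of one evaluation per point. The 1-D kernel, the leaf-size threshold, and the `theta` separation criterion below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Illustrative kernel K(x, y); dense mat-vec would be y = A q with
# A[i, j] = kernel(x[i], x[j]).  This kernel is an assumption for the sketch.
def kernel(x, y):
    return 1.0 / (1.0 + abs(x - y))

class Node:
    """A cluster of points: stores extent, center, and aggregate charge."""
    def __init__(self, idx, x, q):
        self.idx = idx                         # indices of points in cluster
        self.lo, self.hi = x[idx].min(), x[idx].max()
        self.center = x[idx].mean()
        self.qsum = q[idx].sum()               # monopole (aggregate) term
        self.children = []
        if len(idx) > 8:                       # leaf-size threshold (assumption)
            mid = 0.5 * (self.lo + self.hi)
            left, right = idx[x[idx] <= mid], idx[x[idx] > mid]
            if len(left) and len(right):
                self.children = [Node(left, x, q), Node(right, x, q)]

def _eval(node, x, q, xi, theta):
    size = node.hi - node.lo
    dist = abs(xi - node.center)
    if dist > 0 and size / dist < theta:
        # Well-separated cluster: one kernel evaluation replaces len(idx) of them.
        return kernel(xi, node.center) * node.qsum
    if node.children:
        return sum(_eval(c, x, q, xi, theta) for c in node.children)
    # Near field: evaluate exactly over the leaf's points.
    return sum(kernel(xi, x[j]) * q[j] for j in node.idx)

def approx_matvec(x, q, theta=0.5):
    """Approximate y = A q; smaller theta means higher accuracy, more work."""
    root = Node(np.arange(len(x)), x, q)
    return np.array([_eval(root, x, q, xi, theta) for xi in x])
```

The accuracy/time trade-off studied in the paper corresponds to the `theta` parameter here: tightening the separation criterion forces more near-field (exact) evaluations, reducing error at the cost of more kernel evaluations per output element.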

Original language: English (US)
Pages (from-to): 2065-2084
Number of pages: 20
Journal: Proceedings of the ACM/IEEE Supercomputing Conference
State: Published - Dec 1 1995
Event: Proceedings of the 1995 ACM/IEEE Supercomputing Conference, Part 2 (of 2) - San Diego, CA, USA
Duration: Dec 3 1995 - Dec 8 1995
