Abstract
A number of parallel formulations of the dense matrix multiplication algorithm have been developed. For an arbitrarily large number of processors, any of these algorithms or their variants can provide near-linear speedup for sufficiently large matrix sizes, and none of the algorithms can be clearly claimed to be superior to the others. In this paper we analyze the performance and scalability of a number of parallel formulations of the matrix multiplication algorithm and predict the conditions under which each formulation is better than the others. We present a parallel formulation for hypercube and related architectures that performs better than any of the schemes described in the literature so far for a wide range of matrix sizes and numbers of processors. The superior performance and the analytical scalability expressions for this algorithm are verified through experiments on the Thinking Machines Corporation's CM-5™ parallel computer for up to 512 processors.
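The paper's own formulations are not reproduced on this page, but a minimal serial sketch of a classic 2D (checkerboard) formulation in the style of Cannon's algorithm illustrates the kind of parallel matrix multiplication whose scalability is analyzed. The √p × √p process grid, the block size, and the NumPy-based simulation below are illustrative assumptions, not the algorithm proposed in the paper.

```python
# Hypothetical sketch: serial NumPy simulation of Cannon's algorithm, a classic
# 2D (checkerboard) parallel formulation of dense matrix multiplication.
# This is NOT the paper's proposed formulation; it only illustrates the style
# of block decomposition and block shifting that such formulations use.
import numpy as np

def cannon_matmul(A, B, q):
    """Multiply A and B on a simulated q x q process grid (p = q*q processes)."""
    n = A.shape[0]
    assert A.shape == B.shape == (n, n) and n % q == 0
    b = n // q  # block size held by each simulated process

    # Partition A and B into q x q grids of b x b blocks.
    Ab = [[A[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(q)] for i in range(q)]
    Bb = [[B[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(q)] for i in range(q)]
    Cb = [[np.zeros((b, b)) for _ in range(q)] for _ in range(q)]

    # Initial alignment: shift row i of A left by i, column j of B up by j.
    Ab = [[Ab[i][(j + i) % q] for j in range(q)] for i in range(q)]
    Bb = [[Bb[(i + j) % q][j] for j in range(q)] for i in range(q)]

    # q compute-and-shift steps: multiply local blocks, then shift A left, B up.
    for _ in range(q):
        for i in range(q):
            for j in range(q):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]
        Ab = [[Ab[i][(j + 1) % q] for j in range(q)] for i in range(q)]
        Bb = [[Bb[(i + 1) % q][j] for j in range(q)] for i in range(q)]

    return np.block(Cb)

if __name__ == "__main__":
    n, q = 8, 4  # 16 simulated processes arranged as a 4 x 4 grid
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    assert np.allclose(cannon_matmul(A, B, q), A @ B)
```

In a real distributed implementation each block would live on its own processor and the shifts would be nearest-neighbor messages; it is the relative cost of these communication steps versus the local block multiplications that drives the scalability comparisons in the paper.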
Original language | English (US)
---|---
Title of host publication | Algorithms and Applications
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 115-123
Number of pages | 9
ISBN (Electronic) | 0849389836
DOIs |
State | Published - 1993
Event | 1993 International Conference on Parallel Processing, ICPP 1993 - Syracuse, United States. Duration: Aug 16 1993 → Aug 20 1993
Publication series
Name | Proceedings of the International Conference on Parallel Processing
---|---
Volume | 3
ISSN (Print) | 0190-3918
Conference
Conference | 1993 International Conference on Parallel Processing, ICPP 1993
---|---
Country/Territory | United States
City | Syracuse
Period | 8/16/93 → 8/20/93
Bibliographical note
Publisher Copyright: © 1993 IEEE.