Complexity and performance in parallel programming languages

Steven P. VanderWiel, Daphna Nathanson, David J. Lilja

Research output: Contribution to conference › Paper

13 Scopus citations

Abstract

Several parallel programming languages, libraries and environments have been developed to ease the task of writing programs for multiprocessors. Proponents of each approach often point out various language features that are designed to provide the programmer with a simple programming interface. However, virtually no data exists that quantitatively evaluates the relative ease of use of different parallel programming languages. The following paper borrows techniques from the software engineering field to quantify the complexity of three predominant programming models: shared memory, message passing and High-Performance Fortran. It is concluded that traditional software complexity metrics are effective indicators of the relative complexity of parallel programming languages. The impact of complexity on run-time performance is also discussed in the context of message-passing versus HPF on an IBM SP2.
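To illustrate the kind of traditional software complexity metric the abstract refers to, the sketch below computes a rough McCabe cyclomatic complexity (one plus the number of decision points) for two hypothetical code fragments. The fragments and the node set counted are illustrative assumptions, not the paper's actual benchmarks or metrics; they merely show how extra control flow in an explicit message-passing style can raise the measured complexity relative to a shared-memory style.

```python
import ast

# Node types counted as decision points for a rough McCabe
# cyclomatic complexity (complexity = 1 + decision points).
BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Return an approximate cyclomatic complexity for a code fragment."""
    count = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, BRANCH_NODES):
            count += 1
        elif isinstance(node, ast.BoolOp):
            # Each extra and/or operand adds one decision point.
            count += len(node.values) - 1
    return count

# Hypothetical shared-memory-style reduction: straight-line loop.
shared = """
total = 0
for x in data:
    total += x
"""

# Hypothetical message-passing-style reduction: explicit rank test
# and per-sender receive loop add control flow, raising the metric.
message_passing = """
if rank == 0:
    total = 0
    for src in range(1, nprocs):
        total += recv(src)
else:
    send(0, local_total)
"""

print(cyclomatic_complexity(shared))           # loop only
print(cyclomatic_complexity(message_passing))  # branch + loop
```

Under this count, the message-passing fragment scores higher purely because of its added branching, which is the intuition behind using such metrics to compare programming models.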

Original language: English (US)
Pages: 3-12
Number of pages: 10
State: Published - Jan 1 1997
Event: Proceedings of 1997 2nd International Workshop on High-Level Programming Models and Supportive Environments - Geneva, Switzerland
Duration: Apr 1 1997 - Apr 1 1997



Cite this

VanderWiel, S. P., Nathanson, D., & Lilja, D. J. (1997). Complexity and performance in parallel programming languages. Paper presented at the 2nd International Workshop on High-Level Programming Models and Supportive Environments, Geneva, Switzerland, pp. 3-12.