DYNAMIC PROCESSOR SELF-SCHEDULING FOR GENERAL PARALLEL NESTED LOOPS.

Zhixi Fang, Pen Chung Yew, Peiyi Tang, Chuan Qi Zhu

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution

19 Scopus citations

Abstract

A complete dynamic processor self-scheduling approach for general nested loops is presented. General nested loops contain both parallel and serial loops, and the execution times of their iterations can vary widely. In the authors' scheme, an instance of an innermost loop is the basic unit in a precedence graph. An instance is active when it is ready to execute, and completing an instance activates instances of other innermost loops. The basic idea is to keep all processors busy as long as any active instance exists. By effectively using the synchronization primitives provided by a parallel processing system, the overhead of the approach can be significantly reduced. The data structure needed to represent a general nested loop is also presented.
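The scheme described in the abstract can be sketched as a shared pool of active loop instances served by self-scheduling workers. The sketch below is a hypothetical illustration, not the authors' implementation: instance names, the `preds` precedence map, and the use of a lock plus a queue as the "synchronization primitives" are all assumptions. Each instance tracks its count of uncompleted predecessors; a worker that finishes an instance decrements its successors' counts and activates (enqueues) any that reach zero, so processors stay busy while active instances remain.

```python
import threading
import queue

def self_schedule(instances, preds, body, num_workers=4):
    """Execute body(i) for every instance i, honoring the precedence map preds.

    instances: list of instance ids (instances of innermost loops)
    preds: dict mapping an instance id to the ids it must wait for
    """
    # Build successor lists and unsatisfied-predecessor counts.
    succs = {i: [] for i in instances}
    pred_count = {}
    for i in instances:
        pred_count[i] = len(preds.get(i, []))
        for p in preds.get(i, []):
            succs[p].append(i)

    active = queue.Queue()            # pool of active (ready) instances
    for i in instances:
        if pred_count[i] == 0:        # initially active instances
            active.put(i)

    lock = threading.Lock()           # protects the shared counters
    remaining = [len(instances)]      # instances not yet completed

    def worker():
        while True:
            i = active.get()
            if i is None:             # sentinel: all work done
                active.put(None)      # wake the next idle worker too
                return
            body(i)                   # run the innermost-loop instance
            with lock:
                for s in succs[i]:    # completion activates successors
                    pred_count[s] -= 1
                    if pred_count[s] == 0:
                        active.put(s)
                remaining[0] -= 1
                if remaining[0] == 0:
                    active.put(None)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Workers pull the next active instance themselves rather than waiting for a central scheduler, which is what makes the scheme robust to widely varying iteration times.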

Original language: English (US)
Title of host publication: Proceedings of the International Conference on Parallel Processing
Editors: Sartaj K. Sahni
Publisher: Pennsylvania State Univ Press
Pages: 1-10
Number of pages: 10
ISBN (Print): 0271006080
State: Published - Dec 1 1987
Event: Proc Int Conf Parallel Process 1987 - University Park, PA, USA
Duration: Aug 17 1987 to Aug 21 1987

Publication series

Name: Proceedings of the International Conference on Parallel Processing
ISSN (Print): 0190-3918

Other

Other: Proc Int Conf Parallel Process 1987
City: University Park, PA, USA
Period: 8/17/87 to 8/21/87


  • Cite this

    Fang, Z., Yew, P. C., Tang, P., & Zhu, C. Q. (1987). DYNAMIC PROCESSOR SELF-SCHEDULING FOR GENERAL PARALLEL NESTED LOOPS. In S. K. Sahni (Ed.), Proceedings of the International Conference on Parallel Processing (pp. 1-10). (Proceedings of the International Conference on Parallel Processing). Pennsylvania State Univ Press.