Existing techniques for sharing the processing resources in multiprogrammed shared-memory multiprocessors, such as time-sharing, space-sharing, and gang-scheduling, typically sacrifice the performance of individual parallel applications to improve overall system utilization. We present a new processor allocation technique called Loop-Level Process Control (LLPC) that dynamically adjusts the number of processors an application is allowed to use for the execution of each parallel section of code, based on the current system load. This approach exploits the maximum parallelism possible for each application without overloading the system. We implement our scheme on a Silicon Graphics Challenge multiprocessor system and evaluate its performance using applications from the Perfect Club benchmark suite and synthetic benchmarks. Our approach shows significant improvements over traditional time-sharing and gang-scheduling. It has performance comparable to, or slightly better than, static space-sharing, but our strategy is more robust since, unlike static space-sharing, it does not require a priori knowledge of the applications' parallelism characteristics.
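The core of LLPC is a per-loop decision: immediately before each parallel section, the application checks the current system load and claims only the processors that load leaves free. The paper's exact policy is not given in this abstract, so the sketch below uses an assumed rule (idle processors, floored at one) and a hypothetical `llpc_allocation` helper purely for illustration:

```python
TOTAL_PROCS = 16  # assumed machine size, for illustration only


def llpc_allocation(current_load, total_procs=TOTAL_PROCS):
    """Choose a processor count for the next parallel section.

    Illustrative policy (an assumption, not the paper's exact rule):
    take whatever processors the current system load leaves idle,
    but never fewer than one, so the application always makes progress
    without overloading the system.
    """
    idle = total_procs - current_load
    return max(1, min(total_procs, idle))
```

In use, each parallel loop would call such a helper just before spawning its worker threads, so a lightly loaded system yields wide parallelism while a saturated one degrades gracefully toward sequential execution.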
IEEE Transactions on Parallel and Distributed Systems
Published - 1997
Bibliographical note: Funding Information
We would like to thank Robert Glamm for implementing the Fetch-and-Add subroutine, and Steve Soltis and Steve VanderWiel for assistance with using the SGI’s shared memory. This work was performed while Kelvin Yue was with the Department of Computer Science at the University of Minnesota, and was supported in part by the U.S. National Science Foundation under grant no. MIP-9221900, and equipment grant no. CDA-9414015.
Keywords:
- Operating system
- Parallel loop scheduling
- Processor allocation
- Shared-memory multiprocessors