An approach to efficient interprocedural analysis for program parallelization and restructuring is presented. Such analysis is needed to parallelize loops that contain procedure calls. Our approach captures the effect of a call on data dependences by propagating precise array-subscript information from the called procedure. This allows the optimizing compiler to choose a data dependence test that is both efficient and precise, depending on the complexity of the array reference patterns. Existing methods do not provide such flexibility and hence may suffer from either imprecision or inefficiency. The paper also discusses the use of classical summary information in several important transformations for program parallelization. Experimental results are reported.
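To illustrate the idea behind the abstract (this is a hedged sketch, not the paper's actual algorithm): once a callee's array-subscript expressions are propagated to the call site, the compiler can apply an exact, inexpensive dependence test when the subscripts are affine, for example the classic GCD test.

```python
# Sketch: GCD dependence test on affine subscripts propagated from a callee.
# The specific accesses below (write A(2*i), read A(2*i + 1)) are invented
# for illustration and are not taken from the paper.
from math import gcd

def gcd_test(a1: int, c1: int, a2: int, c2: int) -> bool:
    """Can the accesses a1*i + c1 (write) and a2*j + c2 (read) ever
    touch the same array element for integer i, j?  The equation
    a1*i - a2*j = c2 - c1 has an integer solution iff gcd(a1, a2)
    divides c2 - c1.  Returns False only when dependence is
    provably impossible."""
    return (c2 - c1) % gcd(a1, a2) == 0

# Callee writes A(2*i) and reads A(2*i + 1): gcd(2, 2) = 2 does not
# divide 1, so the iterations are independent and the enclosing loop
# containing the call can be parallelized.
print(gcd_test(2, 0, 2, 1))  # False -> no dependence
print(gcd_test(1, 0, 2, 0))  # True  -> dependence possible
```

With only a coarse summary ("the callee modifies A"), the compiler would have to assume a dependence; propagating the subscripts themselves is what enables the precise test.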
|Title of host publication
|Proceedings of the ACM/SIGPLAN Conference on Parallel Programming
|Subtitle of host publication
|Experience with Applications, Languages and Systems, PPEALS 1988
|Richard L. Wexelblat
|Association for Computing Machinery
|Number of pages
|Published - Jan 1 1988
|1988 ACM/SIGPLAN Conference on Parallel Programming: Experience with Applications, Languages and Systems, PPEALS 1988 - New Haven, United States
Duration: Jul 19 1988 → Jul 21 1988
|Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP
Bibliographical note: Funding Information:
This work was supported in part by the National Science Foundation under Grant No. US NSF MIF'-8410110 and the US Department of Energy under Grant No. US DOE DEFG02-85ER25001, and by donations from the IBM Corporation and the CDC Corporation.