Because of OpenMP's rules for dynamic directive nesting and binding, the thread context of some regions in an OpenMP program can be fully determined only at runtime. Nevertheless, static analysis at compile time can partly determine the nesting type, and this information can be passed to later compilation phases to guide translation and optimization. Since binding and nesting may cross procedure boundaries through calls, local and global (intraprocedural) analyses are insufficient; interprocedural analysis provides the required capability. By integrating nesting information into a traditional interprocedural framework, the nesting types of procedures are propagated along the call graph, and later translation and optimization phases combine this global information with local information inside each procedure to determine nesting types at compile time. The results demonstrate that in typical scientific and engineering workloads the nesting type is largely determinable at compile time, and that exploiting this information can reduce both runtime overhead and code size.