Abstract: Existing distributed memory parallelizing compiler systems are mostly developed on the basis of shared memory systems. The parallelism recognition technologies of shared memory parallelizing compilers are suited to OpenMP code generation: they recognize all nested loops with the same technique, so parallelism cannot be efficiently exploited when these technologies are applied to distributed memory parallelizing compiler systems. To solve this problem, this paper proposes parallelism recognition technologies suited to MPI code generation in distributed memory parallelizing compiler systems. A new method is presented that classifies nested loops according to their structures and the characteristics of MPI parallel programs, together with a corresponding parallelism recognition technology for each class of nested loop. The experimental results show that, compared with distributed memory parallelizing compiler systems using existing parallelism recognition technologies, a compiler system using the proposed classification method and the corresponding recognition technologies recognizes parallel nested loops in the benchmark programs more effectively, and the performance of the automatically generated MPI code is improved by more than 20%.