Abstract: This paper presents a complete suite of systematic optimization methods for use in parallelizing compilers targeting multicomputers or computer clusters. The compilation scheme adopts two strategies: trading off parallelism against communication cost, and reducing and hiding communication overhead. By analyzing the data-communication properties required by the affine-function-based program partitioning approach, the authors find a method for exploiting parallelism in serial programs that satisfies the special requirements of distributed-memory machines. To minimize the total amount of data that must be communicated, they devise a global program partitioning method based on solving systems of linear equations. To better organize communication code and generate more efficient node programs, they devise a practical method based on linear inequalities for communication optimization and node program generation.
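As a minimal sketch of the kind of SPMD node program such a scheme produces (this example is illustrative, not the authors' compiler output): a serial loop is partitioned by an affine function that assigns iteration i to node i / block, so each node executes only the iterations it owns. The array size N, the loop body, the 1-D block distribution, and the use of MPI as the message-passing layer are all assumptions made for this sketch.

```c
/* Illustrative sketch of an affine-partitioned node program.
   Assumption: 1-D block distribution via the affine mapping
   p(i) = i / block; N, the loop body, and MPI are hypothetical
   choices, not taken from the paper. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv) {
    int rank, nprocs;
    double A[N], B[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Affine partition: node `rank` owns iterations [lb, ub). */
    int block = (N + nprocs - 1) / nprocs;
    int lb = rank * block;
    int ub = (lb + block < N) ? lb + block : N;

    for (int i = 0; i < N; i++)
        B[i] = (double)i;          /* replicated initialization */

    /* Each node runs only its slice of the original serial loop
       `for (i = 1; i < N; i++) A[i] = B[i-1] + B[i];`. */
    for (int i = (lb > 1 ? lb : 1); i < ub; i++)
        A[i] = B[i - 1] + B[i];

    printf("node %d computed iterations [%d, %d)\n", rank, lb, ub);

    MPI_Finalize();
    return 0;
}
```

Because B is replicated here, no communication is needed; with B distributed the same way as A, each node would first exchange the boundary element B[lb-1] with its neighbor, which is exactly the kind of communication the paper's linear-equation and linear-inequality methods aim to minimize and organize.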