Abstract: The preference-inspired co-evolutionary algorithm (PICEA-g) uses goal vectors as preferences and takes the number of goal vectors that an individual dominates as its fitness value, thereby effectively reducing the proportion of non-dominated solutions in high-dimensional objective spaces. However, the obtained set approximates the entire Pareto front rather than the Pareto-optimal solutions that decision makers are actually interested in, which leads to performance degradation and wasted computational resources on high-dimensional optimization problems. Therefore, a preference vector guided co-evolutionary algorithm for many-objective optimization (ASF-PICEA-g) is proposed in this study. First, an extended achievement scalarizing function (ASF) is used to map the ideal point of the evolving population onto the objective space, and the mapped point serves as a preference vector that guides the evolutionary direction of the population. Then, a preference-region selection strategy obtains two temporary points that define the decision maker's region of interest (ROI); these points determine the upper and lower bounds within which random preference (goal) vector sets are generated, and the co-evolutionary mechanism guides the population to converge toward the ROI. ASF-PICEA-g is compared with g-NSGA-II and r-NSGA-II on the WFG and DTLZ benchmark test functions with 3 to 20 objectives. The experimental results demonstrate that ASF-PICEA-g performs soundly on the WFG test suite, obtaining a better solution set than the comparison algorithms; on the DTLZ test suite it is slightly better than the comparison algorithms, especially with 10 or more objectives. In addition, ASF-PICEA-g exhibits better stability, and the obtained solution set has better convergence and distribution.
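A minimal sketch of the two building blocks the abstract names, assuming the common max-form achievement scalarizing function and a simplified dominance-count fitness; the weight values, goal sets, and function names below are illustrative assumptions, not taken from the paper.

```python
from typing import Sequence


def asf(f: Sequence[float], z_ideal: Sequence[float],
        w: Sequence[float], eps: float = 1e-10) -> float:
    """Max-form achievement scalarizing function:
    ASF(f) = max_i (f_i - z*_i) / w_i, smaller is better
    (minimization assumed throughout)."""
    return max((fi - zi) / max(wi, eps)
               for fi, zi, wi in zip(f, z_ideal, w))


def goal_dominance_count(f: Sequence[float],
                         goals: Sequence[Sequence[float]]) -> int:
    """Simplified PICEA-g-style fitness: the number of goal
    vectors that objective vector f meets in every objective."""
    return sum(all(fi <= gi for fi, gi in zip(f, g)) for g in goals)


# Illustrative use: rank candidates toward a preference direction.
z_ideal = [0.0, 0.0, 0.0]
w = [0.5, 0.3, 0.2]                       # assumed preference weights
candidates = [[0.4, 0.3, 0.3], [0.2, 0.5, 0.3], [0.3, 0.3, 0.4]]
best = min(candidates, key=lambda f: asf(f, z_ideal, w))

goals = [[0.5, 0.5, 0.5], [0.3, 0.3, 0.3]]  # assumed goal vectors
fitness = goal_dominance_count(best, goals)
```

In the full algorithm the goal vectors would be co-evolved inside the ROI bounds rather than fixed as here; this sketch only shows how a scalarized preference direction and a dominance-count fitness interact.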