Abstract: To analyze the convergence of differential evolution (DE) and to enhance its capability and stability, this paper first defines a differential operator (DO) as a random mapping from the solution space to the Cartesian product of the solution space, and proves the asymptotic convergence of DE based on the random contraction mapping theorem of random functional analysis. Then, inspired by the “quasi-physical personification algorithm”, this paper proposes an improved differential evolution with multi-strategy cooperative evolution (MEDE), motivated by the observation that the evolution strategies of DE share a common structure but exhibit different search characteristics. The asymptotic convergence of MEDE is established through the definition of a multi-strategy differential operator (MDO), and the intrinsic properties of MEDE are analyzed. Simulation results on five classical benchmark functions show that, compared with the original DE, DEfirDE, and DEfirSPX, MEDE has clear advantages in convergence rate, solution quality, and adaptability. It is well suited to solving complex high-dimensional numerical optimization problems.
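To make concrete what "each evolution strategy of DE shares a common structure but exhibits different search characteristics" refers to, the following is a minimal illustrative sketch (not the paper's MEDE method) of three standard DE mutation strategies, DE/rand/1, DE/best/1, and DE/current-to-best/1, written in Python with assumed parameter names `pop`, `best_idx`, and `F`:

```python
import numpy as np

def mutate(pop, best_idx, F, strategy="rand/1"):
    """Produce one mutant vector per individual using a classic DE strategy.

    pop      : (NP, D) array of the current population
    best_idx : index of the best individual in pop
    F        : scale factor, typically in (0, 1]
    strategy : "rand/1", "best/1", or "current-to-best/1"
    """
    NP, _ = pop.shape
    mutants = np.empty_like(pop)
    for i in range(NP):
        # choose three mutually distinct indices, all different from i
        r1, r2, r3 = np.random.choice(
            [j for j in range(NP) if j != i], size=3, replace=False)
        if strategy == "rand/1":
            # strong exploration, slower convergence
            mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
        elif strategy == "best/1":
            # fast convergence, weaker population diversity
            mutants[i] = pop[best_idx] + F * (pop[r1] - pop[r2])
        else:  # "current-to-best/1": a compromise between the two
            mutants[i] = (pop[i] + F * (pop[best_idx] - pop[i])
                          + F * (pop[r1] - pop[r2]))
    return mutants
```

All three strategies share the same differential form, a base vector perturbed by scaled difference vectors, which is the common structure exploited by a multi-strategy cooperative scheme that lets complementary strategies compensate for each other's weaknesses.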