LEA: Beyond Evolutionary Algorithms via Learned Optimization Strategy

Kai Wu, Penghui Liu, Jing Liu

Evolutionary algorithms (EAs) have emerged as a powerful framework for expensive black-box optimization, where obtaining better solutions at lower computational cost is both essential and challenging. The most critical obstacle is how to effectively exploit information about the target task to form an efficient optimization strategy. Current methods fall short because the optimization strategy is poorly represented and interacts inefficiently with the target task. To overcome these limitations, we design a learned EA (LEA) that moves from hand-designed optimization strategies to learned ones, covering not only hyperparameters but also update rules. Unlike traditional EAs, LEA adapts closely to the target task and obtains better solutions at lower computational cost. LEA can also effectively exploit low-fidelity information about the target task to form an efficient optimization strategy. Experimental results on one synthetic case, CEC 2013, and two real-world cases demonstrate the advantages of learned optimization strategies over human-designed baselines. In addition, LEA benefits from Graphics Processing Unit acceleration and runs 102 times faster than the unaccelerated EA when evolving 32 populations, each containing 6,400 individuals.
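To illustrate the idea of replacing hand-designed variation operators with a learned, population-vectorized update rule, here is a minimal sketch. It is not the authors' architecture: the "learned" strategy is stood in for by a small MLP with fixed random weights (in the paper such a strategy would be trained, e.g., on low-fidelity versions of the target task), and all names (learned_update, sphere, POP, DIM) are illustrative assumptions.

```python
# Minimal sketch: an EA generation whose update rule is a learned function
# rather than fixed mutation/crossover. Weights here are random placeholders;
# a real learned strategy would be meta-trained on related (low-fidelity) tasks.
import numpy as np

rng = np.random.default_rng(0)
POP, DIM = 64, 10                      # population size and problem dimension

def sphere(x):                         # toy black-box objective (minimize)
    return np.sum(x ** 2, axis=-1)

# Hypothetical learned update rule: an MLP mapping per-individual features
# (the individual, the population mean, the best-so-far) to an update step.
W1 = rng.normal(scale=0.1, size=(3 * DIM, 32))
W2 = rng.normal(scale=0.1, size=(32, DIM))

def learned_update(pop, best):
    feats = np.concatenate(
        [pop,
         np.broadcast_to(pop.mean(0), pop.shape),
         np.broadcast_to(best, pop.shape)], axis=-1)
    return np.tanh(feats @ W1) @ W2    # replaces hand-designed mutation/crossover

pop = rng.normal(size=(POP, DIM))
for gen in range(100):
    fit = sphere(pop)
    best = pop[np.argmin(fit)]
    cand = pop + learned_update(pop, best)                            # learned move
    pop = np.where(sphere(cand)[:, None] < fit[:, None], cand, pop)   # greedy selection

print("best fitness:", sphere(pop).min())
```

Because every step is expressed as dense array operations over the whole population, the same loop can be batched over many populations and mapped onto a GPU, which is the kind of vectorization the reported speed-up relies on.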
