Engineering design optimization plays a key role in the development and manufacturing of energy-efficient products in a wide range of industries. However, practical design spaces are highly multi-modal, with complex interactions among a large number of control variables, which renders traditional experimental design extremely cumbersome and costly.
Virtual simulation-driven design optimization has played an increasingly prominent role in narrowing large design spaces down to the most promising designs for experimental prototyping, leading to significant cost savings. However, the current state-of-the-art optimization approaches typically used by industry, such as design of experiments (DoE) and genetic algorithms (GA), suffer from low accuracy, lack of robustness, and slow convergence. They therefore require a large number of simulations sampled from the design space, resulting in long time-to-design (months) and high computational cost. To circumvent these issues, Argonne has developed an advanced end-to-end optimization algorithm known as ActivO, which reduces design times and costs by a factor of 5-10 relative to current industry-standard optimization techniques. The tool employs active learning, on-the-fly adaptive exploration and exploitation of large design spaces, and embedded machine learning (ML) surrogate models.
In ActivO, a two-way coupling exists between the simulations and the ML models: ML guides the optimization automatically by indicating which design points to simulate in the next iteration, while the new simulation data, in turn, are used to re-train and refine the ML surrogate models, yielding progressively more accurate projections of where the global design optimum lies.
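The iterative coupling can be illustrated with a minimal sketch. The 1-D objective below is a hypothetical stand-in for an expensive simulation, and the polynomial surrogate is only an illustrative placeholder for ActivO's actual ML models; the structure of the loop, not the specific models, is the point.

```python
import numpy as np

# Hypothetical 1-D "simulation": an expensive objective to maximize.
def run_simulation(x):
    return np.sin(3 * x) * np.exp(-0.1 * x**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 5)                 # initial design points
y = np.array([run_simulation(x) for x in X])

for iteration in range(10):
    # Surrogate stand-in: fit a low-order polynomial to all data so far.
    coeffs = np.polyfit(X, y, deg=min(4, len(X) - 1))
    surrogate = np.poly1d(coeffs)

    # ML guides the optimization: pick the design point the surrogate
    # projects to be best, then run the real simulation there.
    candidates = np.linspace(-3, 3, 200)
    x_next = candidates[np.argmax(surrogate(candidates))]

    # The new simulation result re-trains the surrogate on the next pass.
    X = np.append(X, x_next)
    y = np.append(y, run_simulation(x_next))

best = X[np.argmax(y)]                    # best design found so far
```

Each pass tightens the loop: the surrogate proposes, the simulation evaluates, and the fresh data refines the surrogate's projection of the optimum.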
ActivO combines two disparate ML models, known as the “weak” and “strong” learners, to efficiently guide optimization toward the global optimum. The weak learner explores the design space, looking for promising regions that its projections suggest will exhibit high objective values, while the strong learner exploits those promising regions by performing a more local search there. The weak learner prevents the algorithm from converging prematurely to a suboptimal design, and the strong learner helps the optimization scheme converge quickly once in the neighborhood of the global design optimum. As a result, the optimal design variables can be found with significantly fewer simulations than state-of-the-art optimization techniques require.
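The division of labor between the two learners can be sketched as follows. This is not ActivO's implementation: the "weak learner" here is a bootstrap ensemble of low-order fits whose spread flags under-explored regions, the "strong learner" is a local quadratic fit around the current best point, and the multi-modal objective is a hypothetical stand-in for a simulation with a suboptimal local design (near x = -2) and a global optimum (near x = 2).

```python
import numpy as np

# Hypothetical multi-modal objective: two design "modes", global best near x = 2.
def simulate(x):
    return np.exp(-(x - 2) ** 2) + 0.8 * np.exp(-(x + 2) ** 2)

rng = np.random.default_rng(1)
X = list(rng.uniform(-4, 4, 6))          # initial design points
y = [simulate(x) for x in X]
grid = np.linspace(-4, 4, 400)           # candidate designs

for it in range(12):
    Xa, ya = np.array(X), np.array(y)

    # "Weak learner": ensemble of cubic fits on bootstrap resamples.
    # Mean + spread gives an optimistic score that rewards both high
    # predicted values and uncertainty -- i.e., exploration.
    preds = []
    for _ in range(10):
        idx = rng.integers(0, len(Xa), len(Xa))
        preds.append(np.poly1d(np.polyfit(Xa[idx], ya[idx], 3))(grid))
    preds = np.array(preds)
    explore_score = preds.mean(0) + preds.std(0)

    if it % 2 == 0:
        # Weak learner's turn: global search over the whole design space.
        x_next = grid[np.argmax(explore_score)]
    else:
        # "Strong learner": local quadratic fit around the current best
        # point -- exploitation within the most promising region.
        xb = Xa[np.argmax(ya)]
        near = np.abs(Xa - xb) < 1.0
        if near.sum() >= 3:
            q = np.poly1d(np.polyfit(Xa[near], ya[near], 2))
            local = grid[np.abs(grid - xb) < 1.0]
            x_next = local[np.argmax(q(local))]
        else:
            x_next = xb + rng.normal(0, 0.2)   # too few neighbors: small step

    X.append(float(x_next))
    y.append(simulate(x_next))

best_x = X[int(np.argmax(y))]
```

Alternating the two roles mirrors the text's division: the ensemble's optimism keeps probing away from the incumbent best (guarding against premature convergence on the x = -2 mode), while the local fit accelerates convergence once samples cluster near the global optimum.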