Modern aircraft engines operate at high pressures to improve thermal efficiency and reduce fuel consumption and greenhouse gas emissions. At high pressures, the concomitant reduction in engine core and combustor size brings hot flame regions closer to the wall and increases heat loads on hot-section components, such as combustor liner walls and turbine blades. Increases in operating pressure also raise combustor and turbine inlet temperatures. Under these conditions, thermal management becomes critical for improving component durability and lowering operating costs. At the same time, such thermal management systems must be optimized (e.g., by minimizing cooling air requirements) to maximize engine efficiency.
Film cooling, in which cooling air flows through many tiny passages to form a protective film of air over internal components, is a commonly used gas turbine (GT) thermal management technique. Designing an effective cooling scheme involves many factors, including the cooling air flow rate, the cooling hole angle, and the arrangement and geometry of the cooling holes. To optimize these designs, engineers rely on computational fluid dynamics (CFD) simulations. On one hand, high-fidelity CFD tools, such as wall-resolved large eddy simulation (WRLES), can resolve the details of near-wall coolant flows needed for thermal management optimization; such models reliably and accurately predict the mixing of coolant flow with the core flow and the convective heat loads on the wall. However, the computational cost of these simulations prohibits their practical use in the design cycle of gas-turbine engines and components. On the other hand, CFD simulations using lower-fidelity models, such as Reynolds-averaged Navier-Stokes (RANS) and wall-modeled LES (WMLES), are more affordable but introduce large errors in predicting near-wall boundary layer dynamics, making them unsuitable for predictive analysis and design of thermal management schemes. Because existing wall models cannot predict near-wall cooling flow physics within the design cycle, engineers resort to conservative designs to mitigate durability risks, which reduces overall efficiency.
Argonne researchers, along with collaborators from Raytheon Technologies Research Center (RTRC), have developed a novel physics-guided spatiotemporal machine learning (ML) emulator to enable accelerated, simulation-driven design and optimization of GT film cooling schemes for aircraft engines. In particular, they developed a data-driven subgrid wall model that enables predictive yet computationally inexpensive large eddy simulations (LES) of complex GT film cooling flows with coarse near-wall resolutions. The ML emulator, based on the light gradient boosting machine (LightGBM), was trained on datasets extracted from high-fidelity WRLES of a GT film cooling system performed on leadership-class DOE supercomputers using Argonne’s massively parallel, high-order Nek5000 CFD solver. The data-driven wall model takes a variety of local flow features (velocity components, velocity gradients, pressure gradients, and fluid properties) as inputs, and incorporates a spatial stencil and time delay to predict the local wall shear stress; enlarging the model’s domain of dependence in space and time in this way improves its accuracy and generalizability. In recent work, Argonne researchers developed a high-performance computing (HPC) workflow that couples the ML wall model with the Nek5000 solver for LES of GT film cooling schemes. The integrated simulation-AI framework offers the capability to extend the fuel efficiency and durability limits of next-generation aircraft engines while slashing design times and costs.
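To illustrate the "spatial stencil and time delay" idea, the sketch below assembles a feature vector for one wall point by stacking flow quantities from neighboring points and from past time steps, enlarging the domain of dependence that the boosted-tree regressor sees. This is a minimal, hypothetical sketch: the helper name `build_wall_model_features`, the array layout, and the stencil/delay sizes are illustrative assumptions, not the actual Argonne/RTRC implementation.

```python
import numpy as np

def build_wall_model_features(u, i, n, stencil=1, delays=2):
    """Assemble a wall-model input vector for wall point i at time index n.

    u       : array of shape (n_times, n_points, n_channels) holding local
              flow quantities sampled near the wall (e.g. velocity
              components, velocity gradients, pressure gradients).
    stencil : number of neighboring wall points included on each side
              (the spatial stencil).
    delays  : number of past time steps included beyond the current one
              (the time delay).
    """
    feats = []
    for dt in range(delays + 1):                  # current + delayed snapshots
        for dj in range(-stencil, stencil + 1):   # spatial stencil of neighbors
            feats.append(u[n - dt, i + dj])
    # One flat vector per wall point; a gradient-boosting model would map
    # this vector to the local wall shear stress.
    return np.concatenate(feats)

# Illustrative usage with synthetic data: 10 time steps, 32 wall points,
# 8 flow channels per point.
rng = np.random.default_rng(0)
u = rng.normal(size=(10, 32, 8))
x = build_wall_model_features(u, i=5, n=9, stencil=1, delays=2)
# 3 time levels x 3 stencil points x 8 channels -> x.shape == (72,)
```

A vector like `x`, paired with the WRLES wall shear stress at the same point and time, would form one training sample for the boosted-tree emulator.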
This collaborative Argonne-RTRC research effort was the recipient of the 2022 HPCwire Readers’ Choice Award for Best Use of HPC in Industry.