Exploiting a Transformer Architecture to Simultaneous Development of Transition and Turbulence Models for Turbine Flow Predictions

Title: Exploiting a Transformer Architecture to Simultaneous Development of Transition and Turbulence Models for Turbine Flow Predictions
Publication Type: Conference Proceedings
Year of Publication: 2024
Authors: Fang Y, Reissmann M, Pacciani R, Zhao Y, Ooi A, Marconcini M, Akolekar H, Sandberg R
Conference Name: ASME Turbo Expo 2024 Turbomachinery Technical Conference and Exposition
Publisher: ASME
Conference Location: London, UK, June 24–28, 2024
Abstract

Accurate prediction of the detailed boundary-layer behavior of turbine blades subject to laminar-turbulent transition remains a challenge in Reynolds-averaged Navier-Stokes (RANS) calculations. Previous studies have focused on enhancing the near-wall transition model in RANS, for example through a symbolic regression (SR) machine learning method known as gene expression programming (GEP) [1, 2] (Akolekar et al., GT2022-81091; Fang et al., GT2023-102902). However, better transition prediction alone does not guarantee improved results once the boundary layer is fully turbulent. It is crucial to also revise the Boussinesq approximation in the turbulence model, which assumes that the anisotropic stress is aligned solely with the mean strain rate. This assumption has been shown to be inaccurate for flows featuring sudden changes in mean strain rate, such as flows over curved surfaces like those found in gas turbines. In this paper, instead of only revising the transition model, a nonlinear constitutive relation is also trained to supplement the Boussinesq approximation. Consequently, for the first time, the near-wall transition and turbulence models are trained simultaneously in a fully coupled manner using GEP.
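For context, a representative form of such a correction (not necessarily the paper's exact formulation, which the abstract does not specify) follows Pope's general eddy-viscosity hypothesis: the anisotropy tensor is expanded in a tensor basis with scalar coefficient functions that GEP can learn symbolically,

    a_{ij} = -2\,\nu_t\, S_{ij} + 2k \sum_{n=1}^{N} g_n(I_1, \ldots, I_m)\, T_{ij}^{(n)},

where the first term is the Boussinesq approximation, S_{ij} is the mean strain-rate tensor, k the turbulent kinetic energy, T_{ij}^{(n)} are basis tensors built from the mean strain-rate and rotation-rate tensors, I_1, ..., I_m their scalar invariants, and g_n the trained coefficient functions.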
To ensure the trained transition and turbulence models' compatibility and consistency in RANS, we employ the CFD-driven training framework. Given the high computational cost associated with CFD-driven training, a newly developed GEP feature called "memory effect" is adopted to speed up the training process. The memory effect draws inspiration from neural-guided SR methods that use a transformer architecture from the field of natural language processing: it biases newly generated models toward previously well-performing ones, thereby producing candidates with a higher likelihood of achieving low cost-function values. Additionally, we monitor the evolution of models during the training process. This allows us to analyze the components common to models with good predictive performance, enabling us to delve into the underlying physical insights and to provide suggestions for the construction of transition and turbulence models for use in gas turbines. As training cases, the T108, a low-pressure turbine (LPT), and the LS89, a high-pressure turbine (HPT), are selected. The trained models' performance is then assessed on different LPTs, including the PakB, T106A, and T106C.
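As an illustration of the memory-effect idea only, the Python sketch below biases candidate generation in a GEP-style symbolic-regression loop using the performance of previously evaluated models. All names (Memory, propose_candidate, evaluate) and the token set are hypothetical, and a simple performance-weighted token-frequency model stands in for the transformer described in the paper:

    # Minimal sketch (assumptions, not the paper's implementation) of a
    # "memory effect": candidate token sampling is biased by how well
    # previously evaluated models performed.
    import math
    import random
    from collections import defaultdict

    TOKENS = ["S", "W", "I1", "I2", "+", "*", "c"]  # hypothetical operator/terminal set

    class Memory:
        """Accumulates evidence about which tokens appear in low-cost models."""
        def __init__(self, temperature=1.0):
            self.scores = defaultdict(float)
            self.temperature = temperature

        def update(self, expression, cost):
            # Reward tokens that appeared in well-performing (low-cost) candidates.
            for tok in expression:
                self.scores[tok] += math.exp(-cost)

        def sample_token(self):
            # Softmax over accumulated scores: tokens from good models become
            # more likely, which is the bias the memory effect introduces.
            weights = [math.exp(self.scores[t] / self.temperature) for t in TOKENS]
            total = sum(weights)
            r, acc = random.random() * total, 0.0
            for tok, w in zip(TOKENS, weights):
                acc += w
                if r <= acc:
                    return tok
            return TOKENS[-1]

    def propose_candidate(memory, length=8):
        return [memory.sample_token() for _ in range(length)]

    def evaluate(expression):
        # Stand-in for the CFD-driven cost function; in the actual framework
        # each candidate model requires a full RANS computation.
        return random.random() + 0.1 * expression.count("c")

    memory = Memory()
    for generation in range(50):
        candidate = propose_candidate(memory)
        memory.update(candidate, evaluate(candidate))

Because every cost evaluation in the CFD-driven framework is a full RANS run, steering the search toward promising regions of model space in this way can reduce the number of candidate evaluations needed.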

References
[1] Harshal Akolekar, Fabian Waschkowski, Richard Sandberg, Roberto Pacciani, and Yaomin Zhao. Multi-objective development of machine-learnt closures for fully integrated transition and wake mixing predictions in low pressure turbines. In Turbo Expo: Power for Land, Sea, and Air, paper GT2022-81091. American Society of Mechanical Engineers, June 2022.
[2] Yuan Fang, Yaomin Zhao, Harshal D. Akolekar, Andrew S. H. Ooi, Richard D. Sandberg, Roberto Pacciani, and Michele Marconcini. A data-driven approach for generalizing the laminar kinetic energy model for separation and bypass transition in low- and high-pressure turbines. In Turbo Expo: Power for Land, Sea, and Air, volume 87103, page V13CT32A025. American Society of Mechanical Engineers, 2023.

Notes

ASME Paper No. GT2024-125550