Accelerating CFD-Driven Training of Transition and Turbulence Models for Turbine Flows by One-Shot and Real-Time Transformer Integration

Publication Type: Journal Article
Year of Publication: In Press
Authors: Fang Y, Reissmann M, Pacciani R, Zhao Y, Ooi ASH, Marconcini M, Akolekar HD, Sandberg R
Journal: Computers and Fluids
Abstract

Recent studies have demonstrated the effectiveness of computational fluid dynamics (CFD)-driven symbolic machine learning (ML) frameworks in assisting the development of explicit physical models within Reynolds-averaged Navier-Stokes (RANS) solvers, particularly for modeling transition, turbulence, and heat flux. These approaches can yield improved flow predictions at a marginal increase in computational cost compared to baseline models. A key limitation, however, is the substantial computational expense of the training phase, which often requires thousands of RANS evaluations. This challenge becomes severe when training models for complex industrial applications, where each RANS run is computationally intensive, and is further exacerbated when attempting to develop more generalizable, coupled models across multiple product designs. Taking the development of general transition and turbulence model corrections for both low- and high-pressure turbines as the study case, this work introduces two transformer-assisted strategies to accelerate model training. In the first, previously trained models are stored and used as inputs to the transformer, which generates new models informed by prior knowledge to partially replace randomly initialized models at the first training iteration. Results show that leveraging prior knowledge from models trained on different turbine configurations effectively guides the search toward more promising regions of the solution space, thereby accelerating the training process. In the second scenario, where no prior knowledge is available, the transformer is integrated into the training loop to dynamically generate candidate models from the low-error models of the previous training iteration, while high-error models are discarded. Results indicate that more frequent transformer updates, such as after every training iteration, further enhance the acceleration effect.

Refereed Designation: Refereed
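
To make the first (one-shot) strategy described in the abstract concrete, the toy sketch below seeds the initial candidate population with transformer-generated models conditioned on previously trained ones, with the remainder randomly initialized as in a baseline symbolic-regression framework. Everything here (the string representation of models, ToyTransformer, random_model, the placeholder prior expressions) is a hypothetical stand-in, not the authors' code.

```python
import random

VOCAB = list("xk+-*/0123456789.")

def random_model(length=15):
    # Stand-in for a randomly initialized symbolic model correction,
    # e.g. an expression in a gene-expression-programming framework.
    return "".join(random.choice(VOCAB) for _ in range(length))

class ToyTransformer:
    """Stand-in for the trained transformer: prompted with prior models,
    it proposes new candidates. Here it merely recombines the prompts
    character by character instead of running attention layers."""
    def generate(self, prompts, n):
        length = min(len(p) for p in prompts)
        return ["".join(random.choice(prompts)[i] for i in range(length))
                for _ in range(n)]

def initial_population(prior_models, transformer, pop_size,
                       informed_fraction=0.5):
    """One-shot strategy: part of the first-iteration population is
    generated by the transformer from models trained on other turbine
    configurations; the rest stays randomly initialized."""
    n_informed = int(pop_size * informed_fraction)
    informed = transformer.generate(prompts=prior_models, n=n_informed)
    randoms = [random_model() for _ in range(pop_size - n_informed)]
    return informed + randoms

# Placeholder 'prior knowledge': corrections trained on other turbines.
priors = ["0.3*k+1.2*x-0.1", "1.1*x*k+0.05*x"]
population = initial_population(priors, ToyTransformer(), pop_size=20)
```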
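
The second (real-time) strategy can be sketched in the same toy setting, reusing random_model and ToyTransformer from above: the transformer sits inside the training loop and periodically refills the slots freed by discarding high-error models, conditioned on the surviving low-error ones. The rans_error stand-in replaces what in the paper is a full, expensive RANS evaluation per candidate.

```python
def train_in_the_loop(population, transformer, rans_error,
                      n_iterations=10, keep_fraction=0.5, update_every=1):
    """Real-time strategy: each iteration keeps the low-error models,
    discards the high-error ones, and lets the transformer propose
    replacements conditioned on the survivors. update_every=1 mirrors
    the most frequent update schedule, which the abstract reports
    gives the strongest acceleration."""
    for it in range(n_iterations):
        scored = sorted(population, key=rans_error)   # one RANS run each
        n_keep = int(len(population) * keep_fraction)
        survivors = scored[:n_keep]                   # low-error models
        n_new = len(population) - n_keep              # freed slots
        if it % update_every == 0:
            new = transformer.generate(prompts=survivors, n=n_new)
        else:
            # Between transformer updates, fall back to random refills
            # (a placeholder for the baseline evolutionary operators).
            new = [random_model() for _ in range(n_new)]
        population = survivors + new
    return min(population, key=rans_error)

# Toy error: character mismatch against a fixed target expression,
# standing in for the discrepancy between RANS output and reference data.
TARGET = "0.3*k+1.2*x-0.1"
def rans_error(m):
    return sum(a != b for a, b in zip(m, TARGET)) + abs(len(m) - len(TARGET))

best = train_in_the_loop(population, ToyTransformer(), rans_error)
```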