Exploiting a Transformer Architecture for the Simultaneous Development of Transition and Turbulence Models for Turbine Flow Predictions

Title: Exploiting a Transformer Architecture for the Simultaneous Development of Transition and Turbulence Models for Turbine Flow Predictions
Publication Type: Journal Article
Year of Publication: Submitted
Authors: Fang Y, Reissmann M, Pacciani R, Zhao Y, Ooi ASH, Marconcini M, Akolekar HD, Sandberg RD
Journal: ASME J Turbomach
ISSN Number: 0889-504X
Abstract

Previous studies have shown the potential of a multi-objective computational fluid dynamics (CFD)-driven approach to train both transition and turbulence models in Reynolds-averaged Navier-Stokes (RANS) calculations for improved turbine flow predictions (Akolekar et al., GT2022-81091; Fang et al., GT2023-102902). However, CFD-driven training incurs a high computational cost, as thousands of RANS calculations are required when the starting guesses are taken from an initial population of randomly generated models. This paper, for the first time, adopts a transformer technique, belonging to the class of natural language processing models, within gene expression programming (GEP) to expedite the training of transition and turbulence models. The efficacy of the transformer is investigated in two scenarios. In the first, previously trained models are added to the randomly generated ones in the initial population of candidates, yielding models with a higher likelihood of achieving low cost function values from the outset. In the second, assuming that no suitable information is available from pre-training, a dynamic approach is employed at certain training iterations: models exhibiting large errors are excluded and replaced by models with smaller errors that the transformer trains on the fly. Moreover, two additional physical features are introduced as training inputs for the turbulence model, which further reduce the errors. With these enhancements to the previous GEP framework, model training is accelerated considerably and the resulting models show improved performance on both training and testing cases.
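To make the two scenarios concrete, the following is a minimal sketch of how transformer proposals could be folded into a generic GEP loop. It is an illustration under assumptions, not the authors' implementation: every name here (the proposer object, cost, mutate, the population sizes and replacement schedule) is a hypothetical stand-in, and each cost evaluation stands in for a full RANS calculation.

```python
# Hypothetical sketch of the two transformer-assisted GEP strategies described
# in the abstract. All names (proposer, cost, mutate, ...) are illustrative
# stand-ins, not components of the authors' actual framework.

import random

POP_SIZE = 100          # candidate transition/turbulence models per generation
SEED_FRACTION = 0.2     # scenario 1: share of the population seeded by the transformer
REPLACE_EVERY = 10      # scenario 2: iterations between dynamic replacements
REPLACE_FRACTION = 0.1  # share of worst candidates swapped out at each replacement


def initial_population(proposer, random_model):
    """Scenario 1: mix transformer-proposed models into a random starting population."""
    n_seeded = int(SEED_FRACTION * POP_SIZE)
    seeded = [proposer.propose() for _ in range(n_seeded)]
    return seeded + [random_model() for _ in range(POP_SIZE - n_seeded)]


def evolve(population, proposer, cost, mutate, n_generations):
    """One possible GEP loop; each cost() evaluation stands in for a RANS run."""
    for gen in range(1, n_generations + 1):
        scored = sorted(population, key=cost)  # best (lowest-cost) models first
        if gen % REPLACE_EVERY == 0:
            # Scenario 2: discard the highest-error candidates and replace them
            # with models the transformer proposes on the fly, after updating it
            # on the current low-error survivors.
            n_replace = int(REPLACE_FRACTION * POP_SIZE)
            proposer.update(scored[:-n_replace])
            scored[-n_replace:] = [proposer.propose() for _ in range(n_replace)]
        survivors = scored[: POP_SIZE // 2]
        population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]
    return min(population, key=cost)
```

In this reading, scenario 1 amortizes earlier training by seeding the first generation, while scenario 2 needs no pre-trained models because the proposer is updated only on candidates evaluated during the current run.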

Refereed Designation: Refereed