Research
Project Description
In the exaFSA project, we enable efficient threefold fluid-structure-acoustics interaction, a typical showcase for complex coupled simulations that requires both a sophisticated parallel coupling approach and high-end data handling methods. The task is to run coupled simulations efficiently on a large number of cores and on different computer architectures as provided, e.g., by the largest German supercomputers at the Leibniz Supercomputing Centre in Munich (LRZ, www.lrz.de), the High Performance Computing Center Stuttgart (HLRS, www.hlrs.de), and the Jülich Supercomputing Centre (JSC, www.fz-juelich.de/jsc).
To structure the project work, we identified six subtasks (T1-T6) to be tackled on the way to a truly parallel and efficient fluid-structure-acoustics simulation. Each subproject works on a selection of subtasks in a cooperative manner:
T1.FSA Fluid-Structure-Acoustics Simulations
To enable a first fully coupled simulation of fluid-structure-acoustics interactions, adapters for the coupling of the solver codes with preCICE have been implemented, and preCICE itself has been enhanced to allow for multi-solver coupling, i.e., the coupling of more than two codes. The preCICE adapters have been implemented in a cooperation between the group in Munich/Stuttgart (Miriam Mehl) and the groups housing the solver for the acoustic far field (Ateles, developed in Siegen, including a Fortran 2003 interface), for the flow and the acoustic near field (FASTEST, developed in Darmstadt), and for the open source software OpenFOAM, used for flow and structure simulations by the group in Delft. The multi-solver coupling implemented in preCICE by the group in Munich/Stuttgart includes a sophisticated execution logic for solver calls and for implicit and explicit coupling, allowing for both sequential and parallel execution of the solvers [Gatzhammer2014]. The next steps will be actual simulation runs, once the last fundamental numerical issues in the fluid-acoustics coupling have been solved in T3.
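To illustrate the adapter pattern, the following minimal sketch shows how a solver's time loop is typically wrapped around the preCICE API. It is written against the C++ \texttt{SolverInterface} of a recent preCICE release, which may differ from the version used in the project; the participant, mesh, and data names as well as \texttt{solveTimeStep()} are illustrative assumptions, not the actual adapter code.
\begin{verbatim}
#include <precice/SolverInterface.hpp>
#include <vector>

// Placeholder for the solver's own time step (assumption for this sketch).
void solveTimeStep(double dt, std::vector<double>& displacements) {}

int main() {
  // Participant name and configuration file are illustrative.
  precice::SolverInterface interface("Fluid", "precice-config.xml",
                                     /*rank=*/0, /*size=*/1);

  // Register the coupling surface (a single vertex here, for brevity).
  const int meshID   = interface.getMeshID("Fluid-Mesh");
  double    coords[] = {0.0, 0.0, 0.0};
  const int vertexID = interface.setMeshVertex(meshID, coords);
  const int forceID  = interface.getDataID("Forces", meshID);
  const int dispID   = interface.getDataID("Displacements", meshID);

  double preciceDt = interface.initialize();
  std::vector<double> displacements(3, 0.0);
  double forces[] = {0.0, 0.0, 0.0};

  while (interface.isCouplingOngoing()) {
    solveTimeStep(preciceDt, displacements);               // advance this solver
    interface.writeVectorData(forceID, vertexID, forces);  // send interface data
    preciceDt = interface.advance(preciceDt);              // exchange via preCICE
    interface.readVectorData(dispID, vertexID, displacements.data());
  }
  interface.finalize();
  return 0;
}
\end{verbatim}
For implicit coupling, the loop would in addition write and restore iteration checkpoints via preCICE's action mechanism; the coupling scheme itself, including the multi-solver execution logic, is configured in the XML file rather than in the adapter.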
T2.FSI Fluid-Structure Coupling Numerics
The non-simultaneous execution of the fluid and the structure solver in state-of-the-art implicit coupling schemes for fluid-structure interaction is one of the major obstacles to a scalable parallel simulation of fluid-structure-acoustics interactions. The group of Miriam Mehl therefore developed a combination of parallel solver execution (Jacobi-type coupling) with quasi-Newton fixed-point solvers known to be efficient for the non-parallel (Gauss-Seidel-type) coupling. The results are very promising and have been published in [Uekermann2012, Mehl2014]. The more efficient of the two resulting new coupling schemes has also been implemented in the coupling tool preCICE and validated for various benchmark cases using Fluent, COMSOL, and OpenFOAM as solver codes. Further improvements exploiting degrees of freedom inherent to the method are work in progress.
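To make the two execution models concrete: writing $F$ for the fluid solver (mapping interface displacements $d$ to interface forces $f$) and $S$ for the structure solver (mapping forces to displacements), the staggered and the parallel scheme solve, per time step, the fixed-point equations
\[
  d = S(F(d)) \quad\text{(Gauss-Seidel type)}, \qquad
  \begin{pmatrix} f \\ d \end{pmatrix}
  = \begin{pmatrix} F(d) \\ S(f) \end{pmatrix} \quad\text{(Jacobi type)}.
\]
In the Jacobi variant, $F$ and $S$ can be evaluated simultaneously, which removes the sequential bottleneck described above; the quasi-Newton acceleration approximates the inverse Jacobian of the fixed-point residual from input/output pairs of previous iterations. The operator notation here is our own shorthand and not necessarily that of [Mehl2014].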
T3.FAI Fluid-Acoustics Coupling Numerics
As mentioned in T1, coupling via preCICE has been enabled in Ateles by the groups of Sabine Roller in Siegen and of Miriam Mehl in Munich/Stuttgart, and in FASTEST by the groups of D\"orte Sternel (Darmstadt) and Miriam Mehl. Thus, the acoustic flow solver in FASTEST for the near field can now be (surface-)coupled to the acoustic far field simulated with Ateles. Ateles provides a high-order discontinuous Galerkin method that is highly efficient for solving the linear wave equation due to its very low dissipation combined with high accuracy even on fairly coarse computational grids.
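For reference, the far field is governed by the linear acoustic wave equation for the pressure perturbation $p'$ with speed of sound $c$,
\[
  \frac{\partial^2 p'}{\partial t^2} - c^2 \Delta p' = 0,
\]
where the concrete formulation discretized in Ateles (e.g., as an equivalent first-order system) may differ.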
The next steps will be coupled simulation runs of FASTEST with Ateles. Although the technical requirements are already fulfilled and a conventional parallel coupling scheme (CPS) is implemented, the numerical coupling of the explicit fourth-order Runge-Kutta scheme in Ateles with the low-order implicit time stepping in FASTEST requires the development of suitable interpolation or space-time expansion methods to cope with the different time stepping schemes and, in addition, the different time scales of the acoustic far field and the acoustic flow in the near field.
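The simplest such approach, given here only as an illustration (our example, not a decision for the project), is linear interpolation of the interface data $u_\Gamma$, known at the coarse implicit time levels $t^n$ and $t^{n+1}$ with window size $\Delta T = t^{n+1} - t^n$, to the substep and stage times of the explicit scheme:
\[
  u_\Gamma(t^n + \tau) \approx \Bigl(1 - \frac{\tau}{\Delta T}\Bigr) u_\Gamma^n
  + \frac{\tau}{\Delta T}\, u_\Gamma^{n+1},
  \qquad 0 \le \tau \le \Delta T.
\]
Since linear interpolation limits the coupled scheme to second-order accuracy in time, higher-order dense output or the mentioned space-time expansion is required to retain the fourth-order accuracy of the Runge-Kutta scheme.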
T4.COM Communication Reducing Coupling
preCICE is capable of coupling parallel solver codes. Communication between the surface-coupled solvers is currently realized via a server concept, which has been tested within the last months on a pure communication test case on a compute cluster in Munich [Gatzhammer2014]. The results show good use of the available bandwidth. However, a more realistic test case that includes compute operations in the solvers and in preCICE is expected to be compute bound. For this purpose, the groups in Munich/Stuttgart and in Siegen prepared a test configuration using preCICE to couple two subdomains, both solving the Euler equations. Extensive scalability studies to determine the scalability limits of the current setup are work in progress. Depending on the results, the next step might be the replacement of the server concept with a library concept also for parallel solver codes, which requires some additional communication logic in preCICE.
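The difference between the two concepts can be sketched with a deliberately simplified stand-alone MPI example (our own construction, not preCICE code; the 1:1 rank pairing stands in for the real geometric mapping between solver partitions):
\begin{verbatim}
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  const int n = 1000;                      // interface values per rank (assumed)
  std::vector<double> data(n, rank), recv(n);

  // Server concept: all coupling data is funneled through one process.
  if (rank == 0) {
    for (int src = 1; src < size; ++src)   // rank 0 acts as the server
      MPI_Recv(recv.data(), n, MPI_DOUBLE, src, 0,
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // ... the server would map and forward the data to the partner solver ...
  } else {
    MPI_Send(data.data(), n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
  }

  // Library concept: each rank exchanges data directly with a partner rank,
  // removing the central bottleneck.
  int partner = size - 1 - rank;
  if (partner != rank)
    MPI_Sendrecv(data.data(), n, MPI_DOUBLE, partner, 1,
                 recv.data(), n, MPI_DOUBLE, partner, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

  MPI_Finalize();
  return 0;
}
\end{verbatim}
In the server concept, the aggregate interface data volume passes through a single process, whose network links and memory eventually become the scaling bottleneck; the library concept distributes this traffic across all ranks, at the price of the additional point-to-point communication logic mentioned above.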
T5.LOAD Load Balancing
Large advances in load balancing require further advances in T1 and T4. However, the basis for a balanced simulation has already been laid by the new coupling numerics developed in T2 for the fluid-structure coupling. In addition, the scalability studies prepared in T4 will give hints on sources of load imbalance that have to be tackled in the next months.
T6.VIS Visualization
The visualization group in Stuttgart started with the development of visualization techniques for the analysis of oscillation and wave propagation. This includes the implementation and evaluation of the fractional Fourier transform for the analysis of periodic and aperiodic scalar fields. In particular, a ridge surface extraction has been implemented on the GPU as a plugin for ParaView, following the height ridge definition of Eberly [Eberly1996]. In this context, an evaluation of a possible unification and simultaneous execution of simulation and visualization codes has also been started.

Simultaneously, a volume rendering framework was developed, enabling the efficient interactive exploration of volumetric data from arbitrary viewpoints in remote and in-situ visualization using volumetric depth images (VDIs). VDIs represent an intermediate abstraction of the volume data and thus avoid repeated access to the complete original data, which makes them particularly useful for the exascale environment. While VDIs so far represent only single time steps, we are currently extending the concept to time-dependent data in order to eliminate delays in the visualization stage of the targeted environment.
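For reference, the height ridge criterion underlying this extraction can be summarized as follows (paraphrased in our notation; see [Eberly1996] for the precise definition): for a scalar field $f$ on $\mathbb{R}^3$ with gradient $\nabla f$ and Hessian $H$, let $\lambda_1 \le \lambda_2 \le \lambda_3$ be the eigenvalues of $H$ with corresponding orthonormal eigenvectors $e_1, e_2, e_3$. A point belongs to a ridge surface (a two-dimensional height ridge) if
\[
  \nabla f \cdot e_1 = 0 \qquad \text{and} \qquad \lambda_1 < 0,
\]
i.e., if $f$ attains a local maximum along the cross-sectional direction $e_1$. A GPU implementation would typically extract the zero level set of $\nabla f \cdot e_1$ restricted to cells with $\lambda_1 < 0$; whether the ParaView plugin proceeds exactly this way is an assumption on our part.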