
Details of Grant 

EPSRC Reference: EP/V001523/1
Title: Massively Parallel Particle Hydrodynamics for Engineering and Astrophysics
Principal Investigator: Bower, Professor RG
Other Investigators:
Loren-Aguilar, Dr P; Fourtakas, Dr G; Kay, Dr ST; Basden, Dr AG; De Vuyst, Dr T; Rogers, Professor BD; Acreman, Dr DM; Crain, Dr R; Frenk, Professor C; Weinzierl, Dr T
Researcher Co-Investigators:
Project Partners:
ARM Ltd; DiRAC (Distributed Research utilising Advanced Computing); IBM; Leiden University; nVIDIA
Department: Physics
Organisation: Durham, University of
Scheme: Standard Research - NR1
Starts: 01 May 2020
Ends: 31 December 2021
Value (£): 294,665
EPSRC Research Topic Classifications:
EPSRC Industrial Sector Classifications:
Aerospace, Defence and Marine
Information Technologies
Related Grants:
Panel History:
Panel Date: 04 Mar 2020
Panel Name: Software use code development for exascale computing
Outcome: Announced
Summary on Grant Application Form
SPH (smoothed particle hydrodynamics), and Lagrangian approaches to hydrodynamics in general, are a powerful way of tackling hydrodynamics problems. In this scheme, the fluid is represented by a large number of particles moving with the flow. Because the scheme does not require a predefined grid, it is very well suited to tracking flows with moving boundaries, particularly flows with free surfaces, and to problems that involve physically active elements or a large dynamic range. The range of applications of the method is growing rapidly, and it is being adopted by a growing number of commercial companies including Airbus, Unilever, Shell, EDF, Michelin and Renault.
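
To make the particle picture concrete, the sketch below shows a textbook SPH density estimate in C: each particle's density is a kernel-weighted sum over its neighbours, with no grid involved. The cubic-spline kernel, the particle structure and the brute-force neighbour loop are generic illustrative choices, not code from SWIFT or DualSPHysics.

    #include <math.h>
    #include <stddef.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Illustrative only: a generic particle carrying position, mass and
     * smoothing length, as in textbook SPH (not the data layout of SWIFT
     * or DualSPHysics). */
    struct particle {
        double x[3];   /* position */
        double mass;
        double h;      /* smoothing length */
        double rho;    /* density estimate */
    };

    /* Standard 3D cubic-spline kernel W(r, h) with support 2h. */
    static double kernel_w(double r, double h)
    {
        const double q = r / h;
        const double sigma = 1.0 / (M_PI * h * h * h);
        if (q < 1.0)
            return sigma * (1.0 - 1.5 * q * q + 0.75 * q * q * q);
        if (q < 2.0)
            return sigma * 0.25 * pow(2.0 - q, 3.0);
        return 0.0;
    }

    /* Density at particle i: a kernel-weighted sum over all particles.
     * Real codes find neighbours with trees or cell lists rather than
     * this brute-force O(N) loop. */
    static void compute_density(struct particle *p, size_t n, size_t i)
    {
        p[i].rho = 0.0;
        for (size_t j = 0; j < n; j++) {
            const double dx = p[i].x[0] - p[j].x[0];
            const double dy = p[i].x[1] - p[j].x[1];
            const double dz = p[i].x[2] - p[j].x[2];
            const double r = sqrt(dx * dx + dy * dy + dz * dz);
            p[i].rho += p[j].mass * kernel_w(r, p[i].h);
        }
    }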

The widespread use of SPH, and its potential for adoption across a wide range of science domains, make it a priority use case for the Excalibur project. Massively parallel simulations with billions to hundreds of billions of particles have the potential to revolutionise our understanding of the Universe and will empower engineering applications of unprecedented scale, ranging from the end-to-end simulation of transients (such as a bird strike) in jet engines to the simulation of tsunami waves overrunning a series of defensive walls.

The working group will identify a path to the exascale computing challenge. The group has expertise across both Engineering and Astrophysics, allowing us to develop an approach that satisfies the needs of a wide community. Two recent codes that already highlight the key issues will serve as the working group's starting point:

- SWIFT (SPH with Interdependent Fine-grained Tasking) implements a cutting-edge approach to task-based parallelism. Breaking the problem into a series of inter-dependent tasks allows great flexibility in scheduling and allows communication tasks to be entirely overlapped with computation (a schematic of this dependency-driven scheduling follows these code descriptions). The code uses a timestep hierarchy to focus computational effort where it is most needed.

- DualSPHysics draws its speed from effective use of GPU accelerators to execute the SPH operations on large groups of identical particles, allowing the code to benefit from exceptional levels of data-parallel execution (a sketch of the underlying data layout also follows below). The challenge is to connect multiple GPUs effectively across large numbers of inter-connected computing nodes.
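
As a loose illustration of the dependency-driven scheduling described for SWIFT above (not SWIFT's actual engine or data structures), the C sketch below gives each task a count of unfinished dependencies; a task becomes runnable when that count reaches zero, and completing it unlocks its dependents, so ready work, including communication, can be picked up in any order.

    #include <stdio.h>

    #define MAX_UNLOCKS 4

    /* Hypothetical task record: a ready-counter plus the list of tasks it
     * unlocks on completion.  This mirrors the general dependency-graph
     * idea only, not SWIFT's real task engine. */
    struct task {
        const char *name;
        int unresolved;                    /* unfinished dependencies  */
        struct task *unlocks[MAX_UNLOCKS]; /* tasks waiting on this one */
        int n_unlocks;
    };

    /* Run every task whose dependencies are satisfied; finishing a task
     * may make its dependents runnable.  A real scheduler hands ready
     * tasks to a pool of threads instead of this serial sweep. */
    static void run_ready(struct task *tasks[], int n)
    {
        int done = 0;
        while (done < n) {
            for (int i = 0; i < n; i++) {
                struct task *t = tasks[i];
                if (t->unresolved != 0)
                    continue;              /* not ready, or already run */
                printf("running %s\n", t->name);
                t->unresolved = -1;        /* mark as finished */
                done++;
                for (int k = 0; k < t->n_unlocks; k++)
                    t->unlocks[k]->unresolved--;
            }
        }
    }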
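
The GPU speed described for DualSPHysics comes from applying the same operation to large batches of particles. One standard way to make that efficient, shown here purely as an illustration and not as DualSPHysics' actual layout, is a structure-of-arrays format in which each particle field is a contiguous array, so consecutive threads or vector lanes touch consecutive memory.

    #include <stddef.h>

    /* Illustrative structure-of-arrays layout: each field lives in its
     * own contiguous array, which maps naturally onto GPU threads and
     * CPU vector units.  Field names are generic, not DualSPHysics
     * identifiers. */
    struct particle_soa {
        size_t n;
        float *x, *y, *z;     /* positions     */
        float *vx, *vy, *vz;  /* velocities    */
        float *ax, *ay, *az;  /* accelerations */
    };

    /* Advance every particle by one step.  Each iteration performs the
     * same arithmetic on adjacent elements, so a compiler can vectorise
     * the loop and a GPU port can assign one thread per particle. */
    static void kick_drift(struct particle_soa *p, float dt)
    {
        for (size_t i = 0; i < p->n; i++) {
            p->vx[i] += p->ax[i] * dt;
            p->vy[i] += p->ay[i] * dt;
            p->vz[i] += p->az[i] * dt;
            p->x[i]  += p->vx[i] * dt;
            p->y[i]  += p->vy[i] * dt;
            p->z[i]  += p->vz[i] * dt;
        }
    }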

The working group will build on these codes to identify the optimal approach to massively parallel execution on exascale systems. The project will benefit from close connections to the Excalibur Hardware Pilot working group in Durham, driving the co-design of code and hardware. The particular challenges that we will address are:

- Optimal algorithms for exascale performance. In particular, we will address the best approaches to adaptive time-stepping and out-of-time integration, and to adaptive domain decomposition. The first allows different spatial regions to be integrated forward in time optimally; the second allows those regions to be distributed optimally over the hardware. (A schematic of a power-of-two timestep hierarchy follows this list.)

- Modularisation and separation of concerns. Future codes need to be flexible and modular, so that a clean separation can be achieved between integration routines, task scheduling and physics modules. This will make the code future-proof and easy to adapt to new science-domain requirements and computing hardware. (A sketch of one such module interface follows this list.)

- CPU/GPU performance optimisation. Next-generation hardware will require specific (and possibly novel) techniques to be developed to optimally advance particles in the SPH scheme. We will build on the programming expertise gained in DualSPHysics to allow efficient GPU use across multiple nodes.

- Communication performance optimisation. Separated computational regions need to exchange information at their boundaries. This can be done asynchronously, so that the time-lag of communication does not slow computation (see the message-passing sketch after this list). While this has been demonstrated on current systems, the scale of Excalibur will overload current communication subsystems, and a new solution is needed.
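
To make the adaptive time-stepping challenge concrete: codes of this kind commonly round each particle's ideal timestep down to a power-of-two fraction of a maximum step, so particles fall into bins and only the bins whose step divides the current time need updating. The C sketch below assumes that generic power-of-two scheme; it is not taken from SWIFT.

    /* Map an ideal timestep onto a power-of-two bin: bin b corresponds
     * to a step of dt_max / 2^b.  Generic hierarchical scheme, not
     * SWIFT's exact implementation. */
    static int timestep_bin(double dt_ideal, double dt_max, int max_bin)
    {
        int bin = 0;
        double dt = dt_max;
        while (dt > dt_ideal && bin < max_bin) {
            dt *= 0.5;
            bin++;
        }
        return bin;
    }

    /* A particle in bin b has a step 2^(max_bin - b) times the smallest
     * step, so it is active only on step counters that are multiples of
     * that interval: deep bins update every step, shallow bins rarely. */
    static int is_active(int bin, long step, int max_bin)
    {
        const long interval = 1L << (max_bin - bin);
        return (step % interval) == 0;
    }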
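
One conventional way to achieve the separation of concerns described above, offered only as an illustration of the principle, is to hide each physics module behind a small table of function pointers so that the integrator and the task scheduler never see module internals. All names here are invented for the sketch.

    /* Hypothetical module interface: the time integrator sees only these
     * entry points, so hydrodynamics, gravity or chemistry modules can
     * be swapped without touching the scheduler. */
    struct physics_module {
        const char *name;
        void (*init)(void *state);
        void (*compute_rates)(void *state, double t);
        void (*apply_update)(void *state, double dt);
    };

    /* The driver loops over whichever modules were registered; adding a
     * new science domain means adding an entry, not editing the
     * integrator. */
    static void advance(struct physics_module *mods, int n_mods,
                        void *state, double t, double dt)
    {
        for (int m = 0; m < n_mods; m++)
            mods[m].compute_rates(state, t);
        for (int m = 0; m < n_mods; m++)
            mods[m].apply_update(state, dt);
    }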
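
The asynchronous boundary exchange in the final challenge is usually expressed with non-blocking message passing: post the sends and receives for boundary data, continue with interior work, and wait only when the boundary region is actually needed. The MPI sketch below is generic; the buffers, neighbour rank and callbacks are placeholders rather than anything from the existing codes.

    #include <mpi.h>

    /* Overlap boundary communication with interior computation using
     * non-blocking MPI.  The buffers, neighbour rank and callbacks are
     * placeholders for what a real domain decomposition would supply. */
    static void exchange_and_compute(double *halo_out, double *halo_in,
                                     int n_halo, int neighbour,
                                     void (*compute_interior)(void),
                                     void (*compute_boundary)(void))
    {
        MPI_Request reqs[2];

        MPI_Irecv(halo_in, n_halo, MPI_DOUBLE, neighbour, 0,
                  MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(halo_out, n_halo, MPI_DOUBLE, neighbour, 0,
                  MPI_COMM_WORLD, &reqs[1]);

        /* Interior particles do not depend on remote data, so this work
         * proceeds while the messages are in flight. */
        compute_interior();

        /* Block only when the boundary region genuinely needs the data. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        compute_boundary();
    }
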
Key Findings, Potential Use in Non-Academic Contexts, Impacts and Sectors Submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: