HPX for Scientific Computing (HPXSc)

Performant building blocks for scientific computing using task-based programming

Software development in scientific computing adopts new programming and parallelization models only very slowly, or not at all. A core reason is the dependency of scientific codes on existing libraries and software components. The focus of developing scientific simulation codes naturally lies on advancing the application domain; thus, if the underlying software components do not adopt new developments in high-performance computing, the application software will not do so either. This leads to significant disadvantages in performance, scalability, efficiency, maintenance, and development effort.

The HPXSc project aims to promote the use of modern programming concepts (in particular ParalleX) and their implementations (HPX), as well as their further development. This approach offers several advantages over the current standard parallelization model, MPI+X: the synchronization that model requires (OpenMP: forced synchronization points at the end of parallel loops; MPI: global synchronization after time steps or iterations) has a particularly severe effect in larger codes and at high node counts. Furthermore, the removal of the C++ bindings from the MPI standard reduces MPI to the level of the C programming language, which prevents MPI-parallel programs from taking advantage of modern C++ concepts. In contrast, HPX supports the current C++ standard (C++20) and its entire standard library. Moreover, HPX extends the C++ standard library with a highly scalable task-based parallelization model that processes fine-grained tasks using lightweight threads, and it supports adaptive migration of tasks and data across compute nodes.
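
As a brief illustration of this task-based model, the following minimal sketch (assuming a standard HPX installation; the exact public header layout can differ between HPX versions, hpx/hpx.hpp is the catch-all header) launches work asynchronously on a lightweight HPX thread and attaches a continuation instead of blocking at a synchronization point:

    // Minimal sketch of HPX's task model.
    #include <hpx/hpx_main.hpp>   // runs main() inside the HPX runtime
    #include <hpx/hpx.hpp>
    #include <iostream>

    double heavy_kernel(double x)
    {
        return x * x;   // placeholder for an expensive numerical kernel
    }

    int main()
    {
        // launch the kernel as a task on a lightweight HPX thread
        hpx::future<double> f = hpx::async(heavy_kernel, 3.0);

        // attach a continuation: it runs as soon as the result is ready,
        // instead of forcing the caller to block at a synchronization point
        hpx::future<double> g =
            f.then([](hpx::future<double> r) { return r.get() + 1.0; });

        std::cout << "result: " << g.get() << "\n";   // prints 10
        return 0;
    }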

However, the entry barrier to using HPX is high, especially when porting existing parallel codes. A main obstacle to the broader use of HPX in high-performance computing is the lack of an HPX ecosystem of reusable software components, above all parallel numerical building blocks, as it exists in many forms for MPI, e.g., through PETSc. This missing ecosystem makes porting existing scientific applications more complex and riskier.

This is a main reason why this project aims to develop efficient and portable HPX standard components (HPXSc). We plan to provide a modular, task-parallel, asynchronous numerical library, HPXSc, containing the essential standard components for scientific computing with task-parallel HPX programming. Planned features include numerical methods, various building blocks (solvers, stencil-based matrix assembly, etc.), distributed data structures (sparse/dense matrices, structured grids, etc.), as well as service components (parallel I/O, accelerator support), documentation, and examples.

A major development challenge will be the composability of HPX components, in particular without the usual component-by-component synchronization, as well as ensuring interoperability with conventional HPC components. The latter is essential for porting individual complex application codes and for the lasting combination with existing HPC solutions.
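
To make this composability goal more concrete, the sketch below is purely illustrative: hpxsc::spmv_async is a hypothetical placeholder for a future HPXSc building block (stubbed here with a diagonal matrix so the example is self-contained), whereas hpx::async and hpx::dataflow are existing HPX facilities for chaining tasks on futures without an intermediate synchronization point.

    // Illustrative sketch only: hpxsc::spmv_async is a hypothetical
    // placeholder for a future HPXSc component, not an existing API.
    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>
    #include <iostream>
    #include <vector>

    namespace hpxsc {
        // hypothetical building block: asynchronous y = A * x
        hpx::future<std::vector<double>> spmv_async(
            std::vector<double> diag, std::vector<double> x)
        {
            return hpx::async([d = std::move(diag), x = std::move(x)] {
                std::vector<double> y(x.size());
                for (std::size_t i = 0; i < x.size(); ++i)
                    y[i] = d[i] * x[i];
                return y;
            });
        }
    }

    int main()
    {
        std::vector<double> diag{1.0, 2.0, 3.0}, x{1.0, 1.0, 1.0};

        // first component: y = A * x, launched asynchronously
        hpx::future<std::vector<double>> y = hpxsc::spmv_async(diag, x);

        // the second component is chained via hpx::dataflow: it starts as
        // soon as y becomes ready, with no explicit synchronization between
        // the two components
        hpx::future<double> r = hpx::dataflow(
            [x](hpx::future<std::vector<double>> yf) {
                std::vector<double> yv = yf.get();
                double s = 0.0;
                for (std::size_t i = 0; i < yv.size(); ++i)
                    s += x[i] * yv[i];
                return s;   // x . (A * x)
            },
            std::move(y));

        std::cout << "x . (A x) = " << r.get() << "\n";   // prints 6
        return 0;
    }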

Dirk Pflüger

Prof. Dr. rer. nat.

Head of Institute

Alexander Strack

M.Sc.

Researcher
