Exascale systems, capable of $10^{18}$ floating-point operations per second, will be available in the near future. Since they will be parallel to an extent not known from current systems, they will demand new approaches to suitable software (algorithms and implementations), including questions of memory access and data exchange, resilience, and power consumption.
Analyzing huge and high-dimensional data sets is feasible today and has become an important source of scientific insight. It both enables and demands advanced processing such as the quantification of uncertainties or optimization.
Simulations have left classical settings with up to three or four dimensions (space and time) and work in parameter and phase spaces of much higher dimensionality. This requires methods to deal with the ‘curse of dimensionality’, the exponential growth of complexity with the number of dimensions.
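To make this growth concrete, consider as a minimal illustration (with hypothetical numbers chosen only for the sake of the example) a regular tensor-product discretization with $n$ points per coordinate direction in $d$ dimensions, which requires
\[
  N = n^{d} \quad \text{points, e.g., } n = 100,\; d = 10 \;\Rightarrow\; N = 100^{10} = 10^{20},
\]
a count far beyond the storage of any foreseeable machine; it is precisely this exponential scaling that the methods alluded to above have to overcome.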
Advances in the simulation of single phenomena such as fluid flow or structural mechanics have brought the simulation of complex systems into the focus of scientific computing. This requires mathematical methods for coupling the individual simulations as well as strategies for realizing the computation of the overall system on distributed-memory architectures.
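As a rough sketch of what such coupling can look like (a generic partitioned scheme, not the specific method of any particular application), one may view two solvers $S_1$ and $S_2$ as acting on shared interface data $x$ and iterate
\[
  x^{k+1} = S_2\bigl(S_1(x^{k})\bigr), \qquad k = 0, 1, 2, \dots,
\]
where $x^{k}$ collects the interface quantities exchanged between the codes (for instance forces and displacements in fluid–structure interaction). Ensuring and accelerating the convergence of this iteration, and executing $S_1$ and $S_2$ efficiently on separate distributed-memory partitions, are among the mathematical and algorithmic questions raised above.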