Workshop: Multi-scale, Multi-physics and Coupled Problems on Highly Parallel Systems (MMCP) at HPC Asia 2020, January 15-17, Fukuoka/Japan

This workshop will provide a platform to present and discuss advances in numerical simulation for complex multi-scale, multi-physics and coupled problems. The goal is to gather researchers (computer scientists, engineers, mathematicians, physicists, chemists, biologists, material scientists, etc.) who specialize in different disciplines but face common challenges in multi-scale, multi-physics and coupled simulations on HPC systems.

Applications whose characteristics differ across parts of the computational domain introduce additional performance issues. The optimal setting for one part may contradict the optimum for another; the overall optimum may then be a compromise that is non-optimal for each part individually, but still good enough for all.

Several approaches have been developed, with different solutions depending on the combination of application and hardware. On the application side, monolithic and partitioned approaches have been introduced; on the hardware side, homogeneous and heterogeneous cluster configurations. Each combination has its advantages and disadvantages, all leading to the question: how to find the optimal configuration and parameter settings with respect to quality of solution versus computational efficiency?

The basis is, of course, the performance and efficiency of each component (region or solver), but the characteristics of other scales or other physical phenomena may change the picture. A coupling tool may also introduce load imbalances, for example, or hinder the overlapping of communication and computation.

The main focus will be on computational issues regarding performance and suitability for high-performance computing. Furthermore, the underlying strategies that enable these simulations will be highlighted.

Keeping these aims in mind, contributions from all aspects of engineering applications will be considered. Topics will include (but are not limited to):

  • Multi-scale problems
  • Multi-physics problems
  • Molecular dynamics
  • Multi-domain/concurrency
  • Multi-scale and/or multi-physics modelling for biomedical or biological systems
  • Novel approaches to combine different scales and physics models in one problem solution
  • Challenging applications in industry and academia, e.g. multi-phase flows, fluid-structure interactions, chemical engineering, material science, biophysics, automotive industry, …
  • Load balancing
  • Adaptivity
  • Heterogeneous architectures
  • New algorithms for parallel and distributed computing specific to these topics.

Format and Submission Guidelines

Workshop participants are requested to submit a 2-page abstract (templates for Word and LaTeX are provided) via EasyChair. Peer-reviewed proceedings are planned as a book in the STS series; publication in the proceedings is not mandatory for participation.

Important dates

Submission deadline: September 24, 2019 (extended)
Notification of acceptance: November 14, 2019
Camera-ready paper (not mandatory): December 15, 2019
Conference dates: January 15-17, 2020


Travel grants for presenters will be available, funded by the German priority program SPPEXA.


Organizers

  • Prof. Dr.-Ing. Sabine Roller
  • Prof. Dr.-Ing. Sabine Roller is a professor at the University of Siegen, where she heads the Lab for Simulation Techniques and Scientific Computing as well as the Center for Information and Media Technology (ZIMT). She works in the field of coupled multi-physics and multi-scale simulations, computational fluid dynamics, and the efficient implementation of different methods, using parallelization and vectorization techniques, heterogeneous domain decomposition, and modern hardware and software developments such as GPU computing and PGAS languages. She was scientific chair of the PASC 2018 conference together with Jack Wells (ORNL), and has organized workshops on sustained simulation performance in Japan, a panel on "Funding strategies for HPC software beyond borders" at SC'14, and a workshop on "Software Frameworks for Scalable Scientific Simulations" at ISC'15. She is vice-president of the strategic committee for National High-Performance Computing (NHR) at the German Joint Science Conference (GWK, Gemeinsame Wissenschaftskonferenz).

    Address: University of Siegen, Adolf-Reichwein-Straße 2, 57076 Siegen/Germany

    Email: sabine.roller@uni-siegen.de

  • Neda Ebrahimi Pour, M. Sc.
  • Neda Ebrahimi Pour, M. Sc. is a PhD student in the area of simulation techniques and scientific computing at the University of Siegen. She studied mechanical engineering, and her work relates to high-performance computing and coupled simulations of fluid-structure-acoustics (FSA) interactions. Since 2017 she has contributed to the organisation of the annual CFD Workshop at the University of Siegen. Furthermore, she was one of the co-organizers of the first SPPEXA Women workshop, held in January 2019 in Munich/Germany, and an organizer of the special track at the PDSEC'19 workshop at the IPDPS 2019 conference in Rio de Janeiro/Brazil.

    Address: University of Siegen, Adolf-Reichwein-Straße 2, 57076 Siegen/Germany

    Email: neda.epour@uni-siegen.de

  • Prof. Dr. Nahid Emad
  • Nahid Emad is a professor of computer science at the Université de Versailles. She received the Habilitation to Direct Research (HDR) in computer science from the University of Versailles, her Ph.D. and M.S. in applied mathematics from Pierre et Marie Curie University (Paris VI), and her B.S. from the University of Arak (Iran). She leads the Intensive Numerical Computation group of the computer science department at the University of Versailles and has authored about a hundred papers in international journals and conferences. She specializes in high-performance numerical computation, linear algebra, parallel and distributed programming paradigms, big data analytics, and software engineering for parallel and distributed numerical computing.

    Address: Université de Versailles, 45 Av. des États-Unis, 78035 Versailles Cedex, France

    Email: emad@uvsq.fr

Program Committee members

  • Benjamin Uekermann, Eindhoven University of Technology, Department of Mechanical Engineering, Energy Technology, Netherlands
  • Holger Marschall, TU Darmstadt, Thermo-Fluids & Interfaces, Germany
  • Philipp Schlatter, KTH Stockholm, Department of Mechanics, Sweden
  • Anshu Dubey, Argonne National Laboratory, US
  • Hiroyuki Takizawa, Tohoku University, Department of Mechanical Engineering, High Performance Computing, Japan
  • Ryusuke Egawa, Tohoku University, Department of Mechanical Engineering, High Performance Computing, Japan
  • Keigo Matsuda, JAMSTEC, Center for Earth Information Science and Technology, Japan


Program

Time | Title | Speakers/Authors
13:00 - 13:10 | Opening Remarks | Sabine Roller (STS, ZIMT, University of Siegen)
13:10 - 14:00 | Invited Talk: Parallel Eigensolvers based on Unconstrained Energy Functionals Methods | Osni Marques (Scalable Solvers Group, Lawrence Berkeley National Laboratory)
14:00 - 14:30 | Talk: Advantages of Space-Time Finite Elements for Fluid-Structure-Contact Interaction | Norbert Hosters (RWTH Aachen), Max von Danwitz, Thomas Spenke and Marek Behr
14:30 - 14:50 | Coffee Break |
14:50 - 15:20 | Talk: Coupled multi-physics simulation of electrodialysis process for seawater desalination | Kannan Masilamani (University of Siegen), Harald Klimach and Sabine Roller
15:20 - 15:50 | Talk: Two-Level Parallel Initialization and Coupling for Partitioned Multi-Physics Simulations with preCICE | Amin Totounferoush (University of Stuttgart), Frédéric Simonis, Benjamin Uekermann and Miriam Mehl
15:50 - 16:20 | Talk: Dynamic Load Balancing Strategies for Extreme-Scale Multiphysics Simulations | Niclas Jansson (KTH Stockholm), Rahul Bale, Makoto Tsubokura and Erwin Laure
16:20 - 16:30 | Closing Remarks | Sabine Roller (STS, ZIMT, University of Siegen)

The program is also available as a PDF file.