Extended Software Development Workshop: Mesoscopic simulation models and High-Performance Computing

If you are interested in attending this event, please visit the CECAM website here.

Workshop Description

In Discrete Element Methods (DEM) the equations of motion of a large number of particles are numerically integrated to obtain the trajectory of each particle [1]. The collective movement of the particles very often endows the system with unpredictable, complex dynamics that are inaccessible to any mean-field approach. Such phenomenology is present, for instance, in seemingly simple systems such as the hopper/silo, where intermittent flow accompanied by random clogging occurs [2]. With the growth of computing power, alongside that of the numerical algorithms, it has become possible to simulate such scenarios involving the trajectories of millions of spherical particles for a limited simulation time. Incorporating more complex particle shapes [3] or the influence of the interstitial medium [4] rapidly decreases the accessible number of particles.
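To make the basic algorithm concrete, the sketch below is a minimal illustration only (not code from any of the packages discussed at the workshop, and all parameter values are arbitrary placeholders): it integrates soft spheres under gravity with a linear spring-dashpot contact force using velocity Verlet. Production DEM codes add neighbour lists, tangential friction and realistic boundaries on top of exactly this loop.

```python
import numpy as np

# Minimal 2D DEM sketch: spheres under gravity with a linear spring-dashpot
# normal contact force, integrated with velocity Verlet.
# All parameters are illustrative placeholders, not values from any real code.

n, dim = 50, 2
radius, mass = 0.01, 1.0e-3          # particle radius (m) and mass (kg)
k_n, gamma_n = 1.0e4, 0.5            # normal stiffness (N/m), damping (kg/s)
g = np.array([0.0, -9.81])           # gravity (m/s^2)
dt, steps = 1.0e-5, 10000            # time step (s), number of steps

rng = np.random.default_rng(0)
pos = rng.uniform(0.05, 0.95, (n, dim))   # initial positions in a unit box
vel = np.zeros((n, dim))

def forces(pos):
    """Pairwise linear spring-dashpot normal forces plus gravity (O(n^2);
    real DEM codes use neighbour lists to reach millions of particles)."""
    f = np.tile(mass * g, (n, 1))
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            dist = np.linalg.norm(rij)
            if 0.0 < dist < 2.0 * radius:          # particles overlap -> contact
                overlap = 2.0 * radius - dist
                normal = rij / dist
                v_n = np.dot(vel[i] - vel[j], normal)
                fn = (k_n * overlap - gamma_n * v_n) * normal
                f[i] += fn
                f[j] -= fn
    return f

f = forces(pos)
for step in range(steps):
    # velocity Verlet: update positions, recompute forces, update velocities
    pos += vel * dt + 0.5 * (f / mass) * dt**2
    f_new = forces(pos)
    vel += 0.5 * (f + f_new) / mass * dt
    f = f_new
    pos = np.clip(pos, radius, 1.0 - radius)       # crude box confinement
```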

Another class of computer simulations enjoying huge popularity in the science and engineering community is Computational Fluid Dynamics (CFD). A tractable approach for performing such simulations is the family of Lattice Boltzmann Methods (LBMs) [5]. There, instead of directly solving the strongly non-linear Navier-Stokes equations, the discrete Boltzmann equation is solved to simulate the flow of Newtonian or non-Newtonian fluids with the appropriate collision models [6,7]. The method closely resembles DEM in that it simulates streaming and collision processes for a limited number of fictitious particles, whose collective behaviour gives rise to viscous flow across the greater mass.
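The stream-and-collide structure is easiest to see in a minimal single-relaxation-time (BGK) example. The sketch below implements a bare D2Q9 lattice on a periodic grid; it is a generic illustration of the algorithm (no boundaries, forcing or output), not an excerpt from any production LBM code, and the grid size and relaxation time are arbitrary choices.

```python
import numpy as np

# Minimal D2Q9 lattice Boltzmann (BGK) sketch on a fully periodic grid.
# Illustrates the stream-and-collide loop only.

nx, ny = 128, 64
tau = 0.8                      # relaxation time (sets the viscosity)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])          # lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                    # lattice weights

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution (lattice units, c_s^2 = 1/3)."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# Initialise at rest with a small density perturbation
rng = np.random.default_rng(0)
rho = 1.0 + 0.01 * rng.random((nx, ny))
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for step in range(1000):
    # Macroscopic moments
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # Collision: relax towards the local equilibrium (BGK operator)
    f += -(f - equilibrium(rho, ux, uy)) / tau
    # Streaming: shift each population along its lattice velocity (periodic)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
```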

As both methods have gained popularity in solving engineering problems, and as scientists have become more aware of finite-size effects, the size and time requirements for simulating practically relevant systems with these methods have grown beyond the capabilities of even the most modern CPUs [8,9]. Massive parallelization is thus becoming a necessity. This is naturally offered by graphics processing units (GPUs), making them an attractive platform for running these simulations, which consist of a large number of relatively simple mathematical operations that are readily implemented on a GPU [8,9].

 

References

[1] P. A. Cundall and O. D. L. Strack, Geotechnique 29, 47–65 (1979).
[2] H. G. Sheldon and D. J. Durian, Granular Matter 6, 579–585 (2010).
[3] A. Khazeni and Z. Mansourpour, Powder Tech. 332, 265–278 (2018).
[4] J. Koivisto, M. Korhonen, M. J. Alava, C. P. Ortiz, D. J. Durian, and A. Puisto, Soft Matter 13, 7657–7664 (2017).
[5] S. Succi, The Lattice Boltzmann Equation: For Fluid Dynamics and Beyond, Oxford University Press (2001).
[6] L. S. Luo, W. Liao, X. Chen, Y. Peng, and W. Zhang, Phys. Rev. E 83, 056710 (2011).
[7] S. Gabbanelli, G. Drazer, and J. Koplik, Phys. Rev. E 72, 046312 (2005).
[8] N. Govender, R. K. Rajamani, S. Kok, and D. N. Wilke, Minerals Engin. 79, 152–168 (2015).
[9] P. R. Rinaldi, E. A. Dari, M. J. Vénere, and A. Clausse, Simulation Modelling Practice and Theory 25, 163–171 (2012).


Extension of the ParaDiS code to include precipitate interactions, and code optimisation to run in an HPC environment


Here we present two featured software modules of the month:

  1. ParaDiS with precipitates
  2. ParaDiS with precipitates optimized to HPC environment

that extend the ParaDiS discrete dislocation dynamics (DDD) code (LLNL, http://paradis.stanford.edu/) to include dislocation/precipitate interactions. Module 2 was built to run the code in an HPC environment, by optimizing the original code for the Cray XC40 cluster at CSC in Finland. The software was developed by E-CAM partners at CSC and Aalto University (Finland).

Practical application and exploitation of the codes

The ParaDiS code is a free large-scale dislocation dynamics (DD) simulation code for studying the fundamental mechanisms of plasticity. However, DDD simulations do not always account for impurities interacting with the dislocations and their motion. The consequences of impurities are multiple: the yield stress is changed, and the plastic deformation process in general is greatly affected. Simulating these effects with DDD makes it possible to address a large number of issues, from materials design to controlling the yield stress, and may be done in a multiscale manner, either by computing the dislocation-precipitate interactions from microscopic simulations or by coarse-graining the DDD stress-strain curves from the mesoscopic scale up to a more macroscopic Finite Element Method.

Modules 1 and 2 therefore extend the ParaDiS code by including dislocation/precipitate interactions; the possibility to run the code in HPC environments is also provided.

Software documentation and link to the source code can be found in our E-CAM software Library here.


Upcoming event: Extended Software Development Workshop in Mesoscopic simulation models and HPC


E-CAM partners at Aalto University (CECAM Finnish Node), in collaboration with the HPC training experts from the CSC Supercomputing Centre, are organizing a joint Extended Software Development Workshop on 15-19 October 2019, aimed at people interested in particle-based methods, such as the Discrete Element and Lattice Boltzmann Methods, and in their massive parallelization using GPU architectures. The workshop will mix three different ingredients: (1) a workshop on state-of-the-art challenges in computational science and software, (2) a CSC-run school, and (3) coding sessions with the aid of CSC facilities and expertise.

How to Apply

Follow the instruction at the CECAM website for the event: https://www.cecam.org/workshop1752/

Organizers

  • Mikko Alava
    Aalto University, Finland
  • Brian Tighe
    TU Delft, The Netherlands
  • Jan Astrom
    CSC – IT Center for Science, Finland
  • Antti Puisto
    Aalto University, Finland

Location

CECAM-FI Node, Aalto University, Finland

Dates

October 15 – 19, 2019


Mesoscale simulation of billion-atom complex systems using thousands of GPGPUs: an industry success story


Dr. Jony Castagna, Science and Technology Facilities Council, United Kingdom


Abstract

Jony Castagna recounts his transition from industry scientist to research software developer at STFC, his E-CAM rewrite of DL_MESO allowing the simulation of billion-atom systems on thousands of GPGPUs, and his latest role as Nvidia ambassador focused on machine learning.


Jony, can you tell us how you came to work on the E-CAM project and what you were doing before?

My background is in Computational Fluid Dynamics (CFD), and I worked for many years in London as a computational scientist in the Oil & Gas industry. I joined STFC – Hartree Centre in 2016, and E-CAM was my first project. E-CAM offered an opportunity to work in a new, more academic and fundamental research environment.

What is your role in E-CAM?

My role is that of research software developer, which consists mainly in supporting the E-CAM postdoctoral researchers in developing their software modules, benchmarking available codes and contributing to the deliverables of the several work packages. This includes the work described here on co-designing DL_MESO to run on GPUs.

What is DL_MESO and why was it important to port it to massively parallel computing platforms?

DL_MESO is a software package for mesoscale simulations developed by M. Seaton at the Hartree Centre [1,2]. It is basically made of two software components: a Lattice Boltzmann Method solver, which uses the Lattice Boltzmann equation discretized on a lattice (2D or 3D) to simulate the fluid-dynamic effects of complex multiphase systems; and a Dissipative Particle Dynamics (DPD) solver, a particle-based method in which a soft potential, together with coupled dissipative and stochastic forces, allows Molecular Dynamics to be used with a larger time step.
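For readers unfamiliar with DPD, the standard Groot-Warren formulation combines a soft conservative repulsion with dissipative and random pair forces tied together by the fluctuation-dissipation relation. The sketch below evaluates that pair force for two beads; it is a generic illustration with placeholder parameters, not code taken from DL_MESO itself.

```python
import numpy as np

# Standard DPD pair force (Groot-Warren form): soft conservative repulsion,
# dissipative drag and a random force, linked by the fluctuation-dissipation
# relation sigma^2 = 2 * gamma * kBT.  Parameter values are illustrative only.

a, gamma, kBT = 25.0, 4.5, 1.0       # repulsion, friction, temperature (DPD units)
r_c, dt = 1.0, 0.01                   # cutoff radius and time step
sigma = np.sqrt(2.0 * gamma * kBT)
rng = np.random.default_rng(0)

def dpd_pair_force(ri, rj, vi, vj):
    """Total DPD force on bead i due to bead j (zero beyond the cutoff)."""
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= r_c or r == 0.0:
        return np.zeros(3)
    e = rij / r                        # unit vector pointing from j to i
    w = 1.0 - r / r_c                  # weight function w^R; w^D = w^2
    f_c = a * w * e                                        # conservative
    f_d = -gamma * w**2 * np.dot(e, vi - vj) * e           # dissipative
    f_r = sigma * w * rng.standard_normal() / np.sqrt(dt) * e   # random
    return f_c + f_d + f_r

# Example: two beads approaching each other head-on
f = dpd_pair_force(np.array([0.0, 0.0, 0.0]), np.array([0.6, 0.0, 0.0]),
                   np.array([-0.1, 0.0, 0.0]), np.array([0.1, 0.0, 0.0]))
```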

The need to port DL_MESO to massively parallel computing platforms arose because real systems are often made of millions of beads (each bead representing a group of molecules), and small clusters are usually not sufficient to obtain results in a reasonable time. Moreover, with the advent of hybrid architectures, updating the code is becoming an important software engineering step to allow scientists to continue their work on such systems.

How well were you able to improve the scaling performance of DL_MESO with multiple GPGPUs, and as a consequence, how large a system can you now treat?

The current multi-GPU version of DL_MESO scales with 85% efficiency up to 2048 GPUs, equivalent to about 10 petaflops of double-precision performance (see Fig. 1, reproduced from E-CAM Deliverable 7.6 [3]). This allows the simulation of very large systems, such as a phase mixture with 1.8 billion particles (Fig. 2). The performance was obtained on the PRACE resource Piz Daint, the supercomputer at CSCS.


Figure 1. Strong scaling efficiency of DL_MESO versus the number of GPGPUs for a simulation of a complex mixed-phase system consisting of 1.8 billion atoms.
Figure 2. Snapshot of the simulated system.
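As a reminder of how the efficiency in Figure 1 is defined, strong-scaling efficiency compares the measured speed-up at fixed problem size against the ideal one. The snippet below only illustrates that arithmetic; the timings in it are made-up placeholders, not the DL_MESO measurements reported in the deliverable.

```python
# Strong scaling efficiency at fixed problem size:
#   E(N) = (T_ref * N_ref) / (T_N * N)
# where T_ref is the wall time measured on the smallest GPU count N_ref.
# The timings below are made-up placeholders, not DL_MESO measurements.

timings = {256: 100.0, 512: 52.0, 1024: 27.5, 2048: 14.7}   # seconds
n_ref = min(timings)
t_ref = timings[n_ref]

for n, t in sorted(timings.items()):
    efficiency = (t_ref * n_ref) / (t * n)
    print(f"{n:5d} GPUs: efficiency = {efficiency:.2f}")
```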

What are the sorts of practical problems that motivated these developments, and what is the interest from industry (in particular IBM and Unilever)?

DPD has the intrinsic capability to conserve hydrodynamic behaviour, which means it reproduces fluid-dynamic effects when a large number of beads is used. The use of massively parallel computing allows the simulation of complex phenomena like shear banding in surfactants and ternary systems present in many personal care, nutrition, and hygiene products. DL_MESO has been used intensively by IBM Research UK and Unilever, and there is a long-standing collaboration with the Hartree Centre that is still ongoing.

Are there some examples of the power of DL_MESO to simulate continuum problems with difficult boundary conditions, etc., where standard continuum approaches fail?

Yes. One good example is polymer melt simulation. Realistic polymers are notoriously large macromolecules, and modelling them in industrial manufacturing processes where fluid-dynamic effects like extrusion are present is a very challenging task. Traditional CFD solvers fail to describe well the complex interfaces and interactions between polymers. DPD represents an ideal approach for such systems.

What were the particular challenges in porting DL_MESO to GPUs? You started with an implementation on a single GPU and only afterwards ported it to multiple GPUs. Was that necessary?

The main challenge has been to adapt the numerical algorithm implemented in the serial version to the multithreaded GPU architecture. This mainly required a reorganization of the memory layout to guarantee coalesced access and take advantage of the extreme parallelism provided by the accelerator. The single-GPU version was developed and optimized first, and then extended to multi-GPU capability based on the MPI library and a typical domain decomposition approach.
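A rough picture of that memory-layout change, sketched here with NumPy purely for illustration (the actual DL_MESO port is GPU code, not Python): an array-of-structures interleaves the fields of each particle, while a structure-of-arrays keeps each field contiguous, so that consecutive threads touching consecutive particles read consecutive addresses, the pattern a GPU can coalesce.

```python
import numpy as np

n = 1_000_000
dt = 0.001

# Array-of-structures (AoS): the fields of one particle are interleaved in
# memory, so accessing all "x" values is a strided walk through the buffer.
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"),
                         ("vx", "f8"), ("vy", "f8"), ("vz", "f8")])

# Structure-of-arrays (SoA): each field is a contiguous array, so consecutive
# threads (or vector lanes) working on consecutive particles read consecutive
# memory addresses -- the access pattern that can be coalesced on a GPU.
soa = {name: np.zeros(n) for name in ("x", "y", "z", "vx", "vy", "vz")}

# The same position update written against both layouts:
aos["x"] += aos["vx"] * dt          # strided access in the underlying buffer
soa["x"] += soa["vx"] * dt          # unit-stride (contiguous) access
```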

We know you are adding functionalities to the GPU version of DL_MESO, such as electrostatics and bond forces. Why is that important?

Electrostatic forces are very common in real systems; they allow the simulation of complex products where charges are distributed across the beads, creating polarization effects like those in a water molecule. However, these are long-range interactions, and special methods like the Ewald Summation and Smooth Particle Mesh Ewald are needed to fully compute their effects. They represent a challenge for numerical implementation due to their high computational cost and the difficulties they present for parallelization.
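For context, the textbook Ewald decomposition (written here in Gaussian units; this is the generic method, not a description of the specific DL_MESO implementation) splits the slowly convergent Coulomb sum into a short-ranged real-space term, a smooth reciprocal-space term and a self-energy correction:

$$
E = \frac{1}{2}\sum_{i\neq j} q_i q_j \frac{\operatorname{erfc}(\alpha r_{ij})}{r_{ij}}
  + \frac{2\pi}{V}\sum_{\mathbf{k}\neq 0} \frac{e^{-k^{2}/4\alpha^{2}}}{k^{2}}
    \Big|\sum_{j} q_j\, e^{i\mathbf{k}\cdot\mathbf{r}_j}\Big|^{2}
  - \frac{\alpha}{\sqrt{\pi}}\sum_i q_i^{2}.
$$

The real-space term can be cut off like any short-range force, while SPME evaluates the reciprocal-space sum with FFTs on a charge mesh, which is the part that is computationally costly and delicate to parallelize.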

Where can the reader find documentation about the software developments that you have been doing in DL_MESO?

Mainly in the E-CAM modules dedicated to DL_MESO that have been reported in Deliverables 4.4 [4] and 7.6 [3], and also in the E-CAM software repository here.

Did your work with E-CAM, on the porting of DL_MESO to GPUs, open doors for you in some sense?

Yes. IBM Research UK has shown interest in the multi-GPU version of the code for their studies on multiphase systems and Formeric, a spin-off company of STFC, is planning to use it as the back end of their products for mesoscale simulations.

Recently, you have also been nominated as an NVidia Ambassador. How did that happen?

We have a regular collaboration with Nvidia, not only through the Nvidia Deep Learning Institute (DLI) for dissemination and tutorials, but also for optimization when porting software to multiple GPUs, as well as for Deep Learning applications, applied mainly to industrial computer vision problems. This is how I obtained Nvidia DLI Ambassador status in October 2018. It has been a great experience and an exciting opportunity.

What would you like to do next?

The Nvidia Ambassador experience in Deep Learning has opened an exciting new opportunity in so-called Naive Science: the idea is to use neural networks to replace traditional computational science solvers. A neural network can be trained using real or simulated data and then used to predict new properties of molecules or the fluid-dynamic behaviour of different systems. This can speed up simulations by a couple of orders of magnitude, as well as avoid complex modelling based on ad hoc parameters that are often difficult to determine.

References

[1] http://www.cse.clrc.ac.uk/ccg/software/DL_MESO/
[2] M. A. Seaton, R. L. Anderson, S. Metz, and W. Smith, “DL_MESO: highly scalable mesoscale simulations,” Molecular Simulation, vol. 39, no. 10, pp. 796–821, Sep. 2013.
[3] Alan O’Cais, & Jony Castagna. (2019). E-CAM Software Porting and Benchmarking Data III (Version 1.0). Available in Zenodo: https://doi.org/10.5281/zenodo.2656216
[4] Silvia Chiacchiera, Jony Castagna, & Christian Krekeler. (2019). D4.4: Meso- and multi-scale modelling E-CAM modules III (Version 1.0). Available in Zenodo: https://doi.org/10.5281/zenodo.2555012



Abrupt GC-AdResS: A new and more general implementation of the Grand Canonical Adaptive Resolution Scheme (GC-AdResS)

The Grand Canonical Adaptive Resolution Scheme (GC-AdResS) provides a methodological framework for partitioning a simulation box into different regions with different degrees of accuracy. For more details on the theory see Refs. [1,2,3].

In the context of an E-CAM pilot project focused on the development of the GC-AdResS scheme, an updated version of GC-AdResS was built and implemented in GROMACS, as reported in https://aip.scitation.org/doi/10.1063/1.5031206 (open access version: https://arxiv.org/abs/1806.09870). The main goal of the project is to develop a library or recipe with which GC-AdResS can be implemented in any classical MD code.

The current implementation of GC-AdResS in GROMACS has several performance problems. The main performance loss of AdResS simulations in GROMACS lies in the neighbour list search and in the generic, serial force calculation that couples the atomistic (AT) and coarse-grained (CG) forces via a smooth weighting function. Thus, to remove this performance bottleneck, which also hinders an easy and general implementation in other codes, and to eliminate the non-optimized force calculation, we had to change the neighbour list search. This led to a considerable speed-up of the code. Furthermore, it decouples the method from the core of any MD code, which preserves performance and makes the scheme hardware independent [4].
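For reference, the smooth coupling referred to above is the standard AdResS force interpolation (see Refs. [2,3]),

$$
\mathbf{F}_{\alpha\beta} = w(X_\alpha)\,w(X_\beta)\,\mathbf{F}^{\mathrm{AT}}_{\alpha\beta}
 + \bigl[1 - w(X_\alpha)\,w(X_\beta)\bigr]\,\mathbf{F}^{\mathrm{CG}}_{\alpha\beta},
$$

where the weighting function w goes smoothly from 1 in the atomistic region to 0 in the coarse-grained region, and X_α is the position of molecule α. In the Abrupt AdResS variant the smooth transition region is essentially removed, so the per-pair smooth weighting in the force kernel is no longer needed [4].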

This module presents a very straightforward way to implement a new partitioning scheme in GROMACS, and it solves two problems that affect performance: the neighbour list search and the generic force kernel.

Information about module purpose, background information, software installation, testing and a link to the source code, can be found in our E-CAM software Library here.

E-CAM Deliverables D4.3[5] and D4.4[6] present more modules developed in the context of this pilot project.

References

[1] L. Delle Site and M. Praprotnik, “Molecular Systems with Open Boundaries: Theory and Simulation,” Phys. Rep., vol. 693, pp. 1–56, 2017

[2] H.Wang, C. Schütte, and L.Delle Site, “Adaptive Resolution Simulation (AdResS): A Smooth Thermodynamic and Structural Transition fromAtomistic to Coarse Grained Resolution and Vice Versa in a Grand Canonical Fashion,” J. Chem. Theory Comput., vol. 8, pp. 2878–2887, 2012

[3] H. Wang, C. Hartmann, C. Schütte, and L. Delle Site, “Grand-Canonical-Like Molecular-Dynamics Simulations by Using an Adaptive-Resolution Technique,” Phys. Rev. X, vol. 3, p. 011018, 2013

[4] C. Krekeler, A. Agarwal, C. Junghans, M. Praprotnik, and L. Delle Site, “Adaptive resolution molecular dynamics technique: Down to the essential,” J. Chem. Phys. 149, 024104 (2018)

[5] B. Duenweg, J. Castagna, S. Chiacchiera, H. Kobayashi, and C. Krekeler, “D4.3: Meso– and multi–scale modelling E-CAM modules II”, March 2018. [Online]. Available: https://doi.org/10.5281/zenodo.1210075

[6] B. Duenweg, J. Castagna, S. Chiacchiera, and C. Krekeler, “D4.4: Meso– and multi–scale modelling E-CAM modules III”, Jan 2019. [Online]. Available: https://doi.org/10.5281/zenodo.2555012


E-CAM related work labeled as “Excellent Science” by the EC Innovation Radar Initiative

The Innovation Radar aims to identify high-potential innovations and innovators. It is an important source of actionable intelligence on innovations emerging from research and innovation projects funded through European Union programmes.

E-CAM is associated with the following innovations (Innovation topic: Excellent Science):

    1. Improved Simulation Software Packages for Molecular Dynamics (see link)
    2. Improved software modules for Meso– and multi–scale modelling (see link)

These are related to the work of our E-CAM funded postdoctoral researchers, supervised by scientists in the team, working on:

  • Development of the OpenPathSampling package to study rare events  (Universiteit van Amsterdam). Link1
  • Implementation of GPU version of DL_MESO_DPD (Hartree Centre (STFC)). Link
  • Development of polarizable mesoscale model for DL_MESO_DPD (Hartree Centre (STFC)). Link
  • Development of the GC-AdResS scheme (Freie Universitaet Berlin). Link

  • Implementation of hierarchical strategy on ESPResSo++ (Max Planck Institute for Polymer Research, Mainz). Link

New E-CAM publication is out: “Molecular Dynamics of Open Systems: Construction of a Mean‐Field Particle Reservoir”



New publication from E-CAM partners working at the Institute of Mathematics of the Freie Universität Berlin:

Molecular Dynamics of Open Systems: Construction of a Mean‐Field Particle Reservoir

Authors: Luigi Delle Site, Christian Krekeler, John Whittaker, Animesh Agarwal, Rupert Klein, and Felix Höfling

Adv. Theory Simul. 2019, 1900014, DOI: 10.1002/adts.201900014 (Open access)

Synopsis

A procedure for the construction of a particle and energy reservoir for the simulation of open molecular systems is presented. The reservoir is made of non‐interacting particles (tracers), embedded in a mean‐field. The tracer molecules acquire atomistic resolution upon entering the atomistic region, while atomistic molecules become tracers after crossing the atomistic boundary.

Abstract

The simulation of open molecular systems requires explicit or implicit reservoirs of energy and particles. Whereas full atomistic resolution is desired in the region of interest, there is some freedom in the implementation of the reservoirs. Here, a combined, explicit reservoir is constructed by interfacing the atomistic region with regions of point-like, non-interacting particles (tracers) embedded in a thermodynamic mean field. The tracer molecules acquire atomistic resolution upon entering the atomistic region and equilibrate with this environment, while atomistic molecules become tracers governed by an effective mean-field potential after crossing the atomistic boundary. The approach is extensively tested on thermodynamic, structural, and dynamic properties of liquid water. Conceptual and numerical advantages of the procedure as well as new perspectives are highlighted and discussed.


Open Postdoctoral Position in Mesoscale Modeling in Nanostructured Materials


In the context of the EU H2020 project E-CAM we are seeking a highly qualified post-doctoral researcher for an exciting collaborative project on the fundamental challenges of  driven transport in complex media. 

Increasingly, modern technology is addressing problems where fluid transport takes place in submicron sized channels, or in pores. The physical laws of transport in such channels are qualitatively different from those that determine bulk flow; they are poorly understood and, importantly, barely exploited. 

The postdoctoral position will  address complementary aspects related to the fundamental challenges of thermodynamic driving on systems of potential industrial interest. In this respect, the  project will be developed in close contact with an industrial partner. 

The project will involve both algorithmic and scientific developments. The candidate will benefit from existing in-house expertise in lattice Boltzmann methods for non-equilibrium soft materials and will contribute to its extension and use on complex materials out of equilibrium. The project will go  beyond the state-of-the-art macroscopic descriptions of phoresis to capture the effects of solute and surface specificity, solute flexibility, surface wettability and heterogeneity, fluctuations and correlations.

We seek motivated researchers, with theoretical and computational expertise. Candidates should have a background in computer simulation, statistical mechanics, biophysics and/or soft condensed matter.

The project will be carried out at the University of Barcelona, under the supervision of Prof. Ignacio Pagonabarraga, for an initial period of 20 months. Candidates with an appropriate background, who are interested in cutting-edge research at the interface between physics and the biological sciences, are invited to apply.

We look forward to receiving a CV and one referee letter. You can send these documents, or requests for any additional information, to Prof. I. Pagonabarraga by email at ipagonabarraga@ub.edu. Review of applications will continue until the position is filled.


Porting of electrostatics to the GPU version of DL_MESO_DPD


The porting of DL_MESO_DPD [1,2] to graphics cards (GPUs) was reported in deliverable D4.2 of E-CAM [3] (for a single GPU) and deliverable D4.3 [4] (for multiple GPUs) (Figure 1), and has now been extended to include electrostatics, with two alternative schemes as explained below. This work was recently reported in deliverable D4.4 [5].

Figure 1: DL_MESO strong scaling results on Piz Daint, obtained using 1.8 billion particles on 256 to 2048 GPUs. Results show very good scaling, with efficiency remaining above 89% up to 2048 GPUs.


To allow Dissipative Particle Dynamics (DPD) methods to treat systems with electrically charged particles, several approaches have been proposed in the literature, mostly based on the Ewald summation method [6]. The DL_MESO_DPD code includes the Standard Ewald and Smooth Particle Mesh Ewald (SPME) methods (in version 2.7, released in December 2018). Accordingly, the same methods are implemented here for the single-GPU version of the code.


9 software modules recently delivered in the area of Meso and Multi-scale Modelling

In this report for Deliverable 4.4 [1] of E-CAM, nine software modules in meso- and multi-scale modelling are presented. Four of the modules have been implemented in DL_MESO_DPD:

• Ewald method for the GPU version of DL_MESO_DPD

• Smooth Particle Mesh Ewald (SPME) method for the GPU version of DL_MESO_DPD

• Analysis of local tetrahedral ordering for DL_MESO_DPD[2]

• Consistency check of input files in DL_MESO_DPD[2]

Five of the modules concern the Grand Canonical Adaptive Resolution Scheme (GC-AdResS) and have been developed, implemented and tested in/with GROMACS 5.1.0 and GROMACS 5.1.5 [3]. The patches provided are for GROMACS 5.1.5. The modules provide a recipe to simplify the implementation and to allow looking into a microcanonical (i.e., NVE-like) environment. They are based on the same principles as the Abrupt AdResS modules reported in the previous deliverable D4.3 [4].

Furthermore, we provide all the tools necessary to run and check the AdResS simulations. The modules are:

• Local Thermostat Abrupt AdResS

• Thermodynamic Force Calculator for Abrupt AdResS

• Energy (AT)/Energy(interface) ratio: Necessary condition for AdResS simulations

• Velocity-Velocity autocorrelation function for AdResS

• AdResS-Radial Distribution Function (RDF).

A short description is written for each module, followed by a link to the respective Merge-Request on the GitLab service of E-CAM. These merge requests contain detailed information about the code development, testing and documentation of the modules.

Full report available here.

[1] S. Chiacchiera, J. Castagna, and C. Krekeler, “Meso– and multi–scale modelling E-CAM modules III,” Jan. 2019. [Online]. Available: https://doi.org/10.5281/zenodo.2555012

[2] This work is part of an E-CAM pilot project focused on the development of Polarizable Mesoscale Models

[3] This work is part of an E-CAM pilot project focused on the development of the GC-AdResS scheme

[4] B. Duenweg, J. Castagna, S. Chiacchiera, H. Kobayashi, and C. Krekeler, “Meso– and multi–scale modelling E-CAM modules II,” Mar. 2018. [Online]. Available: https://doi.org/10.5281/zenodo.1210075
