Porting of electrostatics to the GPU version of DL_MESO_DPD


The porting of DL_MESO_DPD [1,2] to graphics cards (GPUs) was reported in E-CAM deliverables D4.2 [3] (for a single GPU) and D4.3 [4] (for multiple GPUs) (Figure 1). It has now been extended to include electrostatics, with two alternative schemes, as explained below. This work was recently reported in deliverable D4.4 [5].

Figure 1: DL_MESO strong scaling results on Piz Daint, obtained using 1.8 billion particles on 256 to 2048 GPUs. The results show very good scaling, with efficiency remaining above 89% up to 2048 GPUs.


To allow Dissipative Particle Dynamics (DPD) methods to treat systems with electrically charged particles, several approaches have been proposed in the literature, mostly based on the Ewald summation method [6]. The DL_MESO_DPD code includes the standard Ewald and Smooth Particle Mesh Ewald (SPME) methods (as of version 2.7, released in December 2018), and the same two methods have accordingly been implemented in the single-GPU version of the code.
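
To give a flavour of the underlying algorithm, the sketch below shows the textbook structure of a standard Ewald sum for point charges in a cubic periodic box: a short-ranged real-space sum screened by complementary error functions, a reciprocal-space sum over k-vectors, and a self-energy correction. This is a minimal Python/NumPy illustration, not the DL_MESO_DPD implementation (which, among other differences, is written in Fortran/CUDA); the function name and default parameters are hypothetical.

```python
# Minimal standard Ewald sum for point charges in a cubic periodic box
# (Gaussian units). Illustrative only: names and defaults are hypothetical,
# and the DL_MESO_DPD implementation differs in detail.
import numpy as np
from scipy.special import erfc

def ewald_energy(pos, q, L, alpha=0.8, r_cut=None, k_max=6):
    """Total electrostatic energy: real-space + reciprocal-space + self term."""
    n = len(q)
    r_cut = L / 2 if r_cut is None else r_cut

    # Real-space part: short-ranged, screened by erfc, minimum-image pairs.
    e_real = 0.0
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= L * np.round(d / L)                      # minimum image
        r = np.linalg.norm(d, axis=1)
        m = r < r_cut
        e_real += np.sum(q[i] * q[i + 1:][m] * erfc(alpha * r[m]) / r[m])

    # Reciprocal-space part: sum over k-vectors via the structure factor.
    e_recip = 0.0
    rng = range(-k_max, k_max + 1)
    for kx in rng:
        for ky in rng:
            for kz in rng:
                if kx == ky == kz == 0:
                    continue
                k = 2.0 * np.pi * np.array([kx, ky, kz]) / L
                k2 = k @ k
                s = np.sum(q * np.exp(1j * (pos @ k)))  # structure factor S(k)
                e_recip += np.exp(-k2 / (4.0 * alpha**2)) / k2 * abs(s)**2
    e_recip *= 2.0 * np.pi / L**3

    # Self-energy correction: remove each charge's interaction with
    # its own Gaussian screening cloud.
    e_self = -alpha / np.sqrt(np.pi) * np.sum(q**2)

    return e_real + e_recip + e_self
```

SPME follows the same real/reciprocal split but evaluates the reciprocal-space sum on a mesh with fast Fourier transforms, which is what makes it attractive for large systems and for GPUs.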


9 software modules recently delivered in the area of Meso and Multi-scale Modelling

In this report for Deliverable 4.4 [1] of E-CAM, nine software modules in meso- and multi-scale modelling are presented. Four of the modules have been implemented in DL_MESO_DPD:

• Ewald method for the GPU version of DL_MESO_DPD

• Smooth Particle Mesh Ewald (SPME) method for the GPU version of DL_MESO_DPD

• Analysis of local tetrahedral ordering for DL_MESO_DPD [2] (see the sketch after this list)

• Consistency check of input files in DL_MESO_DPD [2]
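
As an illustration of the tetrahedral-ordering analysis, the sketch below computes the widely used local order parameter q = 1 - (3/8) * sum over neighbour pairs j<k of (cos(psi_jk) + 1/3)^2, where psi_jk are the angles subtended at a particle by pairs of its four nearest neighbours; q = 1 for a perfect tetrahedron. This is a minimal Python sketch with hypothetical names, not the DL_MESO_DPD module itself.

```python
# Minimal sketch of the local tetrahedral order parameter
#   q = 1 - (3/8) * sum_{j<k} (cos(psi_jk) + 1/3)^2
# over the four nearest neighbours of each particle; q = 1 for a perfect
# tetrahedron. Names are hypothetical, not the DL_MESO_DPD module itself.
import numpy as np

def tetrahedral_order(pos, L):
    """Return q for every particle in a cubic periodic box of side L."""
    n = len(pos)
    q = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= L * np.round(d / L)          # minimum image
        r = np.linalg.norm(d, axis=1)
        r[i] = np.inf                     # exclude the central particle
        nn = np.argsort(r)[:4]            # four nearest neighbours
        u = d[nn] / r[nn, None]           # unit bond vectors
        s = 0.0
        for j in range(3):
            for k in range(j + 1, 4):
                s += (u[j] @ u[k] + 1.0 / 3.0) ** 2
        q[i] = 1.0 - 0.375 * s
    return q
```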

Five of the modules concern the Grand Canonical Adaptive Resolution Scheme (GC-AdResS) and have been developed, implemented and tested with GROMACS 5.1.0 and GROMACS 5.1.5 [3]; the patches provided are for GROMACS 5.1.5. The modules provide a recipe that simplifies the implementation and makes it possible to work in a microcanonical (i.e., NVE-like) environment. They are based on the same principles as the Abrupt AdResS modules reported in the previous deliverable D4.3 [4].

Furthermore, we provide all the tools necessary to run and check the AdResS simulations. The modules are:

• Local Thermostat Abrupt AdResS

• Thermodynamic Force Calculator for Abrupt AdResS

• Energy(AT)/Energy(interface) ratio: necessary condition for AdResS simulations

• Velocity-Velocity autocorrelation function for AdResS

• AdResS-Radial Distribution Function (RDF) (see the sketch after this list).
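
As an illustration of the last item, a radial distribution function for a single configuration of particles in a cubic periodic box can be computed as in the minimal Python sketch below; all names here are hypothetical, and the actual module works on GROMACS/AdResS trajectories and their hybrid-resolution setup.

```python
# Minimal radial distribution function g(r) for one configuration of
# particles in a cubic periodic box. Names are hypothetical; the actual
# module works on GROMACS/AdResS trajectories.
import numpy as np

def rdf(pos, L, n_bins=100):
    """Histogram minimum-image pair distances up to L/2 and normalise
    by the ideal-gas expectation."""
    n = len(pos)
    edges = np.linspace(0.0, L / 2, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= L * np.round(d / L)                      # minimum image
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < L / 2], bins=edges)[0]
    rho = n / L**3                                    # number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = hist / (shell * rho * n / 2)                  # N/2 reference pairs
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, g
```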

A short description is provided for each module, followed by a link to the respective merge request on the GitLab service of E-CAM. These merge requests contain detailed information about the code development, testing and documentation of the modules.

Full report available here.

[1] S. Chiacchiera, J. Castagna, and C. Krekeler, “Meso– and multi–scale modelling E-CAM modules III,” Jan. 2019. [Online]. Available: https://doi.org/10.5281/zenodo.2555012

[2] This work is part of an E-CAM pilot project focused on the development of Polarizable Mesoscale Models.

[3] This work is part of an E-CAM pilot project focused on the development of the GC-AdResS scheme.

[4] B. Duenweg, J. Castagna, S. Chiacchiera, H. Kobayashi, and C. Krekeler, “Meso– and multi–scale modelling E-CAM modules II,” Mar. 2018. [Online]. Available: https://doi.org/10.5281/zenodo.1210075


Extended Software Development Workshop: Mesoscopic simulation models and High-Performance Computing


If you are interested in attending this event, please visit the CECAM website here.

Workshop Description

In Discrete Element Methods (DEM) the equations of motion of a large number of particles are numerically integrated to obtain the trajectory of each particle [1]. The collective movement of the particles very often provides the system with unpredictable, complex dynamics that are inaccessible to any mean-field approach. Such phenomenology is present, for instance, in seemingly simple systems such as the hopper/silo, where intermittent flow accompanied by random clogging occurs [2]. With the development of computing power, alongside that of numerical algorithms, it has become possible to simulate such scenarios involving the trajectories of millions of spherical particles for a limited simulation time. Incorporating more complex particle shapes [3] or the influence of the interstitial medium [4] rapidly decreases the accessible number of particles.
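
To make the ingredients concrete, the sketch below shows a minimal soft-sphere DEM step in Python in the spirit of [1]: a linear spring-dashpot normal contact force under gravity, integrated with velocity Verlet. All parameter values and names are illustrative, and tangential friction, particle rotation and neighbour lists are deliberately omitted.

```python
# Minimal soft-sphere DEM step: linear spring-dashpot normal contacts
# under gravity, integrated with velocity Verlet. Parameters kn, gn are
# illustrative; friction, rotation and neighbour lists are omitted.
import numpy as np

def contact_forces(pos, vel, radius, mass, kn=1.0e4, gn=5.0, g=9.81):
    """Pairwise normal contact forces plus gravity (z is vertical)."""
    f = np.zeros_like(pos)
    f[:, 2] -= mass * g
    n = len(pos)
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = radius[i] + radius[j] - dist
            if overlap > 0.0:                  # spheres in contact
                nvec = d / dist
                vn = (vel[j] - vel[i]) @ nvec  # normal relative velocity
                fn = (kn * overlap - gn * vn) * nvec
                f[i] -= fn                     # push i away from j
                f[j] += fn
    return f

def dem_step(pos, vel, radius, mass, dt):
    """One velocity-Verlet update of positions and velocities."""
    f = contact_forces(pos, vel, radius, mass)
    v_half = vel + 0.5 * dt * f / mass[:, None]
    pos = pos + dt * v_half
    f_new = contact_forces(pos, v_half, radius, mass)
    vel = v_half + 0.5 * dt * f_new / mass[:, None]
    return pos, vel
```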

Another class of computer simulations enjoying huge popularity in the science and engineering communities is Computational Fluid Dynamics (CFD). A tractable approach to such simulations is the family of Lattice Boltzmann Methods (LBMs) [5]. There, instead of directly solving the strongly non-linear Navier-Stokes equations, the discrete Boltzmann equation is solved to simulate the flow of Newtonian or non-Newtonian fluids with appropriate collision models [6,7]. The method strongly resembles DEM in that it simulates streaming and collision processes for a limited number of fictitious particles, whose collective behaviour gives rise to viscous flow across the greater mass.
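
The streaming-and-collision structure of an LBM update fits in a few lines of code. The sketch below implements a minimal D2Q9 step with the single-relaxation-time (BGK) collision operator on a fully periodic grid; the grid size and relaxation time tau are illustrative choices, and boundary conditions and forcing are omitted.

```python
# Minimal D2Q9 lattice Boltzmann step with BGK collisions on a periodic
# grid. Grid size and relaxation time tau are illustrative choices.
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order equilibrium populations f_eq(rho, u)."""
    cu = np.einsum('qd,xyd->qxy', c, u)
    u2 = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*u2)

def lbm_step(f, tau):
    """Stream each population along its lattice velocity, then relax
    towards the local equilibrium (BGK collision)."""
    for q in range(9):
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    rho = f.sum(axis=0)                                  # density
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]  # velocity
    f += (equilibrium(rho, u) - f) / tau
    return f

# Usage: populations f have shape (9, nx, ny); start from rest.
nx = ny = 64
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
for _ in range(100):
    f = lbm_step(f, tau=0.8)
```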

As both methods have gained popularity for solving engineering problems, and as scientists have become more aware of finite-size effects, the size and time requirements for simulating practically relevant systems have grown beyond the capabilities of even the most modern CPUs [8,9]. Massive parallelization is thus becoming a necessity. This is naturally offered by graphics processing units (GPUs), making them an attractive platform for running these simulations, which consist of a large number of relatively simple mathematical operations readily implemented on a GPU [8,9].

 

References

[1] P. A. Cundall and O. D. L. Strack, Géotechnique 29, 47–65 (1979).
[2] H. G. Sheldon and D. J. Durian, Granular Matter 12, 579–585 (2010).
[3] A. Khazeni and Z. Mansourpour, Powder Technol. 332, 265–278 (2018).
[4] J. Koivisto, M. Korhonen, M. J. Alava, C. P. Ortiz, D. J. Durian, and A. Puisto, Soft Matter 13, 7657–7664 (2017).
[5] S. Succi, The Lattice Boltzmann Equation: For Fluid Dynamics and Beyond, Oxford University Press (2001).
[6] L.-S. Luo, W. Liao, X. Chen, Y. Peng, and W. Zhang, Phys. Rev. E 83, 056710 (2011).
[7] S. Gabbanelli, G. Drazer, and J. Koplik, Phys. Rev. E 72, 046312 (2005).
[8] N. Govender, R. K. Rajamani, S. Kok, and D. N. Wilke, Minerals Eng. 79, 152–168 (2015).
[9] P. R. Rinaldi, E. A. Dari, M. J. Vénere, and A. Clausse, Simul. Model. Pract. Theory 25, 163–171 (2012).



Extended Software Development Workshop: Meso and multiscale modeling

If you are interested in attending this workshop, please visit the CECAM website below.


State of the Art Workshop: Meso and Multiscale Modelling

If you are interested in attending this workshop, please visit the CECAM website below.
