PRACE/E-CAM successful collaboration produces task scheduling library for optimising time-scale molecular dynamics simulations

Challenge

E-CAM is interested in the challenge of bridging timescales. To study molecular dynamics with atomistic detail, time steps on the order of a femtosecond must be used. Many problems in biological chemistry, materials science, and other fields involve events that only occur spontaneously after a millisecond or longer (for example, biomolecular conformational changes or nucleation processes). This means that around 10¹² time steps would be needed to observe a single millisecond-scale event. This is the problem of “rare events” in theoretical and computational chemistry. Modern supercomputers are beginning to make it possible to obtain trajectories long enough to observe some of these processes, but fully characterising a transition with proper statistics requires many examples. And to obtain many examples, the same application must be run thousands of times with varying inputs. Managing this kind of computation calls for a task scheduling library.

Solution and benefits

The solution was the development of a Python library, in collaboration with PRACE. This library builds on top of the scalable analytics framework Dask and enables it to resiliently manage multi-node and multi-architecture environments. This opens up exciting possibilities in the areas of interactive supercomputing and burst supercomputing. A white paper focused on the library was written in collaboration with PRACE and is available here.

The main elements of the scheduling library are task definition, task scheduling (handled in Python) and task execution (facilitated by the MPI layer). While HTC workloads have traditionally been looked down upon in the HPC space, scientific use cases for extreme-scale resources do exist, and for algorithms that require a coordinated approach, efficient libraries that implement this approach are increasingly important in the HPC space. The 5-Petaflop Booster technology of JURECA is an interesting platform in this respect, since its approach of offloading heavy computation marries perfectly with the concept outlined here.
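
To make the pattern concrete, here is a minimal sketch using plain Dask (not the E-CAM library itself; `run_trajectory` is a hypothetical stand-in for a call into an MD engine) of fanning out many independent runs with varying inputs and collecting the results:

```python
# Minimal sketch: schedule many independent "trajectory" tasks with Dask.
# This is NOT the E-CAM library; it only illustrates the task definition /
# scheduling / execution split described above, on a local cluster.
from dask.distributed import Client, LocalCluster

def run_trajectory(seed):
    """Hypothetical stand-in for one independent MD run with its own input/seed."""
    import random
    random.seed(seed)
    # pretend a rare transition happens with low probability
    return {"seed": seed, "reached_target_state": random.random() < 0.01}

if __name__ == "__main__":
    # On an HPC machine the LocalCluster would be replaced by an MPI- or
    # batch-system-backed cluster spanning many nodes.
    client = Client(LocalCluster(n_workers=4))
    futures = client.map(run_trajectory, range(1000))  # thousands of runs, varying inputs
    results = client.gather(futures)
    hits = sum(r["reached_target_state"] for r in results)
    print(f"{hits} of {len(results)} trajectories reached the target state")
```

On a real machine the scheduling part stays the same; only the cluster construction and the body of each task change.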

Reference

Alan O’Cais, David Swenson, Mariusz Uchronski, & Adam Wlodarczyk. (2019, August 14). Task Scheduling Library for Optimising Time-Scale Molecular Dynamics Simulations. Zenodo. http://doi.org/10.5281/zenodo.3527643


A Conversation on The Fourth Industrial Revolution: Opportunities & Trends for Particle Based Simulation

 

Abstract

In the margins of a recent multiscale simulation workshop, a discussion began between a prominent pharmaceutical industry scientist, E-CAM and EMMC regarding the unfolding Fourth Industrial Revolution and the role of particle-based simulation and statistical methods within it. The impact of simulation is predicted to become very significant. This discussion is intended to make the general public aware of how Industry 4.0 is beginning to take shape in companies, and how academic research will support that transformation.

Authors: Prof. Pietro Asinari (EMMC and Politecnico di Torino, denoted below as PA), Dr. Donal MacKernan (E-CAM and University College Dublin, denoted below as DM), and a prominent pharmaceutical industry scientist (name withheld at the author’s request, as the view expressed is a personal one, denoted below as IS).


Multi-GPU version of DL_MESO_DPD

This module implements the first version of the DL_MESO_DPD mesoscale simulation package running on multiple NVIDIA Graphics Processing Units (GPUs).

In this module the main framework of a multi-GPU version of the DL_MESO_DPD code has been developed. The exchange of data between GPUs overlaps with the computation of the forces for the internal cells of each partition (a domain decomposition approach based on the MPI parallel version of DL_MESO_DPD has been followed). The current implementation is a proof of concept and relies on slow transfers of data from the GPU to the host and vice-versa. Faster implementations will be explored in future modules.
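
The overlap pattern itself is generic. As a rough, hedged illustration (a CPU-only sketch with mpi4py and NumPy, not DL_MESO_DPD code; the force expressions are placeholders), the idea is to post a non-blocking halo exchange, compute forces for the internal cells while the messages are in flight, and only then handle the boundary cells:

```python
# Hedged sketch (mpi4py + NumPy on CPU, not DL_MESO_DPD itself) of overlapping the
# halo exchange with force computation on the internal cells of each partition.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

positions = np.random.rand(1000, 3)        # particles owned by this rank's partition
halo = np.empty((10, 3))                   # buffer for the neighbour's boundary cells

# 1. post the non-blocking exchange of boundary data
requests = [comm.Isend(positions[:10].copy(), dest=left),
            comm.Irecv(halo, source=right)]

# 2. compute forces for internal cells while the messages are in flight
internal_forces = -0.1 * positions[10:-10]                       # placeholder force kernel

# 3. wait for the halo to arrive, then treat the boundary cells
MPI.Request.Waitall(requests)
boundary_forces = -0.1 * (positions[-10:] - halo.mean(axis=0))   # placeholder
```

In the multi-GPU code the same structure applies, with the force kernels running on the device and the transfers moving data between GPUs.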

Future plans include benchmarking the code with data transfer mechanisms other than the current (trivial) GPU-host-GPU transfer, namely: peer-to-peer communication within a node, CUDA-aware MPI, and CUDA-aware MPI with Direct Remote Memory Access (DRMA).

Practical application and exploitation of the code

Dissipative Particle Dynamics (DPD) is routinely used in an industrial context to study the static and dynamic behaviour of soft-matter systems. Examples include colloidal dispersions, emulsions and other amphiphilic systems, polymer solutions, etc. Such materials are produced or processed in industries such as cosmetics, food, pharmaceutics and biomedicine. Porting the method to GPUs is thus inherently useful, as it makes the calculations cheaper.

See more information in the industry success story recently reported by E-CAM.

Software documentation and link to the source code can be found in our E-CAM software Library here.


E-CAM State of the Art Workshop: CHALLENGES IN MULTIPHASE FLOWS

We would like to draw your attention to a combined school and workshop on

CHALLENGES IN MULTIPHASE FLOWS

that will run on Dec 9-12, 2019, at the Monash University Prato Center (see http://monash.it/) in Tuscany. The event is an E-CAM state-of-the-art workshop, and its aim is to focus on computer simulation methods for multiphase systems and their dynamics, and on their strengths and shortcomings. This topic is relevant in physics, mathematics, chemistry, and engineering, and we are trying to bring these communities together for a fruitful exchange. At the same time, a set of advanced lectures at the school is intended to provide a solid foundation of background knowledge. For more information (in particular, the list of invited speakers), see the

Main web site for the event

Registration is now open. Regular participants need to pay a fee of 500 Australian Dollars (roughly 300 Euros) to cover meals etc.; however, the first 25 students (with proven status) who register may attend for free.

DEADLINE for registration and abstract submission is September 22.

Please do not hesitate to contact the organisers (contact information on the main website for the event) if you feel you need more information beyond what is provided on the web.

The Organisers

Burkhard Duenweg, Mainz
Ravi Prakash Jagadeeshan, Melbourne
Ignacio Pagonabarraga, Lausanne


Integrating LAMMPS with OpenPathSampling

This module shows how LAMMPS can be used as the Molecular Dynamics (MD) engine in OpenPathSampling (OPS), and it also provides a benchmark for the impact of the OPS overhead on the MD engine.

Practical application and exploitation of the code

OpenPathSampling uses OpenMM as its default engine for calculating the sampled trajectories. Other engines such as GROMACS and LAMMPS can also be used (although they are not yet available in the official release), allowing users to exploit different computer architectures, such as hybrid CPU-GPU systems, and to simulate more complex problems.

In this module we present the source code for the integration of OPS with LAMMPS, as well as a benchmark of a simple test case that shows the impact on performance due to the OPS overhead.
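
The benchmark idea can be sketched generically (a hedged illustration, not the module's actual script; `run_lammps_directly` and `run_lammps_through_ops` are hypothetical stand-ins for the two ways of driving the same number of MD steps):

```python
# Hedged sketch of an overhead benchmark: time the same MD workload driven
# directly by the engine and driven through OPS, then report the ratio.
# The two run_* callables are hypothetical placeholders, not real module code.
import time

def best_wall_time(fn, *args, repeats=3):
    """Return the best wall-clock time of fn(*args) over a few repeats."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# t_bare = best_wall_time(run_lammps_directly, n_steps)
# t_ops  = best_wall_time(run_lammps_through_ops, n_steps)
# print(f"OPS overhead factor: {t_ops / t_bare:.2f}")
```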

Software documentation and link to the source code can be found in our E-CAM software Library here.


FFTXlib, a rewrite and optimisation of earlier versions of FFT related routines inside QE pre-v6

FFTXlib is mainly a rewrite and optimisation of earlier versions of the FFT-related routines inside Quantum ESPRESSO (QE) pre-v6, and ultimately their replacement. Despite many similarities, the current version of FFTXlib dramatically changes the FFT strategy for parallel execution, from the 1D+2D FFT performed in QE pre-v6 to a 1D+1D+1D one, to allow for greater flexibility in parallelisation.
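
The decomposition itself is straightforward to illustrate (a NumPy sketch, not FFTXlib code): a 3D transform can be built from successive passes of 1D FFTs along each axis, which is the essence of the 1D+1D+1D strategy.

```python
# Illustration (NumPy, not FFTXlib): a 3D FFT composed from successive 1D FFTs.
import numpy as np

grid = np.random.rand(32, 32, 32) + 1j * np.random.rand(32, 32, 32)

# one-shot 3D transform
full_3d = np.fft.fftn(grid)

# the same transform as three passes of 1D FFTs (axis by axis)
step = np.fft.fft(grid, axis=0)
step = np.fft.fft(step, axis=1)
step = np.fft.fft(step, axis=2)

assert np.allclose(full_3d, step)   # both strategies give the same result
```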

Practical application and exploitation of the code

The FFTXlib module is a collection of driver routines that allow the user to perform complex 3D fast Fourier transforms (FFTs) in the context of plane-wave based electronic structure software. It contains routines to initialise the array structures and to calculate the desired grid shapes; it imposes the underlying size assumptions and provides correspondence maps for indices between the two transform domains.

Once this data structure is constructed, forward or inverse in-place FFTs can be performed. For this purpose FFTXlib can either use a local copy of an earlier version of FFTW (a commonly used open-source FFT library) or serve as a wrapper to external FFT libraries via conditional compilation using pre-processor directives. It supports both MPI and OpenMP parallelisation.
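
As a loose analogy for the wrapper idea (a Python sketch, not FFTXlib's Fortran pre-processor machinery): pick an optimised FFT backend when one is available and fall back to a bundled default otherwise, while the calling code stays unchanged.

```python
# Conceptual illustration only: select an FFT backend at import time, analogous to
# FFTXlib choosing between its bundled FFTW copy and an external library.
import numpy as np

try:
    import scipy.fft as fft_backend     # stands in for an external, optimised FFT library
except ImportError:
    import numpy.fft as fft_backend     # stands in for the bundled fallback

density = np.random.rand(16, 16, 16)              # real-space grid
reciprocal = fft_backend.fftn(density)            # forward transform
roundtrip = fft_backend.ifftn(reciprocal).real    # inverse transform recovers the input
assert np.allclose(density, roundtrip)
```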

FFTXlib is currently employed within the Quantum ESPRESSO package, a widely used suite of codes for electronic structure calculations and materials modelling at the nanoscale, based on plane waves and pseudopotentials.

FFTXlib is also interfaced with the “miniPWPP” module, which solves the Kohn-Sham equations in a plane-wave basis and will soon be released as part of the E-CAM Electronic Structure Library.

Software documentation and link to the source code can be found in our E-CAM software Library here.


Issue 11 – June 2019

E-CAM Newsletter of June 2019

To get the latest news from E-CAM, sign up for our quarterly newsletter.


Extension of the ParaDiS code to include precipitate interactions, and code optimisation to run in an HPC environment


Here we present two featured software modules of the month:

  1. ParaDiS with precipitates
  2. ParaDiS with precipitates optimized to HPC environment

that provide extensions to the ParaDiS discrete dislocation dynamics (DDD) code (LLNL, http://paradis.stanford.edu/), in which dislocation/precipitate interactions are included. Module 2 was built to run the code in an HPC environment, by optimising the original code for the Cray XC40 cluster at CSC in Finland. The software was developed by E-CAM partners at CSC and Aalto University (Finland).

Practical application and exploitation of the codes

The ParaDiS code is a free, large-scale dislocation dynamics (DD) simulation code used to study the fundamental mechanisms of plasticity. However, DDD simulations do not always take into account scenarios in which impurities interact with the dislocations and their motion. The consequences of the impurities are multiple: the yield stress is changed, and in general the plastic deformation process is greatly affected. Simulating these effects by DDD makes it possible to address a wide range of issues, from materials design to controlling the yield stress, and it may be done in a multiscale manner by computing the dislocation-precipitate interactions from microscopic simulations, or by coarse-graining the DDD results for the stress-strain curves on the mesoscopic scale up to more macroscopic Finite Element Methods.

Modules 1 and 2 therefore provide an extension of the ParaDiS code that includes dislocation/precipitate interactions, together with the possibility to run the code in HPC environments.

Software documentation and link to the source code can be found in our E-CAM software Library here.


Upcoming event: Extended Software Development Workshop in Mesoscopic simulation models and HPC


E-CAM partners at Aalto University (CECAM Finnish Node), in collaboration with the HPC training experts of the CSC Supercomputing Centre, are organising a joint Extended Software Development Workshop from 15-19 October 2019, aimed at people interested in particle-based methods, such as the Discrete Element and Lattice Boltzmann methods, and in their massive parallelisation using GPU architectures. The workshop will mix three different ingredients: (1) a workshop on state-of-the-art challenges in computational science and software, (2) a CSC-run school, and (3) coding sessions with the aid of CSC facilities and expertise.

How to Apply

Follow the instruction at the CECAM website for the event: https://www.cecam.org/workshop1752/

Organizers

  • Mikko Alava
    Aalto University, Finland
  • Brian Tighe
    TU Delft, The Netherlands
  • Jan Astrom
    CSC – IT Center for Science, Finland
  • Antti Puisto
    Aalto University, Finland

Location

CECAM-FI Node, Aalto University, Finland

Dates

October 15 – 19, 2019


Mesoscale simulation of billion-atom complex systems using thousands of GPGPUs, an industry success story


Dr. Jony Castagna, Science and Technology Facilities Council, United Kingdom


Abstract

Jony Castagna recounts his transition from industry scientist to research software developer at STFC, his E-CAM rewrite of DL_MESO that allows the simulation of billion-atom systems on thousands of GPGPUs, and his latest role as an NVIDIA ambassador focused on machine learning.
