Extended Software Development Workshop: Quantum Dynamics

If you are interested in attending this event, please visit the CECAM website here.

Workshop Description

Quantum molecular dynamics simulations describe the behavior of matter at the microscopic scale and require the combined effort of theory and computation to achieve an accurate and detailed understanding of the motion of electrons and nuclei in molecular systems. Theory provides the fundamental laws governing the dynamics of quantum systems, i.e., the time-dependent Schrödinger equation or the Liouville-von Neumann equation, whereas numerical techniques offer practical ways of solving those equations for applications. For decades now, theoretical physicists and quantum chemists have been developing approximations, algorithms, and computer software that together have enabled, for example, the investigation of photo-activated processes, such as exciton transfer in photovoltaic compounds, or of nonequilibrium phenomena, such as current-driven Joule heating in molecular electronics. The critical challenge ahead is to beat the exponential growth of the numerical cost with the number of degrees of freedom of the problem under study. In this respect, a synergy between theoreticians and computer scientists is becoming increasingly beneficial, as high-performance computing (HPC) facilities are now widely accessible; such a synergy will lead to optimal exploitation of the available computational power and to the study of molecular systems of increasing complexity.

From a theoretical perspective, the two main classes of approaches to solving the quantum molecular dynamical problem are wavepacket propagation schemes and trajectory-based (or trajectory-driven) methods. The difference between the two categories lies in the way the nuclear degrees of freedom are treated: either fully quantum mechanically or within the (semi)classical approximation. In the first case, basis-function contraction techniques have to be introduced to represent the nuclear wavefunction as soon as the problem exceeds 5 or 6 dimensions. Probably the most successful efforts in this direction have gone into the development of the multi-configuration time-dependent Hartree (MCTDH) method [1]. Other strategies are also continuously being proposed, focusing for instance on procedures to optimize the “space” where the wavefunction information is computed, e.g., by replacing Cartesian grids with Smolyak grids [2], thus effectively reducing the computational cost of the calculation. In the second case, the nuclear subsystem is approximated classically or semiclassically. Although some information is lost, this approximation gives access to much larger systems over longer timescales. Trajectory-based approaches range from the simplest, yet very effective, trajectory surface hopping and Ehrenfest schemes [3] to the more involved but also more accurate coupled-trajectory mixed quantum-classical (CTMQC) scheme [4] and the quantum-classical Liouville equation (QCLE) [5]. At the interface between wavepacket and trajectory schemes, Gaussian-MCTDH [6], variational multi-configuration Gaussian (vMCG) [7], and multiple spawning [8] exploit the support of trajectories to propagate (Gaussian) wavepackets, thus recovering some of the information lost with a purely classical treatment. For trajectory-based techniques, the literature offers numerous proposals that aim to recover some quantum-mechanical features of the dynamics by appropriately choosing the initial conditions, e.g., by sampling a Wigner distribution [9].

From the computational point of view, a large part of the calculation effort is spent evaluating electronic properties. In fact, the nuclei move under the effect of the electronic subsystem, either “statically” occupying its ground state or “dynamically” switching between excited states. Moreover, the nuclear dynamics part of a calculation becomes a very costly computational task in itself for wavepacket propagation methods. Therefore, algorithms for molecular dynamics simulations are not only required to realistically reproduce the behavior of quantum systems in general cases; they also have to scale efficiently on parallelized HPC architectures.

The extended software development workshop (ESDW) planned for 2018 has three main objectives: (i) build upon the results of ESDW7 of July 2017 to enrich the library of software modules for trajectory-based propagation schemes; (ii) extend the capabilities of the existing modules with new functionalities, thus broadening the class of problems that can be tackled; (iii) construct links among the existing and new modules to transversally connect methods for quantum molecular dynamics, types of modules (HPC/Interface/Functionality), and E-CAM work packages (WP2 on electronic structure).

The central projects of the proposed ESDW, which are related to the modules that will be provided for the E-CAM library, are:
1. Extension of the ModLib library of model Hamiltonians, especially with high-dimensional models, which are used to test and compare existing propagation schemes and to benchmark new methods. The library consists of a set of subroutines that can be included in different codes to generate diabatic/adiabatic potential energy surfaces and, where required, the diabatic and nonadiabatic couplings needed by both quantum wavepacket methods and trajectory-based methods (a minimal illustration of such a model subroutine is sketched after this list).
2. Use of machine-learning techniques to construct analytical forms of potential energy surfaces from information collected during on-the-fly calculations. The Quantics software [10] provides the platform for performing direct-dynamics propagation employing electronic-structure properties determined at several different levels of theory (HF, DFT, or CASSCF, for example). The sampled nuclear configuration space is used to build a “library” of potentials that will serve to generate the potential energy surfaces (a generic fitting sketch is given after this list).
3. Development of an interface for CTMQC. Based on the CTMQC module proposed during ESDW7, the interface will allow the coupled trajectories to evolve according to the CTMQC equations using electronic-structure information computed by quantum-chemistry packages, establishing a connection between E-CAM WP2 on electronic structure and WP3 on quantum dynamics. Potentially, CTMQC can be adapted to the Quantics code, since the latter has already been interfaced with several electronic-structure packages. Optimal scaling on HPC architectures is fundamental for maximizing efficiency.
4. Extension of the QCLE module developed during ESDW7 to high dimensions and general potentials. Two central issues need to be addressed to reach this goal: (i) the use of HPC infrastructures to efficiently parallelize the multi-trajectory implementation, and (ii) the investigation of the stochastic sampling scheme associated with the electronic part of the time evolution. Progress in these areas will greatly aid the development of a quantum dynamics simulation tool that could be used by the broader community.
5. Development of a module to sample initial conditions for trajectory-based procedures. Based on the PaPIM module [11] proposed during ESDW7, sampling of initial conditions from a Wigner distribution will be adapted to excited-state problems, overcoming the usual approximation of picturing a molecule as a set of uncoupled harmonic oscillators (that baseline approximation is sketched after this list). In addition, an adequate sampling of the ground vibrational nuclear wavefunction would enable accurate calculations of photoabsorption cross-sections. This topic connects various modules of E-CAM WP3, since it can be employed for CTMQC, QCLE, and the surface-hopping functionality of Quantics (SHZagreb, developed during ESDW7).
6. Optimization of some of the modules for HPC facilities, adopting hybrid OpenMP-MPI parallelization approaches. The main goal here is to exploit different architectures by adapting different kinds of calculations, e.g. the classical evolution of trajectories vs. electronic-structure calculations, to the architecture of the computing nodes (an MPI-level sketch follows this list).
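
To make project 1 concrete, here is a minimal sketch (not code from ModLib itself) of the kind of model-Hamiltonian subroutine such a library collects: Tully's one-dimensional single-avoided-crossing model, with the adiabatic surfaces obtained by diagonalizing the 2x2 diabatic potential matrix.

```python
import numpy as np

# Tully's single avoided crossing model (standard published parameters,
# atomic units); used here only as an illustrative model Hamiltonian.
A, B, C, D = 0.01, 1.6, 0.005, 1.0

def diabatic_matrix(x):
    """2x2 diabatic potential matrix at nuclear coordinate x."""
    v11 = A * (1.0 - np.exp(-B * x)) if x >= 0.0 else -A * (1.0 - np.exp(B * x))
    v12 = C * np.exp(-D * x * x)
    return np.array([[v11, v12], [v12, -v11]])

def adiabatic_energies(x):
    """Adiabatic surfaces from diagonalization of the diabatic matrix."""
    return np.linalg.eigvalsh(diabatic_matrix(x))

for x in np.linspace(-4.0, 4.0, 9):
    e1, e2 = adiabatic_energies(x)
    print(f"x = {x:+.1f}  E1 = {e1:+.5f}  E2 = {e2:+.5f}")
```

A routine of this kind can serve both wavepacket and trajectory codes, since each only needs the potential matrix (and its derivatives) at given nuclear configurations.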
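For project 2, the following sketch illustrates the generic fitting step on synthetic one-dimensional data using kernel ridge regression; the interface to the Quantics direct-dynamics database and the final choice of machine-learning model are open design questions, so everything below is an assumption for illustration.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Hypothetical training set: geometries (one coordinate here) and energies
# collected along an on-the-fly trajectory; a real application would read
# these from the direct-dynamics database.
rng = np.random.default_rng(0)
geometries = rng.uniform(-2.0, 2.0, size=(200, 1))
energies = 0.5 * geometries[:, 0] ** 2 + 0.1 * geometries[:, 0] ** 4

# Fit an analytic (kernel) representation of the sampled surface.
model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=1.0)
model.fit(geometries, energies)

# The fitted surface can now be evaluated at arbitrary geometries.
print(model.predict(np.array([[0.5], [1.5]])))
```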
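Project 5 starts from the baseline it aims to improve upon: for uncoupled harmonic oscillators, the ground-state Wigner distribution factorizes into Gaussians in position and momentum, so initial conditions can be drawn as in the sketch below (frequencies and masses are placeholder values, in atomic units).

```python
import numpy as np

# Sample (q, p) initial conditions from the ground-state Wigner
# distribution of uncoupled harmonic oscillators (hbar = 1).
hbar = 1.0
omegas = np.array([0.01, 0.02, 0.05])        # normal-mode frequencies (assumed)
masses = np.array([1822.0, 1822.0, 1822.0])  # mode masses (assumed)

def sample_wigner(n_traj, rng=np.random.default_rng()):
    sigma_q = np.sqrt(hbar / (2.0 * masses * omegas))   # position width
    sigma_p = np.sqrt(hbar * masses * omegas / 2.0)     # momentum width
    q = rng.normal(0.0, sigma_q, size=(n_traj, omegas.size))
    p = rng.normal(0.0, sigma_p, size=(n_traj, omegas.size))
    return q, p

q0, p0 = sample_wigner(1000)  # 1000 independent trajectory initial conditions
```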
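For project 6, here is a minimal sketch of the MPI level of such a hybrid scheme, using mpi4py: independent trajectories are distributed round-robin over ranks, while the per-trajectory kernels (a stub here) would use OpenMP threads internally in a compiled code.

```python
from mpi4py import MPI
import numpy as np

# MPI layer of a hybrid OpenMP-MPI scheme: one task pool of trajectories
# spread across ranks; threading would live inside the numerical kernels.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def propagate(traj_id):
    """Stub standing in for the classical propagation of one trajectory."""
    rng = np.random.default_rng(traj_id)
    return rng.normal()

n_traj = 1000
local = [propagate(i) for i in range(rank, n_traj, size)]  # round-robin
all_results = comm.gather(local, root=0)
if rank == 0:
    flat = [x for chunk in all_results for x in chunk]
    print(len(flat), np.mean(flat))
```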

The format and organization described here focus specifically on the production of new modules. Parallel or additional activities, e.g. a transversal workshop on optimizing I/O with electronic-structure codes and the possible exploitation of advanced hardware infrastructures (e.g. the booster cluster in Juelich), will also be considered based on input from the community.

[1] H. D. Meyer, U. Manthe, L. S. Cederbaum. Chem. Phys. Lett. 165 (1990) 73.
[2] D. Lauvergnat, A. Nauts. Spectrochimica Acta Part A 119 (2014) 18.
[3] J. C. Tully. Faraday Discuss. 110 (1998) 407.
[4] S. K. Min, F. Agostini, I. Tavernelli, E. K. U. Gross. J. Phys. Chem. Lett. 8 (2017) 3048.
[5] R. Kapral. Annu. Rev. Phys. Chem. 57 (2006) 129.
[6] G. A. Worth, I. Burghardt. Chem. Phys. Lett. 368 (2003) 502.
[7] B. Lasorne, M. J. Bearpark, M. A. Robb, G. A. Worth. Chem. Phys. Lett. 432 (2006) 604.
[8] M. Ben-Nun, J. Quenneville, T. J. Martínez. J. Phys. Chem. A 104 (2000) 5161.
[9] J. Beutier, D. Borgis, R. Vuilleumier, S. Bonella. J. Chem. Phys. 141 (2014) 084102.
[10] Quantics. A suite of programs for molecular quantum dynamics. http://stchem.bham.ac.uk/~quantics/doc/
[11] PaPIM. A code for calculation of equilibrated system properties (observables). http://e-cam.readthedocs.io/en/latest/Quantum-Dynamics-Modules/modules/PaPIM/readme.html


Extended Software Development Workshop: Intelligent high throughput computing for scientific applications

If you are interested in attending this event, please visit the CECAM website here.

Workshop Description

High throughput computing (HTC) is a computing paradigm focused on the execution of many loosely coupled tasks. It is a useful and general approach to parallelizing (nearly) embarrassingly parallel problems. Distributed computing middleware, such as Celery [1] or COMP Superscalar (COMPSs) [2], can include tools to facilitate HTC, although there may be challenges extending such approaches to the exascale.

Across scientific fields, HTC is becoming a necessary approach in order to fully utilize next-generation computer hardware. As an example, consider molecular dynamics: Excellent work over the years has developed software that can simulate a single trajectory very efficiently using massive parallelization [3]. Unfortunately, for a fixed number of atoms, the extent of possible parallelization is limited. However, many methods, including semiclassical approaches to quantum dynamics [4,5] and some approaches to rare events [6,7], require running thousands of independent molecular dynamics trajectories. Intelligent HTC, which can treat each trajectory as a task and manage data dependencies between tasks, provides a way to run these simulations on hardware up to the exascale, thus opening the possibility of studying previously intractable systems.
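
As a minimal sketch of this trajectory-as-task pattern with one such middleware (Celery), the example below defines a stub trajectory as a task; the broker/backend URLs and the trajectory body are illustrative assumptions only.

```python
from celery import Celery
import random

# Placeholder broker/backend; a real deployment would point at its own
# message broker (e.g. Redis or RabbitMQ).
app = Celery("trajectories",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def run_trajectory(seed):
    """Stub for propagating one independent trajectory."""
    rng = random.Random(seed)
    # ... a real module would call the MD engine here ...
    return sum(rng.gauss(0.0, 1.0) for _ in range(1000))

# Client side: submit many loosely coupled tasks, then collect results.
# async_results = [run_trajectory.delay(s) for s in range(10000)]
# values = [r.get() for r in async_results]
```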

In practice, many scientific programmers are not aware of the range of middleware available to facilitate parallel programming. When HTC-like approaches are implemented as part of a scientific software project, they are often implemented manually, through custom scripts that manage SSH, or by running separate jobs and collating the results by hand. Using the intelligent high-level approaches enabled by distributed computing middleware will simplify and speed up development.

Furthermore, middleware frameworks can meet the needs of many different computing infrastructures. For example, in addition to working within a single job on a cluster, COMPSs includes support for working through a cluster’s queueing system or working on a distributed grid. Moreover, architecting a software package such that it can take advantage of one HTC library will make it easy to use other HTC middleware. Having all of these possibilities immediately available will enable developers to quickly create software that can meet the needs of many users.

This E-CAM Extended Software Development Workshop (ESDW) will focus on intelligent HTC as a technique that crosses many domains within the molecular simulation community in general and the E-CAM community in particular. Teaching developers how to incorporate middleware for HTC matches E-CAM’s goal of training scientific developers on the use of more sophisticated software development tools and techniques.

The primary goals of the workshop are:

1. To help scientific developers interface their software with HTC middleware.
2. To benchmark, and ideally improve, the performance of HTC middleware as applications approach extreme scale.

This workshop will aim to produce four or more software modules related to intelligent HTC, and to submit them, with their documentation, to the E-CAM software module repository. These will include modules adding HTC support to existing computational chemistry codes, where the participants will bring the codes they are developing. They may also include modules adding new middleware or adding features to existing middleware that facilitate the use of HTC by the computational chemistry community. This workshop will involve training both in the general topic of designing software to interface with HTC libraries, and in the details of interfacing with specific middleware packages.

The range of use for intelligent HTC in scientific programs is broad. For example, intelligent HTC can be used to select and run many single-point electronic structure calculations in order to develop approximate potential energy surfaces. Even more examples can be found in the wide range of methods that require many trajectories, where each trajectory can be treated as a task, such as:

* rare events methods, like transition interface sampling, weighted ensemble, committor analysis, and variants of the Bennett-Chandler reactive flux method
* semiclassical methods, including the phase integration method and the semiclassical initial value representation
* adaptive sampling methods for Markov state model generation
* approaches such as nested sampling, which use many short trajectories to estimate partition functions

The challenge is that most developers of scientific software are not familiar with how such packages can simplify their development process, and the packages that exist may not yet scale to the exascale. This workshop will introduce scientific software developers to useful middleware packages, work to improve their scaling, and provide an opportunity for developers to add HTC support to their codes.

Major topics that will be covered include:

* Concepts of HTC; how to structure code for HTC
* Accessing computational resources to use HTC
* Interfacing existing C/C++/Fortran code with Python libraries (a minimal ctypes sketch follows this list)
* Specifics of interfacing with Celery/COMPSs
* Challenges in using existing middleware at extreme scale
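
For the C/C++/Fortran topic above, wrapping an existing compiled routine with Python's ctypes is often the quickest route; the sketch below assumes a hypothetical shared library libforces.so exposing a C function `double energy(double x)`.

```python
import ctypes

# Hypothetical shared library exposing:  double energy(double x);
# (built e.g. with: cc -shared -fPIC -o libforces.so forces.c)
lib = ctypes.CDLL("./libforces.so")
lib.energy.argtypes = [ctypes.c_double]
lib.energy.restype = ctypes.c_double

# The wrapped routine can now be called from Python, and hence submitted
# as an HTC task, like any other Python function.
print(lib.energy(1.0))
```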

[1] Celery: Distributed Task Queue. http://celeryproject.org, date accessed 14 August 2017.

[2] R.M. Badia et al. SoftwareX 3-4, 32 (2015).

[3] S. Plimpton. J. Comput. Phys. 117, 1 (1995).

[4] W.H. Miller. J. Phys. Chem. A 105, 2942 (2001).

[5] J. Beutier et al. J. Chem. Phys. 141, 084102 (2014).

[6] R. Du et al. J. Chem. Phys. 108, 334 (1998).

[7] G.A. Huber and S. Kim. Biophys. J. 70, 97 (1996).


Extended Software Development Workshop: Atomistic, Meso- and Multiscale Methods on HPC Systems

If you are interested in attending this event, please visit the CECAM website here. This is a multi-part event; we indicate the date of the first meeting, and the dates of the follow-ups are decided during the first event.

Workshop Description

E-CAM is an EINFRA project funded by H2020. Its goal is to create, develop, and sustain a European infrastructure for computational science, applied to simulation and modelling of materials and biological processes that are of industrial and societal interest. E-CAM builds upon the considerable European expertise and capability in this area.

E-CAM is organized around four scientific areas, corresponding to work packages WP1-4: molecular dynamics, electronic structure, quantum dynamics, and meso- and multiscale modelling. E-CAM gathers a number of groups with complementary expertise in the area of meso- and multiscale modelling and also has very well-established contacts with simulation-code developers. Among the aims of the groups involved in this area are to produce a software stack by combining software modules, and to further develop existing simulation codes towards highly scalable applications on high-performance computer architectures. It has been identified as a key issue that simulation codes in the field of molecular dynamics and meso- and multiscale applications should be prepared for upcoming HPC architectures. Different approaches have been proposed by the E-CAM WPs: (i) developing and optimizing highly scalable applications, running a single application on a large number of cores, and (ii) developing micro-schedulers for task-farming approaches, where multiple simulations each run on smaller partitions of a large HPC system and work together on the collection of statistics or the sampling of a parameter space, for which only loosely coupled simulations are needed. Both approaches rely on the efficient implementation of simulation codes.

Concerning strategy, most modern parallelized (classical) particle simulation programs are based on a spatial decomposition method as the underlying parallel algorithm. In this case, different processors administer different spatial regions of the simulation domain and keep track of the particles located in their respective regions. Processors exchange information (i) to compute interactions between particles located on different processors, and (ii) to hand over particles that have moved into a region administered by a different processor. This implies that the workload of a given processor is largely determined by its number of particles, or, more precisely, by the number of interactions to be evaluated within its spatial region.

Certain systems of high physical and practical interest (e.g. condensing fluids) dynamically evolve into states where the distribution of particles becomes spatially inhomogeneous. Unless special care is taken, this results in a substantially inhomogeneous distribution of the processors’ workload. Since the work usually has to be synchronized between the processors, the runtime is determined by the slowest processor (i.e. the one with the highest workload); in the extreme case, a large fraction of the processors idles during these waiting times. This problem becomes particularly severe when aiming at strong scaling, where the number of processors is increased at constant problem size: every processor administers a smaller and smaller region, so inhomogeneities become more and more pronounced. This eventually saturates the scalability of a given problem, possibly already at processor counts small enough that communication overhead is still negligible.

The solution to this problem is the inclusion of dynamic load balancing techniques. These methods redistribute the workload among the processors by lowering the load of the busiest cores and raising the load of the idlest ones. Fortunately, several successful techniques that put this strategy into practice are already known (see references). Nevertheless, dynamic load balancing that is both efficient and widely applicable implies highly non-trivial coding work; it has therefore not yet been implemented in a number of important codes of the E-CAM community, e.g. DL_Meso, DL_Poly, Espresso, and Espresso++, to name a few. Other codes (e.g. LAMMPS) have implemented somewhat simpler schemes, which, however, might turn out to lack the flexibility to accommodate all important cases. Therefore, the present proposal suggests organizing an Extended Software Development Workshop (ESDW) within E-CAM where developers of CECAM community codes are invited together with E-CAM postdocs to work on the implementation of load balancing strategies. The goal of this activity is to increase the scalability of these applications to larger numbers of cores on HPC systems for spatially inhomogeneous systems, and thus to reduce the time-to-solution of the applications.

The workshop is intended as a major community effort towards improving European simulation codes in the field of classical atomistic, mesoscopic, and multiscale simulation. Various load balancing techniques will be presented, discussed, and selectively implemented into codes. Sample implementations of load balancing techniques already exist for the codes IMD and MP2C, which are highly scalable particle codes, cf. e.g. http://www.fz-juelich.de/ias/jsc/EN/Expertise/High-Q-Club/_node.html. The technical task is to provide a domain decomposition with flexible adjustment of domain borders (a toy sketch of the border-adjustment idea is given below). The basic load balancing functionality will be implemented and provided by a library, accessed from the codes via interfaces.
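
As a toy illustration of that border-adjustment idea (not code from IMD, MP2C, or the planned library), the sketch below equalizes per-domain particle counts in one dimension by repeatedly shifting each interior border toward its busier neighbor.

```python
import numpy as np

# Toy 1D dynamic load balancing: shift interior domain borders toward the
# busier neighboring domain until per-domain particle counts (a proxy for
# the processors' workload) even out.
rng = np.random.default_rng(1)
particles = rng.exponential(1.0, 10000)          # inhomogeneous density
borders = np.linspace(0.0, particles.max(), 9)   # 8 equal-width domains

def loads(borders):
    return np.histogram(particles, bins=borders)[0]

print("before:", loads(borders))
for _ in range(200):
    counts = loads(borders)
    for i in range(1, len(borders) - 1):
        # border i separates domain i-1 (left) from domain i (right);
        # move it into the busier domain to shrink that domain's share.
        imbalance = counts[i] - counts[i - 1]
        width = borders[i + 1] - borders[i - 1]
        borders[i] += 0.1 * imbalance / counts.sum() * width  # small step keeps borders ordered
print("after: ", loads(borders))
```

Real implementations work in three dimensions and balance measured compute times or interaction counts rather than raw particle numbers, but the feedback principle is the same.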

In order to attract both developers of the codes and postdocs working within E-CAM, the workshop will be split into three parts:

Part 1: preparation meeting (2 days)
– various types of load balancing schemes will be presented conceptually and examples of implemented techniques will be shown
– code developers/owners will present their codes; functionalities will be presented and parallel implementations discussed in view of the technical requirements for implementing load balancing techniques
– an interface definition for exchanging information between a simulation code and a load balancing library will be set up

Part 2: training and implementation (1 week)
– to enable E-CAM postdocs to actively participate in the development, some advanced technical courses on MPI and high-performance C++ will be offered in combination with the PRACE PATC course program at Juelich
– during and after the courses (planned for 2-3 days), participants can start implementing a load balancing scheme into a code
– participants already at an expert level in HPC techniques can start implementing load balancing schemes immediately

Part 3: implementation and benchmarking (1 week)
– final implementation work, with the goal of having at least one working implementation per code
– benchmarks of successful implementations will be conducted on Juelich supercomputer facilities

The second part will also be open to a broader community within E-CAM, so that the workshop can contribute to the HPC training of E-CAM postdocs and strengthen their skills and experience in HPC.

It is intended that, between the face-to-face parts of the workshop, postdocs and developers continue preparing and working on the load balancing schemes, so that the meetings serve as important milestones for synchronizing, exchanging information and experience, and improving the current implementations.


Extended Software Development Workshop: Meso and multiscale modeling

If you are interested in attending this workshop, please visit the CECAM website below.


Extended Software Development Workshop: Meso and multiscale modeling

If you are interested in attending this workshop, please visit the CECAM website below.


Extended Software Development Workshop: Quantum MD

If you are interested in attending this workshop, please visit the CECAM website below.


Extended Software Development Workshop: Classical Molecular Dynamics

If you are interested in attending this workshop, please visit the CECAM website below.


Extended Software Development Workshop: Wannier90

The aim of the workshop is to share recent developments related to the generation and use of maximally-localised Wannier functions and to either implement these developments in, or interface them to, the Wannier90 code. It will also be an opportunity to improve and update existing interfaces to other codes and to write new ones. The format will be deliberately open, with the majority of the time allocated to coding and discussion.


Extended Software Development Workshop: Trajectory Sampling

This is the third of E-CAM’s extended software development workshops, this one on the theme of trajectory sampling.


Electronic Structure Library Coding Workshop

This is the first E-CAM Extended Software Development Workshop, taking place in Zaragoza, Spain. The Electronic Structure Library is a new project to build a community-maintained library of software for electronic structure simulations. The goal is to create an extended library that everyone can use to build their own packages and projects.
