Pilot Project on Code Optimization for Exact and Linearized Quantum Dynamics

Dr. Ahai Chen

Host beneficiary: Maison de la Simulation, CEA-Saclay, France

Co-affiliation: Université Paris-Sud, France; École Normale Supérieure, Paris, France

Google Scholar: https://scholar.google.com/citations?hl=en&user=KIj56rMAAAAJ

ResearchGate: https://www.researchgate.net/profile/A_Chen6

 

Description

This project provides a general quantum dynamics code based on the Smolyak method. It involves the implementation of MPI and, as the work progresses, hybrid MPI/OpenMP parallelization to improve the efficiency of the code within the Smolyak framework.

The MPI parallelization is designed to be applicable to machines of various sizes, from small workstations to massive clusters. The code allows switching among the different MPI schemes, OpenMP, and the non-parallel (serial) case, according to the system of interest and the cluster resources available. The code is compatible with gfortran, mpifort, ifort, pgf90, etc. For more details, please refer to the code development page and the MPI branch.
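
As a rough illustration of such a switch, the sketch below uses a preprocessor flag (here hypothetically named WITH_MPI) to select between an MPI build and a serial build; the flag name and the mechanism are assumptions for illustration, not necessarily those used in the actual code.

! Minimal sketch of a compile-time switch between MPI and serial builds.
! The WITH_MPI flag and this layout are illustrative assumptions only.
program parallel_switch
#ifdef WITH_MPI
  use mpi
#endif
  implicit none
  integer :: rank, nb_procs, ierr

#ifdef WITH_MPI
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nb_procs, ierr)
#else
  rank     = 0          ! serial case: a single "process"
  nb_procs = 1
#endif

  if (rank == 0) write(*,*) 'Running on', nb_procs, 'process(es)'

#ifdef WITH_MPI
  call MPI_Finalize(ierr)
#endif
end program parallel_switch

With gfortran, for instance, such a file could be compiled as "gfortran -cpp parallel_switch.F90" for the serial version and "mpifort -cpp -DWITH_MPI parallel_switch.F90" for the MPI version.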

The code is generally applicable to standard quantum molecular simulations. The first direct application of the code in this project will be the simulation of clathrate hydrate.

Development Plan

List of Tasks

  • Task 1: MPI implementation with the Smolyak method, level 1
  • Task 2: MPI implementation with the Smolyak method, level 2
  • Task 3: Hybrid MPI/OpenMP implementation

List of Modules

Module 1

Status: in progress

Expected delivery date: Sep. 2020

Description: quantum_smolyak_MPI

The level-1 MPI implementation with the Smolyak method. MPI is applied directly to the action of the operators. The parallelization can be adjusted to fit machines of different sizes, from small workstations to massive clusters.
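
As a minimal sketch of this level of parallelization, assuming the operator action is evaluated as a sum of contributions from the individual Smolyak terms, the terms could be distributed over the MPI processes and the partial results combined with MPI_Allreduce. The routine apply_operator_on_term and the array sizes below are hypothetical placeholders, not names from the actual code.

! Illustrative sketch of level-1 MPI: the terms of the Smolyak sum are
! distributed over the MPI processes; each process applies the operator
! to its own terms and the partial results are summed with MPI_Allreduce.
! apply_operator_on_term, nb_terms and ndim are hypothetical placeholders.
subroutine smolyak_op_psi_MPI(psi, Hpsi, nb_terms, ndim)
  use mpi
  implicit none
  integer,      intent(in)  :: nb_terms, ndim
  real(kind=8), intent(in)  :: psi(ndim)
  real(kind=8), intent(out) :: Hpsi(ndim)
  real(kind=8), allocatable :: Hpsi_local(:)
  integer :: i_term, rank, nb_procs, ierr

  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nb_procs, ierr)

  allocate(Hpsi_local(ndim))
  Hpsi_local = 0.d0

  ! Round-robin distribution of the Smolyak terms over the processes
  do i_term = rank + 1, nb_terms, nb_procs
    call apply_operator_on_term(i_term, psi, Hpsi_local)  ! accumulates into Hpsi_local
  end do

  ! Sum the partial contributions of all processes
  call MPI_Allreduce(Hpsi_local, Hpsi, ndim, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)

  deallocate(Hpsi_local)
end subroutine smolyak_op_psi_MPI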

Module 2

Status: in progress

Expected delivery date: TBC

Description: quantum_smolyak_MPI2

The level-2 MPI implementation with the Smolyak method. MPI operates on the full Smolyak representation, which provides the maximum efficiency gain from the Smolyak method but, at the same time, requires a massive cluster.
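
As a minimal sketch, assuming the coefficients of the full Smolyak representation are distributed so that each process stores only a local block, global quantities would then be assembled from the local contributions; the example below computes a norm in this way. The names psi_local and n_local and the distribution scheme are assumptions for illustration.

! Illustrative sketch of level-2 MPI: the full Smolyak representation of
! the wave function is distributed, each process storing only its local
! block of coefficients (psi_local). Global quantities such as the norm
! are then obtained from local contributions via MPI_Allreduce.
function smolyak_norm_MPI(psi_local, n_local) result(norm)
  use mpi
  implicit none
  integer,      intent(in) :: n_local
  real(kind=8), intent(in) :: psi_local(n_local)
  real(kind=8)             :: norm
  real(kind=8)             :: sum_local, sum_global
  integer                  :: ierr

  ! Each process sums the squares of its own coefficients only
  sum_local = sum(psi_local**2)

  ! Combine the partial sums of all processes
  call MPI_Allreduce(sum_local, sum_global, 1, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)

  norm = sqrt(sum_global)
end function smolyak_norm_MPI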

Module 3

Status: planning

Expected delivery date: TBC

Description: MPI/OpenMP hybrid implementation
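
As a minimal sketch of the intended hybrid scheme, assuming the Smolyak terms are first distributed over the MPI processes and each process then treats its local terms with OpenMP threads, the structure could look as follows. The loop bounds and the routine process_term are hypothetical placeholders.

! Illustrative sketch of a hybrid MPI/OpenMP scheme: the Smolyak terms are
! distributed over the MPI processes, and each process loops over its
! local terms with OpenMP threads. nb_terms and process_term are
! hypothetical placeholders for illustration only.
program hybrid_sketch
  use mpi
  use omp_lib
  implicit none
  integer :: rank, nb_procs, provided, ierr
  integer :: i_term
  integer, parameter :: nb_terms = 1000

  ! Request funneled threading: only the master thread makes MPI calls
  call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nb_procs, ierr)

  ! OpenMP threads share the terms assigned to this MPI process
  !$omp parallel do schedule(dynamic)
  do i_term = rank + 1, nb_terms, nb_procs
    call process_term(i_term)   ! placeholder for the work on one term
  end do
  !$omp end parallel do

  if (rank == 0) write(*,*) 'Done with', nb_procs, 'processes x', &
                            omp_get_max_threads(), 'threads'

  call MPI_Finalize(ierr)
end program hybrid_sketch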

Published Results

Outreach Material