Accelerating the design and discovery of materials with tailored properties using first principles high-throughput calculations and automated generation of Wannier functions

 

A successful collaboration between the EU H2020 E-CAM and MaX Centres of Excellence, and the Swiss NCCR MARVEL

Abstract

In a recent paper[1], researchers from the Centres of Excellence E-CAM[2] and MaX[3], and the centre for Computational Design and Discovery of Novel Materials NCCR MARVEL[4], have proposed a new procedure for automatically generating maximally-localised Wannier functions (MLWFs) for high-throughput frameworks. The methodology and associated software can handle the hitherto difficult cases of entangled bands, and allow the electronic properties of a wide variety of materials, including insulators, semiconductors and metals, to be obtained starting only from the specification of the initial crystal structure. Industrial applications that this work will facilitate include the development of novel superconductors, multiferroics and topological insulators, as well as more traditional electronic applications.

Graphical representation of all data and calculations run in the project and their interconnections (provenance), as tracked automatically by AiiDA in the form of a directed acyclic graph (image credits: G. Pizzi)

Challenge/context

Predicting the properties of complex materials generally entails the use of methods that facilitate coarse-grained perspectives more suitable for large-scale modelling and, ultimately, device design and manufacture. When a quantum-level description of a modular-like system is required, this can often be achieved by expressing the Hamiltonian in terms of a localised, real-space basis set, enabling it to be partitioned without ambiguity into sub-matrices that correspond to the individual subsystems. Maximally-localised Wannier functions (MLWFs) are particularly suitable in this context. Until now, however, MLWFs have been difficult to exploit in the high-throughput design of materials, because their generation requires users to specify a set of initial guesses for the MLWFs, typically trial functions localised in real space, chosen on the basis of experience and chemical intuition.
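
For reference, the standard Marzari-Vanderbilt definitions are (in LaTeX notation): the Wannier function for band n and lattice vector R is built from the Bloch states through a gauge matrix U(k), and the maximally-localised gauge is the one that minimises the total quadratic spread Ω:

```latex
w_{n\mathbf{R}}(\mathbf{r}) \;=\; \frac{V}{(2\pi)^3}\int_{\mathrm{BZ}}\!\mathrm{d}\mathbf{k}\;
   e^{-i\mathbf{k}\cdot\mathbf{R}} \sum_{m} U^{(\mathbf{k})}_{mn}\,\psi_{m\mathbf{k}}(\mathbf{r}),
\qquad
\Omega \;=\; \sum_{n}\Big[\,\langle w_{n\mathbf{0}}|\,r^{2}\,|w_{n\mathbf{0}}\rangle
   \;-\; \big|\langle w_{n\mathbf{0}}|\,\mathbf{r}\,|w_{n\mathbf{0}}\rangle\big|^{2}\Big].
```

The initial guesses mentioned above are the trial orbitals onto which the Bloch states are projected in order to build a starting U(k) for this minimisation.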

Solution

In a recent article[1], E-CAM[2] scientist Valerio Vitale and co-authors from the partner H2020 Centre of Excellence MaX[3] and the Swiss-based NCCR MARVEL[4] look afresh at this problem in the light of an algorithm by Damle et al.[5], known as the selected columns of the density matrix (SCDM) method, which computes a set of localised orbitals spanning the Kohn–Sham subspace of insulating systems and can therefore provide automatic initial guesses for the MLWF search. SCDM has shown great promise in removing the need for user intervention in obtaining MLWFs, and it is robust, being based on standard linear-algebra routines rather than on iterative minimisation. Building on it, Vitale et al. developed a fully automated protocol in which the three remaining free parameters (two from the SCDM method, plus the choice of the target dimensionality of the disentangled subspace) are determined automatically, making the approach parameter-free even in the case of entangled bands. The work systematically compares the accuracy and ease of use of standard methods for generating localised basis sets, namely (a) MLWFs, (b) MLWFs combined with SCDM, and (c) SCDM alone, and applies this multifaceted perspective to hundreds of materials, including insulators, semiconductors and metals.
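
As an illustration of the idea behind SCDM, the sketch below (a minimal NumPy/SciPy version for an isolated group of bands at a single k-point, not the production implementation) selects "anchor" grid points by a QR factorisation with column pivoting and builds localised orbitals from the corresponding columns of the density matrix:

```python
import numpy as np
from scipy.linalg import qr

def scdm_orbitals(psi):
    """Minimal SCDM sketch for an isolated group of bands at one k-point.

    psi : (n_grid, n_bands) array of orthonormal Kohn-Sham orbitals
          sampled on a real-space grid.
    Returns an (n_grid, n_bands) array of localised, orthonormal orbitals.
    """
    # 1. QR with column pivoting on psi^dagger picks the grid points
    #    (i.e. the columns of the density matrix) that are most representative.
    _, _, pivots = qr(psi.conj().T, pivoting=True)
    cols = pivots[: psi.shape[1]]

    # 2. Columns of the density matrix P = psi psi^dagger at those grid points.
    phi = psi @ psi[cols, :].conj().T

    # 3. Loewdin orthonormalisation of the selected columns.
    u, _, vh = np.linalg.svd(phi, full_matrices=False)
    return u @ vh
```

In the entangled case the same construction is applied to orbitals weighted by a smooth energy window (a complementary error function of the band energies), whose two parameters are among the quantities the protocol of Vitale et al. determines automatically.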

Comparison between Wannier-interpolated valence bands (red lines) and the full direct-DFT band structure (black lines), for 150 different materials. The direct and interpolated band structures are essentially indistinguishable (image credits: G. Pizzi)

Benefit

This is significant because it greatly expands the scope of materials for which MLWFs can be generated in high-throughput studies. It has the potential to accelerate the design and discovery of materials with tailored properties using first-principles high-throughput (HT) calculations and to facilitate advanced industrial applications, including the development of novel superconductors, multiferroics and topological insulators, as well as more traditional electronic applications.

Background information

This module is a collaboration between the E-CAM and MaX HPC Centres of Excellence, and the NCCR MARVEL.

In SCDM Wannier Functions, E-CAM implemented the SCDM algorithm in pw2wannier90, the interface code between the Quantum ESPRESSO software and the Wannier90 code. This was done in the context of an E-CAM pilot project at the University of Cambridge. Researchers then used this implementation as the basis for a complete computational workflow that obtains MLWFs and electronic properties based on Wannier interpolation of the Brillouin zone, starting only from the specification of the initial crystal structure. The workflow was implemented within the AiiDA materials informatics platform (from the NCCR MARVEL and the MaX CoE), and used to perform a HT study on a dataset of 200 materials.
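
SCDM projections are requested through additional keywords in the pw2wannier90 input namelist. The fragment below is a sketch of such an input, composed from Python; the keyword names follow recent Quantum ESPRESSO releases, while the prefix, directories and numerical values are purely illustrative:

```python
# Sketch of the &inputpp namelist that switches pw2wannier90 to SCDM
# projections; prefix, outdir, seedname and the numbers are placeholders.
pw2wannier90_input = """\
&inputpp
   prefix            = 'silicon'
   outdir            = './out/'
   seedname          = 'silicon'
   scdm_proj         = .true.
   scdm_entanglement = 'erfc'
   scdm_mu           = 10.0   ! eV, from the automated fit
   scdm_sigma        = 2.0    ! eV, from the automated fit
/
"""
with open('pw2wan.scdm.in', 'w') as handle:
    handle.write(pw2wannier90_input)
```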

Source Code

See the Materials Cloud Archive entry. A downloadable virtual machine is provided that allows users to reproduce the results of the associated paper, and also to run new calculations for different materials, including all first-principles and atomistic simulations and the computational workflows.

Bibliography

[1] Automated high-throughput Wannierisation, Valerio Vitale, Giovanni Pizzi, Antimo Marrazzo, Jonathan R. Yates, Nicola Marzari and Arash A. Mostofi, npj Computational Materials 6, 66 (2020); https://doi.org/10.1038/s41524-020-0312-y

[2] https://www.e-cam2020.eu/

[3] http://www.max-centre.eu/

[4] https://nccr-marvel.ch/

[5] Compressed Representation of Kohn–Sham Orbitals via Selected Columns of the Density Matrix, Anil Damle, Lin Lin, and Lexing Ying, J. Chem. Theory Comput. 11, 1463–1469 (2015); https://pubs.acs.org/doi/10.1021/ct500985f


The CECAM Electronic Structure Library and the modular software development paradigm

E-CAM has been working closely with the Electronic Structure Library (ESL) initiative for some years now. A review of the CECAM ESL is now out and can be accessed at https://arxiv.org/abs/2005.05756. The abstract is below.

Abstract

First-principles electronic structure calculations are very widely used thanks to the many successful software packages available. Their traditional coding paradigm is monolithic, i.e., regardless of how modular its internal structure may be, the code is built independently from others, from the compiler up, with the exception of linear-algebra and message-passing libraries. This model has been quite successful for decades. The rapid progress in methodology, however, has resulted in an ever increasing complexity of those programs, which implies a growing amount of replication in coding and in the recurrent re-engineering needed to adapt to evolving hardware architecture. The Electronic Structure Library (ESL) was initiated by CECAM to catalyze a paradigm shift away from the monolithic model and promote modularization, with the ambition to extract common tasks from electronic structure programs and redesign them as free, open-source libraries. They include “heavy-duty” ones with a high degree of parallelisation, and potential for adaptation to novel hardware within them, thereby separating the sophisticated computer science aspects of performance optimization and re-engineering from the computational science done by scientists when implementing new ideas. It is a community effort, undertaken by developers of various successful codes, now facing the challenges arising in the new model. This modular paradigm will improve overall coding efficiency and enable specialists (computer scientists or computational scientists) to use their skills more effectively. It will lead to a more sustainable and dynamic evolution of software as well as lower barriers to entry for new developers.


Automated high-throughput Wannierisation, a successful collaboration between E-CAM and the MaX Centre of Excellence

Maximally-localised Wannier functions (MLWFs) are routinely used to compute from first principles advanced materials properties that require very dense Brillouin zone (BZ) integration and to build accurate tight-binding models for scale-bridging simulations. At the same time, high-throughput (HT) computational materials design is an emergent field that promises to accelerate the reliable and cost-effective design and optimisation of new materials with target properties. The use of MLWFs in HT workflows has been hampered by the fact that generating MLWFs automatically and robustly, without any user intervention and for arbitrary materials, is in general very challenging. We address this problem directly by proposing a procedure for automatically generating MLWFs for HT frameworks. Our approach is based on the selected columns of the density matrix method (SCDM, see SCDM Wannier Functions) and is implemented in an AiiDA workflow.

Purpose of the module

Create a fully-automated protocol based on the SCDM algorithm for the construction of MLWFs, in which the two free parameters are determined automatically (in our HT approach the dimensionality of the disentangled space is fixed by the total number of states used to generate the pseudopotentials in the DFT calculations).
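
For orientation, the sketch below shows one way the two SCDM parameters (μ and σ) could be extracted automatically: the projectability of each state onto pseudo-atomic orbitals is fitted with a complementary error function of the band energy, and the fitted edge is then shifted downwards (here by the 3σ offset used in the paper). This is an illustrative SciPy reconstruction, not the production workflow code:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def fit_scdm_parameters(energies, projectabilities):
    """Sketch of the automated choice of the SCDM mu and sigma.

    energies         : 1D array of band energies (eV), all bands and k-points
    projectabilities : 1D array, same shape, projection of each state onto
                       pseudo-atomic orbitals (values between 0 and 1)
    """
    # Fit p(E) with a complementary-error-function window.
    def window(e, mu, sigma):
        return 0.5 * erfc((e - mu) / sigma)

    (mu_fit, sigma_fit), _ = curve_fit(
        window, energies, projectabilities,
        p0=(energies.mean(), 1.0))

    # Shift mu below the fitted edge so that well-projected states keep full weight.
    scdm_mu = mu_fit - 3.0 * sigma_fit
    scdm_sigma = sigma_fit
    return scdm_mu, scdm_sigma
```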

A paper describing the work is available at https://arxiv.org/abs/1909.00433, where this approach was applied to a dataset of 200 bulk crystalline materials that span a wide structural and chemical space.

Background information

This module is a collaboration between E-CAM and the MaX Centre of Excellence.

In the SCDM Wannier Functions module, E-CAM implemented the SCDM algorithm in pw2wannier90.f90, the interface code between the Quantum ESPRESSO software and the Wannier90 code. This implementation was used as the basis for a complete computational workflow that obtains MLWFs and electronic properties based on Wannier interpolation of the BZ, starting only from the specification of the initial crystal structure. The workflow was implemented within the AiiDA materials informatics platform, and used to perform a HT study on a dataset of 200 materials, as described here.
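
To give a feel for what "implemented within AiiDA" means in practice, here is a heavily hedged sketch of launching such a workflow from Python. The workchain entry point 'wannier90_workflows.bands' and the builder inputs shown are placeholders for illustration only, not the actual interface of the published workflow; only the generic AiiDA calls (load_profile, load_code, get_builder, submit) are standard.

```python
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import WorkflowFactory

load_profile()

# Placeholder entry point and inputs -- illustrative only.
WannierBands = WorkflowFactory('wannier90_workflows.bands')

builder = WannierBands.get_builder()
builder.structure = orm.load_node(1234)        # a previously stored StructureData
builder.code = orm.load_code('pw@my_cluster')  # a configured Quantum ESPRESSO code

node = submit(builder)
print(f"Submitted workflow with PK={node.pk}")
```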

More information at https://e-cam.readthedocs.io/en/latest/Electronic-Structure-Modules/modules/W90_MaX_collab/readme.html


Integration of ESL modules into electronic-structure codes


If you are interested in attending this event, please visit the CECAM website here.

Workshop Description

The evolutionary pressure on electronic structure software development is greatly increasing, due to the emergence of new paradigms, new kinds of users, new processes, and new tools. Electronic structure software complexity is consequently also increasing, requiring a larger effort on code maintenance. Developers of large electronic structure codes are trying to relieve some of this complexity by moving standardised algorithms into separate libraries [BigDFT-PSolver, ELPA, ELSI, LibXC, LibGridXC, etc.]. This paradigm shift requires library developers to have a hybrid developer profile, in which the scientific and the computational skill sets become equally important. These topics have been extensively and publicly discussed among developers of various projects including ABINIT, ASE, ATK, BigDFT, CASTEP, FHI-aims, GPAW, Octopus, Quantum ESPRESSO, SIESTA, and SPR-KKR.

High-quality standardised libraries are not only a highly challenging effort resting in the hands of the library developers; they also give codes the possibility of accessing commonly used algorithms in a standard way. Integrating these libraries, however, requires a significant initial effort that is often sacrificed for new developments, which themselves often never reach the mainstream branch of the code. Additionally, there are multiple challenges in adopting new libraries which have their roots in a variety of issues: installation, data structures, physical units and parallelism, all of which are code-dependent. On the other hand, adoption of common libraries ensures the immediate propagation of improvements within the respective library's field of research and keeps codes up-to-date with much less effort [LibXC]. Indeed, well-established libraries can have a huge impact on multiple scientific communities at once [PETSc].

In the Electronic Structure community, two issues are emerging. First, libraries are being developed [esl, esl-gitlab] but require an ongoing commitment from the community with respect to sharing the maintenance and development effort. Secondly, existing codes will only benefit from libraries by adopting their use. Both issues are mainly governed by the exposure of the libraries and the availability of the library core developers, who are typically researchers pressured by publication deliverables and fund-raising burdens and thus unable to commit a large fraction of their time to software development.

An effort to allow code developers to make use of, and develop, shared components is needed. This requires an efficient coordination between various elements:

– A common and consistent code development infrastructure/education in terms of compilation, installation, testing and documentation.
– How to use and integrate already published libraries into existing projects.
– Creating long-lasting synergies between developers to reach a “critical mass” of component contributors.
– Relevant quality metrics (“TRLs” and “SRLs”), to provide businesses with useful information.

This is what the Electronic Structure Library (ESL)[esl, esl-gitlab] has been doing since 2014, with a wiki, a data-exchange standard, refactoring of code of global interest into integrated modules, and regularly organised workshops, within a wider movement led by the European eXtreme Data and Computing Initiative [exdci].

 

References

[BigDFT-PSolver] http://bigdft.org/Wiki/index.php?title=The_Solver_Package
[ELPA] https://gitlab.mpcdf.mpg.de/elpa/elpa
[ELSI] http://elsi-interchange.org
[LibXC] http://www.tddft.org/programs/libxc/
[LibGridXC] https://launchpad.net/libgridxc
[PETSc] https://www.mcs.anl.gov/petsc/
[esl] http://esl.cecam.org/
[esl-gitlab] http://gitlab.e-cam2020.eu/esl
[exdci] https://exdci.eu/newsroom/press-releases/exdci-towards-common-hpc-strategy-europe


QMCPack Interfaces for Electronic Structure Computations

Quantum Monte Carlo (QMC) methods are a class of ab initio, stochastic techniques for the study of quantum systems. While QMC simulations are computationally expensive, they have the advantage of being accurate, fully ab initio and scalable to a large number of cores with limited memory requirements.

These features make QMC methods a valuable tool to assess the accuracy of DFT computations, which are widely used in the fields of condensed matter physics, quantum chemistry and material science.

QMCPack is a free package for QMC simulations of electronic structure, developed in several national laboratories in the US. The package is written in object-oriented C++, offers great flexibility in the choice of systems, trial wave functions and QMC methods, and supports massive parallelism and the use of GPUs.

Trial wave functions for electronic QMC computations commonly require single-electron orbitals, typically computed by DFT. The aim of the E-CAM pilot project described here is to build interfaces between QMCPack and other electronic structure software, e.g. the DFT code Quantum ESPRESSO.

These interfaces are used to manage the reading of the orbitals, or their generation by DFT, within QMCPack, in order to establish an automated, black-box workflow for QMC computations. QMC simulations can, for example, be used to benchmark and validate DFT calculations: such a procedure can be employed in the study of several physical systems of interest in condensed matter physics, chemistry or materials science, with applications in industry, e.g. in the study of metal-ion or water-carbon interfaces.
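
As an illustration of what such a black-box workflow amounts to in practice, the sketch below chains a DFT run, the orbital conversion and the QMC run with plain subprocess calls. All input-file names are placeholders, and the executables (pw.x, pw2qmcpack.x, qmcpack) are assumed to be built and on the PATH:

```python
import subprocess

def run(cmd):
    """Run a shell command and stop the workflow on failure."""
    print(f">> {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# 1. DFT ground state with Quantum ESPRESSO (input file is a placeholder).
run("pw.x < scf.in > scf.out")

# 2. Convert the Kohn-Sham orbitals into a format QMCPack can read.
run("pw2qmcpack.x < pw2qmcpack.in > pw2qmcpack.out")

# 3. Variational / diffusion Monte Carlo with QMCPack (XML input is a placeholder).
run("qmcpack vmc_dmc.xml > qmcpack.out")
```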

The following modules have been built as part of this pilot project:

  • QMCQEPack, which provides the files to download and properly patch Quantum ESPRESSO 5.3 to build the libpwinterface.so library; this library is required by the ESPWSCFInterface module to generate single-particle orbitals with Quantum ESPRESSO during a QMCPack computation.
  • ESInterfaceBase, which provides a base class for a general interface to generate single-particle orbitals to be used in QMC simulations in QMCPack; implementations of specific interfaces are available as separate modules derived from ESInterfaceBase.

The documentation about interfaces in QMCPack can be found in the QMCPack user manual at https://github.com/michruggeri/qmcpack/blob/f88a419ad1a24c68b2fdc345ad141e05ed0ab178/manual/interfaces.tex


PANNA: Properties from Artificial Neural Network Architectures

PANNA is a package for training and validating neural networks that represent atomic potentials. It implements configurable, all-to-all connected, deep neural network architectures which allow for the exploration of training dynamics. Currently it includes tools to generate the original[1] and modified[2] Behler-Parrinello input feature vectors, both for molecules and crystals, but the network can also be used in an input-agnostic fashion to enable further experimentation. PANNA is written in Python and relies on TensorFlow as its underlying engine.

A common way to use PANNA in its current implementation is to train a neural network in order to estimate the total energy of a molecule or crystal, as a sum of atomic contributions, by learning from the data of reference total energy calculations for similar structures (usually ab-initio calculations).

The neural network models in the literature often start from a description of the system of interest in terms of local feature vectors for each atom in the configuration. PANNA provides tools to calculate two versions of the Behler-Parrinello local descriptors, but it allows the use of any species-resolved, fixed-size array that describes the input data.
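
For concreteness, a minimal NumPy version of one component of the original Behler-Parrinello descriptor, the radial symmetry function G², might look as follows; the values of η, r_s and the cutoff r_c are illustrative only:

```python
import numpy as np

def cutoff(r, r_c):
    """Smooth cosine cutoff: 0.5*(cos(pi*r/r_c)+1) inside r_c, zero outside."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def radial_g2(distances, eta=0.5, r_s=0.0, r_c=6.0):
    """Behler-Parrinello radial symmetry function G^2 for one central atom.

    distances : 1D array of distances r_ij to all neighbours of a given species.
    """
    return np.sum(np.exp(-eta * (distances - r_s) ** 2) * cutoff(distances, r_c))

# Example: one descriptor component for an atom with three neighbours.
print(radial_g2(np.array([1.0, 1.5, 2.2])))
```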

PANNA allows the construction of neural network architectures with different sizes for each of the atomic species in the training set. Currently the allowed architecture is a deep neural network of fully connected layers, starting from the input feature vector and going through one or more hidden layers. The user can choose to train or freeze any layer, and can also transfer network parameters between species upon restart.
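
The "sum of atomic contributions with one sub-network per species" idea can be sketched in a few lines of TensorFlow. This is a conceptual illustration, not PANNA's actual API; the descriptor length and layer sizes are arbitrary:

```python
import tensorflow as tf

G_SIZE = 48  # assumed descriptor (G-vector) length

def species_net(name):
    """Small fully connected network mapping a G-vector to an atomic energy."""
    return tf.keras.Sequential(
        [tf.keras.layers.Dense(32, activation="tanh"),
         tf.keras.layers.Dense(32, activation="tanh"),
         tf.keras.layers.Dense(1)],
        name=name)

nets = {"H": species_net("H_net"), "O": species_net("O_net")}

def total_energy(descriptors):
    """descriptors: dict species -> tensor of shape (n_atoms_of_species, G_SIZE)."""
    energy = tf.constant(0.0)
    for species, g in descriptors.items():
        energy += tf.reduce_sum(nets[species](g))   # sum of atomic contributions
    return energy

# Example: a single water molecule described by random G-vectors.
desc = {"O": tf.random.normal((1, G_SIZE)), "H": tf.random.normal((2, G_SIZE))}
print(float(total_energy(desc)))
```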

In summary, PANNA is an easy-to-use interface for obtaining neural network models of atomistic potentials, leveraging the highly optimised TensorFlow infrastructure to provide efficient, parallelised, GPU-accelerated training.

It provides:

  • an input creation tool (atomistic calculation result -> G-vector )
  • an input packaging tool for quick processing of TensorFlow ( G-vector -> TFData bundle)
  • a network training tool
  • a network validation tool
  • a LAMMPS plugin
  • a bundle of sample data for testing[3]

See the full documentation of PANNA at https://gitlab.com/PANNAdevs/panna/blob/master/doc/PANNA_documentation.md

GitLab repository for PANNA: https://gitlab.com/PANNAdevs/panna

See manuscript at https://arxiv.org/abs/1907.03055

References

[1] J. Behler and M. Parrinello, “Generalized Neural-Network Representation of High-Dimensional  Potential-Energy Surfaces”, Phys. Rev. Lett. 98, 146401 (2007)

[2] Justin S. Smith, Olexandr Isayev, Adrian E. Roitberg, “ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost”, Chemical Science (2017), DOI: 10.1039/C6SC05720A

[3] Justin S. Smith, Olexandr Isayev, Adrian E. Roitberg, “ANI-1, a data set of 20 million calculated off-equilibrium conformations for organic molecules”, Scientific Data 4 (2017), Article number: 170193, DOI: 10.1038/sdata.2017.193


FFTXlib, a rewrite and optimisation of earlier versions of FFT related routines inside QE pre-v6

FFTXlib is mainly a rewrite and optimisation of earlier versions of the FFT-related routines inside Quantum ESPRESSO (QE) pre-v6, and ultimately their replacement. Despite many similarities, the current version of FFTXlib substantially changes the FFT strategy in the parallel execution, from the 1D+2D FFT performed in QE pre-v6 to a 1D+1D+1D one, to allow for greater flexibility in parallelisation.

Practical application and exploitation of the code

The FFTXlib module is a collection of driver routines that allow the user to perform complex 3D fast Fourier transforms (FFTs) in the context of plane-wave-based electronic structure software. It contains routines to initialise the array structures and to calculate the desired grid shapes. It imposes the underlying size assumptions and provides correspondence maps for indices between the two transform domains.

Once this data structure is constructed, forward or inverse in-place FFTs can be performed. For this purpose FFTXlib can either use a local copy of an earlier version of FFTW (a commonly used open-source FFT library), or serve as a wrapper to external FFT libraries via conditional compilation using pre-processor directives. It supports both MPI and OpenMP parallelisation technologies.
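
To make the role of such drivers concrete, the NumPy fragment below shows the basic pattern an FFT driver supports in a plane-wave code: scatter a set of plane-wave coefficients onto a 3D grid, inverse-transform to real space, and transform back. This is a conceptual Python illustration, not the FFTXlib Fortran API, and the grid size and cutoff are arbitrary:

```python
import numpy as np

nr = (18, 18, 18)                       # real-space grid shape
rng = np.random.default_rng(0)

# A few plane-wave coefficients c(G) at integer G-vectors inside a small sphere.
g_vectors = [(i, j, k) for i in range(-3, 4) for j in range(-3, 4)
             for k in range(-3, 4) if i * i + j * j + k * k <= 9]
coeffs = rng.normal(size=len(g_vectors)) + 1j * rng.normal(size=len(g_vectors))

# Scatter c(G) onto the grid (negative indices wrap around, as in FFT layouts)...
grid = np.zeros(nr, dtype=complex)
for (i, j, k), c in zip(g_vectors, coeffs):
    grid[i % nr[0], j % nr[1], k % nr[2]] = c

# ...an inverse FFT gives the wavefunction on the real-space grid,
psi_r = np.fft.ifftn(grid)

# ...and a forward FFT brings it back to reciprocal space.
grid_back = np.fft.fftn(psi_r)
print(np.allclose(grid, grid_back))     # True: the round trip is exact
```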

FFTXlib is currently employed within the Quantum ESPRESSO package, a widely used suite of codes for electronic structure calculations and materials modelling at the nanoscale, based on plane waves and pseudopotentials.

FFTXlib is also interfaced with the “miniPWPP” module, which solves the Kohn-Sham equations in a plane-wave basis and will soon be released as part of the E-CAM Electronic Structure Library.

Software documentation and link to the source code can be found in our E-CAM software Library here.


DBCSR@MatrixSwitch, an optimised library to deal with sparse matrices

MatrixSwitch is a module which acts as an intermediary interface layer between high-level and low-level routines dealing with matrix storage and manipulation. It allows a seamless switch between different software implementations of the matrix operations.

DBCSR is an optimised library for dealing with sparse matrices, which appear frequently in many kinds of numerical simulation.

In DBCSR@MatrixSwitch, DBCSR capabilities have been added to MatrixSwitch as an optional library dependency.

Carrying out calculations in serial mode can sometimes be too slow, and a parallelisation strategy is then needed. Serial/parallel MatrixSwitch employs LAPACK/ScaLAPACK to perform matrix operations, irrespective of their dense or sparse character. The disadvantage of the LAPACK/ScaLAPACK schemes is that they are not optimised for sparse matrices. DBCSR provides the necessary algorithms to solve this problem and, in addition, is specially suited to working in parallel.
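
The idea of an intermediary interface layer can be illustrated in a few lines of Python: the caller asks a single routine for a matrix-matrix product, and a switch decides whether a dense (LAPACK-style) or sparse (DBCSR-style) backend does the work. This is a conceptual sketch, not the MatrixSwitch Fortran API:

```python
import numpy as np
from scipy import sparse

def matmul(a, b, backend="dense"):
    """Single entry point for C = A*B; the backend choice is hidden from the caller."""
    if backend == "dense":
        # Dense path: plain BLAS-backed multiplication.
        return np.asarray(a) @ np.asarray(b)
    if backend == "sparse":
        # Sparse path: compressed storage, efficient when A and B are sparse.
        return (sparse.csr_matrix(a) @ sparse.csr_matrix(b)).toarray()
    raise ValueError(f"unknown backend: {backend}")

a = np.eye(4)
b = np.arange(16.0).reshape(4, 4)
assert np.allclose(matmul(a, b, "dense"), matmul(a, b, "sparse"))
```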

Direct link to module documentation: https://e-cam.readthedocs.io/en/latest/Electronic-Structure-Modules/modules/MatrixSwitchDBCSR/readme.html


Scientific Report from State-of-the-Art Workshop “Improving the accuracy of ab-initio predictions for materials” is available on our website

The scientific report from the E-CAM State-of-the-Art Workshop “Improving the accuracy of ab-initio predictions for materials”, which took place on 17-20 September 2018 at the CECAM-FR-MOSER Node (France), is now available for consultation and download on our website at this link.

Short Description:

The State-of-the-Art workshop in the E-CAM Electronic Structure Work-Package (WP2) gathered 38 participants from the academic research world, divided fairly evenly among the Density Functional Theory, Quantum Monte Carlo and Machine Learning communities, together with one industrial researcher from Scienomics. Key topics for the development of the field of computational materials science from first principles were thoroughly discussed, from which the following outcomes emerged: (1) the importance of computational benchmarks to assess the accuracy of different methods and to feed machine learning and neural network schemes with reliable data; (2) the need for a common database, and for a common language across different codes and different computational approaches; (3) the interesting capability of neural network methods to develop new correlated wave functions; (4) the cross-fertilising combination of computational schemes in a multi-scale environment; and (5) recent progress in Quantum Monte Carlo to further improve the accuracy of the calculations by taking alternative routes. Limitations in the field and open questions were also debated, as described in the workshop scientific report.

Other scientific reports from State of the Art and Scoping workshops can be found at  https://www.e-cam2020.eu/scientific-reports/.


Extended Software Development Workshop: Scaling Electronic Structure Applications

If you are interested in attending this event, please visit the CECAM website here.

Workshop Description

The evolutionary pressure on electronic structure software development is greatly increasing, due to the emergence of new paradigms, new kinds of users, new processes, and new tools. The large feature-full codes that were once developed within one field are now undergoing a heavy restructuring to reach much broader communities, including companies and non-scientific users[1]. More and more use cases and workflows are performed by highly-automated frameworks instead of humans: high-throughput calculations and computational materials design[2], large data repositories[3], and multiscale/multi-paradigm modeling[4], for instance. At the same time, High-Performance Computing Centers are paving the way to exascale, with a cascade of effects on how to operate, from computer architectures[5] to application design[6]. The disruptive paradigm of quantum computing is also putting a big question mark on the relevance of all the ongoing efforts[7].

All these trends are highly challenging for the electronic structure community. Computer architectures have become rapidly moving targets, forcing a global paradigm shift[8]. As a result, long-ignored and well-established software good practices that were summarised in the Agile Manifesto[9] nearly 20 years ago are now adopted at an accelerating pace by more and more software projects[10]. With time, this kind of migration is becoming a question of survival, the key for a successful transformation being to allow and preserve an enhanced collaboration between the increasing number of disciplines involved. Significant efforts of integration from code developers are also necessary, since both hardware and software paradigms have to change at once[11].

Two major issues are also coming from the community itself. Hybrid developer profiles, with people fluent in both computational and scientific matters, are still difficult to find and retain. In the long run, the numerous ongoing training initiatives will gradually improve the situation, while in the short run the issue is becoming more salient and painful, because the context evolves faster than ever. Good practices have usually been the first element sacrificed in the “publish or perish” race. New features have usually been bound to the duration of a post-doc contract and left undocumented and poorly tested, favouring the unsustainable “reinventing the wheel” syndrome.

Addressing these issues requires coordinated efforts at multiple levels:
– from a methodological perspective, mainly through the creation of open standards and the use of co-design, both for programming and for data[12];
– regarding documentation, with a significant leap in content policies, helped by tools like Doxygen and Sphinx, as well as publication platforms like ReadTheDocs[13];
– for testing, by introducing test-driven development concepts and systematically publishing test suites together with software[14];
– considering deployment, by creating synergies with popular software distribution systems[15];
– socially, by disseminating the relevant knowledge and training the community, through the release of demonstrators and giving all stakeholders the opportunity to meet regularly[16].

This is what the Electronic Structure Library (ESL)[17] has been doing since 2014, with a wiki, a data-exchange standard, refactoring of code of global interest into integrated modules, and regularly organised workshops, within a wider movement led by the European eXtreme Data and Computing Initiative (EXDCI)[18].

Since 2014, the Electronic Structure Library has been steadily growing and developing to cover most fundamental tasks required by electronic structure codes. In February 2018 an extended software development workshop will be held at CECAM-HQ with the purpose of building demonstrator codes providing powerful, non-trivial examples of how the ESL libraries can be used. These demonstrators will also provide a platform to test the performance and usability of the libraries in an environment as close as possible to real-life situations. This marks a milestone and enables the next step in the ESL development: going from a collection of libraries with a clear set of features and stable interfaces to a bundle of highly efficient, scalable and integrated implementations of those libraries.

Many libraries developed within the ESL perform low-level tasks or very specific steps of more complex algorithms and are not capable, by themselves, of reaching exascale performance. Nevertheless, if they are to be used as efficient components of exascale codes, they must provide some level of parallelism and be as efficient as possible on a wide variety of architectures. During this workshop, we propose to perform advanced performance and scalability profiling of the ESL libraries. With that knowledge in hand it will be possible to select and implement the best strategies for parallelising and optimising the libraries. Assistance from HPC experts will be essential and is a unique opportunity to foster collaborations with other Centres of Excellence, like PoP (https://pop-coe.eu/) and MaX (http://www.max-centre.eu/).

Based on the successful experience of the previous ESL workshops, we propose to divide the workshop in two parts. The first two days will be dedicated to initial discussions between the participants and other invited stakeholders, and to presentations on state-of-the-art methodological and software developments, performance analysis and scalability of applications. The remainder of the workshop will consist of a 12-day coding effort by a smaller team of experienced developers. Both the discussions and the software development will take advantage of the ESL infrastructure (wiki, gitlab, etc.) that was set up during the previous ESL workshops.

[1] See http://www.nanogune.eu/es/projects/spanish-initiative-electronic-simulations-thousands-atoms-codigo-abierto-con-garantia-y and
[2] See http://pymatgen.org/ and http://www.aiida.net/ for example.
[3] http://nomad-repository.eu/
[4] https://abidev2017.abinit.org/images/talks/abidev2017_Ghosez.pdf
[5] http://www.deep-project.eu/
[6] https://code.grnet.gr/projects/prace-npt/wiki/StarSs
[7] https://www.newscientist.com/article/2138373-google-on-track-for-quantum-computer-breakthrough-by-end-of-2017/
[8] https://arxiv.org/pdf/1405.4464.pdf (sustainable software engineering)
[9] http://agilemanifesto.org/
[10] Several long-running projects routinely use modern bug trackers and continuous integration, e.g.: http://gitlab.abinit.org/, https://gitlab.com/octopus-code/octopus, http://qe-forge.org/, https://launchpad.net/siesta
[11] Transition of HPC Towards Exascale Computing, Volume 24 of Advances in Parallel Computing, E.H. D’Hollander, IOS Press, 2013, ISBN: 9781614993247
[12] See https://en.wikipedia.org/wiki/Open_standard and https://en.wikipedia.org/wiki/Participatory_design
[13] See http://www.doxygen.org/, http://www.sphinx-doc.org/, and http://readthedocs.org/
[14] See https://en.wikipedia.org/wiki/Test-driven_development and http://agiledata.org/essays/tdd.html
[15] See e.g. http://www.etp4hpc.eu/en/esds.html
[16] See e.g. https://easybuilders.github.io/easybuild/, https://github.com/LLNL/spack, https://github.com/snapcore/snapcraft, and https://www.macports.org/ports.php?by=category&substr=science
[17] http://esl.cecam.org/
[18] https://exdci.eu/newsroom/press-releases/exdci-towards-common-hpc-strategy-europe
