Protein based biosensors: application in detecting influenza

Donal MacKernan, University College Dublin & E-CAM

An E-CAM transverse action is the development of a protein-based sensor (patent pending, filed by UCD [1,2]) with applications in medical diagnostics, scientific visualisation and therapeutics. At the heart of the sensor is a novel protein-based molecular switch which allows extremely sensitive real-time measurement of molecular targets and can turn protein functions and other processes on or off accordingly (see Figure 1). For a description of the sensor, see this piece.

One application of the protein-based sensor is the detection of influenza, by modifying the sensor to measure up-regulated epidermal growth factor receptor (EGFR) in living cells. The appeal of using it for the flu is that it is cheap, easy to use in the field by non-specialists, and accurate – that is, with very low rates of false negatives and false positives compared to existing field tests. UCD’s patent-pending sensors have these attributes built into their ‘all-n-one’ design through a novel type of molecular switch, which performed strongly in the laboratory proof-of-concept phase. A funded research project to continue this development at UCD is almost certain, and likely to start within weeks.

And the answer to the frequently asked question “can we modify this sensor to quickly detect COVID-19?” is yes, provided we know the amino acid sequences of antibody–epitope pairs specific to this coronavirus.

Figure 1. Schematic illustration of the widely used sensor design of Komatsu et al. [3] on the left and the “all-n-one” UCD sensor on the right, in the “OFF” and “ON” states corresponding to the absence and presence of the target biomarker respectively. The “all-n-one” design replaces the Komatsu flexible linker with a hinge protein carrying charged residues q1, q2, …, placed symmetrically on either side of the centre so that, in the absence of the target, Coulomb repulsion forces the hinge open. Their location and number can be adjusted to suit each application. The spheres B and B’ denote the sensing modules, which tend to bind to each other when a target biomarker or analyte is present. The spheres A and A’ denote the reporting modules, which emit a recognisable (typically optical) signal when they are close to or in contact with each other, i.e. in the presence of a target biomarker or analyte.

[1] EP3265812A2, 2018-01-10, UNIV. COLLEGE DUBLIN NAT. UNIV. IRELAND. Inventors: Donal MacKernan and Shorujya Sanyal. Earliest priority: 2015-03-04, Earliest publication: 2016-09-09. https://worldwide.espacenet.com/patent/search?q=pn%3DEP3265812A2  

[2] WO2018047110A1, 2018-03-15, UNIV. COLLEGE DUBLIN NAT. UNIV. IRELAND. Inventor: Donal MacKernan. Earliest priority: 2016-09-08, Earliest publication: 2018-03-15. https://worldwide.espacenet.com/patent/search?q=pn%3DWO2018047110A1

[3] Komatsu N., Aoki K., Yamada M., Yukinaga H., Fujita Y., Kamioka Y., Matsuda M., Development of an optimized backbone of FRET biosensors for kinases and GTPases. Mol. Biol. Cell. 2011 Dec; 22(23): 4647-56.


QMCPack Interfaces for Electronic Structure Computations

Quantum Monte Carlo (QMC) methods are a class of ab initio, stochastic techniques for the study of quantum systems. While QMC simulations are computationally expensive, they have the advantage of being accurate, fully ab initio and scalable to a large number of cores with limited memory requirements.

These features make QMC methods a valuable tool to assess the accuracy of DFT computations, which are widely used in the fields of condensed matter physics, quantum chemistry and material science.

QMCPack is a free package for QMC simulations of electronic structure, developed in several national laboratories in the US. The package is written in object-oriented C++, offers great flexibility in the choice of systems, trial wave functions and QMC methods, and supports massive parallelism and the use of GPUs.

Trial wave functions for electronic QMC computations commonly require single-electron orbitals, typically computed by DFT. The aim of the E-CAM pilot project described here is to build interfaces between QMCPack and other electronic structure codes, e.g. the DFT code Quantum Espresso.

These interfaces manage the reading of orbitals, or their generation via DFT, within QMCPack, establishing an automated, black-box workflow for QMC computations. QMC simulations can, for example, be used to benchmark and validate DFT calculations: such a procedure can be employed in the study of many physical systems of interest in condensed matter physics, chemistry or materials science, with industrial applications, e.g. in the study of metal-ion or water-carbon interfaces.
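
As a rough illustration of what such an automated workflow replaces, the sketch below shows the equivalent manual chain of steps (DFT run, orbital conversion, QMC run) driven from Python. The input file names (scf.in, p2q.in, qmc.xml) are placeholders, and the sketch is not the API of the interface modules themselves.

    import subprocess

    # Illustrative sketch only: the manual workflow that the QMCQEPack /
    # ESPWSCFInterface modules aim to automate inside QMCPack. Input file
    # names are placeholders for this example.
    def run_qmc_workflow():
        # 1. DFT run with Quantum Espresso to obtain single-particle orbitals
        subprocess.run(["pw.x", "-in", "scf.in"], check=True)
        # 2. Convert the orbitals to the HDF5 format read by QMCPack
        with open("p2q.in") as f:
            subprocess.run(["pw2qmcpack.x"], stdin=f, check=True)
        # 3. QMC computation using the converted trial wave function
        subprocess.run(["qmcpack", "qmc.xml"], check=True)

    if __name__ == "__main__":
        run_qmc_workflow()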

The following modules have been built as part of this pilot project:

  • QMCQEPack, which provides the files to download and properly patch Quantum Espresso 5.3 in order to build the libpwinterface.so library; this library is required by the ESPWSCFInterface module to generate single-particle orbitals during a QMCPack computation using Quantum Espresso.
  • ESInterfaceBase, which provides a base class for a general interface to generate single-particle orbitals for QMC simulations in QMCPack; implementations of specific interfaces, as derived classes of ESInterfaceBase, are available as separate modules.

Documentation about the interfaces in QMCPack can be found in the QMCPack user manual at https://github.com/michruggeri/qmcpack/blob/f88a419ad1a24c68b2fdc345ad141e05ed0ab178/manual/interfaces.tex


New publication is out: “Towards extreme scale dissipative particle dynamics simulations using multiple GPGPUs”

 

E-CAM researchers working at the Hartree Centre – Daresbury Laboratory have co-designed the DL_MESO mesoscale simulation package to run on multiple GPUs, and have run, for the first time, a Dissipative Particle Dynamics simulation of a very large system (1.8 billion particles) on 4096 GPUs.

 

Towards extreme scale dissipative particle dynamics simulations using multiple GPGPUs
J. Castagna, X. Guo, M. Seaton and A. O’Cais
Computer Physics Communications (2020) 107159
DOI: 10.1016/j.cpc.2020.107159 (open access)

Abstract

A multi-GPGPU development for Mesoscale Simulations using the Dissipative Particle Dynamics method is presented. This distributed GPU acceleration development is an extension of the DL_MESO package to MPI+CUDA in order to exploit the computational power of the latest NVIDIA cards on hybrid CPU–GPU architectures. Details about the extensively applicable algorithm implementation and memory coalescing data structures are presented. The key algorithms’ optimizations for the nearest-neighbour list searching of particle pairs for short range forces, exchange of data and overlapping between computation and communications are also given. We have carried out strong and weak scaling performance analyses with up to 4096 GPUs. A two phase mixture separation test case with 1.8 billion particles has been run on the Piz Daint supercomputer from the Swiss National Supercomputer Center. With CUDA-aware MPI, proper GPU affinity, and communication and computation overlap optimizations for the multi-GPU version, the final optimization results demonstrated more than 94% efficiency for weak scaling and more than 80% efficiency for strong scaling. As far as we know, this is the first report in the literature of DPD simulations being run on this large number of GPUs. The remaining challenges and future work are also discussed at the end of the paper.
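
For readers unfamiliar with the nearest-neighbour list searching mentioned in the abstract, the sketch below illustrates the basic cell-list idea in plain, serial Python. It is a generic illustration of the technique only, not DL_MESO’s GPU implementation, and all names in it are invented for the example.

    import numpy as np
    from collections import defaultdict

    # Generic cell-list sketch for short-range pair searching in a cubic
    # periodic box (assumes the box is at least three cutoffs wide).
    def build_cell_list(positions, box, cutoff):
        """Assign each particle to a cell of side >= cutoff."""
        n_cells = max(1, int(box // cutoff))
        cell_size = box / n_cells
        cells = defaultdict(list)
        for i, r in enumerate(positions):
            idx = tuple(np.floor(r / cell_size).astype(int) % n_cells)
            cells[idx].append(i)
        return cells, n_cells

    def neighbour_pairs(positions, box, cutoff):
        """Yield particle pairs within the cutoff, checking only adjacent cells."""
        cells, n_cells = build_cell_list(positions, box, cutoff)
        for (cx, cy, cz), members in cells.items():
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        other = ((cx + dx) % n_cells, (cy + dy) % n_cells, (cz + dz) % n_cells)
                        for i in members:
                            for j in cells.get(other, []):
                                if i < j:
                                    d = positions[i] - positions[j]
                                    d -= box * np.round(d / box)   # minimum image convention
                                    if np.dot(d, d) < cutoff ** 2:
                                        yield i, j

    # Toy usage: 100 random particles in a box of side 10 with cutoff 1.0.
    pos = np.random.default_rng(1).random((100, 3)) * 10.0
    print(sum(1 for _ in neighbour_pairs(pos, box=10.0, cutoff=1.0)), "pairs found")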


6 software modules delivered in the area of Quantum Dynamics

 

In this report for Deliverable 3.5 of E-CAM [1], 6 software modules in quantum dynamics are presented.

All modules stem from the activities initiated during the State-of-the-Art Workshop held in Lyon (France) in June 2019 and the Extended Software Development Workshop in Quantum Dynamics held at Durham University (UK) in July 2019. The modules originate from the input of E-CAM’s academic user base. They have been developed by members of the project (S. Bonella – EPFL), established collaborators (G. Worth – University College London, S. Gomez – University of Vienna, C. Sanz – University of Madrid, D. Lauvergnat – University of Paris-Sud) and new contributors to the E-CAM repository (F. Agostini – University of Paris-Sud, Basile Curchod – University of Durham, A. Schild – ETH Zurich, S. Hupper and T. Plé – Sorbonne University, G. Christopoulou – University College London). The presence of new contributors indicates the interest of the community in our efforts. Furthermore, the contributors to modules in WP3 continue to be at different stages of their careers (in particular, Thomas Plé and G. Christopoulou are PhD students), highlighting the training value of our activities.

Following the order of presentation, the 6 modules are named: CLstunfti, PIM_QTB, PerGauss, Direct Dynamics Database, Exact Factorization Analysis Code (EFAC), and GuessSOC. In this report, a short description is written for each module, followed by a link to the respective Merge-Request document on the GitLab service of E-CAM. These merge requests contain detailed information about the code development, testing and documentation of the modules.

[1] “D3.5.: Quantum dynamics e-cam modules IV,” Dec. 2019. [Online]. Available: https://doi.org/10.5281/zenodo.3598325

Full report available here.

 


Issue 12 – December 2019

E-CAM Newsletter of December 2019

Get the latest news from E-CAM: sign up for our newsletter.


PANNA: Properties from Artificial Neural Network Architectures

PANNA is a package for training and validating neural networks to represent atomic potentials. It implements configurable all-to-all connected deep neural network architectures which allow for the exploration of training dynamics. It currently includes tools supporting the original [1] and modified [2] Behler-Parrinello input feature vectors, both for molecules and crystals, but the network can also be used in an input-agnostic fashion to enable further experimentation. PANNA is written in Python and relies on TensorFlow as its underlying engine.

A common way to use PANNA in its current implementation is to train a neural network to estimate the total energy of a molecule or crystal as a sum of atomic contributions, by learning from reference total-energy calculations (usually ab initio) for similar structures.

Neural network models in the literature often start from a description of the system of interest in terms of local feature vectors for each atom in the configuration. PANNA provides tools to calculate two versions of the Behler-Parrinello local descriptors, but it allows the use of any species-resolved, fixed-size array that describes the input data.

PANNA allows the construction of neural network architectures with different sizes for each of the atomic species in the training set. Currently the allowed architecture is a deep neural network of fully connected layers, starting from the input feature vector and going through one or more hidden layers. The user can choose to train or freeze any layer, and can also transfer network parameters between species upon restart.
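
The sketch below illustrates this idea of species-resolved networks whose per-atom outputs are summed into a total energy, in a minimal NumPy form. It is a conceptual illustration only: it does not use PANNA’s actual classes or TensorFlow graphs, and all names in it are invented for the example.

    import numpy as np

    # Minimal sketch (not PANNA's API): one fully connected network per species,
    # each mapping a fixed-size atomic descriptor to an atomic energy; the total
    # energy is the sum of atomic contributions.
    def init_network(n_in, hidden, rng):
        """Weights and biases for a network n_in -> hidden layers -> 1 output."""
        sizes = [n_in] + hidden + [1]
        return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
                for m, n in zip(sizes[:-1], sizes[1:])]

    def atomic_energy(descriptor, layers):
        """Forward pass for one atom's descriptor (tanh hidden, linear output)."""
        x = descriptor
        for i, (W, b) in enumerate(layers):
            x = x @ W + b
            if i < len(layers) - 1:
                x = np.tanh(x)
        return float(x[0])

    def total_energy(descriptors, species, networks):
        """Sum of per-atom energies, each evaluated with its species' network."""
        return sum(atomic_energy(d, networks[s]) for d, s in zip(descriptors, species))

    # Toy usage: 3 atoms (2 hydrogens, 1 oxygen) with 8-component descriptors;
    # note that the architecture sizes may differ between species.
    rng = np.random.default_rng(0)
    networks = {"H": init_network(8, [16, 16], rng),
                "O": init_network(8, [32, 16], rng)}
    descriptors = rng.standard_normal((3, 8))
    print(total_energy(descriptors, ["H", "H", "O"], networks))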

In summary, PANNA is an easy-to-use interface for obtaining neural network models for atomistic potentials, leveraging the highly optimized TensorFlow infrastructure to provide efficient, parallelized, GPU-accelerated training.

It provides:

  • an input creation tool (atomistic calculation result -> G-vector )
  • an input packaging tool for quick processing by TensorFlow (G-vector -> TFData bundle)
  • a network training tool
  • a network validation tool
  • a LAMMPS plugin
  • a bundle of sample data for testing[3]

See the full documentation of PANNA at https://gitlab.com/PANNAdevs/panna/blob/master/doc/PANNA_documentation.md

GitLab repository for PANNA: https://gitlab.com/PANNAdevs/panna

See manuscript at https://arxiv.org/abs/1907.03055

References

[1] J. Behler and M. Parrinello, “Generalized Neural-Network Representation of High-Dimensional  Potential-Energy Surfaces”, Phys. Rev. Lett. 98, 146401 (2007)

[2] Justin S. Smith, Olexandr Isayev, Adrian E. Roitberg, “ANI-1: An extensible neural network potential with DFT accuracy at force field computational cost”, Chemical Science (2017), DOI: 10.1039/C6SC05720A

[3] Justin S. Smith, Olexandr Isayev, Adrian E. Roitberg, “ANI-1, A data set of 20 million calculated off-equilibrium conformations for organic molecules”, Scientific Data, 4 (2017), Article number: 170193, DOI: 10.1038/sdata.2017.193


New publication is out: “Atomistic insight into the kinetic pathways for Watson–Crick to Hoogsteen transitions in DNA”

Title: Atomistic insight into the kinetic pathways for Watson-Crick to Hoogsteen transitions in DNA

Authors: Vreede J, Pérez de Alba Ortíz A, Bolhuis PG, and Swenson DWH

Nucleic Acids Research 2019, Vol. 47, No. 21, 11069–11076, DOI: 10.1093/nar/gkz837 (open access)

Synopsis

DNA predominantly contains Watson–Crick (WC) base pairs, but a non-negligible fraction of base pairs are in the Hoogsteen (HG) hydrogen bonding motif at any time. In the HG motif, the purine is “upside down” compared to the WC motif. Two classes of mechanism have been proposed for the transition between these motifs: one where the base pair stays inside the confines of the helical backbone, and one where one base flips outside of the helical backbone before returning to the “upside down” HG conformation. The transitions between WC and HG may play a role in recognition and replication, but are difficult to investigate because they occur quickly yet only rarely. To gain insight into the mechanisms of this process, researchers performed transition path sampling simulations on a model nucleotide sequence in which an adenine-thymine base pair changes from WC to HG, and found that the outside transition was strongly preferred. Simulated rates and free energy differences agree with experiments, and the simulations provide highly detailed insights into the mechanisms of this process.


pyscal: A python module for structural analysis of atomic environments

Description

pyscal is a python module for the calculation of local atomic structural environments, including Steinhardt’s bond orientational order parameters [1], during post-processing of atomistic simulation data. The core functionality of pyscal is written in C++ with python wrappers using pybind11, which allows for fast calculations and easy extension in python.

Practical Applications

Steinhardt’s order parameters are widely used for the identification of crystal structures [3]. They are also used to distinguish whether an atom is in a solid or liquid environment [4]. pyscal is inspired by the BondOrderAnalysis code, but has since incorporated many additional features and modifications. The pyscal module includes the following functionalities (a short usage sketch follows the list):

  • calculation of Steinhardt’s order parameters and their averaged version [2].
  • links with the Voro++ code, for the calculation of Steinhardt parameters weighted using the face areas of Voronoi polyhedra [3].
  • classification of atoms as solid or liquid [4].
  • clustering of particles based on a user defined property.
  • methods for calculating radial distribution functions, Voronoi volumes of particles, number of vertices and face area of Voronoi polyhedra, and coordination numbers.
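
As referenced above, the sketch below shows minimal usage based on pyscal’s documented System interface; exact keyword names, defaults and the input file are placeholders here and may differ between pyscal versions, so treat it as indicative rather than definitive.

    import pyscal.core as pc

    # Indicative sketch based on pyscal's documented interface.
    sys = pc.System()
    sys.read_inputfile("conf.dump")            # e.g. a LAMMPS dump snapshot

    # Neighbours within a fixed cutoff; a 'voronoi' method can instead weight
    # contributions by Voronoi face areas.
    sys.find_neighbors(method="cutoff", cutoff=3.6)

    # Steinhardt parameters q4 and q6, including their averaged version
    sys.calculate_q([4, 6], averaged=True)
    q6 = [atom.get_q(6, averaged=True) for atom in sys.atoms]

    # Classify atoms as solid or liquid and cluster the solid atoms
    sys.find_solids(bonds=6, threshold=0.5, avgthreshold=0.6, cluster=True)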

Background information

See the application documentation for full details. A paper about pyscal is also available in Ref. [5].

The utilisation of Dask within the project came about as a result of the E-CAM High Throughput Computing ESDW held in Turin in 2018 and 2019.

The software module was developed by Sarath Menon, Grisell Díaz Leines and Jutta Rogal, and is under a GNU General Public License v3.0.

References

[1] Steinhardt, P. J., Nelson, D. R., & Ronchetti, M. (1983). Physical Review B, 28.

[2] Lechner, W., & Dellago, C. (2008). The Journal of Chemical Physics, 129.

[3] Mickel, W., Kapfer, S. C., Schröder-Turk, G. E., & Mecke, K. (2013). The Journal of Chemical Physics, 138.

[4] Auer, S., & Frenkel, D. (2005). Advances in Polymer Science, 173.

[5] Menon, S., Díaz Leines, G., & Rogal, J. (2019). pyscal: A python module for structural analysis of atomic environments. Journal of Open Source Software, 4(43), 1824.


E-CAM Case Study: The development of the GC-AdResS scheme – from smooth coupling to a direct interface (abrupt)

Dr. Christian Krekeler, Freie Universität Berlin

Abstract

GC-AdResS is a technique that speeds up computations without loss of accuracy for key system properties by dividing the simulation box into two or more regions with different levels of resolution: for instance, a high-resolution region where the molecules of the system are treated in atomistic detail, other regions where molecules are treated at a coarse-grained level, and transition regions where a weighted average of the two resolutions is used. The goal of the E-CAM GC-AdResS pilot project was to eliminate the need for a transition region, so as to significantly improve performance and to allow much greater flexibility. For example, the system can then consist of a low-resolution particle reservoir (ranging in detail from coarse-grained to ideal-gas particles) coupled directly to a high-resolution atomistic region, without the transition region that was hitherto required. The only requirements are that the two regions can exchange particles and that a corresponding “thermodynamic” force is computed self-consistently, which turns out to be very simple to implement.
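
To give a flavour of why the thermodynamic force is simple to implement, the sketch below shows the generic iterative update used in AdResS-type schemes, where the force along the coupling direction is refined against deviations of the measured density profile from the reference density. The notation, prefactor and profile used here are assumptions for illustration and are not taken from the case study itself.

    import numpy as np

    # Schematic sketch (assumed notation, not code from the case study): the
    # thermodynamic force along the coupling direction x is refined iteratively
    # against deviations of the measured density profile rho(x) from rho0.
    def update_thermodynamic_force(f_th, rho, x, rho0, prefactor=0.1):
        """One refinement step: f_th <- f_th - (prefactor / rho0) * d rho / dx."""
        return f_th - (prefactor / rho0) * np.gradient(rho, x)

    # Toy usage: one update of an initially zero force from a perturbed profile.
    # In a real simulation, rho(x) would be re-measured from MD after each
    # update until the profile is flat across the interface.
    x = np.linspace(0.0, 10.0, 101)
    rho = 1.0 + 0.05 * np.sin(2.0 * np.pi * x / 10.0)
    f_th = update_thermodynamic_force(np.zeros_like(x), rho, x, rho0=1.0)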

Continue reading…

PRACE/E-CAM successful collaboration produces task scheduling library for optimising time-scale molecular dynamics simulations

Challenge

E-CAM is interested in the challenge of bridging timescales. To study molecular dynamics with atomistic detail, time steps on the order of a femtosecond must be used. Many problems in biological chemistry, materials science, and other fields involve events that only occur spontaneously after a millisecond or longer (for example, biomolecular conformational changes or nucleation processes). That means that around 10¹² time steps would be needed to see a single millisecond-scale event. This is the problem of “rare events” in theoretical and computational chemistry. Modern supercomputers are beginning to make it possible to obtain trajectories long enough to observe some of these processes, but to fully characterize a transition with proper statistics, many examples are needed. In order to obtain many examples, the same application must be run thousands of times with varying inputs. To manage this kind of computation, a task scheduling library is needed.

Solution and benefits

The solution was the development of a Python library in collaboration with PRACE. The library builds on top of the scalable analytics framework Dask and enables it to resiliently manage multi-node and multi-architecture environments. This offers exciting possibilities in the areas of interactive supercomputing and burst supercomputing. A white paper focused on the library was written in collaboration with PRACE and is available here.

The main elements of the scheduling library are task definition, task scheduling (handled in Python) and task execution (facilitated by the MPI layer). While HTC workloads have traditionally been looked down upon in the HPC space, the scientific use case for extreme-scale resources exists, and efficient libraries that implement a coordinated approach to such workloads are becoming increasingly important in the HPC space. The 5 Petaflop booster technology of JURECA is an interesting concept in this respect, since its approach of offloading heavy computation marries well with the concept outlined here.
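
The task-farming pattern described above can be sketched with plain dask.distributed as follows. This is a generic illustration of the “thousands of runs with varying inputs” use case, not the API of the PRACE/E-CAM library itself, and the trajectory function is a stand-in for an expensive simulation task.

    from dask.distributed import Client, LocalCluster

    # Generic task-farming sketch with dask.distributed (not the PRACE/E-CAM
    # library's own API): many independent tasks are defined in Python,
    # scheduled by Dask, and executed in parallel by the workers.
    def run_trajectory(seed):
        """Stand-in for one expensive MD / rare-event task (hypothetical)."""
        import random
        random.seed(seed)
        return sum(random.random() for _ in range(10_000))

    if __name__ == "__main__":
        cluster = LocalCluster(n_workers=4)   # on HPC: an MPI- or batch-backed cluster
        client = Client(cluster)
        futures = client.map(run_trajectory, range(1000))   # varying inputs
        results = client.gather(futures)
        print(len(results), "trajectories completed")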

Reference

Alan O’Cais, David Swenson, Mariusz Uchronski, & Adam Wlodarczyk. (2019, August 14). Task Scheduling Library for Optimising Time-Scale Molecular Dynamics Simulations. Zenodo. http://doi.org/10.5281/zenodo.3527643
