

GPU overview and brief history.

This page provides background on running MD simulations in Amber18 (pmemd) with GPU acceleration. If you are using earlier versions, please see the archived Amber16 GPU pages or the archived Amber14 GPU pages. Information about GPU acceleration in the cpptraj or pbsa programs can be found in the chapters on those programs in the Amber 2018 Reference Manual.

The following pages give additional information about the GPU code. These links remain in the navigation bar on the left while you visit the GPU section of the Amber site.

Introduction

The fastest academic GPU MD simulation engine, pmemd.cuda, is written and maintained by researchers in the Amber community. The original code grew out of the pioneering work of Scott Legrand, now at Amazon Web Services, and Ross Walker, now at GlaxoSmithKline; see the literature references below. Principal current and past developers include:

  • David Cerutti (Rutgers, LBSR), overseeing major code renovations, performance enhancements, and maintenance of the general MD engine
  • Taisung Lee (Rutgers, LBSR), author of the default thermodynamic integration and free energy feature extensions
  • Daniel Mermelstein, author of alternative free energy methods
  • Charles Lin (Silicon Therapeutics), author of alternative free energy methods
  • Delaram Goreishi (University of Florida), author of Nudged Elastic Band methods in CUDA and Fortran
  • Scott Legrand (Amazon), primary author of the original CUDA and C++ routines
  • Ross Walker (GSK), author of the first CUDA extensions for the original pmemd Fortran program

The state of the code is also buoyed by the generous support of technology engineers and alliance managers at NVIDIA Corporation, including Ke Li, Peng Wang, and Mark Berger.

Since the advent of GPU simulations in Amber11, the engine has taken on new features, quality control mechanisms, and algorithms. While the inherently parallel GPU architecture does not permit the verbose error checking and reporting that the CPU code contains, we actively monitor user feedback and engage a set of built-in debugging functions to help us understand any issues that arise. Hundreds of labs and companies all over the world use the latest Amber18 GPU simulation engine.

The code supports serial as well as parallel GPU simulations, but from Pascal (2016) onward the benefit of running a single simulation on two GPUs is marginal, and on the latest Volta architecture our algorithms cannot scale to multiple GPUs. We therefore recommend running independent simulations on separate GPUs in most cases. Because the entire molecular dynamics calculation is performed on the GPU, a single CPU core can drive each simulation: a server with four or eight GPUs can run one independent simulation per card without loss of performance, provided a few free CPU cores are available. (Most commodity CPU chips have at least four cores.) Because GPU performance is unaffected by CPU performance, any CPU compiler (the open-source GNU C and Fortran compilers are adequate) delivers comparable results with Amber's premier engine, which sets Amber apart from other molecular dynamics codes.
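The one-simulation-per-card pattern above can be scripted with the standard CUDA_VISIBLE_DEVICES environment variable, which restricts each process to a single GPU. The sketch below is illustrative only: the wrapper function, the run1..runN directory layout, and the input file names are assumptions, not part of Amber itself.

```shell
#!/bin/bash
# Sketch: launch one independent pmemd.cuda job per GPU, pinning each
# process to a single card via CUDA_VISIBLE_DEVICES. Directory names
# (run1, run2, ...) and input file names are placeholder assumptions.
launch_independent_runs() {
    local engine="$1"; shift      # e.g. pmemd.cuda
    local i=1
    for gpu in "$@"; do           # GPU ids, e.g. 0 1 2 3
        # Each job sees exactly one device; one free CPU core drives it.
        ( cd "run$i" && CUDA_VISIBLE_DEVICES="$gpu" "$engine" \
              -O -i mdin -p prmtop -c inpcrd -o mdout -r restrt -x mdcrd ) &
        i=$((i + 1))
    done
    wait    # block until every simulation finishes
}

# Example: launch_independent_runs pmemd.cuda 0 1 2 3
```

Note that inside each process the visible card is renumbered, so every pmemd.cuda instance simply uses "device 0" of whatever card it was pinned to.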

The Amber16 GPU simulation engine (released in 2016) improved performance on the contemporary Maxwell architectures and adapted the code to work on Pascal architectures. Amber16 also added a number of important features, including:

  • Support for semi-isotropic pressure scaling.
  • Support for the Charmm VDW force switch.
  • Enhanced NMR restraints and R^6 averaging support.
  • Gaussian accelerated molecular dynamics.
  • Expanded umbrella sampling support.
  • Constant pH and REMD Constant pH support.
  • Adaptively biased MD.

The Amber18 engine adds further performance enhancements, surpassing Amber16 on Pascal architectures by an additional 25% to 42%. Both Amber16 and Amber18 support the new Volta architecture. Amber18 also adds more features for enhanced sampling and free energy computations, including:

  • Thermodynamic integration (TI) by linear alchemical transformation
  • Thermodynamic integration by parameter interpolation (PI-TI)
  • Nudged elastic band (NEB) calculations for reaction path exploration
  • Free energy perturbation (FEP) using a Zwanzig change of state process
  • Replica exchange molecular dynamics (REMD)
  • Constant pH molecular dynamics (cpHMD)
  • 12-6-4 potentials for metal ion solvation
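As an illustration of the TI feature, a single lambda window is controlled from the &cntrl namelist of the mdin file. The flags shown below (icfe, ifsc, clambda, timask1/timask2, scmask1/scmask2) are the documented TI controls; the residue masks :LIG and :MOD and all numeric settings are placeholder assumptions for a hypothetical ligand transformation, not a recommended protocol.

```
Single TI window, alchemical transformation sketch (settings illustrative)
 &cntrl
   imin = 0, nstlim = 500000, dt = 0.001,   ! 0.5 ns of dynamics
   ntt = 3, gamma_ln = 2.0, temp0 = 300.0,  ! Langevin thermostat
   icfe = 1, ifsc = 1,                      ! TI with softcore potentials
   clambda = 0.5,                           ! mixing parameter, one window
   timask1 = ':LIG', timask2 = ':MOD',      ! end-state regions (hypothetical)
   scmask1 = ':LIG', scmask2 = ':MOD',      ! softcore atoms (hypothetical)
 /
```

In practice a free energy estimate requires a schedule of windows spanning clambda = 0 to 1, with the dV/dlambda output integrated across them.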

Literature references

The initial Amber implementation papers, covering implicit and explicit solvents:

  • Andreas W. Goetz; Mark J. Williamson; Dong Xu; Duncan Poole; Scott Le Grand; & Ross C. Walker* "Routine microsecond molecular dynamics simulations with AMBER - Part I: Generalized Born", J. Chem. Theory Comput., 2012, 8 (5), pp 1542-1555, DOI: 10.1021/ct200909j
  • Romelia Salomon-Ferrer; Andreas W. Goetz; Duncan Poole; Scott Le Grand; & Ross C. Walker* "Routine microsecond molecular dynamics simulations with AMBER - Part II: Particle Mesh Ewald", J. Chem. Theory Comput., 2013, 9 (9), pp 3878-3888, DOI: 10.1021/ct400314y
  • Scott Le Grand; Andreas W. Goetz; & Ross C. Walker* "SPFP: Speed without compromise - a mixed precision model for GPU accelerated molecular dynamics simulations", Comput. Phys. Commun., 2013, 184, pp 374-380, DOI: 10.1016/j.cpc.2012.09.022

More recent thermodynamic integration capabilities:

  • Tai-Sung Lee, Yuan Hu, Brad Sherborne, Zhuyan Guo, & Darrin M. York, "Toward Fast and Accurate Binding Affinity Prediction with pmemdGTI: An Efficient Implementation of GPU-Accelerated Thermodynamic Integration", J. Chem. Theory Comput., 2017, 13, pp 3077–3084, DOI: 10.1021/acs.jctc.7b00102
  • Daniel J. Mermelstein, Charles Lin, Gard Nelson, Rachael Kretsch, J. Andrew McCammon, & Ross C. Walker, "Fast, Flexible and Efficient GPU Accelerated Binding Free Energy Calculations within the AMBER Molecular Dynamics Package", J. Comp. Chem., 2018, DOI: 10.1002/jcc.25187


Last modified: Aug 17, 2018