GPU overview and brief history.
This page provides background on running MD simulations in Amber18
(pmemd) with GPU acceleration. If you are
using earlier versions, please see the archived Amber16
GPU pages or the
archived Amber14
GPU pages. Information about GPU acceleration in the cpptraj
or pbsa programs can be found in the chapters on those programs in
the Amber 2018 Reference Manual.
The following pages give additional information about the GPU code. Links to them remain
in the navigation bar on the left while you are visiting the GPU section of the Amber site.
Introduction
The fastest academic GPU MD simulation engine, pmemd.cuda, is written and maintained
by researchers in the Amber community. The original code owes much to the pioneering work of Scott
Legrand, at NVIDIA, and Ross Walker, now at GlaxoSmithKline; see literature
references below.
Principal current and past developers include:
- David Cerutti (Rutgers, LBSR), overseeing major code renovations, performance
enhancements, and maintenance of the general MD engine
- Taisung Lee (Rutgers, LBSR), co-author of the thermodynamic integration
and free energy feature extensions
- Daniel Mermelstein (SDSC), co-author of the thermodynamic integration and free energy feature extensions
- Charles Lin (SDSC, Silicon Therapeutics), co-author of the GPU NMR restraint code, thermodynamic integration and free energy extensions
- Perri Needham (SDSC, now OpenEye), co-author of the GPU NMR restraint code
- Delaram Ghoreishi (University of Florida), author of Nudged Elastic Band
methods in CUDA and Fortran
- Scott Legrand (NVIDIA), primary author of the original CUDA and C++
routines
- Ross Walker (SDSC, now GSK), project and QA lead, author of the first CUDA extensions for the original
pmemd Fortran program and developer of the mixed precision models
The state of the code is also buoyed by the generous support of Ke Li, Peng Wang, Duncan Poole and Mark Berger (technology engineers and
alliance managers) at NVIDIA Corporation, and Andrew Nelson, Nick Chen and Mike Chen at Exxact Corporation.
Since the advent of GPU-accelerated simulations in Amber11, the engine has taken on new features,
quality control mechanisms, and algorithms. While the inherently parallel GPU architecture
does not permit the verbose error checking and reporting that the CPU code contains, we
actively monitor user feedback and engage a set of built-in debugging functions to help us
understand any issues that arise. Hundreds of labs and companies all over the world use the
latest Amber18 GPU simulation engine.
The code supports serial as well as parallel GPU simulations, but from Pascal (2016) onward
the benefit of running a single simulation on two or more GPUs is marginal, with the
exception of REMD-based simulations. On the latest Volta and Turing architectures our
algorithms cannot scale to multiple GPUs. We therefore recommend executing independent
simulations on separate GPUs in most cases. A key design feature of the GPU code is that
the entire molecular dynamics calculation is performed on the GPU. Only one CPU core is
needed to drive a simulation, so a server holding four or eight GPUs can run one independent
simulation per card without loss of performance, provided there are at least as many free
CPU cores as GPUs in use. (Most commodity CPU chips have at least four cores.) Because GPU
performance is unaffected by CPU performance, any CPU compiler (the open-source GNU C and
Fortran compilers are adequate) delivers comparable results with Amber's premier engine,
which sets Amber apart from other molecular dynamics codes. Another benefit of this design
is that low-cost CPUs can be used; combined with the custom-designed precision models and
the bitwise reproducibility used to validate consumer cards, this gives Amber unrivaled
performance per dollar.
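Because each run needs only one CPU core and one GPU, filling a multi-GPU node is mostly a
matter of pinning each process to a different card. The sketch below is a minimal
illustration, not the only approach (a shell loop or a job scheduler works just as well); it
assumes the standard pmemd.cuda command-line flags, hypothetical per-run directories and
file names, and uses the CUDA_VISIBLE_DEVICES environment variable to expose exactly one
device to each process.

```python
#!/usr/bin/env python3
"""Launch one independent pmemd.cuda run per GPU on a multi-GPU node.

Minimal sketch: directory and file names are hypothetical placeholders;
adjust them to your own project layout.
"""
import os
import subprocess

NUM_GPUS = 4  # e.g. a node with four cards

processes = []
for gpu_id in range(NUM_GPUS):
    run_dir = f"run{gpu_id}"          # hypothetical per-run directory
    env = os.environ.copy()
    # Restrict this process to a single card; pmemd.cuda then sees only
    # that device and uses it exclusively.
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    cmd = [
        "pmemd.cuda", "-O",
        "-i", "md.in",                # MD control input
        "-p", "system.prmtop",        # topology
        "-c", "system.inpcrd",        # starting coordinates
        "-o", "md.out",
        "-r", "md.rst",
        "-x", "md.nc",
    ]
    processes.append(subprocess.Popen(cmd, cwd=run_dir, env=env))

# Wait for all simulations to finish.
for p in processes:
    p.wait()
```

Since the whole calculation stays on the card, the runs do not interfere with one another as
long as one free CPU core is available per GPU.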
The Amber16 GPU simulation engine (released in 2016) improved performance on the
contemporary Maxwell architectures and adapted the code to work on Pascal architectures.
Amber16 also added a number of important features, including:
- Support for semi-isotropic pressure scaling.
- Support for the Charmm VDW force switch.
- Enhanced NMR restraints and R^6 averaging support.
- Gaussian accelerated molecular dynamics.
- Expanded umbrella sampling support.
- Constant pH and REMD Constant pH support.
- Adaptively biased MD.
The Amber18 engine adds further performance enhancements, surpassing Amber16 on Pascal
architectures by an additional 25% to 42%. Both Amber16 and Amber18 support the Volta
architecture, and updates as of October 2018 give Amber18 support for the latest Turing
architectures (RTX 2060, 2070, 2080, and 2080 Ti). In terms of features, Amber18 adds support
for enhanced sampling and free energy computations (the textbook working equations behind TI
and FEP are sketched after this list), including:
- Thermodynamic integration (TI) by linear alchemical transformation
- Thermodynamic integration by parameter interpolation (PI-TI)
- Nudged elastic band (NEB) calculations for reaction path exploration
- Free energy perturbation (FEP) using a Zwanzig change of state process
- Replica exchange molecular dynamics (REMD)
- Constant pH molecular dynamics (cpHMD)
- 12-6-4 potentials for metal ion solvation
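As a quick orientation, TI and FEP estimate the same free energy difference between two end
states (0 and 1) in complementary ways; the standard textbook forms (not Amber-specific
notation) are:

```latex
% Thermodynamic integration: integrate the ensemble average of dU/dlambda
% over the coupling parameter lambda.
\Delta A = \int_{0}^{1}
  \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda}
  \, d\lambda

% Free energy perturbation (Zwanzig relation): exponential average of the
% end-state energy difference, sampled in state 0.
\Delta A = -k_{B} T \,
  \ln \left\langle \exp\!\left( -\frac{U_{1} - U_{0}}{k_{B} T} \right) \right\rangle_{0}
```

In practice these averages are accumulated over a series of lambda windows; see the Amber
2018 Reference Manual and the references below for the implementation details.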
Literature references
The initial Amber implementation papers, covering implicit and explicit solvents:
- Andreas W. Goetz; Mark J. Williamson; Dong Xu; Duncan Poole; Scott Le Grand;
& Ross C. Walker* "Routine microsecond molecular dynamics simulations with
AMBER - Part I: Generalized Born", J. Chem. Theory Comput., 2012, 8
(5), pp 1542-1555, DOI:
10.1021/ct200909j
- Romelia Salomon-Ferrer; Andreas W. Goetz; Duncan Poole; Scott Le Grand;
& Ross C. Walker* "Routine microsecond molecular dynamics simulations with
AMBER - Part II: Particle Mesh Ewald", J. Chem. Theory Comput., 2013,
9 (9), pp 3878-3888. DOI:
10.1021/ct400314y
- Scott Le Grand; Andreas W. Goetz; & Ross C. Walker*
"SPFP: Speed without compromise - a mixed precision model for GPU accelerated
molecular dynamics simulations.", Comput. Phys. Commun., 2013, 184,
pp 374-380, DOI:
10.1016/j.cpc.2012.09.022
More recent thermodynamic integration capabilities are described here:
- Tai-Sung Lee, Dan Mermelstein, Charles Lin, Scott LeGrand, Timothy J. Giese, Adrian Roitberg, David A. Case, Ross C. Walker* & Darrin M. York*,
"GPU-accelerated molecular dynamics and free energy methods in Amber18: performance enhancements and new features", J. Chem. Inf. Mod., 2018, in press, DOI:
10.1021/acs.jcim.8b00462
- Tai-Sung Lee, Yuan Hu, Brad Sherborne, Zhuyan Guo, & Darrin M. York*,
"Toward Fast and Accurate Binding Affinity Prediction with pmemdGTI: An Efficient
Implementation of GPU-Accelerated Thermodynamic Integration", J. Chem. Theory
Comput., 2017, 13, pp 3077–3084, DOI:
10.1021/acs.jctc.7b00102
- Daniel J. Mermelstein, Charles Lin, Gard Nelson, Rachael Kretsch, J. Andrew
McCammon, & Ross C. Walker*, "Fast, Flexible and Efficient GPU Accelerated Binding
Free Energy Calculations within the AMBER Molecular Dynamics Package", J. Comp.
Chem., 2018, DOI:
10.1002/jcc.25187