Generalized Born Implicit Solvent

Allows accelerated simulation of biomolecules without the explicit inclusion of water molecules. Implicit solvent speeds up the effective simulation time relative to explicit solvent because, without the viscous drag of explicit water, biomolecules explore their conformational space faster.

NAMD 2.8 includes the Generalized Born model of Onufriev, Bashford and Case (GB-OBC). This implicit solvent model is easy to use: beyond the parameters already required by NAMD, users need not specify any additional simulation parameters, though many optional parameters are available. Users can choose whether or not to employ periodic boundary conditions.

Built on the Charm++ library, this implicit solvent implementation is highly scalable: it can simulate 13,000 atoms at over 30 ns/day on 512 PEs and 130,000 atoms at 15 ns/day on 2048 PEs (both with a 14 Ang cutoff and a 2 fs time step). GBIS also enables Interactive Molecular Dynamics for systems of several hundred atoms at 40 ns/day on a desktop.
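
For example, a minimal GBIS setup in the simulation configuration file might look like the following; only "GBIS on" is required, and the remaining keywords are optional, with illustrative values shown here (see the user guide for defaults):

  # Generalized Born implicit solvent (illustrative values)
  GBIS               on
  solventDielectric  78.5    ;# dielectric constant of the implicit solvent
  ionConcentration   0.3     ;# monovalent ion concentration in mol/L
  alphaCutoff        14.0    ;# cutoff for Born radius calculation (Ang)
  sasa               on      ;# nonpolar solvation term from solvent-accessible surface area
  surfaceTension     0.005   ;# kcal/mol/Ang^2, used with sasa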

Accelerated Molecular Dynamics

Accelerated MD provides a robust biasing potential that increases the escape rates from potential wells while still converging to the correct canonical distribution. Molecular systems trapped in conformational wells of the potential energy surface with high energy transition barriers may be driven by the biasing potential to sample rare events of interest more frequently, without a priori knowledge of the relevant degrees of freedom or the location of either the potential energy wells or the saddle points. This method effectively accelerates and extends the time scale of molecular dynamics simulations.
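
As an illustration, a dihedral-boost aMD run might be configured along these lines; the threshold energy accelMDE and acceleration factor accelMDalpha are system-dependent, and the values below are placeholders:

  # Accelerated MD with the boost applied to the dihedral potential
  accelMD          on
  accelMDdihe      on       ;# boost dihedral energies only
  accelMDE         1500.0   ;# threshold energy E in kcal/mol (placeholder)
  accelMDalpha     200.0    ;# acceleration factor alpha in kcal/mol (placeholder)
  accelMDOutFreq   500      ;# frequency of accelerated MD output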

MARTINI Residue-Based Coarse-Grained Force Field

The MARTINI force field for residue-based coarse-grained (RBCG) simulations of biomolecular systems is now supported. The MARTINI model maps approximately four atoms onto one RBCG bead and has been parameterized to reproduce the partitioning free energies of a large collection of molecules between polar and apolar phases.
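
A sketch of the MARTINI-specific settings, assuming the keyword names documented in the NAMD user guide; the values follow common MARTINI practice and should be checked against the force field documentation:

  # Residue-based coarse-grained (MARTINI) settings
  cosAngles         on     ;# cosine-based angle potentials used by MARTINI
  martiniSwitching  on     ;# MARTINI-style Lennard-Jones switching
  martiniDielAllow  on     ;# allow a dielectric constant other than 1.0
  dielectric        15.0   ;# screened electrostatics as in the MARTINI model
  PME               off    ;# MARTINI uses cutoff electrostatics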

Non-Uniform Grids for Grid Forces

Extends grid-steered molecular dynamics (Grid Forces) to support non-uniform grids, which allow different regions of a grid to be defined at different resolutions. Fast-changing parts of the grid can therefore be defined at a higher resolution than slow-changing parts, reducing grid size and the associated computational cost while maintaining accurate force calculation.
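
Non-uniform grids are enabled through the same Grid Forces parameters as uniform grids; a sketch, where "mygrid" is an arbitrary tag and the file names are placeholders:

  # Grid-steered MD on a (uniform or non-uniform) potential grid
  mgridforce          on
  mgridforcefile      mygrid grid_atoms.pdb  ;# PDB marking the atoms affected by the grid
  mgridforcecol       mygrid O               ;# column holding the per-atom coupling (occupancy)
  mgridforcepotfile   mygrid potential.dx    ;# potential map defining the grid
  mgridforcescale     mygrid 1.0 1.0 1.0     ;# per-dimension scaling of the applied force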

Symmetry and Domain Restraints

Additional restraints based on domain designation and symmetry relationships are now available. Domain restraints, implemented as optional Targeted MD parameters, restrain user-defined groups of atoms independently of one another. Symmetry restraints drive symmetric subunits toward an average configuration determined by overlapping the monomers through a set of predefined transformation matrices. These restraints are useful for enforcing secondary and tertiary structure when molecules are subjected to deforming forces, such as those found in the Molecular Dynamics Flexible Fitting (MDFF) method.
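
For example, symmetry restraints might be enabled as sketched below; the file names and force constant are placeholders, and domain restraints are set up separately through the Targeted MD parameters (see the user guide for the exact keywords):

  # Symmetry restraints toward the average monomer configuration
  symmetryRestraints     on
  symmetryFile           sym_atoms.pdb      ;# PDB flagging the restrained atoms
  symmetryMatrixFile     sym_matrices.txt   ;# transformation matrices relating the subunits
  symmetryk              200.0              ;# force constant (placeholder)
  symmetryFirstFullStep  1000               ;# step at which the restraint reaches full strength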

Collective Variables Improvements

Colvar calculations are now fully compatible with energy minimization, making minimization under complex restraints possible. Support for scripted sequences of MD runs, e.g., alchemical FEP calculations, is much improved. Bias and colvar energies are now reported to NAMD under the MISC title. Combinations of periodic colvar components are now allowed (although with a strong warning against improper usage). Overhead has been reduced in cases where system forces are not required (i.e., all calculations except ABF). An issue with the extended Lagrangian integrator has been fixed, resulting in increased stability. Moving harmonic restraints now work well when split across several NAMD runs via restart files. Output files containing a history of sampling and the free energy gradient estimate can now be generated for ABF calculations to monitor convergence.
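
Colvar-based restraints are enabled in the usual way and are now honored during minimization; a minimal sketch, where the colvars configuration file name is a placeholder:

  # Collective variables module with restraints defined in a separate file
  colvars        on
  colvarsConfig  restraints.in   ;# colvar and bias definitions (placeholder name)
  minimize       1000            ;# minimization now proceeds under the colvar restraints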

Force Output and Trajectory Files

Net forces on each atom can be written to PDB or binary output files using the Tcl output command, as "output withforces basename" or "output onlyforces basename". Net forces on each atom can also be written to DCD trajectory files by specifying forceDCDfreq and, optionally, forceDCDfile. In the current implementation, only those forces that are evaluated during the timestep on which a frame is written are included in that frame. This differs from the behavior of TclForces and is likely to change based on user feedback; for this reason it is strongly recommended that forceDCDfreq be a multiple of fullElectFrequency. Forces are written in kcal/mol/Angstrom.
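
A minimal example combining both output paths; the file name and basename below are placeholders:

  # Per-atom net forces written to a DCD trajectory
  forceDCDfreq   100            ;# recommended: a multiple of fullElectFrequency
  forceDCDfile   forces.dcd     ;# optional; placeholder name
  # ... or to PDB/binary files from the Tcl scripting interface:
  output withforces mysystem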

Shared-Memory Multicore and SMP Builds

The multicore (shared-memory parallelism only) and SMP (both shared-memory and distributed-memory parallelism) versions of NAMD are now fully supported on most platforms. Much of the molecular structure is shared by threads within a process, greatly reducing per-node memory requirements. The multicore version is generally faster than the network version (with multiple processes) on a single host. The SMP version, however, requires that one core per process be devoted to communication processing and is hence usually slower than a non-SMP build running one process per core. The SMP version should only be used to reduce per-core memory usage or possibly to reduce network contention on wide parallel runs.

Improved Load Balancer

The load balancer will automatically adjust the grain size of the compute objects based on measurements at the start of the simulation, reducing overhead while increasing scalability for simulations with irregular atom distributions (implicit solvent or other simulations without periodic boundaries, for example). Parallel runs on many thousands of cores may instead benefit from using a new hybrid load balancer specified by the "ldBalancer hybrid" option. This feature balances load within independent blocks of approximately 512 cores each, providing most of the benefits of centralized load balancing while distributing load-balancer memory and computation.
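
On very wide runs the hybrid balancer is selected with a single configuration option:

  # Distribute load balancing across blocks of roughly 512 cores
  ldBalancer   hybrid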

Experimental Memory-Optimized Version with Parallel I/O

NAMD binaries compiled with the --with-memopt option to the config script use compressed molecular structure files generated by normal binaries of the same NAMD version, and perform most input and output in parallel. This greatly reduces per-node memory requirements for large simulations such as the 100-million-atom NCSA Blue Waters acceptance test. I/O performance is also improved when multiple writers are used with appropriately striped files on a parallel filesystem. This feature is still considered experimental and many simulation features are not supported. Usage information is provided on the NAMD Wiki at NamdMemoryReduction.

Windows HPC Server Binaries

Released Win64-MPI binaries are available for Windows HPC Server.

Some CUDA Acceleration Limitations Lifted

CUDA version functionality is largely unchanged from NAMD 2.7, except that molecular systems with NBFIX nonbonded parameters or many cross-links are now supported. As a result of changes made to improve GPU-sharing performance, NAMD now requires devices of at least compute capability 1.1, which includes all but the earliest CUDA-capable GPUs. SMP and multicore CUDA builds are not yet supported.