NAMD Wiki: Namd28b1Release


Issues with the 2.8b1 release of NAMD.

Please see the release notes at

For bugs fixed since the 2.7 release see Namd27Release and for all changes see

There is a memory leak in position output (both trajectory and restart files). This is fixed in CVS; the workaround is to restart the simulation before it runs out of memory.

The limit on the number of VDW types in the CUDA build is eliminated as of the March 29 nightly build.

Spherical and cylindrical boundary conditions are broken, giving NaN for forces and energies. Fixed in 2.8b2.

SMP and multicore builds complain "FixedAtoms may not be enabled in a script" when attempting to turn fixed atoms off in a script. Fixed in 2.8b2.

Various I/O calls exit on interrupted-system-call errors rather than retrying. Fixed in 2.8b2.

The charm-6.3.0 included with NAMD 2.8b1 includes some patches from charm-6.3.1.

The experimental memory-optimized build option with parallel I/O is documented at NamdMemoryReduction.

On Abe and Lincoln at NCSA:


uses the ibverbs build.


uses one process per node for increased maximum available memory, but is slightly slower since one core per node is reserved for the communication thread.


runs on GPU-accelerated Lincoln nodes. See the CUDA section of the release notes for details.

On the Ember Altix UV at NCSA:


On Ranger at TACC:


uses ibverbs.


can use 1way, 2way, or 4way processes per node with 15, 7, or 3 compute threads per process respectively, and is progressively slower as more cores are reserved for communication threads.
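The per-process thread counts above follow from simple arithmetic: each process takes an equal share of the node's cores and reserves one core for its communication thread. A minimal sketch, assuming 16-core Ranger nodes and 12-core Kraken nodes (node sizes are not stated on this page):

```python
# Compute threads per process in an SMP layout: each process receives an
# equal share of the node's cores and reserves one core for its
# communication thread. Node sizes (16 for Ranger, 12 for Kraken) are
# assumptions, not taken from this page.
def compute_threads(cores_per_node, processes_per_node):
    return cores_per_node // processes_per_node - 1

# Ranger: 1way, 2way, or 4way processes per node
print([compute_threads(16, n) for n in (1, 2, 4)])  # [15, 7, 3]

# Kraken: one process per node, matching the 11 compute threads noted below
print(compute_threads(12, 1))  # 11
```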

On Lonestar at TACC:




On Kraken at NICS:


still uses MPI, but the binary is now built with g++, so it should be noticeably faster.


will use 11 compute threads per process.