NAMD Wiki: Namd27Release


Issues with the 2.7 release of NAMD.

Please see the release notes at

For bugs fixed since the 2.7b4 release see Namd27b4Release and for all changes see

The charm-6.2.2 included with NAMD 2.7 is a pre-release (charm-6.2.2-pre3), the same used for NAMD 2.7b4.

The charmrun for net versions of NAMD 2.7 (and 2.7b3 and 2.7b4) does not tolerate being suspended with control-Z or kill -STOP. When resumed with bg, fg, or kill -CONT it will exit with this message:

Charmrun: error on request socket--
Node program terminated unexpectedly!

This should mostly affect installations where the queueing system uses STOP/CONT to implement subordinate queues. This will be fixed in a 2.7 re-release as soon as possible.
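The mechanism involved is ordinary POSIX job control: the queueing system sends SIGSTOP to suspend a job and SIGCONT to resume it, and charmrun's request socket does not survive the round trip. A minimal sketch of that signal sequence, using a `sleep` child process as a stand-in for charmrun (the process names are placeholders, not NAMD commands):

```python
# Demonstrate the STOP/CONT signal sequence that subordinate queueing
# systems apply to jobs. A 'sleep' child stands in for charmrun.
import os
import signal
import subprocess
import time

child = subprocess.Popen(["sleep", "30"])

os.kill(child.pid, signal.SIGSTOP)      # suspend, as the queue would
time.sleep(0.2)
# WUNTRACED reports the stop; the child is now in the stopped state.
_, status = os.waitpid(child.pid, os.WUNTRACED)
print("stopped:", os.WIFSTOPPED(status))

os.kill(child.pid, signal.SIGCONT)      # resume, as the queue would
time.sleep(0.2)
print("running again:", child.poll() is None)

child.kill()
child.wait()
```

A program that holds open sockets (as charmrun does) receives no notification of the suspension itself; it is the peers' timeouts during the stopped interval that typically cause the "error on request socket" failure on resume.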

The memory-optimized build of NAMD is broken in 2.7b3, 2.7b4, and 2.7. The current implementation will be replaced with full parallel I/O in 2.8b1. It was not incorporated before 2.7 final because of the risk of introducing new bugs.

On Abe and Lincoln at NCSA:


now uses the ibverbs build.


uses one process per node for increased maximum available memory, but is slightly slower since one core per node is reserved for the communication thread.


runs on GPU-accelerated Lincoln nodes. See the CUDA section of the release notes for details.

On Ranger at TACC:


uses ibverbs.


can use 1way, 2way, or 4way processes per node with 15, 7, or 3 compute threads per process, and is again progressively slower because more cores are used for communication.
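The arithmetic behind those thread counts is simply cores per process minus the one core reserved for the communication thread. A sketch, assuming Ranger's 16-core nodes (the function name is illustrative, not a NAMD option):

```python
# Compute threads per SMP process: each process reserves one core of
# its share of the node for the communication thread.

def compute_threads(cores_per_node, processes_per_node):
    """Cores allotted to one process, minus one for communication."""
    cores_per_process = cores_per_node // processes_per_node
    return cores_per_process - 1

# Ranger nodes have 16 cores:
for ways in (1, 2, 4):
    print(f"{ways}way: {compute_threads(16, ways)} compute threads")
# -> 15, 7, and 3 compute threads, matching the counts above
```

The same formula gives 11 compute threads per process on Kraken's 12-core nodes, as noted below.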

On Lonestar at TACC:




On Kraken at NICS:


still uses MPI, but the binary is now built with g++, so it should be noticeably faster.


will use 11 compute threads per process.

The memory-optimized build is broken, so for those projects you will want to stick with the builds from April. As an alternative, the SMP builds share molecular data within a node, allowing larger simulations to run on any given machine.
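The memory saving from that sharing can be sketched with a back-of-the-envelope model: in a non-SMP build every process on a node holds its own copy of the static molecular data, while an SMP build keeps one shared copy per node. The data size and core count below are hypothetical, chosen only to illustrate the effect:

```python
# Illustrative model of node memory consumed by static molecular data.
# Non-SMP: one full copy per process (one process per core).
# SMP: one shared copy per node.

def data_memory_per_node(data_gb, cores_per_node, smp):
    """GB of node memory used for molecular data under each build."""
    return data_gb if smp else data_gb * cores_per_node

# Hypothetical 2 GB structure on a 16-core node:
print(data_memory_per_node(2, 16, smp=False))  # 32 GB without sharing
print(data_memory_per_node(2, 16, smp=True))   # 2 GB with sharing
```

On a node with, say, 32 GB of RAM, the shared copy leaves most of the memory free for per-patch simulation state, which is why the SMP builds admit much larger systems.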