NAMD Wiki: Namd28b3Release


Issues with the 2.8b3 release of NAMD.


Please see the release notes at http://www.ks.uiuc.edu/Research/namd/2.8b3/notes.html


For bugs fixed since the 2.8b2 release, see Namd28b2Release; for all changes, see http://www.ks.uiuc.edu/Research/namd/cvs2html/chronological.html


The experimental memory-optimized build option with parallel I/O is documented at NamdMemoryReduction.


On Lincoln at NCSA:

/u/ac/jphillip/NAMD_scripts/runbatch_2.8b3_cuda

runs on GPU-accelerated Lincoln nodes. See the CUDA section of the release notes for details.
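
For illustration only, a CUDA run on a single 8-core Lincoln node might look roughly like the lines below. This is not the contents of the runbatch script; the input file and device list are placeholders, and +idlepoll and +devices are the options described in the CUDA section of the release notes.

  # Hypothetical local launch on one 8-core Lincoln node; apoa1.namd and
  # the device numbers are placeholders.
  # +idlepoll keeps idle threads polling so GPU results are picked up promptly;
  # +devices assigns CUDA devices to the processes on the node.
  ./charmrun ++local +p8 ./namd2 +idlepoll +devices 0,1 apoa1.namd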

On the Ember Altix UV at NCSA:

/gpfs1/u/ac/jphillip/NAMD_scripts/runbatch_2.8b3

On Ranger at TACC:

/share/home/00288/tg455591/NAMD_scripts/runbatch_2.8b3

uses ibverbs.

/share/home/00288/tg455591/NAMD_scripts/runbatch_2.8b3_smp

can run 1, 2, or 4 processes per node (1way, 2way, or 4way) with 15, 7, or 3 compute threads per process, respectively, and becomes progressively slower as more processes per node are used because more cores are devoted to communication.
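
The core accounting behind those numbers, assuming Ranger's 16-core nodes and one communication core per process, is sketched below for illustration; the runbatch script handles this itself.

  # Core accounting on a 16-core Ranger node (illustration only; the
  # runbatch_2.8b3_smp script sets this up automatically). Each process
  # reserves one core for its communication thread.
  CORES_PER_NODE=16
  for PROCS_PER_NODE in 1 2 4; do
    COMPUTE_THREADS=$(( CORES_PER_NODE / PROCS_PER_NODE - 1 ))
    echo "${PROCS_PER_NODE}way: ${COMPUTE_THREADS} compute threads per process"
  done
  # Prints 15, 7, and 3 compute threads, matching the 1way/2way/4way options.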

On Lonestar at TACC:

/home1/00288/tg455591/NAMD_scripts/runbatch_2.8b3

and

/home1/00288/tg455591/NAMD_scripts/runbatch_2.8b3_smp
  

On Kraken at NICS:

/nics/b/home/jphillip/NAMD_scripts/runbatch_2.8b3

still uses MPI, but the binary is built with g++ for better performance.

/nics/b/home/jphillip/NAMD_scripts/runbatch_2.8b3_smp

will use 11 compute threads per process.
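
For illustration, assuming Kraken's 12-core Cray XT5 nodes, launches of the two builds would look roughly like the aprun lines below; the core counts and input file are placeholders, and +ppn is the Charm++ SMP option for worker threads per process.

  # Hypothetical aprun lines for 12-core Kraken nodes (placeholders throughout).
  # Non-SMP MPI build: one NAMD process per core.
  aprun -n 1200 ./namd2 run.namd
  # SMP build: one process per node (-N 1) spanning 12 cores (-d 12),
  # with 11 compute threads (+ppn 11) and one core left for communication.
  aprun -n 100 -N 1 -d 12 ./namd2 +ppn 11 run.namd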