NAMD Wiki: NamdAtPDC


Tutorial Materials

Recommended tutorials:

  • QwikMD - best for beginners
  • NAMD Tutorial - very long
  • Methods for calculating Potentials of Mean Force
  • One-dimensional Replica-exchange Umbrella Sampling
  • String Method with Swarms of Trajectories
  • Parameterizing Small Molecules Using Force Field Toolkit
  • Molecular Dynamics Flexible Fitting (MDFF)
  • Self-guided scaling studies on Beskow - see directions below (compiling optional)

Recommended tutorial materials, from any PDC system:


Commands for Running QwikMD Tutorial

VMD 1.9.3 should be pre-installed on the lab workstations for your use.

This tutorial runs completely on the lab workstation (or your laptop) and does not use Beskow or Tegner.

tar xzf /afs/

evince /afs/ &

Warning: QwikMD will try to use /usr/tmp, which doesn't exist on lab workstations, unless you set TMPDIR as below.

env TMPDIR=/tmp vmd

When you get to the part of the tutorial that requires additional input files, the following will untar them in /tmp:

( cd /tmp; tar xzf /afs/ )

Running pre-compiled binaries on Beskow

The following commands submit simple batch jobs.


(Note: to prepare files for running on Beskow, do not check the "Live View" option in QwikMD.)

This script takes the following arguments (the first three are required):

  • NAMD input file
  • NAMD log file
  • number of nodes (number of cores / 32)
  • processes per node (1, 2, 4, 31, or 32; defaults to 1, ignored for CUDA builds)
  • queue (defaults to "regular", other options are "test" and "low")
  • replica or other args (optional)
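Since the nodes argument is "number of cores / 32", the conversion from a desired core count to a Beskow node count can be checked with plain shell arithmetic; `cores` below is just an example value:

```shell
# Integer ceiling division: nodes needed for a given core count on
# Beskow (32 cores per node). "cores" is an example value.
cores=100
nodes=$(( (cores + 31) / 32 ))
echo "$nodes"   # prints 4
```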

For memory-optimized build use ~jphillip/NAMD_scripts/runbatch_memopt


Files need to be available on the compute nodes; substitute your initial and username below.

ln -s /cfs/klemming/nobackup/y/youruser ~/work

cd ~/work

(The following are available at

tar xzf /afs/

tar xzf /afs/

tar xzf /afs/

cp stmv/* stmv_sc14/.

The following commands will run NAMD in a variety of configurations on 4 nodes.

The "test" queue just sets the runtime to 30 minutes so the job might start earlier.

NAMD (Charm++) options in the generated batch job are:

  • +ppn 31: 31 worker threads per node
  • +pemap 1-31: worker threads on CPUs 1-31
  • +commap 0: communication thread on CPU 0 (SMP build)
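Putting those options together, the aprun line inside the generated batch script has roughly the following shape for the SMP build on 4 nodes with 1 process per node (a sketch, not the exact script; inspect the submitted script or "man aprun" for the real flags used):

```shell
# Illustrative only: one PE per node (-N 1), 4 PEs total (-n 4),
# 32 CPUs reserved per PE (-d 32), Charm++ SMP options as listed above.
aprun -n 4 -N 1 -d 32 namd2 +ppn 31 +pemap 1-31 +commap 0 apoa1/apoa1.namd
```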

To understand the sbatch and aprun options in the submitted scripts, see the sbatch documentation or "man aprun".

~jphillip/NAMD_scripts/runbatch apoa1/apoa1.namd apoa1_4_default.log 4 test

~jphillip/NAMD_scripts/runbatch apoa1/apoa1.namd apoa1_4_1.log 4 1 test

~jphillip/NAMD_scripts/runbatch apoa1/apoa1.namd apoa1_4_2.log 4 2 test

~jphillip/NAMD_scripts/runbatch apoa1/apoa1.namd apoa1_4_4.log 4 4 test

~jphillip/NAMD_scripts/runbatch apoa1/apoa1.namd apoa1_4_31.log 4 31 test

~jphillip/NAMD_scripts/runbatch apoa1/apoa1.namd apoa1_4_32.log 4 32 test

squeue -u $USER

When the jobs have finished running, check performance with "grep time: *.log". The "Benchmark time:" lines are reported after load balancing and should reflect the performance of a long run.
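To pull out just the steady-state timing fields, the "Benchmark time:" lines can be filtered with awk. The field positions below assume the usual NAMD log format "Info: Benchmark time: N CPUs X s/step Y days/ns Z MB memory" (check one of your own logs if the columns differ); a sample line stands in for a real log here:

```shell
# Create a one-line stand-in for a NAMD log, then extract the
# seconds-per-step figure (fields 6 and 7 of the benchmark line).
printf 'Info: Benchmark time: 128 CPUs 0.0421 s/step 0.487 days/ns 1024 MB memory\n' > demo.log
grep "Benchmark time:" demo.log | awk '{print $6, $7}'   # prints: 0.0421 s/step
```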

Using memory optimization for very large systems

cd ~/work/stmv_sc14

We use a compute node to avoid running the login node out of memory.

salloc --nodes=1 -t 3:00:00 -A edu17.bioexcel

The following commands generate multiple copies of the STMV structure. A recent nightly build is used due to performance improvements in the compression code.

alias psfgen='aprun -n 1 /cfs/klemming/nobackup/j/jphillip/NAMD_2.12_Linux-x86_64-multicore/psfgen'

psfgen make_stmv.pgn

psfgen make_5x2x2stmv.pgn

psfgen make_210stmv.pgn
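For orientation, a psfgen .pgn script is a short Tcl-style command file. A minimal sketch of what such a script contains is shown below; the filenames and segment name are hypothetical, and the actual make_stmv.pgn shipped with the tutorial materials differs (in particular, the multi-copy scripts replicate the structure many times):

```tcl
# Hypothetical minimal psfgen script (not the tutorial's actual file).
topology top_all27_prot_na.rtf   ;# CHARMM topology file (assumed name)
segment SV { pdb stmv.pdb }      ;# build a segment from the input PDB
coordpdb stmv.pdb SV             ;# assign coordinates to that segment
guesscoord                       ;# place any missing atoms
writepsf stmv.psf                ;# write the structure file
writepdb stmv_out.pdb            ;# write the matching coordinates
```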

The following commands use a regular NAMD binary to compress the STMV structure. Again, we use a compute node to avoid running the login node out of memory.

alias namd2='aprun -n 1 /cfs/klemming/nobackup/j/jphillip/NAMD_CVS-2017-04-10_Linux-x86_64-multicore/namd2'

namd2 compress_1stmv.namd

namd2 compress_20stmv.namd
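A compress_*.namd config is an ordinary NAMD configuration that loads the structure and asks namd2 to emit the compressed structure files used by the memory-optimized build. A hedged sketch of its shape follows (filenames are hypothetical and the tutorial's actual files differ; genCompressedPsf is the NAMD option that triggers compression):

```
# Hypothetical shape of a compress config (not the tutorial's actual file).
structure        stmv.psf
coordinates      stmv.pdb
paraTypeCharmm   on
parameters       par_all27_prot_na.prm
genCompressedPsf on
```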

The 210stmv system (224 million atoms) must be compressed on Tegner to avoid running out of memory, and takes up to 30 minutes to compress, so skip it or work in another terminal while it runs.

On tegner: module load gcc

On tegner: module load openmpi

On tegner: salloc --nodes=1 -t 1:00:00 -A edu17.bioexcel

On tegner: mpirun -n 1 /cfs/klemming/nobackup/j/jphillip/NAMD_CVS-2017-04-10_Linux-x86_64-multicore/namd2 compress_210stmv.namd

The following NAMD config files require a memopt binary. Even if memory usage isn't an issue, the memopt version greatly reduces startup time.

~jphillip/NAMD_scripts/runbatch_memopt 1stmv2fs.namd 1stmv_32_1.log 32 1 test

~jphillip/NAMD_scripts/runbatch_memopt 20stmv2fs.namd 20stmv_32_1.log 32 1 test

~jphillip/NAMD_scripts/runbatch_memopt 210stmv2fs.namd 210stmv_32_1.log 32 1 test

Compiling Charm++ and NAMD on Beskow

Libraries available from

(Patch for Cray is


tar xzf /afs/

tar xzf /afs/

ln -s tcl8.5.9-crayxe-threaded tcl-threaded

ln -s tcl8.5.9-crayxe tcl

module swap PrgEnv-cray PrgEnv-intel

module load fftw

module load rca

module load craype-hugepages8M

module list

Files need to be available on the compute nodes; substitute your initial and username below.

ln -s /cfs/klemming/nobackup/y/youruser ~/work

cd ~/work

(Normally you would download NAMD source from

tar xzf /afs/

cd NAMD_2.12_Source

tar xf charm-6.7.1.tar

cd charm-6.7.1

./build charm++ gni-crayxc persistent --no-build-shared --with-production

cd gni-crayxc-persistent/tests/charm++/megatest

make pgm

mv pgm ../../../bin/megatest

cd ../../converse/megacon

make pgm

mv pgm ../../../bin/megacon

cd ../../..

ldd bin/megatest bin/megacon

cd ..

./build charm++ gni-crayxc persistent smp --no-build-shared --with-production

cd gni-crayxc-persistent-smp/tests/charm++/megatest

make pgm

mv pgm ../../../bin/megatest

cd ../../converse/megacon

make pgm

mv pgm ../../../bin/megacon

cd ../../..

ldd bin/megatest bin/megacon

cd ..

cd ..

./config CRAY-XC-intel --with-fftw3 --charm-arch gni-crayxc-persistent

./config CRAY-XC-intel.smp --with-fftw3 --charm-arch gni-crayxc-persistent-smp

cd CRAY-XC-intel

make release

cd ../CRAY-XC-intel.smp

make release