+--------------------------------------------------------------------+
|                                                                    |
|                      NAMD 2.7b1 Release Notes                      |
|                                                                    |
+--------------------------------------------------------------------+

This file contains the following sections:

  - Problems?  Found a bug?
  - Running NAMD
  - Compiling NAMD
  - Memory Usage
  - Improving Parallel Scaling
  - Endian Issues

----------------------------------------------------------------------

Problems?  Found a bug?

 1. Download and test the latest version of NAMD. 

 2. Please check NamdWiki, NAMD-L, and the rest of the NAMD web site
    resources at http://www.ks.uiuc.edu/Research/namd/.

 3. Gather, in a single directory, all input and config files needed
    to reproduce your problem. 

 4. Run once, redirecting output to a file (if the problem occurs
    randomly do a short run and include additional log files showing
    the error). 

 5. Tar everything up (but not the namd2 or charmrun binaries) and
    compress it, e.g., "tar cf mydir.tar mydir; gzip mydir.tar".
    Please do not attach your files individually to your email message
    as this is error prone and tedious to extract.  The only exception
    may be the log of your run showing any error messages.

 6. For problems concerning compiling or using NAMD please consider
    subscribing to NAMD-L and posting your question there, and
    summarizing the resolution of your problem on NamdWiki so that
    others may benefit from your experience.  Please avoid sending large
    attachments to NAMD-L by posting any related files on your web site
    or on BioCoRE (http://www.ks.uiuc.edu/Research/BioCoRE/, in BioFS for
    the NAMD public project) and including the location in your message.

 7. For bug reports, mail namd@ks.uiuc.edu with: 
    - A synopsis of the problem as the subject (not "HELP" or "URGENT").
    - The NAMD version, platform, and number of CPUs on which the problem
      occurs (to be very complete just copy the first 20 lines of output).
    - A description of the problematic behavior and any error messages. 
    - If the problem is consistent or random. 
    - The URL of your compressed tar file on a web server.

 8. We'll get back to you with further questions or suggestions.  While
    we try to treat all of our users equally and respond to emails in a
    timely manner, other demands occasionally prevent us from doing so.
    Please understand that we must give our highest priority to crashes
    or incorrect results from the most recently released NAMD binaries.

----------------------------------------------------------------------

Running NAMD

NAMD runs on a variety of serial and parallel platforms.  While it is 
trivial to launch a serial program, a parallel program depends on a 
platform-specific library such as MPI to launch copies of itself on other 
nodes and to provide access to a high performance network such as Myrinet 
or InfiniBand if one is available.

For typical workstations (Windows, Linux, Mac OS X, or other Unix) with 
only ethernet networking (hopefully gigabit), NAMD uses the Charm++ native 
communications layer and the program charmrun to launch namd2 processes 
for parallel runs (either exclusively on the local machine with the 
++local option or on other hosts as specified by a nodelist file).  The 
namd2 binaries for these platforms can also be run directly (known as 
standalone mode) for single process runs.

For workstation clusters and other massively parallel machines with
special high-performance networking, NAMD uses the system-provided MPI
library (with a few exceptions), and jobs are launched with standard
system tools such as mpirun.  Since MPI libraries are very often
incompatible between versions, you will likely need to recompile NAMD
and its underlying Charm++ libraries to use these machines in parallel
(the provided non-MPI binaries should still work for serial runs).  The
provided charmrun program for these platforms is only a script that
attempts to translate charmrun options into mpirun options, but due to
the diversity of MPI libraries it often fails to work.

-- Individual Windows, Linux, Mac OS X, or Other Unix Workstations --

Individual workstations use the same version of NAMD as workstation 
networks, but running NAMD is much easier.  If your machine has only one 
processor core you can run the namd2 binary directly:

  namd2 <configfile>

For multicore workstations, Windows and Mac OS X (Intel) released binaries 
are based on "multicore" builds of Charm++ that can run multiple threads. 
These multicore builds lack a network layer, so they can only be used on a 
single machine. The Solaris (Sparc and x86_64) released binaries are based 
on "smp" builds of Charm++ that can be used with multiple threads on 
either a single machine like a multicore build, or across a network. For 
best performance use one thread per processor with the +p option:

  namd2 +p<procs> <configfile>

For other multiprocessor workstations the included charmrun program is 
needed to run multiple namd2 processes.  The ++local option is also 
required to specify that only the local machine is being used:

  charmrun namd2 ++local +p<procs> <configfile>

You may need to specify the full path to the namd2 binary.
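
For example, assuming the binaries were unpacked in your home
directory (an illustrative path; adjust it to your installation):

  charmrun ~/NAMD_2.7b1_Linux-x86_64/namd2 ++local +p4 <configfile>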

-- Linux or Other Unix Workstation Networks --

The same binaries used for individual workstations as described above 
(other than pure "multicore" builds and MPI builds) can be used with 
charmrun to run in parallel on a workstation network. The only difference 
is that you must provide a "nodelist" file listing the machines where 
namd2 processes should run, for example:

  group main
  host brutus
  host romeo

The "group main" line defines the default machine list.  Hosts brutus and 
romeo are the two machines on which to run the simulation.  Note that 
charmrun may run on one of those machines, or charmrun may run on a third 
machine.  All machines used for a simulation must be of the same type and 
have access to the same namd2 binary.

By default, the "rsh" command ("remsh" on HPUX) is used to start namd2 on 
each node specified in the nodelist file.  You can change this via the 
CONV_RSH environment variable, i.e., to use ssh instead of rsh run "setenv 
CONV_RSH ssh" or add it to your login or batch script.  You must be able 
to connect to each node via rsh/ssh without typing your password; this can 
be accomplished via a .rhosts file in your home directory, by an 
/etc/hosts.equiv file installed by your sysadmin, or by a 
.ssh/authorized_keys file in your home directory.  You should confirm that 
you can run "ssh hostname pwd" (or "rsh hostname pwd") without typing a 
password before running NAMD.  Contact your local sysadmin if you have 
difficulty setting this up.  If you are unable to use rsh or ssh, then add 
"setenv CONV_DAEMON" to your script and run charmd (or charmd_faceless, 
which produces a log file) on every node.
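
For example, to switch to ssh and confirm password-less access to the
hosts from the example nodelist above (a csh sketch; substitute your
own host names):

  setenv CONV_RSH ssh
  foreach node ( brutus romeo )
    ssh $node pwd
  end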

You should now be able to try running NAMD as:

  charmrun namd2 +p<procs> <configfile>

If this fails or just hangs, try adding the ++verbose option to see more 
details of the startup process.  You may need to specify the full path to 
the namd2 binary.  Charmrun will start the number of processes specified 
by the +p option, cycling through the hosts in the nodelist file as many 
times as necessary.  You may list multiprocessor machines multiple times 
in the nodelist file, once for each processor.
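
For example, if brutus and romeo were dual-processor machines, the
nodelist file would list each host twice:

  group main
  host brutus
  host brutus
  host romeo
  host romeo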

You may specify the nodelist file with the "++nodelist" option and the 
group (which defaults to "main") with the "++nodegroup" option.  If you do 
not use "++nodelist" charmrun will first look for "nodelist" in your 
current directory and then ".nodelist" in your home directory.
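
For example (the nodelist file name and location are illustrative):

  charmrun namd2 +p8 ++nodelist ~/namd2.nodelist <configfile>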

Some automounters use a temporary mount directory which is prepended to 
the path returned by the pwd command.  To run on multiple machines you 
must add a "++pathfix" option to your nodelist file.  For example:

  group main ++pathfix /tmp_mnt /
  host alpha1
  host alpha2

There are many other options to charmrun and for the nodelist file.
These are documented in the Charm++ Installation and Usage Manual
available at http://charm.cs.uiuc.edu/manuals/, and a list of charmrun
options can be printed by running charmrun without arguments.

If your workstation cluster is controlled by a queueing system you will 
need to build a nodelist file in your job script.  For example, if your 
queueing system provides a $HOST_FILE environment variable:

  set NODES = `cat $HOST_FILE`
  set NODELIST = $TMPDIR/namd2.nodelist
  echo group main >! $NODELIST
  foreach node ( $NODES )
    echo host $node >> $NODELIST
  end
  @ NUMPROCS = 2 * $#NODES
  charmrun namd2 +p$NUMPROCS ++nodelist $NODELIST <configfile>

Note that $NUMPROCS is twice the number of nodes in this example. This is 
the case for dual-processor machines.  For single-processor machines you 
would not multiply $#NODES by two.

Note that these example scripts and the setenv command are for the csh or 
tcsh shells.  They must be translated to work with sh or bash.
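
As a sketch, the nodelist-building example above might be written in
bash as follows (with export replacing setenv):

  NODES=(`cat $HOST_FILE`)
  NODELIST=$TMPDIR/namd2.nodelist
  echo group main > $NODELIST
  for node in "${NODES[@]}"; do
    echo host $node >> $NODELIST
  done
  NUMPROCS=$((2 * ${#NODES[@]}))
  export CONV_RSH=ssh
  charmrun namd2 +p$NUMPROCS ++nodelist $NODELIST <configfile>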

-- Windows Workstation Networks --

These are no longer supported, but NAMD has been reported to compile
on Windows HPC Server.

-- SGI Altix --

Be sure that the MPI_DSM_DISTRIBUTE environment variable is set, then use 
the Linux-Itanium-MPI-Altix version of NAMD along with the system mpirun:

  mpirun -np <procs> namd2 <configfile>
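
For example, using csh (setting the variable to any value should
suffice, but check your local documentation):

  setenv MPI_DSM_DISTRIBUTE 1
  mpirun -np 16 namd2 <configfile>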

-- IBM POWER Clusters --

Run the MPI version of NAMD as you would any POE program.  The options and 
environment variables for poe are various and arcane, so you should 
consult your local documentation for recommended settings.  As an example, 
to run on Blue Horizon one would specify:

  poe namd2 <configfile> -nodes <procs/8> -tasks_per_node 8

----------------------------------------------------------------------

Compiling NAMD

Building a complete NAMD binary from source code requires working
C and C++ compilers, Charm++/Converse, TCL, and FFTW.  NAMD will
compile without TCL or FFTW but certain features will be disabled.
Fortunately, precompiled libraries are available from
http://www.ks.uiuc.edu/Research/namd/libraries/.  You may instead
disable the use of these libraries by specifying --without-tcl
--without-fftw as options when you run the config script.  Some files
in arch may need editing
to set the path to TCL and FFTW libraries correctly.
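
For example, to set up a build without either library (the ARCH and
Charm++ architecture are those of the Linux example below):

  ./config Linux-x86_64-g++ --charm-arch net-linux-x86_64 \
      --without-tcl --without-fftw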

As an example, here is the build sequence for 64-bit Linux workstations:

Unpack NAMD and matching Charm++ source code and enter directory:
  tar xzf NAMD_2.7b1_Source.tar.gz
  cd NAMD_2.7b1_Source
  tar xf charm-6.1.tar
  cd charm-6.1

Build and test the Charm++/Converse library:
  ./build charm++ net-linux-x86_64 --no-shared -O -DCMK_OPTIMIZE=1
  cd net-linux-x86_64/tests/charm++/megatest
  make pgm
  ./charmrun ++local +p4 ./pgm    (also try running on multiple nodes)
  cd ../../../../..

Download and install TCL and FFTW libraries:
  (cd to NAMD_2.7b1_Source if you're not already there)
  wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz
  tar xzf fftw-linux-x86_64.tar.gz
  mv linux-x86_64 fftw
  wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl-linux-x86_64.tar.gz
  tar xzf tcl-linux-x86_64.tar.gz
  mv linux-x86_64 tcl

Optionally edit various configuration files:
  (not needed if charm-6.1, fftw, and tcl are in NAMD_2.7b1_Source)
  vi Make.charm  (set CHARMBASE to full path to charm)
  vi arch/Linux-x86_64.fftw     (fix library name and path to files)
  vi arch/Linux-x86_64.tcl      (fix library version and path to TCL files)

Set up build directory and compile:
  ./config Linux-x86_64-g++ --charm-arch net-linux-x86_64
  cd Linux-x86_64-g++
  make   (or gmake -j4, which should run faster)

Quick tests using one and two processes:
  (this is a 66-atom simulation so don't expect any speedup)
  ./namd2
  ./namd2 src/alanin
  ./charmrun ++local +p2 ./namd2
  ./charmrun ++local +p2 ./namd2 src/alanin

Longer test using four processes:
  wget http://www.ks.uiuc.edu/Research/namd/utilities/apoa1.tar.gz
  tar xzf apoa1.tar.gz
  ./charmrun ++local +p4 ./namd2 apoa1/apoa1.namd
  (FFT optimization will take several seconds during the first run.)

That's it.  A more complete explanation of the build process follows.
Note that you will need Cygwin to compile NAMD on Windows.

Download and unpack fftw and tcl libraries for your platform from
http://www.ks.uiuc.edu/Research/namd/libraries/.  Each tar file
contains a directory with the name of the platform.  These libraries
don't change very often, so you should find a permanent home for them.

Unpack the NAMD source code and the enclosed charm-6.1.tar archive.  This
version of Charm++ is the same one used to build the released binaries
and is more likely to work and be bug free than any other we know of.
Edit Make.charm to point at .rootdir/charm-6.1 or the full path to the
charm directory if you unpacked outside of the NAMD source directory.

Run the config script without arguments to list the available builds,
which have names like Linux-x86_64-icc.  Each build or "ARCH" is of the
form BASEARCH-compiler, where BASEARCH is the most generic name for a
platform, like Linux-x86_64.

Note that many of the options that used to require editing files can
now be set with options to the config script.  Running the config script
without arguments lists the available options as well.

Edit arch/BASEARCH.fftw and arch/BASEARCH.tcl to point to the libraries
you downloaded.  Find a line something like
"CHARMARCH = net-linux-x86_64-iccstatic" in arch/ARCH.arch to tell
what Charm++ platform you need to build.  The CHARMARCH name is of the
format comm-OS-cpu-options-compiler.  It is important that Charm++ and
NAMD be built with the same C++ compiler.  To change the CHARMARCH, just
edit the .arch file or use the --charm-arch config option.
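
For example, after building Charm++ as net-linux-x86_64 with the icc
compiler option, the corresponding line in a hypothetical
arch/Linux-x86_64-icc.arch might read:

  CHARMARCH = net-linux-x86_64-icc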

Enter the charm directory and run the build script without options
to see a list of available platforms.  Only the comm-OS-cpu part will
be listed.  Any options or compiler tags are listed separately and
must be separated by spaces on the build command line.  Run the build
command for your platform as:

  ./build charm++ comm-OS-cpu options compiler --no-shared -O -DCMK_OPTIMIZE=1

For this specific example:

  ./build charm++ net-linux tcp icc --no-shared -O -DCMK_OPTIMIZE=1

The README distributed with Charm++ contains a complete explanation.
You only actually need the bin, include, and lib subdirectories, so
you can copy those elsewhere and delete the whole charm directory,
but don't forget to edit Make.charm if you do this.
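
For example (the destination path is illustrative):

  mkdir -p /usr/local/charm-6.1
  cp -r charm-6.1/bin charm-6.1/include charm-6.1/lib /usr/local/charm-6.1
  vi Make.charm   (set CHARMBASE to /usr/local/charm-6.1)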

If you are building a non-smp, non-tcp version of net-linux with the
Intel icc compiler you will need to disable optimization for some
files to avoid crashes in the communication interrupt handler.  The
smp and tcp builds use polling instead of interrupts and therefore
are not affected.  Adding +netpoll to the namd2 command line also
avoids the bug, but this option reduces performance in many cases.
These commands recompile the necessary files without optimization:

  cd charm/net-linux-icc
  /bin/rm tmp/sockRoutines.o
  /bin/rm lib/libconv-cplus-*
  ( cd tmp; make charm++ OPTS="-O0 -DCMK_OPTIMIZE=1" )

If you're building an MPI version you will probably need to edit
compiler flags or commands in the Charm++ src/arch directory.  The
file charm/src/arch/mpi-linux/conv-mach.sh contains the definitions
that select the mpiCC compiler for mpi-linux, while other compiler
choices are defined by files in charm/src/arch/common/.

If you want to run NAMD on Myrinet one option is to build a Myrinet 
GM library network version by specifying the "gm" option as in:

  ./build charm++ net-linux gm icc --no-shared -O -DCMK_OPTIMIZE=1

You would then change "net-linux-icc" to "net-linux-gm-icc" in your
namd2/arch/Linux-x86-icc.arch file (or create a new .arch file).
Alternatively you could use a Myrinet-aware MPI library.  If you want
to use MPICH-GM you must use the "gm" option, as in "mpi-linux gm icc"
or equivalently you can edit charm/src/arch/mpi-linux/conv-mach.h to set
"#define CMK_MALLOC_USE_OS_BUILTIN 1" and the other malloc defines to
0; alternately you could disable memory registration by specifying
"--with-device=ch_gm:-disable-registration" when configuring MPICH-GM
(at least according to the Myrinet FAQ at www.myri.com).
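
The conv-mach.h edit might look like the following (the exact list of
CMK_MALLOC_USE_* defines in your Charm++ version may differ; set all
of them except the OS builtin to 0):

  #define CMK_MALLOC_USE_GNU_MALLOC    0
  #define CMK_MALLOC_USE_OS_BUILTIN    1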

There is a similar "ibverbs" option for InfiniBand networks, as well
as an "mx" option for newer Myrinet networks.  Both help performance.

Run make in charm/CHARMARCH/tests/charm++/megatest/ and run the
resulting binary "pgm" as you would run NAMD on your platform.  You
should try running on several processors if possible.  For example:

  cd net-linux-tcp-icc/tests/charm++/megatest/
  make pgm
  ./charmrun +p16 ./pgm

If any of the tests fail then you will probably have problems with
NAMD as well.  You can continue and try building NAMD if you want,
but when reporting problems please mention prominently that megatest
failed, include the megatest output, and copy the Charm++ developers
at ppl@cs.uiuc.edu on your email.

Now you can run the NAMD config script to set up a build directory:

  ./config ARCH

For this specific example:

  ./config Linux-x86-icc --charm-arch net-linux-tcp-icc

This will create a build directory Linux-x86-icc.

If you wish to create this directory elsewhere use config DIR/ARCH,
replacing DIR with the location the build directory should be created.
A symbolic link to the remote directory will be created as well.  You
can create multiple build directories for the same ARCH by adding a
suffix.  These can be combined, of course, as in:

  ./config /tmp/Linux-x86-icc.test1 --charm-arch net-linux-tcp-icc

Now cd to your build directory and type make.  The namd2 binary and
a number of utilities will be created.

If you have trouble building NAMD your compiler may be different from
ours.  The architecture-specific makefiles in the arch directory use
several options to elicit similar behavior on all platforms.  Your
compiler may conform to an earlier C++ specification than NAMD uses.
Your compiler may also enforce a later C++ rule than NAMD follows.
You may ignore repeated warnings about new and delete matching.

The NAMD Wiki at http://www.ks.uiuc.edu/Research/namd/wiki/ has entries
on building and running NAMD at various supercomputer centers (e.g.,
NamdAtTexas) and on various architectures (e.g., NamdOnMPICH).  Please
consider adding a page on your own porting effort for others to read.

----------------------------------------------------------------------

Memory Usage

NAMD has traditionally used less than 100MB of memory even for systems
of 100,000 atoms.  With the reintroduction of pairlists in NAMD 2.5,
however, memory usage for a 100,000 atom system with a 12A cutoff can
approach 300MB, and will grow with the cube of the cutoff.  This extra
memory is distributed across processors during a parallel run, but a
single workstation may run out of physical memory with a large system.

To avoid this, NAMD now provides a pairlistMinProcs config file option
that specifies the minimum number of processors that a run must use
before pairlists will be enabled (on fewer processors small local
pairlists are generated and recycled rather than being saved; the
default is "pairlistMinProcs 1").  This is a per-simulation rather than
a compile time option because memory usage is molecule-dependent.
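
For example, to keep the low-memory behavior unless a run uses at
least 8 processors (the threshold is illustrative), add the following
to the simulation config file:

  pairlistMinProcs 8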

----------------------------------------------------------------------

Improving Parallel Scaling

While NAMD is designed to be a scalable program, particularly for
simulations of 100,000 atoms or more, at some point adding additional
processors to a simulation will provide little or no extra performance.
If you are lucky enough to have access to a parallel machine you should
measure NAMD's parallel speedup for a variety of processor counts when
running your particular simulation.  The easiest and most accurate way
to do this is to look at the "Benchmark time:" lines that are printed
after 20 and 25 cycles (usually less than 500 steps).  You can monitor
performance during the entire simulation by adding "outputTiming <steps>"
to your configuration file, but be careful to look at the "wall time"
rather than "CPU time" fields on the "TIMING:" output lines produced.
For an external measure of performance, you should run simulations of
both 25 and 50 cycles (see the stepspercycle parameter) and base your
estimate on the additional time needed for the longer simulation in
order to exclude startup costs and allow for initial load balancing.
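
For example, to compare two processor counts using the apoa1 benchmark
from the compilation section (a sketch; launch namd2 however is
appropriate for your platform):

  charmrun namd2 +p4 apoa1/apoa1.namd > apoa1.p4.log
  charmrun namd2 +p8 apoa1/apoa1.namd > apoa1.p8.log
  grep "Benchmark time" apoa1.p4.log apoa1.p8.log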

We provide both standard (UDP) and new TCP based precompiled binaries
for Linux clusters.  We have observed that the TCP version is better
on our dual processor clusters with gigabit ethernet while the basic
UDP version is superior on our single processor fast ethernet cluster.
When using the UDP version with gigabit you can add the +giga option
to adjust several tuning parameters.  Additional performance may be
gained by building NAMD against an SMP version of Charm++ such as
net-linux-smp or net-linux-smp-icc.  This will use a communication
thread for each process to respond to network activity more rapidly.
For dual processor clusters we have found that running two separate 
processes per node, each with its own communication thread, is faster
than using the charmrun ++ppn option to run multiple worker threads.
However, we have observed that when running on a single hyperthreaded
processor (i.e., a newer Pentium 4) there is an additional 15% boost
from running standalone with two threads (namd2 +p2) beyond running
two processes (charmrun namd2 ++local +p2).  For a cluster of single 
processor hyperthreaded machines an SMP version should provide very
good scaling running one process per node since the communication
thread can run very efficiently on the second virtual processor.  We
are unable to ship an SMP build for Linux due to portability problems
with the Linux pthreads implementation needed by Charm++.
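
For example, to launch the UDP version over gigabit ethernet with the
+giga tuning option mentioned above:

  charmrun namd2 +p8 +giga <configfile>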

Extremely short cycle lengths (less than 10 steps) will also limit
parallel scaling, since the atom migration at the end of each cycle
sends many more messages than a normal force evaluation.  Increasing
pairlistdist from, e.g., cutoff + 1.5 to cutoff + 2.5, while also
doubling stepspercycle from 10 to 20, may increase parallel scaling,
but it is important to measure.  When increasing stepspercycle, also
try increasing pairlistspercycle by the same proportion.
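
For a 12A cutoff, and assuming the default pairlistsPerCycle of 2,
such a change might look like:

  cutoff             12.0
  pairlistdist       14.5
  stepspercycle      20
  pairlistsPerCycle  4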

NAMD should scale very well when the number of patches (multiply the
dimensions of the patch grid) is larger than or roughly the same as the
number of processors.  If this is not the case, it may be possible
to improve scaling by adding "twoAwayX yes" to the config file,
which roughly doubles the number of patches.  (Similar options
twoAwayY and twoAwayZ also exist, and may be used in combination,
but this greatly increases the number of compute objects.  twoAwayX
has the unique advantage of also improving the scalability of PME.)
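
For example, add the following line to the config file to roughly
double the number of patches:

  twoAwayX yes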

Additional performance tuning suggestions and options are described
at http://www.ks.uiuc.edu/Research/namd/wiki/?NamdPerformanceTuning

----------------------------------------------------------------------

Endian Issues

Some architectures write binary data (integer or floating point) with
the most significant byte first; others put the most significant byte
last.  This doesn't affect text files but it does matter when a binary
data file that was written on a "big-endian" machine (POWER, PowerPC) is
read on a "little-endian" machine (Intel) or vice versa.

NAMD generates DCD trajectory files and binary coordinate and velocity
files which are "endian-sensitive".  While VMD can now read DCD files
from any machine and NAMD reads most other-endian binary restart files,
many analysis programs (like CHARMM or X-PLOR) require same-endian DCD
files.  We provide the programs flipdcd and flipbinpdb for switching the
endianness of DCD and binary restart files, respectively.  These programs
use mmap to alter the file in-place and may therefore appear to consume
an amount of memory equal to the size of the file.
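
A minimal sketch of their use (file names are illustrative; each file
is converted in place):

  flipdcd mysim.dcd
  flipbinpdb mysim.restart.coor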

----------------------------------------------------------------------