+--------------------------------------------------------------------+
|                                                                    |
|                       NAMD 2.4 Release Notes                       |
|                                                                    |
+--------------------------------------------------------------------+

This file includes directions for running and compiling NAMD,
an explanation of the automated usage reporting mechanism,
advice for dealing with endian issues when switching platforms,
a list of known deficiencies, and advice for reporting bugs.

----------------------------------------------------------------------

New Features

--- Improved Parallel Scaling with Particle Mesh Ewald ---

Full electrostatics calculations now transmit less data and take
better advantage of large clusters of multiprocessors to allow
efficient use of several hundred processors for large systems.

--- Locally Enhanced Sampling ---

Multiple images of a subset of the system with a reduced nonbonded
potential can be used to increase sampling and transition rates.
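
As a rough sketch, LES might be enabled with simulation config file
options like the following (option names follow the NAMD User's Guide
for this feature; the PDB file name and column are hypothetical):

  # locally enhanced sampling (sketch; see the User's Guide)
  les on
  lesFactor 10
  lesFile marked.pdb
  lesColumn B

Atoms flagged in the given PDB column are duplicated lesFactor times.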

--- Alchemical Free Energy Perturbation ---

A dual topology method allows NAMD to calculate free energy changes
resulting from chemical mutations, the removal of ligands, etc.
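
A minimal sketch of the corresponding config file options (names as
described in the User's Guide for this feature; file names and lambda
values are hypothetical):

  # alchemical free energy perturbation (sketch)
  fep on
  fepFile fepatoms.pdb
  fepCol B
  lambda 0.0
  lambda2 0.1
  fepOutFile run.fepout

The perturbed atoms are flagged in the given PDB column, and the free
energy difference is computed between lambda and lambda2.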

--- GROMACS Input File Compatibility ---

NAMD can load GROMACS ASCII topology (.top) and coordinate (.gro)
files, allowing most GROMACS simulations to be run in NAMD.  There
are, however, several performance-enabling features of the GROMACS
force field which NAMD does not (yet) take advantage of.
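
For example, a config file might load GROMACS files as follows (the
option names are those documented for this feature; the file names
are hypothetical):

  # load GROMACS topology and coordinates (sketch)
  gromacs on
  grotopfile system.top
  grocoorfile system.gro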

----------------------------------------------------------------------

Running NAMD

-- Individual Workstations --

Individual workstations use the same version of NAMD as workstation
networks, but running NAMD is much easier.  You may launch any number
of namd2 processes on the local machine (for best performance launch
one process per processor) using the "++local" option via:

  charmrun namd2 ++local +p<procs> <configfile>

There is no longer any need to be able to "rsh localhost" or to
create a nodelist file containing the single host localhost.

Intel and Alpha processors produce binary files (restart and DCD
files) which must be "byte-swapped" to be read on other platforms.
NAMD and VMD now handle this conversion automatically for most files.
See the section "Endian Issues" below for details.

-- Individual Windows Workstations --

NAMD may be run on a single Windows workstation via the command:

  charmrun namd2 ++local +p<procs> <configfile>

For best performance, <procs> should be the number of processors in
your machine, and defaults to one if the "+p" option is omitted.
However, the "++local" option is required unless charmd is running
and a nodelist file (containing only localhost) is present.

See "Windows Workstation Networks" below to run on multiple machines.

-- Workstation Networks --

Workstation networks require two files, the namd2 executable and the
charmrun program.  The charmrun program starts namd2 on the desired
hosts, and handles console I/O for the node programs.

To specify what machines namd2 will run on, the user creates a file
called "nodelist".  Below is an example nodelist file:

group main
 host brutus
 host romeo

The "group main" line defines the default machine list.  Hosts brutus
and romeo are the two machines on which to run the simulation.  Note
that charmrun may run on one of those machines, or charmrun may run
on a third machine.

The "rsh" command ("remsh" on HPUX) is used to start namd2 on each node
specified in the nodelist file.  If NAMD fails without printing any
output, check to make sure that "rsh" works on your machine, by seeing
if "rsh hostname ls" works for each host in the nodelist.  If you want
or need to use "ssh" instead, then add "setenv CONV_RSH ssh" to your
login or batch script and try "ssh hostname ls" to each host first to
ensure that the machine is in your .ssh/known_hosts file.  If you are
unable to use rsh or ssh, then add "setenv CONV_DAEMON" and run charmd
(or charmd_faceless, which produces a log file) on every node.
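
For example, to switch to ssh under a csh-family shell, using the
host names from the example nodelist above:

  setenv CONV_RSH ssh
  ssh brutus ls
  ssh romeo ls
  charmrun +p2 namd2 <configfile>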

Some automounters use a temporary mount directory which is prepended
to the path returned by the pwd command.  To run on multiple machines
you must add a "++pathfix" option to your nodelist file.  For example:

group main ++pathfix /tmp_mnt /
 host alpha1
 host alpha2

A number of parameters may be passed to charmrun.  The most important
is the "+pX" option, where X specifies the number of processors.  If X
is less than the number of hosts in the nodelist, machines are
selected from top to bottom.  If X is greater than the number of
hosts, charmrun will start multiple processes on the machines,
starting from the top.  To run multiple processes on members of an SMP
workstation cluster, you may either use the +p option to cycle through
the list the required number of times, or list each machine several
times, once for each processor.  The default is +p1.
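
For example, to use both processors of two dual-processor machines,
list each host twice in the nodelist:

group main
 host brutus
 host brutus
 host romeo
 host romeo

and run with "+p4":

  charmrun +p4 namd2 <configfile>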

You may specify the nodelist file with the "++nodelist" option and the
group (which defaults to "main") with the "++nodegroup" option.  If
you do not use "++nodelist" charmrun will first look for "nodelist"
in your current directory and then ".nodelist" in your home directory.
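
For example, assuming a nodelist file named "cluster.nodelist" that
contains a group named "fast" (both names are hypothetical):

  charmrun +p4 ++nodelist cluster.nodelist ++nodegroup fast namd2 <configfile>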

If you always want to run on the machine you are logged in to you may
use "localhost" in place of the hostname in your nodelist file, but
only if there are no other machines.  You will not need "++pathfix".
For example, ".nodelist" in your home directory could read:

group main
 host localhost

It is simpler in many cases to instead use the "++local" option as
described under "Individual Workstations" above, which eliminates the
need for the nodelist file and rsh entirely.

Once the nodelist file is set up, and you have your configuration file
prepared, run NAMD as follows:

  charmrun +p<procs> namd2 <configfile>

-- Windows Workstation Networks --

Windows is the same as other workstation networks described above,
except that rsh is not available on this platform.  Instead, you must
run the provided daemon (charmd.exe) on every node listed in the
nodelist file.  Using charmd_faceless rather than charmd will eliminate
consoles for the daemon and node processes.  The "++local" option is
also available under Windows, eliminating the need for charmd and
nodelist when running NAMD only on the local machine.

-- Scyld Beowulf Clusters --

Scyld Beowulf clusters replace rsh and other methods of launching jobs
with a distributed process space.  There is no need for a nodelist file
or any special daemons.  In order to allow access to files, the first
NAMD process must be on the master node of the cluster.  Launch jobs
from the master node of the cluster via the command:

  charmrun namd2 +p<procs> <configfile>

For best performance, run a single NAMD job on all available nodes and
never run multiple NAMD jobs at the same time.  You may safely suspend
and resume a running NAMD job on a Scyld Beowulf using control-Z or
"kill -STOP" and "kill -CONT" on the charmrun process.

-- Compaq AlphaServer SC --

Although NAMD uses MPI and the Elan library on this platform, parallel
jobs are run using the "prun" command.  (The charmrun provided with the
standard MPI version does not work on this platform.)  The syntax for
this command is:

  prun -n <procs> namd2 <configfile>

There are additional options.  Consult your local documentation.

-- IBM RS/6000 SP --

Run NAMD as you would any POE program.  The options and environment
variables for poe are various and arcane, so you should consult your
local documentation for recommended settings.  As an example, to run
on Blue Horizon one would specify:

  poe namd2 <configfile> -nodes <procs/8> -tasks_per_node 8

-- Cray T3E --

The T3E version has been tested on the Pittsburgh Supercomputer Center
T3E.  To run on <procs> processors, use the mpprun command:

  mpprun -n <procs> namd2 <configfile>

-- Origin 2000 --

For small numbers of processors (1-8) use the non-MPI version of namd2.
If your stack size limit is set to unlimited (as the DQS queuing system
may do), you will need to reset it with "limit stacksize 64M" in order
to run on multiple processors.  To run on <procs> processors, call the
binary directly with the +p option:

  namd2 +p<procs> <configfile>

For better performance on larger numbers of processors we recommend
that you use the MPI version of NAMD.  To run this version, you must
have MPI installed.  Furthermore, you must set two environment
variables to tell MPI how to allocate certain internal buffers.  Put
the following commands in your .cshrc or .profile file, or in your
job file if you are running under a queuing system:

  setenv MPI_REQUEST_MAX 10240
  setenv MPI_TYPE_MAX 10240

Then run NAMD with the following command:

  mpirun -np <procs> namd2 <configfile>

----------------------------------------------------------------------

Automated Usage Reporting

NAMD may print information similar to the following at startup:

 Info: Sending usage information to NAMD developers via UDP.
 Info: Sent data is: 1 NAMD  2.2  Solaris-Sparc  1    verdun  jim

The information you see listed is sent via a single UDP packet and
logged for future analysis.  The UDP protocol does not retransmit
lost packets and does not attempt to establish a connection.  We will
use this information only for statistical analysis to determine how
frequently NAMD is actually being used.  Your user and machine name
are used only for counting unique users; they will not be correlated
with your download registration data.

We collect this information in order to better justify continued
development of NAMD to the NIH, our primary source of funding.  We
also use collected information to direct NAMD development efforts.
If you are opposed to this reporting, you are welcome to recompile
NAMD and disable it.  Usage reporting is disabled in the release
Windows binaries, but will be added in a later version.

----------------------------------------------------------------------

Endian Issues

Some architectures write binary data (integer or floating point) with
the most significant byte first; others put the most significant byte
last.  This doesn't affect text files, but it does matter when a binary
data file that was written on a "big-endian" machine (Sun, HP, SGI) is
read on a "little-endian" machine (Intel, Alpha) or vice versa.

NAMD generates DCD trajectory files and binary coordinate and velocity
files which are "endian-sensitive".  While VMD can now read DCD files
from any machine and NAMD reads most other-endian binary restart files,
most analysis programs (like CHARMM or X-PLOR) require same-endian DCD
files.  We provide the programs flipdcd and flipbinpdb for switching the
endianness of DCD and binary restart files, respectively.  These programs
use mmap to alter the file in-place and may therefore appear to consume
an amount of memory equal to the size of the file.
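
Both programs take the names of the files to convert as arguments and
rewrite them in place; for example, for output files produced with the
(hypothetical) output prefix "output":

  flipdcd output.dcd
  flipbinpdb output.coor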

----------------------------------------------------------------------

Problems?  Found a bug?

For problems or questions, send email to namd@ks.uiuc.edu.  If you
think you have found a bug, please follow the steps outlined below.
Your feedback will help us improve NAMD.

 1. Download and test the latest version of NAMD. 
 2. Please check the FAQ, known bugs and problem reports. 
 3. Gather, in a single directory, all input and config files needed
    to reproduce your problem. 
 4. Run once, redirecting output to a file. 
 5. Tar everything up (but not the namd2 or charmrun binaries) and
    compress it (see the example after this list).
 6. Email namd@ks.uiuc.edu with: 
    - A synopsis of the problem as the subject. 
    - The NAMD version number the problem occurs with. 
    - The platform and number of CPUs the problem occurs with.
    - A description of the problematic behavior and any error messages. 
    - If the problem is consistent or random. 
    - The compressed tar file as an attachment (or a URL if it is too big). 
 7. We'll get back to you by the next business day. 
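
For example, steps 3 through 5 might look like the following under a
csh-family shell (all directory and file names are hypothetical):

  mkdir bugreport
  cp mysim.namd mysim.psf mysim.pdb par_all22_prot.inp bugreport
  cd bugreport
  namd2 mysim.namd >& mysim.log
  cd ..
  tar cf bugreport.tar bugreport
  gzip bugreport.tar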

----------------------------------------------------------------------

Compiling NAMD

Building a complete NAMD binary from source code requires working
C and C++ compilers, Charm++/Converse, TCL, and FFTW.  NAMD will still
compile without TCL and FFTW but certain features will be disabled.
Fortunately, all required packages are freely available on the net.
TCL and FFTW are not used by default, but must be enabled when you run
the NAMD config script and some files in arch may need editing.

As an example, here is the build sequence for Linux workstations:

Download and build Tcl from http://www.scriptics.com/products/tcltk/
and FFTW from http://www.fftw.org/ (use --disable-type-prefix) if
not already available on your machine.  Both are free.

Unpack source code and enter directory:
  tar xzf NAMD_2.4_Source.tar.gz
  cd NAMD_2.4_Source

Download and build Charm++/Converse library:
  setenv CVSROOT ":pserver:checkout@charm.cs.uiuc.edu:/cvsroot"
  cvs login   (PRESS RETURN WHEN ASKED FOR A PASSWORD)
  cvs co -D 2002-02-20 -P charm
  cd charm
  ./build charm++ net-linux
  cd ..

Edit various configuration files:
  vi Make.charm  (set CHARMBASE to .rootdir/charm or full path to charm)
  vi arch/Linux-i686.tcl   (fix library version and path to Tcl files)
  vi arch/Linux-i686.fftw  (fix library name and path to files)

Set up build directory and compile:
  ./config tcl fftw Linux-i686-g++
  cd Linux-i686-g++ 
  make

Quick tests using one and two processes:
  ./charmrun ++local +p1 ./namd2
  ./charmrun ++local +p2 ./namd2
  ./charmrun ++local +p1 ./namd2 src/alanin
  ./charmrun ++local +p2 ./namd2 src/alanin

That's it.  A more complete explanation of the build process follows.

You must first download and install NAMD's communications library,
Charm++/Converse, from http://charm.cs.uiuc.edu/.  You will have
to obtain the February 20, 2002 beta version via public CVS.  Follow the
directions at http://charm.cs.uiuc.edu/beta.html for connecting but
use the command "cvs co -D 2002-02-20 -P charm" to get the files.

This table shows the version needed for each NAMD architecture.  It is
important that the same C++ compiler is used for both Charm and NAMD.

    NAMD Arch Name           Charm Version            Comments
   ----------------------   ---------------------    ------------------
    AIX-RS6000-xlC           net-rs6k                 IBM compiler
  * ASCI-Red                 paragon-red              ASCI Red (Sandia)
  * Cygwin-i686-MSVC         net-win32                Microsoft compiler
  * Cygwin-i686              net-cygwin               GNU compiler
    Darwin-PPC-c++           net-ppc-darwin           Mac OS X
    HPUX-aCC                 net-hp-acc               ANSI HP compiler
  * HPUX-CC                  net-hp-cc                Old HP compiler
    HPUX-g++                 net-hp                   GNU compiler
  * HPUX-KCC                 net-hp-kcc               KAI C++ compiler
    HPUX-PA2.0-aCC           net-hp-acc               Newer HP CPUs
    IBM-SP                   mpi-sp                   IBM xlC compiler
    Linux-Alpha-cxx          net-linux-axp-cxx        Compaq compiler
    Linux-Alpha-g++          net-linux-axp            GNU compiler
    Linux-Alpha-MPI-cxx      mpi-linux-axp-cxx        Compaq compiler
    Linux-Alpha-Scyld-cxx    net-linux-axp-scyld-cxx  Scyld Beowulf
    Linux-Alpha-ScyldCD-cxx  net-linux-axp-scyld-cxx  Scyld distribution
    Linux-i686-g++           net-linux                GNU compiler
    Linux-i686-KCC           net-linux-kcc            KAI C++ compiler
    Linux-i686-MPI-pgCC      mpi-linux-pgcc           NCSA Linux cluster
    Linux-i686-MPI           mpi-linux                GNU compiler
    Linux-i686-Scyld         net-linux-scyld          Scyld Beowulf
    Linux-i686-ScyldCD       net-linux-scyld          Scyld distribution
    Origin2000-CC            origin2000               SGI compiler
    Origin2000-MPI-CC        mpi-origin               SGI compiler
    Solaris-i686-g++         net-sol-x86              GNU compiler
    Solaris-Sparc-CC         net-sol-cc               Sun compiler
    Solaris-Sparc-g++        net-sol                  GNU compiler
    Solaris-Sparc-MPI        mpi-sol-cc               Sun compiler
    T3E-CC                   t3e                      Cray compiler
    Tru64-Alpha-CC           net-axp-cc               Compaq compiler
  * Tru64-Alpha-g++          net-axp                  GNU compiler
    Tru64-Alpha-MPI-CC       mpi-axp-cc               Alphaserver SC
  * Win32-i686-MSVC          net-win32                MSVC and nmake

  * These architectures may fail to build or be unstable.  Building
    on Win32 is not simple.  Cygwin is needed to build Charm on any
    Windows platform, but Win32-i686-MSVC may be built with only
    MSVC and nmake from the DOS command line.

Once you have obtained the Charm++/Converse source code, you can
build the libraries needed by NAMD for each architecture with the
build command in the charm directory:

  ./build charm++ CHARMVERSION

where CHARMVERSION is the appropriate version listed above, separated
into the comm-OS-cpu base (such as "net-linux-axp") and any additional
options that are also needed (such as "scyld" and "cxx").  So, for
Scyld on Alpha Linux, use "./build charm++ net-linux-axp scyld cxx".
The README distributed with Charm contains a complete explanation.

You only actually need the bin, include, and lib subdirectories, so
you can copy those elsewhere and delete the whole build directory.

Edit Make.charm to point to the directory where Charm++ is installed.

Now you can type "./config ARCH" where ARCH is an architecture
name listed above (or as a file suffix in the arch directory).  This
will create a build directory for that architecture.  If you wish to
create this directory elsewhere use "./config DIR/ARCH", replacing
DIR with the location the build directory should be created.
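
For example, to place the build directory on a scratch disk (the path
is hypothetical):

  ./config tcl fftw /scratch/namd/Linux-i686-g++
  cd /scratch/namd/Linux-i686-g++
  make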

Now "cd" to your build directory and type make.  The namd2 binary and
a number of utilities will be created.

If you have trouble building NAMD, your compiler may be different from
ours.  The architecture-specific makefiles in the arch directory use
several options to elicit similar behavior on all platforms.  Your
compiler may conform to an earlier C++ specification than NAMD uses.
Your compiler may also enforce a later C++ rule than NAMD follows.
You may ignore repeated warnings about new and delete matching.

You may edit Makefile to enable DPMTA and DPME.  DPMTA is required
if you want to use the fast multipole algorithm for full
electrostatics.  DPME is only useful for comparison with the new
particle mesh Ewald code.  You must specify "useDPME on" in the
simulation config file to switch to the old DPME code.  DPMTA and
DPME are *not* included in the release binaries.  In order to build
NAMD with DPMTA you will also need to compile the PVM compatibility
library by running "./build pvm CHARMVERSION" in the charm directory.

You may enable linking with Tcl by editing arch/BASEARCH.tcl (where
BASEARCH is ARCH without the -CC, -g++, or -KCC compiler extension)
and adding "tcl" to the config command line: "./config tcl ARCH"
Tcl is required for Tcl global forces (tclForces) and for Tcl
scripting.  Tcl is available from www.scriptics.com.  If the Tcl
preinstalled on your system has only shared libraries, you may want
to build Tcl yourself.  Tcl is included in the release binaries.

You may enable linking with FFTW by editing arch/BASEARCH.fftw (where
BASEARCH is ARCH without the -CC, -g++, or -KCC compiler extension)
and adding "fftw" to the config command line: "./config tcl fftw ARCH"
FFTW is required in order to use the newer particle mesh Ewald.  FFTW
is available from www.fftw.org.  If you download the RPM or configure
FFTW with --enable-type-prefix you will need to make symbolic links to
the double precision versions of the libraries and header files.  FFTW
is now included in the release binaries under a special license.
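
For example, if a type-prefixed FFTW is installed under /usr/local
(the paths here are illustrative), symbolic links like the following
provide the unprefixed double-precision names NAMD expects:

  cd /usr/local/lib
  ln -s libdfftw.a libfftw.a
  ln -s libdrfftw.a librfftw.a
  cd /usr/local/include
  ln -s dfftw.h fftw.h
  ln -s drfftw.h rfftw.h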

----------------------------------------------------------------------