+--------------------------------------------------------------------+
|                                                                    |
|                     NAMD 2.1b2 Release Notes                       |
|                                                                    |
+--------------------------------------------------------------------+

This file includes directions for compiling and running NAMD,
configuration file parameters for new simulation features,
advice for dealing with endian issues when switching platforms,
a list of known deficiencies, and advice for reporting bugs.

----------------------------------------------------------------------

Compiling NAMD

You must first download and install NAMD's communications library,
Charm++/Converse 5.0, from http://charm.cs.uiuc.edu/.  The following
table shows the version needed for each NAMD architecture.  It is
important that the same compiler is used for both Charm++ and NAMD.

    NAMD Name      Description                   Charm Version
   ----------     ---------------------------   ---------------
    KCC-i486       Linux with KCC 3.3 compiler   net-linux-kcc
    g++-i486       Linux with GNU compiler       net-linux
    Origin2000-MPI SGI Origin 2000 with MPI      mpi-origin
    Origin2000     SGI Origin 2000 w/o MPI       origin2000
    alpha          Tru64 Unix w/ DEC compiler    net-axp-cc
  * g++-alpha      Tru64 Unix w/ GNU compiler    net-axp
    T3E            SGI/Cray T3E                  t3e
    Solaris        Solaris 2.6 or later          net-sol-cc
  * HPUX           HP-UX 10.0 or later with CC   net-hp-cc
  * RED            ASCI Red                      paragon-red
  * SP3            IBM SP Series                 sp3
  * exemplar       Convex Exemplar               mpi-exemplar

  * These architectures may fail to build or be unstable.

Edit Make.charm to point to the directory where Charm++ is installed.

Now you can type "./config ARCH" where ARCH is an architecture name
listed above (or a file suffix in the arch directory).  This will
create a build directory for that architecture.  If you wish to
create this directory elsewhere use "./config DIR/ARCH", replacing
DIR with the location where the build directory should be created.
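For example, to configure the Linux/GNU build (the target name is taken
from the table above; the alternate path is only an illustration):

```shell
# Build in the default location for the Linux/GNU target
./config g++-i486

# Or create the build directory under another location
./config /scratch/builds/g++-i486
```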

Now "cd" to your build directory and type make.  The namd2 binary as
well as the flipdcd and flipbinpdb utilities will be created.  A link
to conv-host (see below) will be created as appropriate.

If you have trouble building NAMD your compiler may be different from
ours.  The architecture-specific makefiles in the arch directory use
several options to elicit similar behavior on all platforms.  Your
compiler may conform to an earlier C++ specification than NAMD uses.
Your compiler may also enforce a later C++ rule than NAMD follows.
You may ignore repeated warnings about new and delete matching.

You may edit Makefile to enable or disable DPMTA and DPME.  DPMTA is
required if you want to use the fast multipole algorithm for full
electrostatics.  DPME is only useful for comparison with the new
particle mesh Ewald code.  You must specify "useDPME yes" in the
simulation config file to switch to the old DPME code.  (DPMTA and
DPME are included in the release binaries.)

You may edit arch/Makearch.ARCH to enable or disable linking with Tcl
and FFTW.  Tcl is required for Tcl global forces (tclForces) and for
Tcl scripting (tcl).  Tcl is available from www.scriptics.com.  FFTW
replaces the slower built-in FFT in particle mesh Ewald.  FFTW is
available from www.fftw.org.  If you download the RPM or configure FFTW
with --enable-type-prefix you will need to make symbolic links to the
double precision versions of the libraries and header files.  (Tcl is
included in the release binaries, FFTW is not.)
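As a sketch of the symbolic links mentioned above, assuming FFTW was
configured with --enable-type-prefix and installed under /usr/local
(adjust paths and file names to match your installation):

```shell
# Point the generic FFTW names at the double-precision builds
cd /usr/local/include
ln -s dfftw.h fftw.h
ln -s drfftw.h rfftw.h
cd /usr/local/lib
ln -s libdfftw.a libfftw.a
ln -s libdrfftw.a librfftw.a
```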

----------------------------------------------------------------------

Running NAMD

-- Workstation Networks --

Workstation networks require two files, the namd2 executable and the
conv-host program.  The conv-host program starts namd2 on the desired
hosts, and handles console I/O for the node programs.

To specify what machines namd2 will run on, the user creates a file
called "nodelist".  Below is an example nodelist file:

group main
 host brutus
 host romeo

The "group main" line defines the default machine list.  Hosts brutus
and romeo are the two machines on which to run the simulation.  Note
that conv-host may run on one of those machines, or conv-host may run
on a third machine.  The "rsh" command ("remsh" on HPUX) is used to
start namd2 on each of the nodes specified in the nodelist file.  If
NAMD fails without printing any output, check to make sure that "rsh"
works on your machine, by seeing if "rsh hostname ls" works for each
host in the nodelist.
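A quick way to check every host at once (host names here match the
example nodelist above; substitute your own):

```shell
# Verify that rsh works for each host in the nodelist
for h in brutus romeo; do
    rsh $h ls > /dev/null && echo "$h ok"
done
```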

A number of parameters may be passed to conv-host.  The most important
is the "+pX" option, where X specifies the number of processors.  If X
is less than the number of hosts in the nodelist, machines are
selected from top to bottom.  If X is greater than the number of
hosts, conv-host will start multiple processes on the machines,
starting from the top.  To run multiple processes on members of an SMP
workstation cluster, you may either just use the +p option to go
through the list the right number of times, or list each machine
several times, once for each processor.
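For example, to use both processors on each of two dual-processor
machines (reusing the host names from the example above), list each
host once per processor and run with "+p4":

```
group main
 host brutus
 host brutus
 host romeo
 host romeo
```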

Once the nodelist file is set up, and you have your configuration file
prepared (alanin.conf, in this example), run NAMD as follows:

conv-host +pX namd2 alanin.conf

Because Intel processors are little-endian, the Linux version produces
binary files (restart and DCD files) with a byte order different from
that of big-endian platforms.  The "flipbinpdb" and "flipdcd" programs
convert these files between Intel and non-Intel formats.

-- T3E --

The T3E version has been tested on the Pittsburgh Supercomputer Center
T3E.  To run on X processors, use the mpprun command:

mpprun -n X namd2 alanin.conf

-- Origin 2000 --

For small numbers of processors (1-8) use the non-MPI version of namd2.
To run on X processors call the binary directly with the +p option:

namd2 +pX alanin.conf

For better performance on larger numbers of processors we recommend
that you use the MPI version of NAMD.  To run this version, you must
have MPI installed.  Furthermore, you must set two environment
variables to tell MPI how to allocate certain internal buffers.  Put
the following commands in your .cshrc/.profile file, or in your job
file if you are running under a queuing system:

setenv MPI_REQUEST_MAX 10240
setenv MPI_TYPE_MAX 10240

Then run NAMD with the following command:

mpirun -np X namd2 alanin.conf
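Putting the pieces together, a csh-style job file for the MPI version
might look like this (16 processors is only an example):

```shell
# Buffer limits required by SGI MPI, then the NAMD run itself
setenv MPI_REQUEST_MAX 10240
setenv MPI_TYPE_MAX 10240
mpirun -np 16 namd2 alanin.conf
```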

----------------------------------------------------------------------

New Features

--- Mollified Impulse Method (MOLLY) ---

This method eliminates the components of the long range electrostatic
forces which contribute to resonance along bonds to hydrogen atoms,
allowing a fullElectFrequency of 6 (vs. 4) with a 1 fs timestep
without using "rigidBonds all".  You may use "rigidBonds water" but
using "rigidBonds all" with MOLLY makes no sense since the degrees of
freedom which MOLLY protects from resonance are already frozen.
The following parameters enable MOLLY:
  * molly on
  * mollyIterations  (optional, default=100)
  * mollyTolerance  (optional, default=0.00001)
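A minimal config fragment enabling MOLLY with the defaults listed above
(the timestep and fullElectFrequency values are illustrative, taken from
the discussion above):

```
timestep            1.0
fullElectFrequency  6
molly               on
mollyIterations     100
mollyTolerance      0.00001
```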

--- Non-orthogonal Periodic Cells ---

The parameters (cellBasisVector[123] and cellOrigin) have not changed,
but the basis vectors may now be in any direction.  This feature has
received limited testing.  PME should work.  Non-isotropic constant
pressure methods (useFlexibleCell) won't.
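For example, a non-orthogonal cell is specified with the same
parameters as before; only the vectors change (all values below are
illustrative):

```
cellBasisVector1   60.0   0.0   0.0
cellBasisVector2   30.0  52.0   0.0
cellBasisVector3    0.0   0.0  60.0
cellOrigin          0.0   0.0   0.0
```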

--- Interactive Molecular Dynamics ---

NAMD now works directly with VMD (http://www.ks.uiuc.edu/Research/vmd/)
to allow you to view and interactively steer your simulation.  NAMD will
wait for VMD to connect on startup with the following parameters:
  * IMDon yes
  * IMDport 
  * IMDfreq 
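A sample config fragment (the port number and transfer frequency are
illustrative; choose a port that is open between the NAMD and VMD
machines):

```
IMDon    yes
IMDport  3000
IMDfreq  10
```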

--- Faster Particle Mesh Ewald ---

If for some reason you wish to use the slower DPME implementation
of the particle mesh Ewald method add "useDPME yes".  If you set
cellOrigin to something other than (0,0,0) the energy may differ
slightly between the old and new implementations.  For even better
performance get the NAMD source code and compile with FFTW.
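A sketch of a PME config fragment (grid sizes are illustrative and
should suit your cell dimensions; the commented line shows the DPME
fallback described above):

```
PME           yes
PMEGridSizeX  64
PMEGridSizeY  64
PMEGridSizeZ  64
# useDPME yes   ;# only for comparison with the old DPME code
```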

----------------------------------------------------------------------

Endian Issues

Some architectures write binary data (integer or floating point) with
the most significant byte first; others put the most significant byte
last.  This doesn't affect text files, but it does matter when a binary
data file that was written on a "big-endian" machine (Sun, HP, SGI) is
read on a "little-endian" machine (Intel, Alpha) or vice versa.
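To see the scale of the resulting corruption, consider the 4-byte
integer 1: read with the wrong byte order it becomes 0x01000000.  Shell
arithmetic (independent of NAMD) shows the two interpretations:

```shell
# The same four bytes, interpreted with opposite byte orders
printf '%d\n' $((0x00000001))   # intended value
printf '%d\n' $((0x01000000))   # byte-swapped value
```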

NAMD generates DCD trajectory files and binary coordinate and velocity
files which are "endian-sensitive".  While VMD 1.4b1 can read DCD files
from any machine, NAMD requires same-endian binary restart files and
most analysis programs (like CHARMM or X-PLOR) require same-endian DCD
files.  We provide the programs flipdcd and flipbinpdb for switching the
endianness of DCD and binary restart files, respectively.  These programs
use mmap to alter the file in-place and may therefore appear to consume
an amount of memory equal to the size of the file.
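Typical usage is simply the tool name followed by the file to convert
(file names below are examples; each tool rewrites its file in place):

```shell
flipdcd alanin.dcd
flipbinpdb alanin.restart.coor
```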

----------------------------------------------------------------------

Known Deficiencies

- PME (the Ewald sum) is not well parallelized.

- NAMD requires X-PLOR or CHARMM to produce the .psf structure input
files.  If you don't have one of these, you probably can't use NAMD yet.

----------------------------------------------------------------------

Problems?

For problems or questions, send email to namd@ks.uiuc.edu.  If you
think you have found a bug, please include what machine you are
running on, and, if possible, a dump of the program output and/or a
copy of your input files.  Your feedback will help us improve NAMD.

----------------------------------------------------------------------