+--------------------------------------------------------------------+
|                                                                    |
|                     NAMD 2.1b3 Release Notes                       |
|                                                                    |
+--------------------------------------------------------------------+

This file includes directions for compiling and running NAMD,
configuration file parameters for new simulation features,
advice for dealing with endian issues when switching platforms,
a list of known deficiencies, and advice for reporting bugs.

----------------------------------------------------------------------

Compiling NAMD

You must first download and install NAMD's communications library,
Charm++/Converse 5.0, from http://charm.cs.uiuc.edu/.  The following
table shows the version needed for each NAMD architecture.  It is
important that the same compiler is used for both Charm++ and NAMD.

    NAMD Name      Description                   Charm Version
   ----------     ---------------------------   ---------------
    KCC-i486       Linux with KCC 3.3 compiler   net-linux-kcc
    g++-i486       Linux with GNU compiler       net-linux
    Origin2000-MPI SGI Origin 2000 with MPI      mpi-origin
    Origin2000     SGI Origin 2000 w/o MPI       origin2000
    alpha          Tru64 Unix w/ DEC compiler    net-axp-cc
  * g++-alpha      Tru64 Unix w/ GNU compiler    net-axp
    T3E            SGI/Cray T3E                  t3e
    Solaris        Solaris 2.6 or later          net-sol-cc
  * HPUX           HP-UX 10.0 or later with CC   net-hp-cc
  * RED            ASCI Red                      paragon-red
  * SP3            IBM SP Series                 sp3
  * exemplar       Convex Exemplar               mpi-exemplar

  * These architectures may fail to build or be unstable.

Edit Make.charm to point to the directory where Charm++ is installed.

Now you can type "./config ARCH" where ARCH is an architecture name
listed above (or as a file suffix in the arch directory).  This
will create a build directory for that architecture.  If you wish to
create this directory elsewhere use "./config DIR/ARCH", replacing
DIR with the location where the build directory should be created.
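
For example, to configure the Linux GNU compiler port in place, or to
put the build directory under /scratch/namd (a purely illustrative
location), you would type one of:

./config g++-i486
./config /scratch/namd/g++-i486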

Now "cd" to your build directory and type make.  The namd2 binary as
well as the flipdcd and flipbinpdb utilities will be created.  A link
to conv-host (see below) will be created as appropriate.
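
Continuing the g++-i486 example above, the build is just:

cd g++-i486
make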

If you have trouble building NAMD, your compiler may be different from
ours.  The architecture-specific makefiles in the arch directory use
several options to elicit similar behavior on all platforms.  Your
compiler may conform to an earlier C++ specification than NAMD uses.
Your compiler may also enforce a later C++ rule than NAMD follows.
You may ignore repeated warnings about new and delete matching.

You may edit Makefile to enable or disable DPMTA and DPME.  DPMTA is
required if you want to use the fast multipole algorithm for full
electrostatics.  DPME is only useful for comparison with the new
particle mesh Ewald code.  You must specify "useDPME on" in the
simulation config file to switch to the old DPME code.  (DPMTA and
DPME are included in the release binaries.)

You may edit arch/Makearch.ARCH to enable or disable linking with Tcl
and FFTW.  Tcl is required for Tcl global forces (tclForces) and for
Tcl scripting.  Tcl is available from www.scriptics.com.  FFTW replaces
the slower built-in FFT in the particle mesh Ewald code.  FFTW is
available
from www.fftw.org.  If you download the RPM or configure FFTW with
--enable-type-prefix you will need to make symbolic links to the double
precision versions of the libraries and header files.  (Tcl is included
in the release binaries, FFTW is not.)
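
The exact FFTW file names depend on your installation; assuming the
usual "d" prefix for the double precision build and an installation
under /usr/local (both only illustrations), the links would be:

ln -s /usr/local/lib/libdfftw.a /usr/local/lib/libfftw.a
ln -s /usr/local/lib/libdrfftw.a /usr/local/lib/librfftw.a
ln -s /usr/local/include/dfftw.h /usr/local/include/fftw.h
ln -s /usr/local/include/drfftw.h /usr/local/include/rfftw.h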

----------------------------------------------------------------------

Running NAMD

-- Workstation Networks --

Workstation networks require two files, the namd2 executable and the
conv-host program.  The conv-host program starts namd2 on the desired
hosts, and handles console I/O for the node programs.

To specify what machines namd2 will run on, the user creates a file
called "nodelist".  Below is an example nodelist file:

group main
 host brutus
 host romeo

The "group main" line defines the default machine list.  Hosts brutus
and romeo are the two machines on which to run the simulation.  Note
that conv-host may run on one of those machines, or conv-host may run
on a third machine.  The "rsh" command ("remsh" on HPUX) is used to
start namd2 on each of the nodes specified in the nodelist file.  If
NAMD fails without printing any output, check to make sure that "rsh"
works on your machine, by seeing if "rsh hostname ls" works for each
host in the nodelist.
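
For example, for the nodelist above you would check:

rsh brutus ls
rsh romeo ls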

Some automounters use a temporary mount directory which is prepended
to the path returned by the pwd command.  To run on multiple machines
you must place a "pathfix" line in your nodelist file before any
"host" lines.  For example:

group main
 pathfix /tmp_mnt
 host alpha1
 host alpha2

A number of parameters may be passed to conv-host.  The most important
is the "+pX" option, where X specifies the number of processors.  If X
is less than the number of hosts in the nodelist, machines are
selected from top to bottom.  If X is greater than the number of
hosts, conv-host will start multiple processes on the machines,
starting from the top.  To run multiple processes on members of an SMP
workstation cluster, you may either just use the +p option to go
through the list the right number of times, or list each machine
several times, once for each processor.  The default is +p1.
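
For example, to put two processes on each of the two hosts above you
may either run with "+p4" (the list is simply traversed twice) or list
each machine once per processor:

group main
 host brutus
 host brutus
 host romeo
 host romeo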

You may specify the nodelist file with the "++nodelist" option and the
group (which defaults to "main") with the "++nodegroup" option.  If
you do not use "++nodelist" conv-host will first look for "nodelist"
in your current directory and then ".nodelist" in your home directory.

If you always want to run on the machine you are logged in to, you may
use "localhost" in place of the hostname in your nodelist file, but
only if there are no other machines.  You will not need "pathfix".
For example, ".nodelist" in your home directory could read:

group main
 host localhost

Once the nodelist file is set up, and you have your configuration file
prepared (alanin.conf, in this example), run NAMD as follows:

conv-host +pX namd2 alanin.conf
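
If your nodelist file is kept elsewhere, or you want to use a group
other than "main", the same run might look like this (the ~/machines
file name and "cluster" group name are only illustrations):

conv-host +p2 ++nodelist ~/machines ++nodegroup cluster namd2 alanin.conf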

Intel and Alpha processors produce binary files (restart and DCD
files) which must be "byte-swapped" to be read on other platforms.
The flipbinpdb and flipdcd programs will perform this conversion.
See the section "Endian Issues" below.

-- T3E --

The T3E version has been tested on the Pittsburgh Supercomputing Center
T3E.  To run on X processors, use the mpprun command:

mpprun -n X namd2 alanin.conf

-- Origin 2000 --

For small numbers of processors (1-8) use the non-MPI version of namd2.
If your stack size limit is unlimited, as it may be under DQS, you
will need to set it with "limit stacksize 64M" to run on multiple
processors.
To run on X processors call the binary directly with the +p option:

namd2 +pX alanin.conf

For better performance on larger numbers of processors we recommend
that you use the MPI version of NAMD.  To run this version, you must
have MPI installed.  Furthermore, you must set two environment
variables to tell MPI how to allocate certain internal buffers.  Put
the following commands in your .cshrc or .profile file, or in your
job file if you are running under a queuing system:

setenv MPI_REQUEST_MAX 10240
setenv MPI_TYPE_MAX 10240
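
If you use a Bourne-type shell (.profile) the equivalent commands are:

MPI_REQUEST_MAX=10240
MPI_TYPE_MAX=10240
export MPI_REQUEST_MAX MPI_TYPE_MAX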

Then run NAMD with the following command:

mpirun -np X namd2 alanin.conf

----------------------------------------------------------------------

New Features

--- Tcl Scripting Language Interface ---

This exciting feature was scheduled for version 3.0 but an incomplete
version is available now---and quite useful!  When compiled with Tcl
(all released binaries) the config file is parsed by Tcl in a fully
backwards compatible manner with the added bonus that any Tcl command
may also be used.  This alone allows:
  * the "source" command to include other files (works w/o Tcl too!),
  * the "print" command to display messages ("puts" is broken, sorry),
  * environment variables through the env array ("$env(USER)"), and
  * user-defined variables ("set base sim23", "dcdfile ${base}.dcd").
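
A short fragment combining these features (the file and variable
names are only illustrations):

source common_params.namd      ;# parameters shared with other runs
set base sim23
print "Output basename is $base, run by $env(USER)"
dcdfile ${base}.dcd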

But wait, there's more:
  * The "callback" command takes a 2-parameter Tcl procedure which
    is then called with a list of labels and a list of values during
    every timestep, allowing analysis, formatting, whatever.
  * The "run" command takes a number of steps to run (overriding the
    now optional numsteps parameter, which defaults to 0) and can be
    called repeatedly.  You can "run 0" just to get energies.
  * The "output" command takes an output file basename and causes
    .coor, .vel, and .xsc files to be written with that name.
  * Between "run" commands the reassignTemp, rescaleTemp, and
    langevinTemp parameters can be changed to allow simulated
    annealing protocols within a single config file.  (Many more
    parameters of this type will be enabled in future versions.)
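
The sketch below combines these commands into a simple simulated
annealing run.  It assumes "callback" takes the name of the procedure;
the procedure name, temperatures, and step counts are only
illustrations:

proc printEnergies {labels values} {
    print $labels
    print $values
}
callback printEnergies

reassignFreq 100               ;# reassign velocities every 100 steps
run 0                          ;# report initial energies only
foreach temp {300 250 200 150} {
    reassignTemp $temp         ;# may be changed between "run" commands
    run 1000
}
output annealed                ;# writes annealed.coor, .vel, and .xsc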

Please note that while NAMD has traditionally allowed comments to be
started by a # appearing anywhere on a line, Tcl only allows comments
to appear where a new statement could begin.  With Tcl config file
parsing enabled (all shipped binaries) both NAMD and Tcl comments are
allowed before the first "run" command; after that point only pure
Tcl syntax is allowed.  In addition, the ";#" idiom for Tcl comments
will only work with Tcl enabled.  Some examples:

# this is my config file                            <- OK
reassignFreq 100 ; # how often to reset velocities  <- only w/ Tcl
reassignTemp 20 # temp to reset velocities to       <- OK before "run"
run 1000                                            <- now Tcl only
reassignTemp 40 ; # temp to reset velocities to     <- ";" is required

NAMD has also traditionally allowed parameters to be specified as
"param=value" as well as "param value".  This is supported, but only
before the first "run" command.  For an easy life, use "param value".

--- Mollified Impulse Method (MOLLY) ---

This method eliminates the components of the long range electrostatic
forces which contribute to resonance along bonds to hydrogen atoms,
allowing a fullElectFrequency of 6 (vs. 4) with a 1 fs timestep
without using "rigidBonds all".  You may use "rigidBonds water" but
using "rigidBonds all" with MOLLY makes no sense since the degrees of
freedom which MOLLY protects from resonance are already frozen.
The following parameters enable MOLLY:
  * molly on
  * mollyIterations  (optional, default=100)
  * mollyTolerance  (optional, default=0.00001)
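
A minimal configuration fragment following the discussion above (the
timestep keyword is standard NAMD, given here in fs):

timestep 1.0
fullElectFrequency 6
rigidBonds water
molly on
mollyIterations 100
mollyTolerance 0.00001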

--- Non-orthogonal Periodic Cells ---

The parameters (cellBasisVector[123] and cellOrigin) have not changed,
but the basis vectors may now be in any direction.  This feature has
received limited testing.  PME should work.  Non-isotropic constant
pressure methods (useFlexibleCell) won't.
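
For example, a hexagonal prism cell could be specified like this (the
dimensions are only an illustration):

cellBasisVector1   60.0    0.0    0.0
cellBasisVector2   30.0   51.96   0.0
cellBasisVector3    0.0    0.0   60.0
cellOrigin          0.0    0.0    0.0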

--- Interactive Molecular Dynamics ---

NAMD now works directly with VMD (http://www.ks.uiuc.edu/Research/vmd/)
to allow you to view and interactively steer your simulation.  NAMD will
wait for VMD to connect on startup with the following parameters:
  * IMDon yes
  * IMDport 
  * IMDfreq 
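
For example (assuming, as the names suggest, that IMDport sets the
port NAMD listens on and IMDfreq how often coordinates are sent; the
values are only illustrations):

IMDon yes
IMDport 3000
IMDfreq 1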

--- Faster Particle Mesh Ewald ---

If for some reason you wish to use the slower DPME implementation
of the particle mesh Ewald method add "useDPME yes".  If you set
cellOrigin to something other than (0,0,0) the energy may differ
slightly between the old and new implementations.  For even better
performance get the NAMD source code and compile with FFTW.
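
A sketch of the relevant config file lines, assuming the standard PME
keywords (PME and PMEGridSizeX/Y/Z, which these notes do not otherwise
describe) and an illustrative 64 x 64 x 64 grid; the last line is
needed only if you want the old DPME implementation:

PME yes
PMEGridSizeX 64
PMEGridSizeY 64
PMEGridSizeZ 64
useDPME yes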

----------------------------------------------------------------------

Endian Issues

Some architectures write binary data (integer or floating point) with
the most significant byte first; others put the most significant byte
last.  This doesn't affect text files but it does matter when a binary
data file that was written on a "big-endian" machine (Sun, HP, SGI) is
read on a "little-endian" machine (Intel, Alpha) or vice versa.

NAMD generates DCD trajectory files and binary coordinate and velocity
files which are "endian-sensitive".  While VMD 1.4b1 can read DCD files
from any machine, NAMD requires same-endian binary restart files and
most analysis programs (like CHARMM or X-PLOR) require same-endian DCD
files.  We provide the programs flipdcd and flipbinpdb for switching the
endianness of DCD and binary restart files, respectively.  These programs
use mmap to alter the file in-place and may therefore appear to consume
an amount of memory equal to the size of the file.
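
For example, assuming each program simply takes the names of the
files to convert on its command line (check the usage message of your
build; the file names are only illustrations):

flipdcd alanin.dcd
flipbinpdb alanin.restart.coor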

----------------------------------------------------------------------

Known Deficiencies

- PME (the Ewald sum) is not well parallelized.

- TclForces runs in a separate Tcl interpreter from the one used for
  the config file, so you cannot pass variables between TclForces and
  the Tcl front end script.

- NAMD requires X-PLOR or CHARMM to produce the .psf structure input
  files.  If you don't have one of these, you probably can't use NAMD.
  We plan to add the ability to generate these files to VMD or to a
  new program.

----------------------------------------------------------------------

Problems?  Found a bug?

For problems or questions, send email to namd@ks.uiuc.edu.  If you
think you have found a bug, please follow the steps outlined below.
Your feedback will help us improve NAMD.

 1. Download and test the latest version of NAMD. 
 2. Please check the FAQ, known bugs and problem reports. 
 3. Gather, in a single directory, all input and config files needed
    to reproduce your problem. 
 4. Run once, redirecting output to a file. 
 5. Tar everything up (but not the namd2 or conv-host binaries) and
    compress it. 
 6. Email namd@ks.uiuc.edu with: 
    - A synopsis of the problem as the subject. 
    - The NAMD version number the problem occurs with. 
    - The platform and number of CPUs the problem occurs with.
    - A description of the problematic behavior and any error messages.
    - Whether the problem is consistent or random.
    - The compressed tar file as an attachment (or a URL if it is too big). 
 7. We'll get back to you by the next business day. 

----------------------------------------------------------------------