NAMD Wiki: NamdOnAltix


This page is about the old Itanium-based Altix system.

For the new Xeon-based Altix UV see NamdOnAltixUV.


See also NamdOnIA64 and NamdAtNCSA.


Performance issues with the Intel 8.1 and later compilers are described at NamdOnIA64.


Compile Charm++ for SGI Altix

Download the latest development version of Charm++ from the Charm++ Automated-Build website at http://charm.cs.uiuc.edu/autobuild/cur by following the "Charm++ source code" link to the latest nightly build.
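
A minimal sketch of fetching and unpacking the nightly source on the login node (the tarball name below is hypothetical; copy the actual link labelled "Charm++ source code" from the page):

wget http://charm.cs.uiuc.edu/autobuild/cur/charm-src.tar.gz   # hypothetical file name
tar xzf charm-src.tar.gz
cd charm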

Build Charm++ with:

./build charm++ mpi-linux-ia64 mpt

or, if you prefer the Intel compiler:

./build charm++ mpi-linux-ia64 mpt icc

Configure NAMD using the Linux-ia64-MPT-icc architecture:

./config fftw ... Linux-ia64-MPT-icc
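
After config completes, build NAMD inside the generated architecture directory. A minimal sketch; it assumes the fftw and tcl entries in the arch files already point at your installations:

cd Linux-ia64-MPT-icc
make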

"Array services not available" error with MPT (SGI MPI)

From Edward Chrzanowski:

o Add the following entry to /etc/services for the arrayd
  service and port.  The default port number is 5434 and
  is specified in the arrayd.conf configuration file.

  sgi-arrayd   5434/tcp    # SGI Array Services daemon

o If necessary, modify the default authentication
  configuration.  The default authentication is
  AUTHENTICATION NOREMOTE, which does not allow access
  from remote hosts.  The authentication model is
  specified in the /usr/lib/array/arrayd.auth
  configuration file (see the example after this list).

o The array services will not become active until they
  are enabled with the chkconfig(1) command:

      chkconfig  --add array
      chkconfig  --level 2345 array on

o It is not necessary to reboot the system to make the
  array services active after installing them, but if you
  do not reboot, you will need to start them manually.

  To do so, use the following command:

      /etc/init.d/array   start
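
For the authentication step above, here is a minimal sketch of the /usr/lib/array/arrayd.auth change, assuming the keyword syntax from the SGI array services documentation (verify against the arrayd.auth documentation on your system):

      # Default entry blocks remote hosts:
      #   AUTHENTICATION NOREMOTE
      # To accept requests from remote hosts on a trusted network:
      AUTHENTICATION NONE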


I tracked down the source of the problem with NAMD performance on NCSA's Altix system. On July 5, the environment variable MPI_DSM_DISTRIBUTE was inadvertently removed from the default environment. Here is a description of what it does from the mpi man page:

MPI_DSM_DISTRIBUTE

Ensures that each MPI process gets a unique CPU and physical
memory on the node with which that CPU is associated.
Currently, the CPUs are chosen by simply starting at relative
CPU 0 and incrementing until all MPI processes have been forked.
To choose specific CPUs, use the MPI_DSM_CPULIST environment
variable.  This feature is most useful if running on a dedicated
system or running within a cpuset.  Some batch schedulers
including LSF 5.1 will cause MPI_DSM_DISTRIBUTE to be set
automatically when using dynamic cpusets.

It was added again yesterday afternoon. To ensure the environment variable is always set, you can add the following line to your .cshrc file:

setenv MPI_DSM_DISTRIBUTE

We apologize for the problem.

Dave McWilliams, NCSA Consulting Services
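
For sh/bash login shells, the equivalent of the setenv line above would go in .bashrc; this is a sketch, assuming MPT only requires the variable to be set (the value itself should not matter):

export MPI_DSM_DISTRIBUTE=1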