
BProc-Based Clusters (Scyld and Clustermatic)

Scyld and Clustermatic replace rsh and other methods of launching jobs with a distributed process space. There is no need for a nodelist file or any special daemons, although special Scyld or Clustermatic versions of charmrun and namd2 are required. To allow access to input and output files, the first NAMD process must run on the master node of the cluster. Launch jobs from the master node of the cluster via the command:

  charmrun namd2 +p<procs> <configfile>

For best performance, run a single NAMD job on all available nodes and never run multiple NAMD jobs at the same time. You should probably determine the number of processors via a script, for example on Scyld (using csh syntax):

  # total processes: up compute nodes reported by bpstat, plus one on the master node
  @ NUMPROCS = `bpstat -u` + 1
  charmrun namd2 +p$NUMPROCS <configfile>

You may safely suspend and resume a running NAMD job on these clusters with kill -STOP and kill -CONT on the process group. Queueing systems typically provide this functionality, suspending a running job so that a higher priority job can run immediately.
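
For example, a minimal sketch of suspending and resuming by hand, assuming the shell variable PGID has been set to the process group ID of the charmrun job (a negative argument to kill targets the entire process group):

  # Suspend every process in the job's process group (PGID assumed set beforehand).
  kill -STOP -- -$PGID
  # Later, resume the job where it left off.
  kill -CONT -- -$PGID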

If you want to run multiple NAMD jobs simultaneously on the same cluster, you can use the charmrun options ++startpe and ++endpe to specify the range of nodes to use. The master node (-1) is always included unless the ++skipmaster option is given. The requested number of processes is assigned to nodes round-robin as with other network versions, or the ++ppn option can be used to specify the number of processes per node. To limit the master node to one process, use the ++singlemaster option.
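
For example, a sketch of keeping two simultaneous jobs on disjoint node ranges using the options above (the node numbers and per-node counts here are purely illustrative):

  # Job 1: compute nodes 0-3, with a single process on the shared master node.
  charmrun namd2 ++startpe 0 ++endpe 3 ++singlemaster +p<procs> <configfile>

  # Job 2: compute nodes 4-7, two processes per node, again one process on the master.
  charmrun namd2 ++startpe 4 ++endpe 7 ++ppn 2 ++singlemaster +p<procs> <configfile>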

