Re: Job script for multi node job

From: Phil Greer
Date: Fri Feb 14 2014 - 09:04:37 CST

On Feb 14, 2014, at 5:20 AM, Subbarao Kanchi <> wrote:

> Dear All,
> I am using the compiled version of NAMD_2.9_Linux-x86_64-ibverbs. The following job script works on a single node, but if I submit with two or more nodes, the job runs using only one node and does not use the others. I am giving the script below, and I am not able to figure out the mistake in it. I would appreciate any suggestions.
> Regards,
> Subbu.
> #!/bin/csh -f
> #PBS -l nodes=2:ppn=32
> #PBS -o /present_working_dir/out.out
> #PBS -e /present_working_dir/err.out
> #PBS -N test
> cat $PBS_NODEFILE > temp.1
> set nprocs = `wc -l < $PBS_NODEFILE`
> echo $nprocs
> setenv DO_PARALLEL "/home/NAMD_2.9_Linux-x86_64-ibverbs/charmrun ++remote-shell ssh ++nodelist nodelist +p$nprocs "
> setenv exc "/home/NAMD_2.9_Linux-x86_64-ibverbs/namd2 "
> set j="e"
> set tot=0
> echo group main >> nodelist
> foreach i ( `cat temp.1` )
> echo host $i >> nodelist
> if ( $j != $i ) then
> ssh -n $i mkdir -p /temp1_dir
> ssh -n $i cp -r /present_working_dir/* /temp1_dir
> echo "$i " >> t2
> ssh -n $i limit >> t2
> ssh -n $i limit memorylocked unlimited
> endif
> set j="$i"
> end
> cd /temp1_dir
> $DO_PARALLEL $exc namd.conf > namd.log

I am confused by the whole middle part of this script. Do you not have a shared home directory mounted across all the nodes? I assume namd.conf is pointing to /temp1_dir, but it seems unnecessary. You are running this over InfiniBand, so the interconnect should be running at a minimum of 10 Gb/s and possibly up to 40 Gb/s. Most home directories are on NFS-mounted RAID arrays with read/write performance of 400-500 MB/s. Unless all the nodes have SSDs, which also get about 400-500 MB/s, the best you will get with a standard local hard drive is ~140 MB/s. So really, what is the point?
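As a rough sanity check on those numbers, here is a small sketch of the arithmetic using the figures quoted above (the bandwidth values are the estimates from this paragraph, not measurements):

```shell
# Rough bandwidth comparison using the figures quoted above.
ib_gbps=40                          # QDR InfiniBand line rate, Gb/s
ib_MBps=$(( ib_gbps * 1000 / 8 ))   # convert to MB/s: 5000 MB/s
hd_MBps=140                         # typical local spinning disk, MB/s
nfs_MBps=500                        # NFS-mounted RAID array, MB/s
echo "IB vs local HD:  ~$(( ib_MBps / hd_MBps ))x faster"
echo "IB vs NFS RAID:  ~$(( ib_MBps / nfs_MBps ))x faster"
```

In other words, even the slowest plausible network path is an order of magnitude faster than a local spinning disk, so staging input files to node-local storage buys nothing here.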

My advice: unless your cluster admin is forcing you to do the local copy to each node, keep it simple and just run the program from your network home directory. Here is the batch script I use on my cluster:

#PBS -l walltime=1:00:00
#PBS -l nodes=2:ppn=32:bigmem
#PBS -j oe
#PBS -m abe

# load modules
. /etc/profile.d/
module load mpi/openmpi164

# set up some variables
NPROCS=`wc -l < $PBS_NODEFILE`
NODELIST=$PBS_O_WORKDIR/nodelist


# make up the nodefile in the charm++ format
if [ -e $NODELIST ]; then
        rm $NODELIST
fi

echo group main > $NODELIST
for i in `cat $PBS_NODEFILE`; do
        echo host $i >> $NODELIST
done

# go to my job submission directory and then execute the program
# (CHARMRUN_BINARY and NAMD_BINARY are set elsewhere, e.g. by a loaded module)
cd $PBS_O_WORKDIR
$CHARMRUN_BINARY $NAMD_BINARY ++remote-shell /usr/bin/ssh +p$NPROCS ++nodelist $NODELIST apoa1_5k.namd >& apoa1_5k-2x32-bigmem.out
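For reference, the loop above just writes the nodelist file in charmrun's expected format: a "group main" header followed by one "host" line per entry in $PBS_NODEFILE (so each node appears once per requested core). For a hypothetical two-node job with ppn=2, the file would look like this (hostnames made up):

```
group main
host n001
host n001
host n002
host n002
```

charmrun then uses +p$NPROCS together with this file to decide how many processes to start and where to start them.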

This archive was generated by hypermail 2.1.6 : Wed Dec 31 2014 - 23:22:07 CST