Re: NAMD 2.9 install on Cluster mpirun/charmrun with ibverbs version?

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Thu Nov 14 2013 - 06:43:49 CST

Whether you need mpirun or charmrun depends on whether you compiled NAMD
against MPI or against charm++. If you just downloaded a precompiled build,
it is most likely not an MPI build and charmrun has to be used. Whether you
need either of them at all depends on whether you use more than one physical
node; for multiple nodes you will of course need mpirun or charmrun. Note,
however, that the charmrun machinefile (nodelist) has a different format:

 

group main

host node1

host node2

[...]

 

For mpirun, it is simply:

 

node1

node2

[...]
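
As a sketch only (assuming the batch system hands you a plain one-host-per-line machinefile; the file names here are placeholders), such a machinefile can be turned into a charmrun nodelist like this:

# build a charmrun nodelist from a plain one-host-per-line machinefile
echo "group main" > nodelist.txt
sed 's/^/host /' machinefile.txt >> nodelist.txt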

 

For parallel jobs on a single node you need at least a multicore build of
NAMD, which provides shared-memory parallelism.
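
For instance (just a sketch; the +p count and the config file name are placeholders), a single-node run with a multicore build needs no launcher at all:

# run on 8 cores of the local machine with a multicore build
./namd2 +p8 mysim.namd > mysim.log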

 

Norman Geist.

 

From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On Behalf
Of Nicolas Floquet
Sent: Thursday, November 14, 2013 12:18
To: namd-l_at_ks.uiuc.edu
Subject: namd-l: NAMD 2.9 install on Cluster mpirun/charmrun with ibverbs
version?

 

Dear all,

 

I am trying to install NAMD on a cluster with InfiniBand and have tested
different versions of NAMD.

 

My submission script is as follows (it contains the different solutions I
tested, with the unused ones commented out):

 

#!/bin/sh

 

# @ job_name = TMD1

# @ output = $(job_name).out

# @ error = $(job_name).err

# @ environment = COPY_ALL

# @ class = specialIntel

# @ account_no = RCPGs2

# @ job_type = mpich

# @ notify_user = nicolas.floquet_at_univ-montp1.fr

# @ node = 10

# @ total_tasks = 120

# @ environment = COPY_ALL

# @ wall_clock_limit = 36:00:10,36:00:01

# @ queue

 

source /opt/cluster/softs/gcc-4.6.x-soft/system/module/3.2.10/Modules/3.2.10/init/sh

module load hpclr-wrapper gcc-4.6.4 openmpi-1.6.5-gcc

export CPPFLAGS=-I/opt/cluster/gcc-soft/fftw/3.3.3-shared-float/include

export LDFLAGS=-L/opt/cluster/gcc-soft/fftw/3.3.3-shared-float/lib

 

/opt/cluster/softs/gcc-4.6.x-soft/system/openmpi/1.6.5/bin/mpirun -x LD_LIBRARY_PATH \
    -np $LOADL_TOTAL_TASKS -machinefile $LOADL_HOSTFILE \
    /opt/cluster/gcc-soft/namd/2.9/NAMD_2.9_Linux-x86_64-ibverbs/namd2 \
    /scratch/floquetn/NEURO/TMD2/POINT3/tmd1.namd

 

#/opt/cluster/softs/gcc-4.6.x-soft/system/openmpi/1.6.5/bin/mpirun -x LD_LIBRARY_PATH \
#    -np $LOADL_TOTAL_TASKS -machinefile $LOADL_HOSTFILE \
#    /opt/cluster/gcc-soft/namd/2.9/NAMD_2.9_Linux-x86_64/namd2 \
#    /scratch/floquetn/NEURO/TMD2/POINT3/tmd1.namd

 

#/opt/cluster/gcc-soft/namd/2.9/NAMD_2.9_Linux-x86_64/charmrun +p $LOADL_TOTAL_TASKS \
#    ++nodelist $LOADL_HOSTFILE \
#    /opt/cluster/gcc-soft/namd/2.9/NAMD_2.9_Linux-x86_64/namd2 \
#    /scratch/floquetn/NEURO/TMD2/POINT3/tmd1.namd

 

 

Which solution is the best one?

The one I selected seems to launch, but not in parallel. Do I need to use
charmrun with the namd2 executable for ibverbs? Do I need to include mpirun
as I did?

 

Thank you in advance for your help.

 

Nicolas Floquet
