Re: NAMD 2.11 CUDA forces IBverbs

From: Jim Phillips (jim_at_ks.uiuc.edu)
Date: Fri Nov 13 2015 - 08:31:50 CST

Because even though the underlying network is the same, in practice the
Charm++ mpi-smp layer is slower than ibverbs/verbs/gni, and this shows up
in CUDA builds. The ++mpiexec option to charmrun lets you use the
mpiexec script on your cluster to launch NAMD, so you don't have to write
any complex scripts to interface with your queueing system. For example,
launching two processes with 16 worker threads each:

   ./charmrun ++mpiexec +p32 ++ppn 16 ++verbose ./namd2

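For reference, a minimal batch script built around that command might look
like the sketch below, assuming a Slurm cluster; the node counts, the module
line, and the stmv.namd input are placeholders to adjust for your site:

   #!/bin/bash
   #SBATCH --nodes=2              # one NAMD process per node
   #SBATCH --ntasks-per-node=1
   #SBATCH --cpus-per-task=16     # 16 worker threads per process
   module load cuda               # placeholder; load whatever your site needs
   ./charmrun ++mpiexec +p32 ++ppn 16 ++verbose ./namd2 stmv.namd > stmv.log
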
Worst case, as at TACC, if your mpiexec doesn't accept "-n <N>", you need
to add "++remote-shell $SCRIPTDIR/mpiexec", where the mpiexec script drops
the first two arguments before calling the real mpiexec (or ibrun, srun,
etc.), and possibly also "++runscript $SCRIPTDIR/runscript" if you need to
modify LD_LIBRARY_PATH. This is described in notes.txt.

If you want to use MPI anyway, you can just remove the "exit 1" after the
error in the config script, or manually set CHARMARCH in the Make.config
file that the config script generates in your build directory.
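If you go the Make.config route, it's a one-line edit; the exact
architecture string depends on how your Charm++ was built, so the value
below is only an example:

   # in Make.config, generated by the config script in your build directory
   CHARMARCH = mpi-linux-x86_64-smp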

Jim

On Fri, 13 Nov 2015, Norman Geist wrote:

> Hey there,
>
> is there a special reason why namd 2.11 CUDA doesn't allow a pure MPI build?
>
> One could also use an MPI that supports verbs or use IPoIB with the same
> performance, so why disallow MPI which is the standard on modern HPC systems.
>
> Thanks
>
> Norman Geist
