Compilation with OpenMPI

From: Kevin C Chan (cchan2242-c_at_my.cityu.edu.hk)
Date: Mon Jul 20 2015 - 05:02:10 CDT

Dear Users,

I am compiling an MPI version of NAMD 2.10 with OpenMPI 1.8.6. I followed
the steps in the release notes but had difficulty understanding them. For
your information, I am currently working on a cluster of 16 nodes, each
with 16 processors.
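
For reference, the overall recipe I followed from the notes was roughly
as follows (the Charm++ <target> varies per version, as described below,
and these are the stock arch names, so they may need adjusting for our
cluster):

  tar xf NAMD_2.10_Source.tar.gz && cd NAMD_2.10_Source
  tar xf charm-6.6.1.tar && cd charm-6.6.1      # charm bundled with 2.10
  ./build charm++ <target> --with-production    # e.g. mpi-linux-x86_64
  cd .. && ./config Linux-x86_64-g++ --charm-arch <target>
  cd Linux-x86_64-g++ && make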

First I started by compiling a multicore version. According to the notes,
this version lacks a network layer and so can only be used on a single
machine. It can, however, use the +p or +ppn options.
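
If it helps, this is what I built and ran for the multicore test:

  ./build charm++ multicore-linux64 --with-production
  ./namd2 +p16 <config>        # uses all 16 cores of a single node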

Next I compiled the MPI version and found that I could successfully run
"mpirun -n 80 namd2 <config>", but the log tells me:
"Info: Running on 80 processors, 80 nodes, 1 physical nodes."
This relatively basic version cannot use the +ppn option, as it is
non-SMP.
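
For this attempt the Charm++ target was the plain MPI one, without the
smp keyword, so every core gets its own single-threaded rank:

  ./build charm++ mpi-linux-x86_64 --with-production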

Okay, so I compiled a new MPI version with the smp option for charm++.
This time "mpirun -n 5 namd2 +ppn 16 <config>" works, but the log still
tells me:
"Info: Running on 80 processors, 5 nodes, 1 physical nodes."
So this time I guess it works by dividing the job into 5 processes, each
with 16 threads? But I do not understand why it still says "1 physical
nodes".

Generally, I would like NAMD 2.10 to divide itself across the assigned
nodes and then use the assigned processors/cores within each node for its
share of the work. That also means that, within each node, the job runs
the way the multicore version does; I assume this would be faster than
pure MPI (whether Intel MPI or OpenMPI) ranks communicating with each
other. So my questions are:
1) Is this the right approach to an optimized MPI version of NAMD 2.10?
and
2) How can I achieve it? (A sketch of the launch line I have in mind is
below.)
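
In case it clarifies question 2, the launch line I imagine is something
like the following (the --map-by flag is my guess from the OpenMPI man
page, and I understand the smp build also spawns one communication thread
per rank, hence +ppn 15 to leave it a core, though I may be wrong):

  # one SMP rank per node, across all 16 nodes
  mpirun -n 16 --map-by ppr:1:node namd2 +ppn 15 <config>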

Thanks in advance for any experience shared.

Regards,
Kevin
City University of Hong Kong
