Re: Running NAMD on Intel OmniPath

From: Giacomo Fiorin
Date: Thu Jul 27 2017 - 17:53:52 CDT

Hi David,

On Thu, Jul 27, 2017 at 5:36 PM, David Hardy <> wrote:

> Hi Giacomo,
> For InfiniBand, you do want verbs or ibverbs rather than MPI.
> However, OmniPath is supposedly incompatible with ibverbs.

These are both obvious.

> The release notes say that you can build verbs for OmniPath using
> --with-qlogic, but Jim and the Charm++ developers tell me that MPI with SMP
> gives better performance.

The problem with combining MPI+SMP is that the release notes argue against
it in two places: once in the "SMP builds" section that I already quoted,
and again further down:
*MPI-smp-CUDA is not recommended due to poor MPI-smp performance,*
and neither statement is specific to particular hardware.

Common sense suggests that a plain MPI build could be safer, not least
because it is much easier to use (mpirun namd2 <input>), whereas the SMP
options require careful allocation of threads, including CPU core affinity.
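To make the contrast concrete, here is a sketch of the two launch styles. The node counts, core counts, and input file name are illustrative assumptions, not taken from Maxime's cluster; +ppn and +setcpuaffinity are the standard NAMD SMP options.

```shell
# Plain (non-SMP) MPI build: one rank per core, nothing else to tune.
# Assumes 2 nodes x 24 cores (hypothetical) and input file sim.namd.
mpirun -np 48 namd2 sim.namd

# MPI-SMP build: fewer ranks, each driving several worker threads.
# One core per rank is left free for the Charm++ communication thread,
# and +setcpuaffinity pins threads to cores.
mpirun -np 4 -npernode 2 namd2 +ppn 11 +setcpuaffinity sim.namd
```

The second line is where the "careful allocation" comes in: ranks per node, threads per rank, and affinity all have to be chosen to match the hardware, whereas the first line needs none of that.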

Maxime: be aware that you may be using a combination of hardware that is
not extremely friendly to Charm++, and you may not be able to use all CPU
cores and the GPUs.

I would suggest running with a single MPI task per GPU (using mpirun
-npernode) and gradually increasing the number of tasks per node until the
performance settles.
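A small sketch of that scan, assuming (hypothetically) 24 CPU cores per node and reserving one core per task for the Charm++ communication thread; it only prints the candidate launch lines rather than running them, so the core count and input name (apoa1.namd) are placeholders to adapt:

```shell
#!/bin/sh
# Print one candidate mpirun/namd2 line per tasks-per-node value.
# Assumed: 24 cores per node; +ppn gets the cores not used by the
# per-task communication thread. Adjust both to your machine.
cores=24
for npernode in 2 4 6 12; do
  ppn=$(( cores / npernode - 1 ))  # worker threads per MPI task
  echo "mpirun -npernode $npernode namd2 +ppn $ppn +setcpuaffinity apoa1.namd"
done
```

Running each printed line and comparing the ns/day figures in the NAMD log is then the actual benchmark.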


> Best regards,
> Dave
> On Jul 27, 2017, at 2:33 PM, Giacomo Fiorin <>
> wrote:
> Hi David, can you clarify the following from the release notes?
> *MPI-based SMP builds have worse performance than verbs or ibverbs and are
> not recommended, particularly for GPU-accelerated builds*.
> I don't have an OmniPath machine accessible to test whether this is true,
> but it is possibly a concern.
> Giacomo

Giacomo Fiorin
Associate Professor of Research, Temple University, Philadelphia, PA
Contractor, National Institutes of Health, Bethesda, MD

This archive was generated by hypermail 2.1.6 : Mon Dec 31 2018 - 23:20:28 CST