NAMD Wiki: NamdOnOmniPath

Intel Omni-Path adapters use InfiniBand fabric, but most protocol processing is done on the host CPU (an "onload" model) rather than being offloaded to the adapter as with Mellanox adapters.

While Omni-Path adapters can use the Charm++ verbs and ibverbs targets if the --with-qlogic option is passed to the Charm++ build command, performance is lower than for simple MPI builds, even for smp builds (which are necessary for KNL or GPU-accelerated builds). This is likely because the native Omni-Path PSM2 API is designed to allow very efficient MPI implementations.
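
As a rough sketch (the Charm++ target names and the --with-production option below are typical examples chosen here, not prescribed by this page; check the Charm++ documentation for your machine), the two kinds of builds might look like:

  # MPI smp build, generally the better performer on Omni-Path
  ./build charm++ mpi-linux-x86_64 smp --with-production

  # verbs build with the --with-qlogic option; works, but is typically slower
  ./build charm++ verbs-linux-x86_64 smp --with-qlogic --with-production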

Because less work is offloaded to the adapter, more processes per host (i.e., more communication threads) may be needed for multi-host runs.
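
For illustration only (node size, process count, and thread counts below are assumptions, and -ppn is an Intel MPI/Hydra-style option), a launch of an MPI smp build that places several processes, and therefore several communication threads, on each host might look like:

  # 2 nodes, 4 processes per node; each process runs 15 worker threads
  # plus 1 communication thread, filling a 64-core node
  mpiexec -n 8 -ppn 4 namd2 +ppn 15 apoa1.namd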

See also NamdOnKNL.