performance of NAMD SMP MPI version

From: jing liang
Date: Thu May 25 2017 - 13:38:10 CDT


I am running the SMP+MPI version of NAMD on a cluster whose nodes have 28
cores each. My Slurm script looks like:

#SBATCH -n 56

mpiexec -np 2 namd2 +ppn 27 input_file > output_file

With this script, I intend to run two processes on two different nodes and
use 27 threads for each process.
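For reference, a fuller batch-script variant I considered, which should pin one MPI rank per node (a sketch, assuming this Slurm installation honors -N, --ntasks-per-node, and --cpus-per-task as documented; input_file and output_file are placeholders):

```shell
#!/bin/bash
# Request 2 nodes with exactly one MPI rank per node (assumed Slurm options).
#SBATCH -N 2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=28

# One rank per node; 27 worker threads + 1 comm thread fill the 28 cores.
mpiexec -np 2 namd2 +ppn 27 input_file > output_file
```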
Then, I get the output:

Charm++> Running on MPI version: 3.1
Charm++> level of thread support used: MPI_THREAD_FUNNELED (desired:
Charm++> Running in SMP mode: numNodes 2, 27 worker threads per process
Charm++> The comm. thread both sends and receives messages
Charm++> Using recursive bisection (scheme 3) for topology aware partitions
Warning> Randomization of stack pointer is turned on in kernel, thread
migration may not work! Run 'echo 0 > /proc/sys/kernel/random' as root to
disable it, or try run with '+isomalloc_sync'.
CharmLB> Load balancer assumes all CPUs are same.
Charm++> Running on 1 unique compute nodes (28-way SMP).
Charm++> cpu topology info is gathered in 0.056 seconds.

Charm++> Warning: the number of SMP threads (56) is greater than the number
of physical cores (28), so threads will sleep while idling. Use
+CmiSpinOnIdle or +CmiSleepOnIdle to control this directly.

Info: NAMD 2.12 for Linux-x86_64-MPI-smp

The warning that the number of SMP threads (56) is greater than the number of
physical cores (28), together with the line "Running on 1 unique compute
nodes", suggests that both MPI processes were placed on the same node. Does
this mean NAMD (or Slurm) is not recognizing the number of MPI processes I
requested?
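A quick sanity check on the thread arithmetic (assuming, as the Charm++ output above implies, that each SMP process runs its +ppn worker threads plus one communication thread):

```python
# Each SMP process runs ppn worker threads plus one communication thread.
ppn = 27          # worker threads per process (+ppn 27)
processes = 2     # mpiexec -np 2
threads_total = processes * (ppn + 1)
cores_per_node = 28

print(threads_total)                   # -> 56, matching the Charm++ warning
print(threads_total > cores_per_node)  # -> True: oversubscribed if both
                                       #    ranks share one 28-core node
```

So the 56 threads in the warning are only a problem because both ranks landed on the same 28-core node; spread across two nodes, 28 threads per node would fit exactly.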

Changing the number of nodes (-N) and the number of threads per process
(+ppn) does not yield any significant performance improvement in my
simulation. In fact, the pure SMP build of NAMD running on 28 cores (1 node)
outperforms the SMP-MPI build running on 3 nodes.
Most probably I am missing something about how to run this version of NAMD,
or my installation is not working correctly.
Any comment is welcome,


This archive was generated by hypermail 2.1.6 : Mon Dec 31 2018 - 23:20:19 CST