Always 24-way SMP?

From: Andrew Pearson (andrew.j.pearson_at_gmail.com)
Date: Fri Feb 22 2013 - 12:29:58 CST

I'm investigating scaling problems with NAMD. I'm running the precompiled
linux-64-tcp binaries on a Linux cluster with 12-core nodes, launching with
"charmrun +p $NPROCS ++mpiexec".
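
For context, a minimal sketch of the full command I'm launching is below;
the namd2 path, config file name, and process count are placeholders rather
than my exact setup:

    # hypothetical example: 4 nodes x 12 cores = 48 processes
    NPROCS=48
    charmrun +p $NPROCS ++mpiexec /path/to/namd2 config.namd > run.log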

I know scaling problems have been covered before, but I can't find the
answer to my specific question. No matter how many cores I use or how many
nodes they are spread over, at the top of stdout Charm++ always reports
"Running on # unique compute nodes (24-way SMP)". It gets # correct, but it
is always 24-way SMP. Is this expected behavior? If so, why?

Everyone seems to say that you should recompile NAMD against your own MPI
library, but I don't seem to have any problems running NAMD jobs to
completion with charmrun and OpenMPI built with the Intel compilers (except
for the scaling). Could using the precompiled binaries be the cause of the
scaling problems?
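
For what it's worth, my understanding of what that recompile would roughly
involve is sketched below; the charm directory name and the choice of icc
are assumptions based on my toolchain, not steps I have actually verified:

    # inside the NAMD source tree, with the bundled charm tarball untarred
    cd charm-<version>
    ./build charm++ mpi-linux-x86_64 icc --with-production
    cd ..
    ./config Linux-x86_64-icc --charm-arch mpi-linux-x86_64-icc
    cd Linux-x86_64-icc
    make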

Thank you.
