Re: Always 24-way SMP?

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Mon Feb 25 2013 - 00:55:26 CST

Hi Andrew,


It's not really something someone else can answer for you; you should be able
to tell from your own benchmarks whether you have a scaling problem or not.
The line in the output file just comes from the Charm++ startup and is simply
information about the underlying hardware. It doesn't mean NAMD is running in
SMP mode; it just tells you that the node is a multiprocessor/multicore
machine. Watch the output carefully and you will see, IMHO, that it uses the
right number of CPUs (see, for example, the Benchmark lines). So what kind of
scaling problems do you have? Are you not getting the expected speedup?
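
A quick way to check this, assuming your NAMD stdout was redirected to a file
(run.log below is just a placeholder name):

   grep "Running on" run.log        # Charm++ startup line: nodes and cores per node of the hardware
   grep "Benchmark time" run.log    # NAMD Benchmark lines: number of CPUs actually used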


Norman Geist.


From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On Behalf
Of Andrew Pearson
Sent: Friday, February 22, 2013 19:30
To: namd-l_at_ks.uiuc.edu
Subject: namd-l: Always 24-way SMP?


I'm investigating scaling problems with NAMD. I'm running the precompiled
linux-64-tcp binaries on a Linux cluster with 12-core nodes, using "charmrun
+p $NPROCS ++mpiexec".
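
A rough sketch of such a launch in a PBS-style job script (the resource
request, input file, and log file names below are placeholders, not the exact
script):

   #!/bin/bash
   #PBS -l nodes=2:ppn=12              # example request: two 12-core nodes
   cd "$PBS_O_WORKDIR"
   NPROCS=$(wc -l < "$PBS_NODEFILE")   # PBS lists one line per allocated core (24 here)
   charmrun +p $NPROCS ++mpiexec namd2 input.namd > output.log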

I know scaling problems have been covered before, but I can't find the answer
to my specific question. No matter how many cores I use or how many nodes
they are spread over, at the top of stdout Charm++ always reports "Running on
# unique compute nodes (24-way SMP)". It gets # right, but it is always
24-way SMP. Is it supposed to be this way? If so, why?

Everyone seems to say that you should recompile NAMD against your own MPI
library, but I don't seem to have problems running NAMD jobs to completion
with charmrun + OpenMPI built with the Intel compilers (except for the
scaling). Could using the precompiled binaries cause scaling problems?

Thank you.
