From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Thu Aug 28 2014 - 05:25:13 CDT
please let me first say that if you phrase your question this poorly, you may get a similarly phrased answer, or none at all.
The limit you are asking for is of course the number of cores your machines have.
total_cores = x * y
Where x is the number of processes and y the number of threads per process. Processes show up in top as individual lines, whereas threads appear as multiples of 100% utilization. So a process with 6 threads would show 600% CPU usage at full utilization.
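As a minimal sketch of the arithmetic above (plain Python, not part of NAMD or charmrun; the function name is purely illustrative):

```python
# Illustrative helper (not from NAMD/charmrun): total cores consumed
# by x processes running y threads each.
def total_cores(processes, threads_per_process):
    return processes * threads_per_process

# e.g. 4 processes x 4 threads each fully occupy 16 cores,
# and 2 processes x 8 threads each occupy the same 16 cores.
print(total_cores(4, 4))  # 16
print(total_cores(2, 8))  # 16
```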
Please also read up on the difference between "distributed memory parallelism" (processes) and "shared memory parallelism (smp)" (threads).
There may be an optimum ratio of processes to threads for utilizing all the available cores.
Processes are started in round-robin fashion over the entries in the nodelist file until x processes have been launched; each process then tries to fork y threads. The total x * y shouldn't exceed the number of cores available.
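The round-robin placement described above (which also answers the question of what happens when x isn't divisible by the number of nodes) can be sketched as follows. This is an illustrative model only, not charmrun's actual implementation; the function and node names are made up:

```python
# Hypothetical model of round-robin process placement over a nodelist
# (illustrative only; not charmrun source code).
def assign_processes(nodes, num_processes):
    """Return a dict mapping each node to its process count."""
    placement = {node: 0 for node in nodes}
    for i in range(num_processes):
        # cycle through the nodelist entries in order
        placement[nodes[i % len(nodes)]] += 1
    return placement

# 5 processes over 2 nodes: the division is uneven, so the earlier
# nodelist entry ends up with one extra process.
print(assign_processes(["node0", "node1"], 5))
```

With an uneven division, the nodes earliest in the nodelist file simply receive one process more than the rest.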
If the above doesn't contain the answer you were looking for, please feel free to rephrase your question carefully and in enough detail that someone can actually figure out what you want to know, and is actually willing to read it.
From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On behalf of ukulililixl
Sent: Thursday, August 28, 2014 11:31
Subject: namd-l: ibverb&&smp build NAMD
For example: 'charmrun ++nodelist *.nodelist namd2 +idlepoll +p16 ++ppn 4 *.namd'
I have run 'charmrun ++nodelist *.nodelist namd2 +idlepoll +px ++ppn y *.namd' many times, and I find that for some values of x and y the program can't run.
Is there an upper limit for x or y?
In my experiments, I used top to see how many processes were running, and it was fewer than 10 or 11 per node (every node has 16 cores).
Another question: if x isn't divisible by y or by the number of nodes, how are PEs allocated?
This archive was generated by hypermail 2.1.6 : Thu Dec 31 2015 - 23:21:10 CST