From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Mon Jan 02 2012 - 00:44:13 CST
Hi Neelanjana,
It's really hard to make predictions about this. I think the best is to just
try it out. Usually, the bigger the system, the better the scaling, because
there is proportionally less inter-process communication: the messages sent
are larger and less frequent. This means that while a smaller system would
already stop scaling because of too-high latency, a bigger system won't at
the same number of nodes/CPUs/GPUs. As long as your RAM is sufficient and
the file formats are not at their format limits (PDB is horrible for that,
especially with AmberTools), you can make a system as big as you want, and
it should scale if your configuration works. But keep in mind that the
bigger the system, the longer the relaxation times and the time per step.
That means you need more time per step and even more simulation steps to see
interesting things, which raises the cost on both counts.
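The surface-to-volume argument above can be illustrated with a toy strong-scaling model (my own sketch, not NAMD's actual communication scheme): per-step time is the perfectly divided compute work plus a fixed latency term plus a halo-exchange term that shrinks as each rank's share of atoms shrinks. All the constants here are made-up illustrative values.

```python
# Toy strong-scaling model (hypothetical constants, not real NAMD timings).
# step time = compute/p + latency + surface term ~ (N/p)^(2/3) per rank.

def step_time(n_atoms, p, t_atom=1e-6, latency=5e-5, t_surface=1e-7):
    compute = n_atoms * t_atom / p                      # divided work
    comm = latency + t_surface * (n_atoms / p) ** (2.0 / 3.0)  # halo exchange
    return compute + comm

def efficiency(n_atoms, p):
    # parallel efficiency relative to one rank
    return step_time(n_atoms, 1) / (p * step_time(n_atoms, p))

for n in (20_000, 2_000_000):
    print(n, {p: round(efficiency(n, p), 2) for p in (1, 16, 256)})
```

In this model the small 20k-atom system loses a large fraction of its efficiency by 256 ranks (the latency term dominates), while the 2M-atom system stays close to ideal, which is the behavior described above.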
If your aim is a parallel benchmark, it doesn't make much sense to use a
pointlessly big system. You should benchmark with everyday cases to get
reliable expectations of how your cluster scales. IMHO it's fine to crunch a
really big and realistic system, but the whole analysis and handling is
mostly "not so comfortable".
Hope that helps
Norman Geist.
This archive was generated by hypermail 2.1.6 : Mon Dec 31 2012 - 23:21:06 CST