From: Giacomo Fiorin (gfiorin_at_seas.upenn.edu)
Date: Fri Sep 11 2009 - 11:11:12 CDT
Hi Amr, it also depends a lot on the speed of the network connection
between the nodes you're using. If the network is fast, NAMD may still
scale up to that point, but if it's too slow, performance drops off much
earlier. You should look at the benchmarks page on the NAMD website and
check the performance on the cluster or supercomputer most similar to
the one you use.
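As a rough sketch of the atoms-per-CPU rule of thumb quoted in Amr's message (the 500-1000 atoms/core figures come from his email, not from official NAMD documentation, so treat this purely as a back-of-envelope heuristic):

```python
# Back-of-envelope estimate of how many cores are likely useful for a
# NAMD run, given the ~500-1000 atoms/core rule of thumb from this
# thread. Heuristic only; real scaling depends on the interconnect,
# output frequency, and the machine (see the NAMD benchmarks page).

def max_useful_cores(n_atoms, atoms_per_core=500):
    """Beyond roughly this core count, adding CPUs helps little."""
    return max(1, n_atoms // atoms_per_core)

n_atoms = 16000  # Amr's system size
print(max_useful_cores(n_atoms, atoms_per_core=1000))  # conservative: 16
print(max_useful_cores(n_atoms, atoms_per_core=500))   # optimistic: 32
```

At 100 CPUs this system would have only ~160 atoms per core, which is consistent with the 10-20% CPU utilization Amr observes.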
Giacomo
---- ----
Giacomo Fiorin
ICMS - Institute for Computational Molecular Science
Temple University
1900 N 12th Street, Philadelphia, PA 19122
work phone: (+1)-215-204-4216
mobile: (+1)-267-324-7676
mail: giacomo.fiorin_at_gmail.com
---- ----
On Fri, Sep 11, 2009 at 9:22 AM, Amr Zeinalabideen Majul <amajul_at_gmu.edu> wrote:
> Hello all,
>
> I'm simulating a small system of 16,000 atoms on different numbers of CPUs to get an idea of the performance increase. I noticed that at 100 CPUs, only 10-20 percent of each CPU is being used.
>
> I searched around and found estimates that each CPU can handle a minimum of around 500 to 1000 atoms. I was wondering if this is a reliable way of roughly estimating how many CPUs I need, and whether it is indeed the reason my CPUs are not working at maximum efficiency. If each CPU can handle 500 atoms with linear scaling, it would seem that more than 32 CPUs isn't much help. Does that sound correct?
>
> Or could it be a communication issue between the CPUs? How much would the frequency of output to the various files affect the wall-clock time of the simulation if I am using Ethernet to communicate between the nodes?
>
> Thanks,
> Amr Majul
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:53:16 CST