From: Amr Zeinalabideen Majul (amajul_at_gmu.edu)
Date: Fri Sep 11 2009 - 08:22:08 CDT
I'm simulating a small system of 16,000 atoms on varying numbers of CPUs to get an idea of the performance scaling. I noticed that at 100 CPUs, only 10-20 percent of each CPU is actually being used.
Searching around, I found estimates that each CPU needs a minimum of around 500 to 1000 atoms to stay busy. I was wondering whether this is a reliable way of roughly estimating how many CPUs I need, and whether this is indeed the reason my CPUs are not working at maximum efficiency. If each CPU can handle 500 atoms with linear scaling, it would seem that more than 32 CPUs isn't much help. Does that sound correct?
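As a back-of-the-envelope check (treating the 500-1000 atoms/CPU figure as a rule of thumb, not an official NAMD number), the arithmetic behind the question can be sketched as:

```python
def max_useful_cpus(n_atoms: int, atoms_per_cpu: int = 500) -> int:
    """Largest CPU count before each CPU drops below the assumed
    minimum workload of `atoms_per_cpu` atoms (heuristic, not exact)."""
    return n_atoms // atoms_per_cpu

# For the 16,000-atom system in question:
print(max_useful_cpus(16000))        # 32 CPUs at 500 atoms/CPU
print(max_useful_cpus(16000, 1000))  # 16 CPUs at 1000 atoms/CPU
```

So under that heuristic, 100 CPUs leaves only 160 atoms per CPU, well below the suggested minimum, which is consistent with the low utilization observed.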
Or could it be a communication issue between the CPUs? How much would the frequency of output to the various files affect the wallclock time of the simulation if I am using Ethernet to communicate between the nodes?
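The communication question can be illustrated with a toy model (the cost values here are invented for illustration, not measured on any real cluster): per-step compute time divides across CPUs, but a fixed per-step communication cost, such as Ethernet latency, does not, so efficiency falls as CPU count grows.

```python
def efficiency(p: int, t_serial: float = 1.0, t_comm: float = 0.02) -> float:
    """Parallel efficiency under a toy fixed-overhead model:
    per-step time = t_serial / p + t_comm, where t_comm is a
    communication cost that does not shrink with more CPUs."""
    t_parallel = t_serial / p + t_comm
    return t_serial / (p * t_parallel)

for p in (8, 32, 100):
    print(p, round(efficiency(p), 2))
```

With these made-up numbers, efficiency drops steadily with CPU count, showing how a constant network cost can leave 100 CPUs mostly idle even when 8 or 32 scale well. Frequent file output behaves similarly, since it adds a per-step cost that more CPUs cannot reduce.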
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:51:28 CST