From: Cesar Luis Avila (cavila_at_fbqf.unt.edu.ar)
Date: Tue Mar 27 2007 - 14:33:35 CDT
What did you mean by "the system explodes"? Do your simulations get
stuck at a random step, so that they need to be restarted from a
previous point? Perhaps you are suffering from data corruption on some nodes.
Marcos Sotomayor wrote:
> Hi Dong,
> There are two different issues that you should carefully analyze.
> First, the system should not "explode" on 1 or 32 CPUs. In fact, it
> should be stable on any reasonable number of CPUs. Otherwise, there is
> something wrong with your binary, hardware, and/or system.
> Second, in order to know whether the number of processors you are
> using is adequate for the system size, you may want to run a set of
> benchmark simulations using different numbers of CPUs and compare the
> ns/day you get with what you would get using only one processor.
> Usually 1000 atoms per processor works fine, but you need to run the
> mentioned benchmarks to be sure.
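[The benchmark comparison described above can be sketched as a short script. The ns/day figures below are hypothetical placeholders; in practice you would read them from the timing output of your own NAMD runs:]

```python
# Sketch of the CPU-count benchmark comparison described above.
# The ns/day values are hypothetical placeholders, not real measurements.
ns_per_day = {1: 0.10, 4: 0.38, 8: 0.71, 16: 1.20, 32: 1.55}

base = ns_per_day[1]  # single-processor throughput is the reference
for ncpus in sorted(ns_per_day):
    speedup = ns_per_day[ncpus] / base
    efficiency = speedup / ncpus  # 1.0 would be ideal linear scaling
    print(f"{ncpus:3d} CPUs: speedup {speedup:5.2f}, efficiency {efficiency:5.1%}")
```

[A sharp drop in efficiency as CPUs are added is the sign that the system is too small for that processor count.]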
> On Tue, 27 Mar 2007, Dong Luo wrote:
>> Parallel simulation in NAMD is based on spatial
>> decomposition, which is not a problem for large systems.
>> However, when dealing with a small system, such as a short
>> peptide in a water box, running the simulation on more
>> than 32 CPUs may not give a good spatial distribution.
>> In my experience with a heat-annealing simulation of
>> a 22-residue peptide in a water box, the system exploded
>> easily on 32 CPUs but stayed quite stable on only one.
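[As a rough illustration of why such a small system parallelizes poorly, one can estimate the atoms-per-CPU count. The atom numbers below are hypothetical guesses for a 22-residue peptide in a small water box, not taken from the original simulation:]

```python
# Hypothetical atom counts for a small peptide in a water box.
peptide_atoms = 350   # ~22 residues, rough estimate
water_atoms = 6000    # small solvation box, rough estimate
total_atoms = peptide_atoms + water_atoms

for ncpus in (1, 8, 32):
    per_cpu = total_atoms / ncpus
    print(f"{ncpus:2d} CPUs -> ~{per_cpu:.0f} atoms/CPU")
# At 32 CPUs each processor handles only ~200 atoms, well below the
# ~1000 atoms/processor guideline mentioned elsewhere in this thread.
```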
>> My question is: is there a rough relationship
>> between system size and CPU count that we can follow?
>> Dong Luo
>> Department of Physiology and Biophysics
>> Boston University
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:44:30 CST