From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Mon Sep 10 2012 - 08:54:50 CDT
On Mon, Sep 10, 2012 at 3:39 PM, Marc Gordon <marcgrdn55_at_gmail.com> wrote:
> What do you mean by "the tiny size of the system make almost all of the
> other discussions pointless. there is no way to make this kind of system
> scale beyond a few threads."?
> You mean I should keep my numsteps low? There is no need to crank it up so
> high? This is where this instability is coming from?
No, that's not it. You can only parallelize if you have work to distribute in parallel, and only if the overhead of managing that work is reasonably small (since that part runs serially). Have a look at Amdahl's law.
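To make the point concrete, here is a minimal sketch of Amdahl's law in Python; the 25% serial fraction is an illustrative assumption, not a measured number for any particular system:

```python
def amdahl_speedup(serial_fraction, n_threads):
    """Amdahl's law: achievable speedup is capped by the serial
    fraction of the work, no matter how many threads you add."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_threads)

# For a tiny system, management overhead (the serial part) is large.
# With a hypothetical 25% serial fraction, speedup saturates below 4x:
for p in (1, 2, 4, 8, 64):
    print(p, round(amdahl_speedup(0.25, p), 2))
# 1 threads -> 1.0, 2 -> 1.6, 8 -> 2.91, 64 -> 3.82 (limit: 4.0)
```

This is why a few hundred atoms cannot keep many threads busy: the fixed per-step bookkeeping dominates, and adding threads quickly stops helping.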
Also, many of the tricks used to get good scaling try to turn what is originally an O(N**2) problem into something as close as possible to an O(N) problem. For a few hundred atoms or less, however, this is often not worth the effort. You can see a rather basic discussion of these effects in the PDF attached to this page on my home page:
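The crossover can be sketched with a toy operation-count model; the per-atom overhead constant below is made up purely for illustration, since real prefactors depend on the method and implementation:

```python
import math

def direct_cost(n):
    """All-pairs interactions: O(N**2), but with a tiny prefactor."""
    return n * n

def clever_cost(n, overhead=50):
    """A hypothetical O(N log N) scheme with per-atom bookkeeping
    overhead (an assumed constant, chosen only for illustration)."""
    return overhead * n * max(math.log2(n), 1.0)

# The "smart" algorithm only pays off once N is large enough:
for n in (100, 1000, 10000):
    winner = "direct" if direct_cost(n) < clever_cost(n) else "clever"
    print(n, winner)
```

For small N the brute-force O(N**2) sum wins outright, which is exactly why these scaling tricks buy nothing for a few hundred atoms.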
For a system with so few atoms, you will probably get better performance by not doing any kind of long-range electrostatics (which would mostly be computing vacuum) and instead using a plain Coulomb cutoff long enough to cover the entire extent of your molecule.
A while ago I ran a simulation of a small water droplet (with LAMMPS, not NAMD, but that doesn't matter in this case), and using a long cutoff instead of long-range electrostatics made the simulation *much* faster *and* perfectly accurate, since all interactions were fully accounted for; no scaling or shifting was needed. Mind you, this only holds true for very small systems...
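Picking such a cutoff just means finding the largest atom-atom distance in the system and adding some margin. A minimal sketch (the coordinates and the 2 Angstrom padding are invented for the example):

```python
import itertools
import math

def min_covering_cutoff(coords, padding=2.0):
    """Smallest Coulomb cutoff (largest pair distance plus a safety
    padding) that spans every atom pair, so a plain cutoff misses no
    interaction in a small vacuum system."""
    diameter = max(math.dist(a, b)
                   for a, b in itertools.combinations(coords, 2))
    return diameter + padding

# hypothetical coordinates (Angstrom) for a tiny molecule
atoms = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
print(min_covering_cutoff(atoms))  # diameter 5.0 + 2.0 padding = 7.0
```

With a cutoff chosen this way, every pairwise Coulomb term is summed exactly, which is why no shifting or scaling of the potential is required.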
--
Dr. Axel Kohlmeyer  akohlmey_at_gmail.com  http://goo.gl/1wk0
International Centre for Theoretical Physics, Trieste, Italy.
This archive was generated by hypermail 2.1.6 : Mon Dec 31 2012 - 23:22:03 CST