Re: explicit NVT simulation

From: Kenno Vanommeslaeghe (kvanomme_at_rx.umaryland.edu)
Date: Mon Sep 09 2013 - 13:46:20 CDT

On 09/09/2013 02:02 PM, Axel Kohlmeyer wrote:
> - and you need to enforce a consistent rounding mode that does not
> allow denormalized numbers. otherwise you can still have some
> randomness creep in when you have numbers that are very, very close
> to but not quite zero (like 1e-300). this one, however, is extremely
> unlikely. but then again, if you run with a very large number of
> atoms...

That shouldn't matter. As long as the CPU is built from binary logic gates
and is not buggy or defective, a calculation with a denormalized result
(or even a signed zero) should always yield the same denormalized result /
signed zero. Again, provided that the CPU model is the same.
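
To make that concrete, here is a minimal C sketch (my own illustration,
not CHARMM/NAMD code; it assumes an IEEE-754-conforming CPU and a compiler
that does not enable flush-to-zero by default). The product 1e-300 * 1e-20
falls far below DBL_MIN into the subnormal range, yet the bits come out
the same on every run:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* The product (~1e-320) is far below DBL_MIN (~2.2e-308),
           so it is a subnormal ("denormalized") double; volatile
           blocks compile-time constant folding. */
        volatile double a = 1e-300, b = 1e-20;
        double c = a * b;
        printf("c = %g, subnormal: %s\n", c,
               fpclassify(c) == FP_SUBNORMAL ? "yes" : "no");
        return 0;
    }

Only a flush-to-zero / denormals-are-zero mode that differs between runs
(or between CPU models) would change the result here.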

> - and if you run on a multi-core CPU, you have to set processor
> affinity so that the process doesn't get bounced around to a different
> CPU core.

Same thing: it shouldn't matter as long as all the cores have the same
FPUs. Maybe one day heterogeneous CPUs will invalidate this assumption,
but for now, we can perfectly well get reproducible trajectories from a
single-core CHARMM job that bounces around between cores.
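
If one wanted to check this empirically, a Linux-only sketch like the
following (sched_setaffinity and the assumption of at least two cores
are mine) pins the same summation to two different cores and compares
the bits:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Toy stand-in for a force/energy sum whose bits we compare. */
    static double work(void) {
        double s = 0.0;
        for (int i = 1; i <= 1000000; i++)
            s += 1.0 / (double)i;
        return s;
    }

    int main(void) {
        double ref = 0.0;
        for (int core = 0; core < 2; core++) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(core, &set);
            if (sched_setaffinity(0, sizeof set, &set) != 0) {
                perror("sched_setaffinity");
                continue;
            }
            double r = work();
            if (core == 0) ref = r;
            printf("core %d: %.17g (%s)\n", core, r,
                   r == ref ? "identical" : "DIFFERENT");
        }
        return 0;
    }

On a homogeneous multi-core CPU, both lines print the same 17 digits.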

> - and you have to make certain that there are no actions that are
> triggered by the amount of time passed or other unexpected
> things like signal handlers etc.

For this to matter, these time-triggered actions / signal handlers would
have to do something rather tricky, namely change the order in which the
simulation's associative math is done (or consume random numbers from the
same PRNG as the simulation). I'm pretty sure this is not the case for
CHARMM, but I'm less sure about NAMD.
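
To illustrate what such a tricky interaction would look like, here is a
deliberately contrived C sketch (made-up code, not anything CHARMM or
NAMD actually does): a wall-clock-triggered handler that draws from the
same stream as the main loop shifts every number the loop sees after the
alarm fires:

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Deliberately does the bad thing: rand() is not
       async-signal-safe, and the handler steals one number from
       the stream the "simulation" below is consuming. */
    static void handler(int sig) {
        (void)sig;
        (void)rand();
    }

    int main(void) {
        srand(42);
        signal(SIGALRM, handler);
        alarm(1);   /* fires after ~1 s of wall-clock time, at a
                       point unrelated to the step count */
        for (int step = 0; step < 5; step++) {
            printf("step %d draws %d\n", step, rand());
            sleep(1);
        }
        return 0;
    }

Run it with and without the alarm(1) line: the draws from step 1 onward
differ, which is exactly the kind of divergence that would wreck
reproducibility.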

> at this point, writing an MD kernel in fixed point math almost seems
> like the easier undertaking... ;-)

Let's say that in an imaginary world where reproducibility of the exact
trajectories were important, the assumptions in my previous message would
not be reasonable/convenient, so one would be forced either to write MD
kernels in fixed point, or to use something like STREFLOP and eliminate
all sources of differences in associative math. Which one would be
*easier* is not a trivial question. As I understand from reading very old
papers, it took a while for the MD community to master the numerical
analysis aspects of what they were doing and satisfactorily mitigate
loss-of-significance issues. This whole exercise would need to be repeated
in a fixed-point context, and it would presumably be even more difficult.
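
For what it's worth, the reproducibility half of the fixed-point argument
is easy to sketch (the 2^32 scale factor below is an arbitrary choice of
mine): integer addition is exactly associative, so the summation order
cannot introduce differences, whereas the same sum in doubles depends on
the grouping:

    #include <stdint.h>
    #include <stdio.h>

    #define SCALE 4294967296.0   /* 2^32 "ticks" per unit */

    static int64_t to_fix(double x) { return (int64_t)(x * SCALE); }

    int main(void) {
        double f[3] = { 1.0, 1e-16, -1.0 };

        /* Fixed point: both groupings are bit-identical. */
        int64_t s1 = (to_fix(f[0]) + to_fix(f[1])) + to_fix(f[2]);
        int64_t s2 = to_fix(f[0]) + (to_fix(f[1]) + to_fix(f[2]));
        printf("fixed:  %s\n", s1 == s2 ? "identical" : "different");

        /* Floating point: (a+b)+c != a+(b+c) in general. */
        double d1 = (f[0] + f[1]) + f[2];
        double d2 = f[0] + (f[1] + f[2]);
        printf("double: %s\n", d1 == d2 ? "identical" : "different");
        return 0;
    }

(Note how the same sketch also shows the cost: the 1e-16 contribution is
silently truncated to zero at this fixed scale.) The hard part, per the
above, is not the associativity but redoing the numerical analysis with a
fixed scale instead of a floating exponent.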

But I presume that's part of the reason why you said "almost"? ;-)
