Re: CUDA vs non-CUDA for minimization

From: Kenno Vanommeslaeghe
Date: Mon Oct 21 2013 - 11:34:26 CDT

On 10/19/2013 03:28 AM, Francesco Pietra wrote:
> Good for you if you find the time to search among archives

*You*'re the one asking a question, so the burden of searching the
archives (with the help of Google of course) is on you. Remember *we* are
not getting paid for answering questions. We're all busy scientists who
are helping you on a voluntary basis. Even if you feel your time is more
valuable than ours, consider this: we are the people who put together the
tools you're using. Due to the popularity of these tools, we get many
questions such as yours every day. If all of our users valued their
own time more than ours and didn't do their homework before posting, we
would be so swamped with trivial questions that we would be forced to either:
  * stop answering questions altogether
  * not have time to fix problems or add features to the programs you're using
  * give rude and unhelpful answers to discourage people from wasting our time

> If you read better, my message was to remark that the situation needs
> attention, not as far as minimization is concerned (for which I have an
> alternative) but from what in general comes out from graphic cards.

If *you* read better, you would have noticed that the GPU code is known to
overflow if local contributions to the energy become very high. As I
understand it, this is inherent to the use of single-precision math on the
GPU, and cannot be addressed without compromising the GPU code's speed
advantage. Or, to quote Axel (in the post I asked you to read): "you don't
expect a race car to have an automatic transmission, or do you?"

Now, as explained by Axel and alluded to in my previous post, this
overflowing becomes vanishingly unlikely in normal, well-behaved MD
simulations, so (in my understanding) the GPU code can be fully
trusted for run-of-the-mill MD work on properly prepared and relaxed
systems, and probably also for some of the more advanced techniques that
don't introduce high forces into the system. For anything else, one could
indeed deem it prudent to stay away from the GPU code. OTOH, the
overflowing will fatally destabilize the simulation in most cases, so
there's little risk of triggering it and continuing the simulation without
noticing.

This archive was generated by hypermail 2.1.6 : Wed Dec 31 2014 - 23:21:48 CST