From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Wed Sep 11 2013 - 02:14:13 CDT
On Wed, Sep 11, 2013 at 8:31 AM, James Starlight <jmsstarlight_at_gmail.com> wrote:
> Axel, thanks for suggestions!
>
> So, as I understood it, GPU NAMD uses ONLY single precision math for its
> calculations and the CPU version enables the use of DP, doesn't it?
neither statement is correct. both versions use single precision and
double precision math. e.g. the FFTs are single precision and, even in
the CUDA version, are done on the CPU. it is only that the CUDA version
uses single precision math in more places, since it is so much faster
on the GPU and its error is small for well behaved systems.
> Should I define some flag in the conf file or in the launch command to
> activate DP when I perform a simulation on the CPU, or is DP always
> turned on in such simulations?
you cannot change this unless you write your own MD code. besides, it
misses the point. your problem is not the single vs. double
precision issue; that only makes your problem more visible. your problem
is that you start with a bad initial configuration. if you improve
that, you will not have a problem with either the GPU or CPU version. if
your initial configuration were a bit worse, it would produce the very
same issues with the CPU version.
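as a rough sketch only (the file names, cutoffs, and step counts below are
placeholders and not taken from this thread), a separate CPU-side
minimization stage in the NAMD conf file could look something like this;
run it with a non-CUDA namd2 binary and feed the resulting coordinates
into the dynamics run:

  # hypothetical input files; replace with your own system
  structure        system.psf
  coordinates      system.pdb
  paraTypeCharmm   on
  parameters       par_all36_lipid.prm
  temperature      0

  exclude          scaled1-4
  1-4scaling       1.0
  cutoff           12.0
  switching        on
  switchdist       10.0
  pairlistdist     14.0
  # plus your periodic cell (cellBasisVector1/2/3) and PME settings

  outputname       min_cpu
  minimize         10000      ;# long conjugate-gradient minimization to relax bad contacts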
in short, you got away with it this time, but don't expect that to
work all the time, and you cannot blame the executable for the bad
data you feed it. as any alchemist can tell you, it is not easy to
turn dirt into gold. ;-)
axel.
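
p.s.: as a sketch of the same idea for the first equilibration stage
(again, the values are placeholders and not from this thread), start NPT
dynamics gently with a small timestep before switching to production
settings, e.g.:

  # continue from the minimized coordinates (hypothetical names);
  # same parameters / nonbonded / cell / PME settings as in the minimization stage
  structure             system.psf
  coordinates           system.pdb
  bincoordinates        min_cpu.coor   ;# binary coordinates written by the minimization stage
  temperature           310
  timestep              1.0            ;# small timestep for the first stage
  rigidBonds            all

  langevin              on
  langevinTemp          310
  langevinDamping       1

  langevinPiston        on
  langevinPistonTarget  1.01325
  langevinPistonPeriod  200
  langevinPistonDecay   100
  langevinPistonTemp    310
  useFlexibleCell       yes            ;# membrane-friendly barostat settings
  useConstantRatio      yes

  outputname            equil1
  run                   250000         ;# 0.25 ns before increasing the timestep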
> James
>
>
> 2013/9/10 Axel Kohlmeyer <akohlmey_at_gmail.com>
>>
>> On Tue, Sep 10, 2013 at 4:06 PM, James Starlight <jmsstarlight_at_gmail.com>
>> wrote:
>> > The problem was solved by running the minimization without CUDA (on the
>> > CPU only).
>> > So, as I said previously, CUDA-based NAMD has some problems with its
>> > minimization implementation.
>>
>> no. the problem is not understanding the implications of using the GPU
>> accelerated version.
>> GPU accelerated force kernels use single precision math and that will
>> much more easily overflow and create bogus data than with double
>> precision math. now, if you feed NAMD an initial structure with bad
>> contacts and then just depend on the minimizer to fix this, then you
>> must not use the cuda version. you don't expect a race car to have an
>> automatic transmission, or do you?
>>
>> a better solution is to identify the bad contacts and figure out how
>> they came to be, and then avoid this entire issue.
>> the next time around the situation may be worse and may not be resolved
>> with full double precision math either.
>>
>> axel.
>>
>>
>> >
>> >
>> > James
>> >
>> >
>> > 2013/9/9 James Starlight <jmsstarlight_at_gmail.com>
>> >>
>> >> Dear NAmd users!
>> >>
>> >> I've been faced with a serious problem equilibrating large systems
>> >> consisting mainly of hydrated lipids.
>> >>
>> >>
>> >> Briefly, I did a long minimization, after which I tried to equilibrate my
>> >> system in the NPT ensemble and got a crash at the beginning of the
>> >> equilibration phase with the errors listed below:
>> >>
>> >> colvars: Writing the state file "step6.1_equilibration.restart.colvars.state".
>> >> colvars: Synchronizing (emptying the buffer of) trajectory file
>> >> "step6.1_equilibration.colvars.traj".
>> >> ERROR: Constraint failure in RATTLE algorithm for atom 14039!
>> >>
>> >> ERROR: Constraint failure; simulation has become unstable.
>> >> ERROR: Constraint failure in RATTLE algorithm for atom 8959!
>> >>
>> >> ERROR: Constraint failure; simulation has become unstable.
>> >> ERROR: Constraint failure in RATTLE algorithm for atom 66!
>> >>
>> >> ERROR: Constraint failure; simulation has become unstable.
>> >> ERROR: Constraint failure in RATTLE algorithm for atom 15321!
>> >>
>> >> ERROR: Constraint failure; simulation has become unstable.
>> >> ERROR: Constraint failure in RATTLE algorithm for atom 13670!
>> >>
>> >> ERROR: Constraint failure; simulation has become unstable.
>> >> ERROR: Constraint failure in RATTLE algorithm for atom 9380!
>> >>
>> >> Here you can find an example of the bilayer which I've tried to simulate
>> >> in the GPU-based regime:
>> >> http://www.charmm-gui.org/?doc=input/membrane_only&time=1378378701
>> >>
>> >> Could someone tell me:
>> >>
>> >> 1) What additions should I make to the conf files to increase the
>> >> stability of the simulations in the case of GPU-based calculations?
>> >>
>> >> 2) How could I perform a simulation in CPU-only mode (with the GPU
>> >> turned off) with NAMD 2.7 compiled with CUDA support?
>> >>
>> >> Thanks for help
>> >> James
>> >>
>> >
>>
>>
>>
>> --
>> Dr. Axel Kohlmeyer akohlmey_at_gmail.com http://goo.gl/1wk0
>> International Centre for Theoretical Physics, Trieste. Italy.
>
>
--
Dr. Axel Kohlmeyer akohlmey_at_gmail.com http://goo.gl/1wk0
International Centre for Theoretical Physics, Trieste. Italy.