From: David Tanner (dtanner_at_ks.uiuc.edu)
Date: Wed Mar 21 2012 - 12:41:23 CDT
Along with the generalized Born portion of the implicit solvent comes the
SASA hydrophobic energy calculation. In NAMD and Amber, at least, this is
computed with the LCPO algorithm. In NAMD, LCPO is turned on by default; to
turn it off you need to include "lcpo off" in your config file. If LCPO is
being calculated in NAMD but not in Amber, that would explain the
performance discrepancy.
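For reference, a minimal sketch of the relevant config lines (GBIS and
alphaCutoff are the standard NAMD implicit-solvent keywords; the SASA/LCPO
switch is spelled below exactly as described above, so double-check that
keyword against the user guide for your NAMD build):

  # generalized Born implicit solvent (no explicit water, no PME)
  gbis          on
  alphaCutoff   15.0    ;# Born radius cutoff in Angstroms
  # turn off the LCPO SASA hydrophobic term (on by default)
  lcpo          off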
Thank you,
David E. Tanner
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
David E. Tanner
Theoretical and Computational Biophysics Group
3159 Beckman Institute
University of Illinois at Urbana-Champaign
405 N. Mathews
Urbana, IL 61801
http://www.linkedin.com/in/davidetanner
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On Tue, Mar 20, 2012 at 11:16 PM, Aron Broom <broomsday_at_gmail.com> wrote:
> Thanks for the 2.9 release, NAMD developers!
>
> Whatever changes were made, either in general or as part of the
> multicore-CUDA build, have been incredible for GPU-accelerated vacuum
> simulations, literally improving simulation speed by ~100-fold compared
> with 2.8.
>
> I am, however, wondering if I'm getting the expected performance for GB
> implicit solvation with CUDA:
>
> Running a control case of a protein (2k atoms) in a box of water (28k
> atoms) with PME, a 10 angstrom cutoff, and NVE, I get ~10 ns/day. That
> compares quite nicely against AMBER at ~20 ns/day and GROMACS-OpenMM at
> ~10 ns/day (all with 2 fs timesteps and no multistepping).
>
> If I then get rid of the water and switch to a GB implicit solvent model
> with the default settings, other than alphaCutoff being set to 15 angstroms
> in all 3 simulation packages, I see ~100 ns/day with AMBER and ~130 ns/day
> with GROMACS-OpenMM, 5-fold and 13-fold improvements. In NAMD, however, the
> simulation speed actually drops to ~6 ns/day. Given that vacuum simulations
> also compare reasonably well across the 3 packages, I'm wondering what the
> origin of that large difference is. I tested both without a nonbonded
> cutoff (as AMBER and GROMACS don't allow anything else) and with the
> recommended 16 angstrom cutoff, but there was no appreciable difference in
> NAMD speed between those two settings.
>
> Do you feel this kind of performance is as expected at this point in
> development (I realize the GB calculations were only just added to the GPU
> code with this release), or have you seen better numbers? Should I consider
> compiling from source or something else (at the moment I'm using the
> precompiled multicore-CUDA binaries)?
>
> Thanks,
>
> ~Aron
>
> --
> Aron Broom M.Sc
> PhD Student
> Department of Chemistry
> University of Waterloo
>