From: Aron Broom (broomsday_at_gmail.com)
Date: Tue Mar 20 2012 - 23:16:57 CDT
Thanks for the 2.9 release, NAMD developers!
Whatever changes were made, either in general or as part of making the
multicore-CUDA version, have been incredible for GPU-enhanced vacuum
simulations, literally improving simulation speed by ~100-fold.
I am, however, wondering if I'm getting the expected performance for GB
implicit solvation with CUDA:
Running a control case of a protein (2k atoms) in a box of water
(28k atoms) with PME, a 10 angstrom cutoff, and NVE, I get ~10 ns/day.
That compares quite nicely against AMBER at ~20 ns/day and GROMACS-OpenMM
at ~10 ns/day (all are 2 fs timesteps, no multistepping).
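For reference, the explicit-solvent control case can be sketched as a NAMD configuration fragment. This is a minimal sketch, not my exact input file: the structure/coordinate file names and the switching/pairlist/PME-grid values are hypothetical placeholders; only the 2 fs timestep, single-timestepping, 10 angstrom cutoff, and PME match the test described above.

```
# Hypothetical NAMD config sketch for the explicit-solvent PME control case
structure          protein_wb.psf   ;# placeholder file name
coordinates        protein_wb.pdb   ;# placeholder file name

timestep           2.0              ;# 2 fs
nonbondedFreq      1                ;# no multiple timestepping
fullElectFrequency 1

cutoff             10.0             ;# 10 angstrom cutoff as in the test
switching          on
switchdist         9.0              ;# assumed value
pairlistdist       11.5             ;# assumed value

PME                on
PMEGridSpacing     1.0              ;# assumed value

rigidBonds         all              ;# constrain bonds to hydrogen for 2 fs steps
# NVE: no thermostat or barostat keywords
```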
If I then get rid of the water and switch to a GB implicit solvent model
with default settings, other than alphaCutoff being set to 15 angstroms,
in all three simulation packages, I see ~100 ns/day with AMBER and ~130
ns/day with GROMACS-OpenMM, 5-fold and 13-fold improvements respectively.
In NAMD, however, the simulation speed actually drops to ~6 ns/day. Given
that vacuum simulations also compare reasonably well across the three
packages, I'm wondering what the origin of that large difference is. I
tested both without a nonbonded cutoff (as AMBER and GROMACS don't allow
anything else) and with the recommended 16 angstrom cutoff, but there was
no appreciable difference in NAMD speed between those two cutoffs.
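The GB setup above can be sketched as a NAMD configuration fragment (keywords from the NAMD GBIS implementation). Only gbis, alphaCutoff, and the 16 angstrom cutoff come from the test described here; the switching and pairlist values are assumptions for illustration.

```
# Hypothetical NAMD config sketch for the GB implicit-solvent test
gbis               on
alphaCutoff        15.0             ;# Born-radius cutoff, as in the test

cutoff             16.0             ;# recommended nonbonded cutoff for GBIS
switching          on
switchdist         15.0             ;# assumed value
pairlistdist       18.0             ;# assumed value
```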
Do you feel this kind of performance is as expected at this point in
development (I realize the GB stuff was just added to the GPU with this
release), or have you seen better numbers? Should I consider compiling
from source or something else (at the moment I'm using the multicore-CUDA
version)?
--
Aron Broom, M.Sc.
PhD Student
Department of Chemistry
University of Waterloo
This archive was generated by hypermail 2.1.6 : Tue Dec 31 2013 - 23:21:47 CST