The Future of NAMD was Re: GeForce vs. Tesla

From: Biff Forbush (biff.forbush_at_yale.edu)
Date: Wed Nov 25 2009 - 15:22:22 CST

Hi Tian et al.,

    Most often, specific questions (like yours) about NAMD and CUDA/GPUs
go unanswered on this site. Presumably this shows that the number of
users who have real experience with NAMD/CUDA behavior is extremely
low. One hopes that this also indicates the developers are too busy
working on NAMD/CUDA code to deal with individual replies.

    It would be extremely helpful if the NAMD team could make available
a short "white paper" on the situation with GPUs and NAMD. Potential
users face purchasing decisions and are currently in the dark as to the
present and future usefulness of GPUs for NAMD. Perhaps some
information could be supplied on the following:
    (1) Difficulties in applying GPUs to MD that have limited GPU utility.
    (2) Current NAMD-CUDA -- is it production quality? What is the
timeline for bringing the CUDA (PTX 1.0) version to full functionality
-- will that ever happen, given the clear incentive to migrate
immediately to Fermi?
    (3) Some discussion of the best architectures (GPU/CPU ratio,
scaling, multi-node, etc.) and a compilation of available benchmark
results would be extremely useful.
    The programming enhancements in Fermi (PTX 2.0) are substantial,
even revolutionary in the context of GPUs -- NVIDIA has made a real
commitment to serious computing. (4, and most important) Is it possible
to tell at this point whether these enhancements will allow a (fairly
rapid) port of full NAMD to the Fermi platform, now that PTX 2.0
supports C++? Is this planned? If so, users should be advised that
significant investment in GT200-generation GPUs of any sort would be a
mistake (probably a moot point in a month or so due to availability),
as backward compatibility will not be there if major improvements are
made.

    While it sounds like Anton may soon be the only game in town at the
high end of long simulations, it seems NAMD should still have its place
for shorter simulations, particularly if it can finally utilize the
increasing power of GPU technology in small commodity clusters.

Regards,
Biff Forbush

Tian, Pu (NIH/NIDDK) [C] wrote:
> Hello NAMD users,
>
> I am planning to build a GPU cluster of 4 or 8 nodes, each with 4 GPUs. The Tesla cards are very expensive compared to the GeForce series, and I am not sure whether the expanded memory would matter that much for MD simulations of systems with fewer than 100,000 atoms. Reliability is another consideration. So as long as the GeForce cards have reasonable reliability, I would rather wait for the consumer series (Ge3XX) of the "Fermi" architecture cards instead of buying Tesla C2050/70.
>
> I would really appreciate it if anybody who has used GeForce2XX cards for a while could share their experiences! I assume that the coming GeForce3XX cards would have similar reliability to the GeForce2XX series. Additionally, is it possible to lower the clock rate of these cards a bit to improve their reliability?
>
>
> Best,
>
> Pu
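As a rough sanity check on the memory question quoted above, here is a back-of-the-envelope sketch in Python. The ~1 KB-per-atom figure and the fixed overhead are assumptions for illustration, not NAMD-measured values; the real footprint depends on the NAMD build, cutoffs, and pair-list settings.

```python
# Crude GPU memory estimate for an MD system of n_atoms.
# BYTES_PER_ATOM and OVERHEAD are assumed placeholder values,
# not figures taken from NAMD itself.
BYTES_PER_ATOM = 1024          # assumed per-atom state (coords, forces, lists)
OVERHEAD = 128 * 1024 * 1024   # assumed fixed cost for grids and buffers

def estimated_gpu_bytes(n_atoms):
    """Back-of-the-envelope device memory needed for n_atoms."""
    return n_atoms * BYTES_PER_ATOM + OVERHEAD

for n in (50_000, 100_000):
    gib = estimated_gpu_bytes(n) / 2**30
    print("%7d atoms: ~%.2f GiB" % (n, gib))
```

Even under these generous assumptions, a 100,000-atom system comes in well under 1 GiB, which suggests the smaller memory of a consumer GeForce card would not by itself be the limiting factor at that system size.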

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 05:22:34 CST