Re: Question about GPU hardware.

From: Aron Broom (broomsday_at_gmail.com)
Date: Mon Apr 23 2012 - 17:57:56 CDT

I'm no expert, but I'll give you my two cents.

1) For NAMD, I've found the GTX cards run pretty well (a GTX 570 performs at
the same speed as an M2070), but you need to consider the expected size of
your simulated systems. I think there is data on how much memory an X-atom
system will take up for implicit solvent, or explicit NPT with PME. You
can get GTX 580s with 3GB of RAM, but if your system is larger (I think 3GB
will allow up to ~500k atoms NPT/PME) you'll have to go with the expensive
Teslas (6GB). Also, I would not purchase the GTX 600 series; they are
supremely poor at double-precision, so no good for MD (unless maybe
you are doing single-precision GROMACS? never tried).
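
As a rough sanity check, you can back out a bytes-per-atom figure from my
3GB ~500k atom guess above and see whether a given system should fit. The
~6 KB/atom constant below is just that back-calculation, not a measured
NAMD number:

    # Rough GPU memory fit check. The ~6 KB/atom constant is just
    # back-calculated from the "3 GB handles ~500k atoms NPT/PME" guess
    # above -- a ballpark, not a measured NAMD number.
    BYTES_PER_ATOM = 3e9 / 500e3   # ~6000 bytes per atom

    def fits_on_gpu(n_atoms, gpu_mem_gb):
        """True if an n_atoms NPT/PME system should fit in gpu_mem_gb."""
        needed_gb = n_atoms * BYTES_PER_ATOM / 1e9
        print("%d atoms -> ~%.1f GB needed (card has %.1f GB)"
              % (n_atoms, needed_gb, gpu_mem_gb))
        return needed_gb <= gpu_mem_gb

    fits_on_gpu(100000, 1.5)   # stock GTX 570/580
    fits_on_gpu(500000, 3.0)   # 3 GB GTX 580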

2) If you do go with the GTX, since NAMD requires a transfer of data from
the GPU every step (not everything is computed on the GPU), you'll want to
make sure you have a full x16 PCI-E slot for each card; otherwise your
performance will plummet when using multiple cards. So a good motherboard is
probably something to consider.
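
To put some rough numbers on that, here's a crude estimate of the per-step
transfer cost at different link widths. The bandwidths are theoretical
PCI-E 2.0 figures, and the assumption that coordinates go down and forces
come back each step is my simplification, not NAMD internals:

    # Crude per-step PCI-E transfer estimate. Assumes coordinates are
    # sent to the GPU and forces returned every step, 3 x float32 per
    # atom each way -- a simplification of what NAMD actually moves.
    # Ignores latency, which also hurts when links are shared.
    def transfer_us_per_step(n_atoms, link_gb_per_s):
        bytes_each_way = n_atoms * 3 * 4       # 3 coordinates, 4 bytes each
        total_bytes = 2 * bytes_each_way       # down plus back
        return total_bytes / (link_gb_per_s * 1e9) * 1e6

    for width, bw in [("x16", 8.0), ("x8", 4.0), ("x4", 2.0)]:  # PCI-E 2.0
        print("%s: ~%.0f us/step for a 500k-atom system"
              % (width, transfer_us_per_step(500000, bw)))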

3) Remember that the Tesla cards don't have fans like the GTX ones, so you
really need to make sure you have a good system for heat exchange. On this
note, if you do go with the GTX 580s, I believe there are 3GB cards that
have built-in water cooling. They cost maybe $50-100 more than the
fan-cooled 3GB ones, but could be worth considering if you are concerned
about heat (on the downside, because of the data transfer every step,
overclocking them probably won't lead to much of a performance spike
compared to AMBER or GROMACS). Overall though, NAMD doesn't generate as
much GPU heat as GROMACS or AMBER (because of the slowdown from waiting for
the data transfer every step): with a GTX 570 and the fan running at 65%,
I've found NAMD hits ~53 degrees C (23 degree C ambient temp in the room),
whereas AMBER or GROMACS can pass 70 degrees C.
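
If you want to keep an eye on temperatures during a run, nvidia-smi can be
polled from a small script. The query fields below are from recent
nvidia-smi releases, so check "nvidia-smi --help-query-gpu" against your
driver version:

    # Poll GPU temperature and fan speed once a second via nvidia-smi.
    # Field names are from recent nvidia-smi releases; older drivers
    # may differ -- see "nvidia-smi --help-query-gpu".
    import subprocess
    import time

    while True:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=index,temperature.gpu,fan.speed",
             "--format=csv,noheader"])
        print(out.decode().strip())
        time.sleep(1)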

I guess if it were me with $40k, I'd build a larger number of multi-GTX 580
boxes. The guys who make OpenMM (the thing that enables GPU acceleration
in GROMACS) have a nice little binary (memtestG80, I believe) for testing
GPU memory. Since the GTX cards sometimes have memory issues, I'd suggest
testing all of them thoroughly when you first get them; there's no point in
weeks of troubleshooting afterwards all because of some faulty memory.
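
The idea behind those tests is just writing known bit patterns to the card
and reading them back. As a toy illustration in Python with PyCUDA (the
real memtest binary runs far more patterns and moving-inversion passes):

    # Toy GPU memory pattern test with PyCUDA: write known bit
    # patterns, read them back, count mismatches. A real tester like
    # the OpenMM folks' binary is far more thorough.
    import numpy as np
    import pycuda.autoinit            # creates a CUDA context
    import pycuda.driver as drv

    N = 64 * 1024 * 1024              # 64M words = 256 MB under test
    gpu_buf = drv.mem_alloc(N * 4)

    for pattern in (0x00000000, 0xFFFFFFFF, 0xAAAAAAAA, 0x55555555):
        host = np.full(N, pattern, dtype=np.uint32)
        drv.memcpy_htod(gpu_buf, host)
        back = np.empty_like(host)
        drv.memcpy_dtoh(back, gpu_buf)
        bad = np.count_nonzero(back != host)
        print("pattern 0x%08X: %d bad words" % (pattern, bad))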

I've found that a GTX 570/Tesla M2070 is roughly the equivalent of 48-96
CPU cores, depending on the system (the GPU improvement is better for larger
systems, and particularly for implicit solvent, though maybe less so in NAMD
on this last point). For smaller systems (<10k atoms) the GPU is really
not the best option, and obviously, as mentioned earlier, if the system
exceeds the available GPU memory, it's useless. I know Axel has responded
on several emails recommending multiple cheaper AMD boxes (the 10-core
ones, I think?), but if your expected systems fall into a good size range
for the GPU, I think it's cheaper.
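
To make the cost comparison concrete, a back-of-envelope script like this
is all I mean. Every price and core-equivalence below is a made-up
placeholder, not a real quote; put in your own numbers:

    # Back-of-envelope dollars per "CPU-core-equivalent" of throughput.
    # ALL numbers below are made-up placeholders, not real quotes --
    # substitute your own prices and benchmarks before deciding.
    options = {
        "multi-GTX 580 box": (2500, 72),   # midpoint of my 48-96 guess
        "Tesla M2070 box":   (5500, 72),
        "cheap AMD box":     (3000, 20),   # e.g. dual 10-core
    }
    for name, (cost, cores) in sorted(options.items()):
        print("%-18s $%5d ~%3d core-equiv -> $%.0f each"
              % (name, cost, cores, cost / cores))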

~Aron

On Mon, Apr 23, 2012 at 6:32 PM, Cesar Millan <pachequin_at_gmail.com> wrote:

> Hi everyone,
>
> I hope not to be repetitive with this question.
>
> Our group wants to buy some workstations with GPUs to do MD simulations of
> proteins (generate and analyze the results). We are discussing which GPUs
> (with the appropriate supporting hardware) we should buy (our budget is
> around $40k). We plan to use these workstations in an office (around 12
> m2) cooled with air conditioning.
>
> One option that we have is to buy Tesla cards, but they are expensive, and
> the other one is to buy GTX cards (less expensive). Unfortunately, we do
> not have access to these GPUs to test how well our systems run on both cards.

-- 
Aron Broom M.Sc
PhD Student
Department of Chemistry
University of Waterloo
