From: John Stone (johns_at_ks.uiuc.edu)
Date: Tue May 06 2014 - 16:01:20 CDT

Hi,
  Beyond what has already been said well by others, I would add
that for many users the most commonly used CUDA-accelerated feature
is the QuickSurf surface representation. If you intend to animate
molecular surfaces, that is a fairly common case that benefits
greatly from NVIDIA's CUDA. If you render movies in VMD and want to
use ray tracing, another feature coming in the latest version of VMD
is CUDA-accelerated ray tracing. Both the GPU surface display and
ray tracing features are fairly memory hungry, so if these are
interesting to you, you will want a GPU with at least 3GB of memory.
Just as molecular visualization uses a fair amount of memory on the
host machine, having more memory on the GPU will also make things
run faster and more smoothly. The nicest thing about the GPU is that
you can upgrade it later with only a relatively minor expenditure,
so I would suggest that you get a good base machine that you can run
for a few years, get an appropriate GPU now, and then plan to get a
newer GPU later.
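
As a concrete illustration, here is a minimal Tcl sketch of that kind
of workflow, typed at the VMD console. The file names are just
placeholders, and the exact name of the GPU ray tracer may vary
between VMD builds, so check the Render menu of your version for
what it is actually called:

  # load a structure and a trajectory (placeholder file names)
  mol new mystructure.psf
  mol addfile mytrajectory.dcd waitfor all

  # switch representation 0 of the top molecule to QuickSurf
  # (CUDA-accelerated when an NVIDIA GPU is detected)
  mol modstyle 0 top QuickSurf

  # render the current frame with the internal ray tracer; in recent
  # test builds the CUDA/OptiX version shows up under a name like
  # "TachyonLOptiXInternal" -- substitute whatever your build lists
  render TachyonLOptiXInternal frame.ppm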

We will be enabling a bunch of new CUDA kernels for trajectory analysis
and visualization over the next year, and these will benefit from having
GPUs with a decent amount of on-board memory, and of course a large core
count. There will be increasing support for OpenCL going forward,
but it appears that it will take a while before the loose ends of
multi-vendor OpenCL library support are sorted out, on Linux in
particular. For the time being we are focusing our development
efforts on CUDA because it is simpler to deploy on all platforms,
and it has many features that are only now being added to OpenCL.

If you take these comments along with those made by Axel and Josh, I
think you've got some reasonable guidance for the next year.

Cheers,
  John Stone
  vmd_at_ks.uiuc.edu

On Tue, May 06, 2014 at 12:08:40PM -0400, Erik Nordgren wrote:
> Hi folks,
>
> I have what must be a very common question, although I tried searching the
> list archives and somehow didn't come up with anything very recent &
> relevant, so figured I'd post.
>
> Basically, I'm just wondering if folks who are accustomed to purchasing
> new hardware regularly could comment with thoughts on the "optimum" choice
> (in terms of power vs. cost) of a GPU to put in a desktop workstation
> today, for smooth visualization of VMD structures with, say, 100-200 K
> atoms. (I assume that the "sweet spot" for choosing a GPU is a moving
> target, with the ever-improving capabilities of cards, which is why posts
> on this subject from over a year ago are probably not very relevant
> anymore.) I should add that I'm not in the market for an entire
> brand-new workstation, but rather considering just upgrading the GPU in
> the Linux box I already have (a Dell Precision T3500, a few years old
> already), which at the moment has an NVIDIA Quadro NVS 295.
>
> As a related question, is it true that the only GPU manufacturer worth
> seriously considering for VMD is NVIDIA (due to the CUDA optimizations)?
> Many thanks in advance for any & all suggestions!
>
> Erik
> --
> C. Erik Nordgren, Ph.D.
> Department of Chemistry
> University of Pennsylvania

-- 
NIH Center for Macromolecular Modeling and Bioinformatics
Beckman Institute for Advanced Science and Technology
University of Illinois, 405 N. Mathews Ave, Urbana, IL 61801
http://www.ks.uiuc.edu/~johns/           Phone: 217-244-3349
http://www.ks.uiuc.edu/Research/vmd/