From: Axel Kohlmeyer
Date: Wed Jul 21 2010 - 08:11:34 CDT

On Wed, Jul 21, 2010 at 8:07 AM, Ian Stokes-Rees
<> wrote:
> On 7/21/10 3:40 AM, Prskalo, Alen-Pilip wrote:
> I'm new to the VMD community; I used Rasmol before. I am simulating systems of
> ~1,000,000 atoms in each frame and I intend to make nice videos later on. So
> it is obvious that I will have an enormous .pdb file containing a large number
> of atoms AND frames to make the video look fluent. My question is: what is
> the best PC to do it? I do have a bit of money on the side to upgrade my 4
> core, 4 GB RAM PC, but how? The first idea is of course to buy extra RAM, to go
> to 8 GB or even 16 GB. What about the graphics card? Presently I have only
> onboard graphics; would a good graphics card make loading and rotation
> easier, and if so, how exactly (from a technical standpoint)?
> I don't have direct experience with this, but from my indirect experience
> you would probably be well suited to purchase one of the new 8 or 12 core
> per CPU AMD systems which can get you 32 or 48 cores in a single box,
> include a recent nVidia CUDA 3.0 graphics card (a Tesla would be best, but
> an OEM model would probably work alright as well), and then buy at least 16
> GB of RAM, if not more.  From what I understand, you can put in as many
> Tesla cards as your system can handle and VMD will pick them up and use them
> all.

neither the multi-core cpus nor the teslas will help much
if the problem is visualizing large systems. only some
parts of VMD are multi-threaded (it is a lot of work to do
that well) and even fewer modules have GPU acceleration.
the GPU acceleration is predominantly implemented for
very specific calculation modules that are extremely compute
intensive and thus benefit from GPUs the most.

many routine tasks in visualization are serialized at some
point in any case.

what will help a _lot_ is memory. the only thing that is better
than a lot of memory is more memory. VMD internals are
able to handle extremely large systems and are for the most
part limited by available memory.
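to give a rough idea of why memory dominates (my own back-of-the-envelope
sketch, not exact VMD internals; it assumes single-precision x/y/z
coordinates per atom per frame and ignores per-atom metadata):

```python
# Back-of-the-envelope estimate of memory needed to hold a trajectory
# in RAM. Assumes 3 single-precision (4-byte) coordinates per atom per
# frame; per-atom metadata and analysis overhead are ignored.

def trajectory_bytes(n_atoms, n_frames, bytes_per_coord=4, coords_per_atom=3):
    """Approximate bytes needed to keep all frames resident in memory."""
    return n_atoms * n_frames * coords_per_atom * bytes_per_coord

n_atoms = 1_000_000
per_frame = trajectory_bytes(n_atoms, 1)      # one frame of coordinates
movie = trajectory_bytes(n_atoms, 2000)       # a 2000-frame movie

print(f"per frame: {per_frame / 1e6:.0f} MB")    # -> per frame: 12 MB
print(f"2000 frames: {movie / 1e9:.0f} GB")      # -> 2000 frames: 24 GB
```

so a movie-length trajectory of a million-atom system eats tens of
gigabytes for the coordinates alone, before representations and
analysis data are counted.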

of course the fastest cpu and boatloads of memory won't help
much when there is a slow GPU at the end. so a high end
nvidia or ati/amd GPU is a must. if you go for nvidia, you will
also benefit from its compute capability when you use a
feature that is already CUDA enabled.

one final remark: using a .pdb file for 1,000,000-atom systems
is not a good idea. that file format was not designed for
systems that large. reading and parsing of that file alone will
take a looong time.
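to put rough numbers on that (my own illustration, not from the mail):
a PDB ATOM record is an 80-column text line per atom per frame, while a
binary trajectory format such as DCD stores roughly 12 bytes of
coordinate data per atom (3 single-precision floats, per-frame headers
ignored here), and the binary file also needs no text-to-float parsing:

```python
# Rough file-size comparison: text PDB vs. a binary trajectory format.
# Assumes ~80 bytes per 80-column ATOM record in a PDB, versus
# 3 x 4-byte single-precision coordinates per atom in a binary format
# such as DCD; per-frame headers are ignored for simplicity.

PDB_BYTES_PER_ATOM = 80      # one fixed-width text record per atom
BIN_BYTES_PER_ATOM = 3 * 4   # x, y, z as 4-byte floats

def file_size_gb(n_atoms, n_frames, bytes_per_atom):
    """Approximate trajectory file size in gigabytes."""
    return n_atoms * n_frames * bytes_per_atom / 1e9

n_atoms, n_frames = 1_000_000, 500
pdb_gb = file_size_gb(n_atoms, n_frames, PDB_BYTES_PER_ATOM)
dcd_gb = file_size_gb(n_atoms, n_frames, BIN_BYTES_PER_ATOM)

print(f"PDB: ~{pdb_gb:.0f} GB, binary: ~{dcd_gb:.0f} GB")
# -> PDB: ~40 GB, binary: ~6 GB
```

and every byte of that text file has to go through ASCII-to-float
conversion on load, which is where the long load times come from.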

> Are you running the simulation on the computer, or is the simulation running
> on a separate MD cluster?  If it is running somewhere else, you could make

running a 1,000,000 atom system on a desktop/workstation would
be far too slow. with a good MD code and a good machine one
can scale that kind of system easily to a thousand processor cores
or more and get some decent turnaround. with a system this big,
everything takes so much longer to converge.


> do with fewer CPUs.  If it is running on the new computer you are going to
> get, you should aim for the 48 core AMD system.  Here in the US you can get
> such a system for about $7k, or "fully loaded" (fast disks, lots of RAM,
> etc. etc.) for under $10k.
> I am very interested to hear what other people recommend.
> Ian

Dr. Axel Kohlmeyer
Institute for Computational Molecular Science
Temple University, Philadelphia PA, USA.