From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Mon Apr 11 2011 - 12:55:34 CDT
On Mon, Apr 11, 2011 at 1:22 PM, Salvador H-V <chava09hv_at_gmail.com> wrote:
> Hi Axel,
>
> Thanks for your reply and suggestions.
>
> In your reply you mentioned that: "as i said before, any serious
> calculation will have to be run on a serious machine anyways, so building a
> desktop for that is a waste of energy and money. "
>
> Because I don't have any experience in supercomputing, I would like to ask
> you some questions about building a desktop or small cluster for
> bio-molecular simulations.
>
> i) Is it not possible at all to run some decent systems on a desktop? In
> other words, what are the minimal requirements for a desktop or server to
> run some serious calculations?
that depends on what you consider a serious calculation.
there are some problems that can indeed be solved with
a desktop, but they are rare, and since you were asking for
a general recommendation without providing any
specific details, it would be misleading to err on the small side.
> ii) I thought that using a good processor and plenty of RAM with a
> good GPU would do the job, but it seems that I am completely wrong. So, I
> was wondering what percentage of NAMD's capabilities could be exploited
> using only GPUs?
first of all, for performing a classical MD run, you don't need much memory.
however, you may need a lot for analysis, since keeping a large
trajectory in memory lets you process it quickly.
the speedup from GPUs can be very significant, but you have to read the
fine print. a lot of GPU benchmarks compare a single CPU core to a full
GPU. considering that you can have 6 cores per CPU in a desktop (or even
up to 12 cores per CPU in a workstation with AMD Magny-Cours processors),
this is a bit skewed. however, by the same token, you cannot simply multiply
by the number of CPU cores either, since the available memory bandwidth has
become a limiting factor. so performance predictions have become very difficult.
the speedup and thus the benefit from a GPU will certainly be largest
in a desktop setting, but you'd have to be working on a fairly small problem
for that to be sufficient performance (or you have to be very patient).
NAMD offloads the most time-consuming part of the calculation (the
short-range non-bonded interactions) to the GPU, and you can increase
GPU utilization by running two host tasks per GPU.
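as an illustration only (a sketch; the exact flags and launch wrapper depend
on how your NAMD/CUDA binary was built, and "mysim.namd" is just a placeholder
config file): on a box with 4 CPU cores and 2 GPUs you could start 4 host
processes and let NAMD assign them round-robin to the two devices, so that
each GPU ends up shared by two host tasks:

  ./charmrun ++local +p4 ./namd2 +idlepoll +devices 0,1 mysim.namd

check the CUDA notes shipped with your NAMD release before relying on this
exact command line.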
> iii) I will receive some small funding from the government and I would like
> to buy or build some computers to run biomolecular simulations. To be sincere,
> we don't have the experience nor the money to build a standard cluster
> of hundreds of processors. In fact, we are looking for the minimum cluster
> requirements that can run NAMD (with GPUs) for relatively small systems
> (~40,000-100,000 atoms). Any suggestion about what is the best way to proceed?
if you want to go low-risk and don't have much experience with GPUs, then
probably the best deal you can currently get for running NAMD on small
problems is to build a "workstation" from 4-way 8-core AMD Magny-Cours
processors. that will be like a small cluster in a box (32 cores total), and
you may be able to operate it with 16GB of RAM (although 32GB is probably
not that much more expensive and will make it more future-proof).
this way, you save a lot of money by not needing a fast interconnect.
the second-best option, in my personal opinion, would be a dual Intel
Westmere (4-core) workstation with 4 GPUs. depending
on the choice of GPU, CPU, and amount of memory, that may be a bit
faster or slower than the 32-core AMD. the Westmere has more memory
bandwidth, and you can have four x16 gen2 PCIe slots to get the maximum
bandwidth to and from the GPUs. when using GeForce GPUs you are
taking a bit of a risk in terms of reliability, since there is no easy way
to tell if you have memory errors, but the Tesla "compute" GPUs will
bump up the price significantly.
> If the funding is approved we will have to build a small cluster on our own.
> There are not many people with expertise around who can help us. Any
> suggestion, help or information would be greatly appreciated.
there is no replacement for knowing what you are doing, and
advice from a random person on the internet, like me,
is not exactly what i would call a trustworthy source that i
would unconditionally bet my money on. you have to run
tests yourself.
i've been designing, setting up, and running all kinds of linux
clusters for almost 15 years now, and despite that experience,
_every_ time i have to start almost from scratch, evaluate
the needs, and match them with the available hardware options. one
thing that people often forget in the process is that they only
look at the purchase price, not the cost of maintenance
(in terms of time that the machine is unavailable when it takes
long to fix problems, or when there are frequent hardware failures)
and the resulting overall "available" performance.
i've also learned that you cannot trust any salesperson.
they don't run scientific applications and have no clue
what you really need. the same goes for the associated
"system engineers": they know very well how to rig up and
install a machine, but they never have to _operate_
one (and often without expert staff, as is usual in many
academic settings).
so the best you can do is to be paranoid, test for yourself,
and make sure that the risk you are taking does not exceed
the technical expertise you have available to operate
the hardware.
HTH,
axel.
> Thanks a lot,
>
> HVS
--
Dr. Axel Kohlmeyer    akohlmey_at_gmail.com
http://goo.gl/1wk0
Institute for Computational Molecular Science
Temple University, Philadelphia PA, USA.