From: Gianluca Interlandi (gianluca_at_u.washington.edu)
Date: Mon Apr 11 2011 - 14:41:28 CDT
> if you want to go low-risk and don't have much experience with GPUs, then
> probably the best deal you can currently get for running NAMD on small
> problems is to build a "workstation" from 4-way 8-core AMD Magny-Cours
> processors. that will be like a small cluster in a box (32 cores total) and
> you may be able to operate it with 16GB of RAM (although 32GB is probably
> not that much more expensive and will make it more future-proof).
I wonder how NAMD would perform on one of these 48-core AMD blades:
With a 100K budget one could test 2-3 of those and add more later.
> this way, you can save a lot of money by not needing a fast interconnect.
> the second best option in my personal opinion would be a workstation
> based on dual intel westmere (4-core) processors with 4 GPUs. depending
> on the choice of GPU, CPU and amount of memory, that may be a bit
> faster or slower than the 32-core AMD. the westmere has more memory
> bandwidth and you can have 4x x16 gen2 PCIe to get the maximum
> bandwidth to and from the GPUs. when using GeForce GPUs you are
> taking a bit of a risk in terms of reliability, since there is no easy way
> to tell if you have memory errors, but the Tesla "compute" GPUs will
> bump up the price significantly.
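For what it's worth, on Tesla-class boards the ECC error counters can at least be queried with nvidia-smi, which is exactly what GeForce boards lack. A quick check, assuming a Linux box with the NVIDIA driver installed:

```shell
# Query ECC mode and error counters on all NVIDIA GPUs.
# Tesla cards report aggregate/volatile ECC error counts here;
# GeForce cards show ECC as "N/A" (no ECC hardware).
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -q -d ECC || true   # tolerate driver/GPU query failures
else
    echo "nvidia-smi not found: no NVIDIA driver on this machine"
fi
```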
>> If the funding is approved we will have to build a small cluster on our own.
>> There are not many people around with the expertise to help us. Any
>> suggestion, help or information would be greatly appreciated.
> there is no replacement for knowing what you are doing and
> some advice from a random person on the internet, like me,
> is not exactly what i would call a trustworthy source that i
> would unconditionally bet my money on. you have to run
> tests yourself.
> i've been designing, setting up and running all kinds of linux
> clusters for almost 15 years now and despite that experience,
> _every_ time i have to start almost from scratch, evaluate
> the needs and match them with available hardware options. one
> thing that people often forget in the process is that they only
> look at the purchasing price, but not the cost of maintenance
> (in terms of time that the machine is not available, when it takes
> long to fix problems, or when there are frequent hardware failures)
> and the resulting overall "available" performance.
> i've also learned that you cannot trust any sales person.
> they don't run scientific applications and have no clue
> what you really need or not. the same goes for the associated
> "system engineers": they know very well how to rig and
> install a machine, but they never have to _operate_
> one (and often without expert staff, as is usual in many
> academic settings).
> so the best you can do is to be paranoid, test for yourself
> and make sure that the risk you are taking does not outweigh
> the technical expertise that you have available to operate
> the hardware.
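For NAMD, the "test for yourself" part is fairly straightforward: run a short benchmark of your own system on each candidate machine and compare the days/ns figure that NAMD prints in its "Benchmark time" lines. A sketch, where the log line is only an illustration of the format and apoa1.namd stands in for your own config:

```shell
# Run a short benchmark on all 32 cores with the multicore build of NAMD:
#   namd2 +p32 apoa1.namd > bench.log
# NAMD then reports its performance in lines of roughly this shape:
#   Info: Benchmark time: 32 CPUs 0.0215 s/step 0.1244 days/ns 0 MB memory
# Extract the days/ns figure; here a sample line stands in for bench.log:
echo "Info: Benchmark time: 32 CPUs 0.0215 s/step 0.1244 days/ns 0 MB memory" |
awk '{ for (i = 1; i < NF; i++) if ($(i+1) == "days/ns") print $i }'
```

The same one-liner works on a real log with `grep "Benchmark time" bench.log` in place of the echo; lower days/ns is better.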
>> Thanks a lot,
> Dr. Axel Kohlmeyer
> akohlmey_at_gmail.com http://goo.gl/1wk0
> Institute for Computational Molecular Science
> Temple University, Philadelphia PA, USA.
Gianluca Interlandi, PhD gianluca_at_u.washington.edu
+1 (206) 685 4435
Postdoc at the Department of Bioengineering
at the University of Washington, Seattle WA U.S.A.
This archive was generated by hypermail 2.1.6 : Mon Dec 31 2012 - 23:20:06 CST