Re: Help building a desktop for namd

From: Jim Phillips (jim_at_ks.uiuc.edu)
Date: Wed Apr 13 2011 - 10:17:27 CDT

You can also apply at any time for a generous startup allocation (no
deadlines or lengthy proposal required), which is typically granted within
about a week and can be used to run tests and benchmarks in preparation for
submitting a full proposal.
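
For comparing machines once such benchmark runs are done, a minimal sketch
along these lines may help; it assumes the usual
"Info: Benchmark time: ... s/step ... days/ns" lines that NAMD 2.x prints to
its log, and the log file name is only a placeholder:

# ns_per_day.py -- minimal sketch: pull the "Benchmark time" lines out of a
# NAMD log and report the average performance.  Assumes the standard
# "Info: Benchmark time: N CPUs X s/step Y days/ns ..." output of NAMD 2.x;
# the default file name "apoa1.log" is just a placeholder.
import re
import sys

pattern = re.compile(r"Benchmark time:\s+(\d+)\s+CPUs\s+([\d.eE+-]+)\s+s/step"
                     r"\s+([\d.eE+-]+)\s+days/ns")

days_per_ns = []
with open(sys.argv[1] if len(sys.argv) > 1 else "apoa1.log") as log:
    for line in log:
        m = pattern.search(line)
        if m:
            days_per_ns.append(float(m.group(3)))

if days_per_ns:
    avg = sum(days_per_ns) / len(days_per_ns)
    print("average: %.3f days/ns  (= %.2f ns/day)" % (avg, 1.0 / avg))
else:
    print("no 'Benchmark time' lines found")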

-Jim

On Mon, 11 Apr 2011, Gianluca Interlandi wrote:

> Salvador,
>
> You might also want to apply for computing time at one of the national
> supercomputing centers through TeraGrid. The deadline is this Friday (or in
> three months). That way, you could get a feel for what NAMD can do before
> you invest any money.
>
> Gianluca
>
> On Mon, 11 Apr 2011, Axel Kohlmeyer wrote:
>
>> On Mon, Apr 11, 2011 at 3:41 PM, Gianluca Interlandi
>> <gianluca_at_u.washington.edu> wrote:
>>>> If you want to go low-risk and don't have much experience with GPUs, then
>>>> probably the best deal you can currently get for running NAMD on small
>>>> problems is to build a "workstation" from 4-way 8-core AMD Magny-Cours
>>>> processors. That will be like a small cluster in a box (32 cores total),
>>>> and you may be able to operate it with 16GB of RAM (although 32GB is
>>>> probably not that much more expensive and will make it more future-proof).
>>>
>>> I wonder how NAMD would perform on one of these 48-core AMD blades:
>>
>> Blades generally tend to be a bit slower than "normal" nodes.
>>
>> I am attaching some graphs with performance numbers from some local
>> machine(s).
>> Note that the $$$ estimate includes a share of the (rather expensive) QDR
>> InfiniBand infrastructure: the switch and the IB HCAs. That, of course,
>> works in favor of the 48-core and GPU nodes in the performance-per-$$
>> category, but it also reduces their scaling capability.
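
To make that kind of price/performance comparison concrete, a minimal sketch
of the arithmetic is below; the node prices, interconnect share, and ns/day
numbers are made-up placeholders, not the figures behind the attached graphs:

# cost_per_perf.py -- minimal sketch of the price/performance arithmetic,
# with and without a per-node share of the interconnect.  All numbers below
# are made-up placeholders, NOT the ones behind the attached graphs.
nodes = {
    # name: (node price in $, IB share per node in $ (HCA + switch port),
    #        measured performance in ns/day for your benchmark system)
    "dual-westmere": (4000.0, 1000.0, 1.2),
    "48-core-amd":   (9000.0, 1000.0, 2.4),
    "westmere-4gpu": (8000.0, 1000.0, 3.0),
}

for name, (node_cost, ib_share, ns_per_day) in sorted(nodes.items()):
    with_ib = (node_cost + ib_share) / ns_per_day
    without_ib = node_cost / ns_per_day
    print("%-15s  $%7.0f per ns/day with IB share, $%7.0f without"
          % (name, with_ib, without_ib))
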
>>
>>> http://configure.us.dell.com/dellstore/config.aspx?c=us&cs=555&l=en&oc=MLB1985&s=biz
>>>
>>> If you have a 100K budget, you could test 2-3 of those and add more later.
>>
>> Why so much memory? That is the main cost for those.
>> Also, never go with the highest clock rate.
>>
>> I would expect you to get the best bang for the buck from a
>> 4-way 2.5GHz 8-core setup with 32GB (even 16GB would do
>> if you run only NAMD).
>>
>> axel.
>>
>>>
>>> Gianluca
>>>
>>>
>>>>
>>>> This way, you can save a lot of money by not needing a fast interconnect.
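
On a single box like that, a multicore NAMD build is simply started with the
desired number of cores and no MPI or interconnect at all; a minimal sketch,
where the binary name, core count, and input/log file names are placeholders
for whatever your installation uses:

# run_local.py -- minimal sketch: launch a multicore NAMD build on one box,
# no MPI or interconnect involved.  The binary name, core count, and the
# input/log file names are placeholders for your installation.
import subprocess

cores = 32                                       # e.g. the 4-way 8-core AMD box
cmd = ["namd2", "+p%d" % cores, "apoa1.namd"]    # multicore NAMD binary

with open("apoa1.log", "w") as log:
    subprocess.check_call(cmd, stdout=log)
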
>>>>
>>>> The second-best option, in my personal opinion, would be a dual Intel
>>>> Westmere (4-core) based workstation with 4 GPUs. Depending on the choice
>>>> of GPU, CPU, and amount of memory, that may be a bit faster or slower
>>>> than the 32-core AMD. The Westmere has more memory bandwidth, and you can
>>>> have 4x x16 gen2 PCIe slots to get the maximum bandwidth to and from the
>>>> GPUs. When using GeForce GPUs you are taking a bit of a risk in terms of
>>>> reliability, since there is no easy way to tell whether you have memory
>>>> errors, while the Tesla "compute" GPUs will bump up the price
>>>> significantly.
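
Whether a given board has ECC at all can be queried directly; a minimal
sketch using NVIDIA's NVML Python bindings (pynvml) is below, which is an
assumption here and requires the NVIDIA driver to be installed. GeForce
boards typically report ECC as unsupported, while Tesla boards report whether
it is enabled:

# ecc_check.py -- minimal sketch: ask each GPU whether ECC is supported and
# enabled, using NVIDIA's NVML bindings (pynvml).  Consumer GeForce boards
# typically report ECC as unsupported; Tesla boards report whether it is on.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        try:
            current, pending = pynvml.nvmlDeviceGetEccMode(handle)
            state = "ECC enabled" if current else "ECC disabled"
        except pynvml.NVMLError:
            state = "no ECC support"
        print("GPU %d (%s): %s" % (i, name, state))
finally:
    pynvml.nvmlShutdown()
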
>>>>
>>>>> If the funding is approved, we will have to build a small cluster on
>>>>> our own.
>>>>> There are not many people with expertise around who can help us. Any
>>>>> suggestion, help, or information would be greatly appreciated.
>>>>
>>>> There is no replacement for knowing what you are doing, and
>>>> advice from a random person on the internet, like me,
>>>> is not exactly what I would call a trustworthy source that I
>>>> would unconditionally bet my money on. You have to run
>>>> tests yourself.
>>>>
>>>> I've been designing, setting up, and running all kinds of Linux
>>>> clusters for almost 15 years now, and despite that experience,
>>>> _every_ time I have to start almost from scratch, evaluate
>>>> the needs, and match them with the available hardware options. One
>>>> thing that people often forget in the process is that they only
>>>> look at the purchase price, not at the cost of maintenance
>>>> (in terms of the time the machine is unavailable when problems take
>>>> long to fix or when there are frequent hardware failures)
>>>> and the resulting overall "available" performance.
>>>>
>>>> I've also learned that you cannot trust any sales person.
>>>> They don't run scientific applications and have no clue
>>>> what you really need or don't need. The same goes for the associated
>>>> "system engineers": they know very well how to rig up and
>>>> install a machine, but they never have to _operate_
>>>> one (and often without expert staff, as is usual in many
>>>> academic settings).
>>>>
>>>> So the best you can do is to be paranoid, test for yourself,
>>>> and make sure that the risk you are taking does not outweigh
>>>> the technical expertise that you have available to operate
>>>> the hardware.
>>>>
>>>> HTH,
>>>>  axel.
>>>>
>>>>> Thanks a lot,
>>>>>
>>>>> HVS
>>>>>
>>>>
>>>> --
>>>> Dr. Axel Kohlmeyer
>>>> akohlmey_at_gmail.com  http://goo.gl/1wk0
>>>>
>>>> Institute for Computational Molecular Science
>>>> Temple University, Philadelphia PA, USA.
>>>>
>>>>
>>>
>>> -----------------------------------------------------
>>> Gianluca Interlandi, PhD gianluca_at_u.washington.edu
>>>                    +1 (206) 685 4435
>>>                    http://artemide.bioeng.washington.edu/
>>>
>>> Postdoc at the Department of Bioengineering
>>> at the University of Washington, Seattle WA U.S.A.
>>> -----------------------------------------------------
>>
>>
>>
>> --
>> Dr. Axel Kohlmeyer
>> akohlmey_at_gmail.com  http://goo.gl/1wk0
>>
>> Institute for Computational Molecular Science
>> Temple University, Philadelphia PA, USA.
>>
>
> -----------------------------------------------------
> Gianluca Interlandi, PhD gianluca_at_u.washington.edu
> +1 (206) 685 4435
> http://artemide.bioeng.washington.edu/
>
> Postdoc at the Department of Bioengineering
> at the University of Washington, Seattle WA U.S.A.
> -----------------------------------------------------

This archive was generated by hypermail 2.1.6 : Mon Dec 31 2012 - 23:20:07 CST