Re: NAMD on current CPUs

From: Jérôme Hénin (jhenin_at_ifr88.cnrs-mrs.fr)
Date: Thu Dec 17 2009 - 12:56:25 CST

Hi Axel,
Greetings from sunny Provence!

Thanks for all the info. Given my relatively small budget, I'll
probably not go for Nehalem then. Unfortunately, that means I can't
afford Moreno either...

Cheers,
Jerome
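(Editor's note: since the thread is about comparing CPU performance for NAMD, here is a minimal sketch, not from the thread itself, for pulling the per-step timings out of a NAMD run log so candidate machines can be compared. It assumes the "Info: Benchmark time:" line format that NAMD 2.x prints; check the format against your own logs before relying on it.)

```python
import re

# NAMD 2.x periodically prints lines like:
#   Info: Benchmark time: 8 CPUs 0.0918 s/step 1.0629 days/ns 795.3 MB memory
# (format assumed from typical NAMD output; verify against your logs)
BENCH_RE = re.compile(
    r"Info: Benchmark time: (\d+) CPUs ([\d.]+) s/step ([\d.]+) days/ns"
)

def parse_benchmark(log_text):
    """Return a list of (ncpus, s_per_step, days_per_ns) tuples."""
    return [
        (int(m.group(1)), float(m.group(2)), float(m.group(3)))
        for m in BENCH_RE.finditer(log_text)
    ]

# hypothetical sample line, shaped like real NAMD log output
sample = "Info: Benchmark time: 8 CPUs 0.0918 s/step 1.0629 days/ns 795.3 MB memory"
print(parse_benchmark(sample))  # [(8, 0.0918, 1.0629)]
```

Running the same input system (e.g. the standard apoa1 benchmark distributed with NAMD) on each candidate node and comparing the s/step numbers gives a direct bang-for-the-euro measure.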

2009/12/17 Axel Kohlmeyer <akohlmey_at_gmail.com>:
> hey jerome,
>
> greetings from grey and snowy germany.
>
> just a few notes on the subject. i don't think the choice of CPU
> is that important anymore, particularly for NAMD, since it is not
> overly memory-bandwidth hungry. thus the old-style harpertown
> xeon CPUs may be a good deal (large cache and the FPU is
> as fast as in the nehalem xeons). recent opterons should be a tiny
> bit faster in terms of FP, but they've had reliability issues since the
> barcelona generation, primarily due to lack of well tested mainboards.
>
> the biggest question is how much time you want to spend on
> maintaining the cluster, and thus how much money you are willing
> to dedicate to hardware that is more convenient to set up and use.
> this is why the dell nodes are such a good deal for mike's group:
> once set up well, they just work, and downtime is minimal even if
> there is next to nobody around to take care of them.
>
> if you go with cheaper hardware, e.g. supermicro nodes assembled
> by a reseller, you can save money, but you'll have to deal with more
> hardware issues, i.e. you need personnel to take care of them.
>
> finally, there is a significant saving potential in the infiniband hardware.
> don't get talked into QDR and the latest, greatest HCAs. for NAMD
> the previous generation will do very well, you save a bundle, and don't
> have to worry about driver issues. check out the hardware in delta.
> node81 to node84 have newer/better q-logic HCAs, but their driver
> _sucks_. the older mellanox HCAs in the other nodes have not shown
> the tiniest glitch since day 1. also, the price of the switch goes up
> by quite a bit with the size. if you have a large pile of cash, consider
> buying two 72-port switches without an interconnect instead of one
> very expensive 144-port switch.
>
> if you need more technical help, perhaps you can invite moreno
> over from trieste. ;)
>
> HTH,
>    axel.
>
>
> On Thu, Dec 17, 2009 at 9:34 AM, Jérôme Hénin <jhenin_at_ifr88.cnrs-mrs.fr> wrote:
>> Hi all,
>>
>> Does anyone have experience they are willing to share on how well
>> various current CPUs (Xeons and Opterons) handle NAMD? We are thinking
>> of putting a cluster together here (with IB interconnect), and trying
>> to pick CPUs that will give the most bang for our euro-bucks.
>>
>> Thanks for any tips,
>> Jerome
>>
>>
>
>
>
> --
> Dr. Axel Kohlmeyer    akohlmey_at_gmail.com
> Institute for Computational Molecular Science
> College of Science and Technology
> Temple University, Philadelphia PA, USA.
>

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:51:48 CST