Re: Gigabit ethernet switch for cluster

From: Dow_Hurst (dhurst_at_mindspring.com)
Date: Wed Mar 22 2006 - 21:33:57 CST

You can purchase the SMC TigerSwitch SMC8624T or an equivalent with excellent results. It is more expensive because of the high-bandwidth uplink ports included in the device; if you don't plan to expand beyond 24 ports, there is a less expensive version. You need a nonblocking, wire-speed Gigabit switch to get the performance you need: the backplane has to support every port talking to any other port at full wire speed. Dig into your switch's specs and prove to yourself that it has that capability before purchasing. The TCP/IP-based version of NAMD scales very well up to 16 ports and then drops off a little at a time. Look at the NAMD performance benchmark graphs to see how the curves of performance versus hardware show which combinations really scale well.

We have Ammasso NICs that do RDMA in our 20-node dual-Opteron cluster. The published benchmarks show that the Opteron really screams with NAMD 2.6, but note how the Ammasso RDMA keeps the performance curve flat out to the number of processors in the cluster. We don't have a large cluster by any means, but the special NICs and the high-performance switch gain us 40 minutes to an hour per day on the ApoA1 benchmark versus running the TCP/IP version over the same switch. That translates into real gains for long, large simulations.

I looked at the NAMD developers' cluster hardware pages and tried to find similar hardware when we were purchasing last year. My reason for replying is that if money is a concern, Ammasso fits a window for getting much lower latency for your limited dollars. CentOS 4.1 is the OS, and we use the Ammasso MPI library linked into charm++ for the NAMD compile. That made it very simple and got us killer performance. TeamHPC put the cluster together for us.
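One quick sanity check when reading switch specs (my own back-of-the-envelope sketch, not from any datasheet): nonblocking means the fabric can carry every port sending and receiving at wire speed at once, i.e. ports x line rate x 2 for full duplex. In Python:

# Fabric capacity needed for a nonblocking switch (illustrative sketch).
# The factor of 2 accounts for full duplex: each port can send and
# receive at wire speed simultaneously.
def required_fabric_gbps(ports, port_speed_gbps=1.0):
    return ports * port_speed_gbps * 2

for ports in (5, 16, 24):
    print(f"{ports:2d} ports -> {required_fabric_gbps(ports):.0f} Gbps fabric")

So a 24-port Gigabit switch needs a 48 Gbps backplane to be truly nonblocking; anything less and ports can block each other under load.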
Dow

-----Original Message-----
>From: Cesar Luis Avila <cavila_at_fbqf.unt.edu.ar>
>Sent: Mar 21, 2006 3:43 PM
>To: namd-l_at_ks.uiuc.edu
>Subject: namd-l: Gigabit ethernet switch for cluster
>
>Dear all,
>When building a cluster, I know that the most critical feature is the
>latency of the interconnect between the nodes, so it is highly
>recommended to choose a low-latency interconnect such as InfiniBand or
>Myrinet. A cheaper (though not better) option is Gigabit Ethernet. But
>here again there are two kinds of equipment on the market, with big
>differences in price (from US$ 100 to US$ 3000 or even more).
>My question is: if I want to build a small cluster of around 5 nodes
>(see node spec below), is it worth buying an expensive switch, or should
>I choose a cheaper one?
>Thanks
>
>Cesar Avila
>Universidad Nacional de Tucuman
>
>
>Node spec:
>Processor: Athlon 64 X2 3800+ (dual core, 1.8 GHz)
>1 x MSI RS482M4-FD (onboard Gigabit Ethernet and video)
>2 x 512 MB dual-channel DDR PC-3200 RAM at 400 MHz
>1 x 80 GB SATA HD

