Re: Infiniband and commodity hardware

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Mon Sep 19 2011 - 01:03:41 CDT

A little experience.

I built a little GPU cluster out of three Fujitsu workstations (R570-2 power)
with some of those cheaper Voltaire (Mellanox InfiniHost III chipset) SDR 4x
10 Gbit/s Infiniband cards for PCI-E (gen 1) x4. All workstations have 12
CPU cores and 2 Tesla C2050 cards. Scaling at this point (number of nodes)
was nice; I can't tell about more nodes. I could imagine that SDR Infiniband
is too slow for building a real CPU-only cluster. The more performance per
node on the same problem, the more communication bandwidth gets used, because
the tasks that are distributed get computed incredibly fast. Also, if the
nodes are so fast that they have to communicate heavily to get new work,
latency becomes even more important.
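
As a rough back-of-the-envelope comparison (nominal figures only, ignoring
protocol overhead beyond the line coding): SDR 4x and a PCI-E gen 1 x4 link
both signal at 10 Gbit/s and both use 8b/10b encoding, so in a setup like
mine the slot and the card should be roughly matched. For example:

    # Back-of-envelope bandwidth comparison (assumed nominal figures,
    # ignoring protocol overhead beyond 8b/10b line coding).
    IB_SDR_LANE_GBIT = 2.5        # SDR signaling rate per lane
    PCIE_GEN1_LANE_GBIT = 2.5     # PCI-E 1.x signaling rate per lane
    ENCODING = 8.0 / 10.0         # 8b/10b line coding on both links

    ib_4x_data = 4 * IB_SDR_LANE_GBIT * ENCODING       # ~8 Gbit/s
    pcie_x4_data = 4 * PCIE_GEN1_LANE_GBIT * ENCODING  # ~8 Gbit/s

    print(f"SDR IB 4x usable data rate    : {ib_4x_data:.1f} Gbit/s")
    print(f"PCI-E gen1 x4 usable data rate: {pcie_x4_data:.1f} Gbit/s")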

Good luck

Norman Geist.

-----Original Message-----
From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On Behalf
Of Thomas Albers
Sent: Saturday, September 17, 2011 21:59
To: namd-l_at_ks.uiuc.edu
Subject: namd-l: Infiniband and commodity hardware

Hello!

Does anyone on this list have experience with Infiniband and commodity
PC motherboards?

Recently, SDR Infiniband cards have appeared on the market at very good
prices. The Mellanox manual states that an MHEA28-XTC card requires a
PCI-E x8 slot; however, the average motherboard with two PCI-E x16 slots
tends to have only one slot with full x16 bandwidth, while the other one is
usually x4.

Does the Infiniband card fall back to x4 bandwidth when plugged into the
x4 slot, the way a typical graphics card does?

If it doesn't do that, and the CUDA graphics card has to go into the x4
slot, will bus bandwidth be a limiting issue for performance?
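
One way to check what width a card actually negotiates, once it is installed
in the slot, is to read the PCI-E link attributes that the Linux kernel
exposes in sysfs, or to look at the LnkSta line in "lspci -vv". A minimal
sketch, with a placeholder device address:

    # Print negotiated vs. maximum PCI-E link width/speed for one device.
    # "0000:03:00.0" is a placeholder address; look up the real one with lspci.
    import pathlib

    dev = pathlib.Path("/sys/bus/pci/devices/0000:03:00.0")
    for attr in ("current_link_width", "max_link_width",
                 "current_link_speed", "max_link_speed"):
        print(attr, "=", (dev / attr).read_text().strip())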

Regards,
Thomas
