Re: Namd benchmark problem

From: Nicholas M Glykos (glykos_at_mbg.duth.gr)
Date: Mon Oct 26 2009 - 10:08:01 CDT

Hi Axel,

> > Ok. An example then. Take the PCI bus and a network card sitting on
> > it: lowering the card's PCI latency timer will decrease latency, but
> > it will also reduce the maximum throughput. For a small MD system the
> > messages exchanged are short and the bottleneck is definitely
> > latency, so you are better off zeroing it (as seen with lspci). But
> > for, say, a very large system on a small number of nodes, the
> > bottleneck may well be throughput, in which case you want the card to
> > hold on to the bus for longer (a higher value in lspci).
>
> well, i am probably spoiled by working with people that have access to
> large compute resources,

:-)

> but my take on this scenario is that if your classical MD performance
> is governed by bandwidth rather than latency, then you are running on
> too small a machine, or with too few processors/nodes.

Tell me about it ;-))

> apart from that, i would be curious to see some numbers from some
> real-life examples. i would expect that a driver for an HPC
> communication device would automatically set the device into the
> optimal mode. at least that seems to be true for the hardware where i
> have root access and can query the latency.

Ah, you can tell someone's interconnect even when he is not talking
about it [but for the rest of us, the default PCI latency timers are set
by the cards' manufacturers, and their combined settings may well be far
from optimal. The last time I got bitten by that was two weeks ago].
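For anyone who wants to poke at this on their own box, something along
the following lines works on Linux. This is only a rough sketch: the bus
address 03:00.0 is a placeholder for your own card's address, setpci
needs root, the values are hexadecimal, and whether a given value
actually helps depends entirely on the card, the chipset and the
workload, so measure before and after.

  # find the card's bus address with a plain `lspci`, then check the
  # "Latency:" field in the verbose output for that device:
  lspci -v -s 03:00.0

  # small messages, latency-bound: make the card release the bus quickly
  setpci -s 03:00.0 latency_timer=00

  # large messages, bandwidth-bound: let the card hold the bus longer
  # (0xf8 = 248 PCI clocks)
  setpci -s 03:00.0 latency_timer=f8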

Nicholas

-- 
          Dr Nicholas M. Glykos, Department of Molecular Biology
     and Genetics, Democritus University of Thrace, University Campus,
  Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620,
    Ext.77620, Tel (lab) +302551030615, http://utopia.duth.gr/~glykos/
