Re: Namd benchmark problem

From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Mon Oct 26 2009 - 09:45:46 CDT

On Mon, 2009-10-26 at 16:14 +0200, Nicholas M Glykos wrote:

> Ok. An example then. Take a PCI bus and a network card sitting on it:
> lowering the card's PCI latency timer will decrease latency, but it
> will also reduce the maximum throughput. For a small MD system the
> messages exchanged are short, the bottleneck is definitely latency,
> and you are better off zeroing it (as seen in lspci). But for, say, a
> very large system on a small number of nodes, the bottleneck may well
> be throughput, in which case you want the card to hold on to the bus
> for longer (a higher value in lspci).
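
For concreteness, the knob described above is the PCI latency timer, which on
Linux can be changed with setpci from pciutils. A minimal sketch, assuming a
placeholder slot address ("03:00.0" -- substitute your card's slot as shown by
lspci), root privileges, and that setpci is installed; setpci expects the
value in hexadecimal:

#!/usr/bin/env python3
# Hypothetical helper (not from the original mail): write the PCI latency
# timer of one device via setpci. The timer is an 8-bit register (0-255
# bus clocks).
import subprocess
import sys

def set_latency_timer(slot: str, value: int) -> None:
    """Set the PCI latency timer of the device at 'slot' (e.g. "03:00.0")."""
    if not 0 <= value <= 255:
        raise ValueError("latency timer must fit in one byte")
    # setpci interprets the written value as hexadecimal.
    subprocess.run(["setpci", "-s", slot, f"latency_timer={value:02x}"],
                   check=True)

if __name__ == "__main__":
    # e.g. "set_lat.py 03:00.0 0"   to favor latency, or
    #      "set_lat.py 03:00.0 248" to let the card hold the bus longer.
    set_latency_timer(sys.argv[1], int(sys.argv[2]))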

Well, I am probably spoiled by working with people who have access to
large compute resources, but my take on this scenario is that if your
classical MD performance is governed by bandwidth rather than latency,
then you are running on too small a machine, or with too few
processors/nodes.

Apart from that, I would be curious to see some numbers from real-life
examples. I would expect the driver for an HPC communication device to
set the device to the optimal mode automatically; at least that seems
to be true for the hardware where I have root access and can query the
latency setting.
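
On Linux, "querying the latency" concretely means reading the per-device
latency timer that lspci -vv reports on its "Latency:" line. A small,
hypothetical sketch (assuming pciutils is installed; not part of the original
mail) that lists it for every device:

#!/usr/bin/env python3
# Hypothetical helper: report the PCI latency timer per device, parsed
# from the "Latency:" line that "lspci -vv" prints for each device.
import re
import subprocess

def pci_latency_timers():
    """Return {slot: latency_timer} parsed from 'lspci -vv' output."""
    out = subprocess.run(["lspci", "-vv"], capture_output=True,
                         text=True, check=True).stdout
    timers = {}
    slot = None
    for line in out.splitlines():
        # A device block starts with its slot, e.g. "03:00.0 Ethernet controller: ..."
        m = re.match(r"^((?:[0-9a-f]{4}:)?[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])\s",
                     line, re.I)
        if m:
            slot = m.group(1)
            continue
        # Inside the block lspci prints e.g. "Latency: 64, Cache Line Size: 64 bytes"
        m = re.search(r"\bLatency:\s*(\d+)", line)
        if m and slot is not None:
            timers[slot] = int(m.group(1))
    return timers

if __name__ == "__main__":
    for slot, lat in sorted(pci_latency_timers().items()):
        print(f"{slot}  latency timer = {lat}")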

cheers,
   axel.

> --
> Dr Nicholas M. Glykos, Department of Molecular Biology and Genetics,
> Democritus University of Thrace, University Campus, Dragana,
> 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620,
> Ext. 77620, Tel (lab) +302551030615, http://utopia.duth.gr/~glykos/

-- 
Dr. Axel Kohlmeyer  akohlmey_at_gmail.com 
Institute for Computational Molecular Science
College of Science and Technology
Temple University, Philadelphia PA, USA.
