Re: Re: Re: Re: namd-ibverbs fails to start

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Thu Nov 24 2011 - 00:15:52 CST

Hi,

before you spend too much time getting an RDMA solution to work, have you
already benchmarked NAMD's performance on your network? You should use a
UDP (not TCP) build of NAMD; it usually scales about 10% better than the
TCP variant, especially at higher node counts. The precompiled binary from
the NAMD homepage should work for you, since you have plain Ethernet, so
there is no need for ibverbs etc. There is also no need to compile your own
MPI version; the charm++ builds run fine.
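
As a rough illustration, a minimal sketch of how such a benchmark run could
look with the precompiled net (UDP) build and charmrun; the NAMD directory,
node names, core count, and config file below are only placeholders for your
own setup:

   # Hypothetical charm++ nodelist file (~/nodelist):
   #   group main
   #   host node01
   #   host node02
   #
   # Launch the UDP (net) build on 24 cores across those nodes:
   ~/NAMD_2.8_Linux-x86_64/charmrun +p24 ++nodelist ~/nodelist \
       ~/NAMD_2.8_Linux-x86_64/namd2 apoa1/apoa1.namd > bench_udp.log
   #
   # Compare the "Benchmark time: ... days/ns" lines in the log against an
   # identical run of the TCP build to see which scales better for you.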

Try it out and let me know.

Best regards

Norman Geist.

-----Original Message-----
From: David Hemmendinger [mailto:hemmendd_at_union.edu]
Sent: Wednesday, November 23, 2011 15:19
To: norman.geist_at_uni-greifswald.de
Subject: Re: Re: Re: Re: namd-l: namd-ibverbs fails to start

        No question about that -- it's 10Gbit Ethernet. I'll look into
MyrinetExpress, but will also go ahead and compile NAMD to use MPI.
        Thanks,
          David
>Ok, you need to check out what hardware you have: is it Infiniband or
>10Gbit/s-Ethernet? If it is Infiniband, you can do what I described. If it
>is 10Gbit/s-Ethernet, you can use the MyrinetExpress protocol over it to
>minimize latency, but maybe 10Gbit with normal TCP is already enough for
>you, which benchmarks will show. You could log in to the nodes and run
>lspci to see what hardware you have, then look for the network adapters
>(a sketch of such a check follows below the quote). Infiniband would
>mostly show up as something like Mellanox or InfiniHost III;
>10Gbit/s-Ethernet would also be recognizable. You could also look at the
>connectors if you have physical access to the nodes. 10Gbit/s-Ethernet has
>the well-known RJ-45 connector, while Infiniband would have SFF-8470 or
>QSFP.
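
A minimal sketch of the lspci check mentioned above, assuming the nodes run
Linux with pciutils installed; the grep patterns are just common vendor and
class strings, not an exhaustive list:

   # Show PCI devices and keep lines that look like network hardware.
   # Infiniband HCAs usually appear as "InfiniBand: Mellanox ..." or
   # "InfiniHost"; 10GbE NICs as "Ethernet controller: ... 10-Gigabit ...".
   lspci | grep -i -E 'ethernet|infiniband|mellanox|infinihost'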
