Re: mpi problems on opteron

From: Dow Hurst (Dow.Hurst_at_mindspring.com)
Date: Wed Aug 03 2005 - 22:39:35 CDT

Setting up ssh for public key authentication requires what Gengbin said,
but make sure the passphrase is empty. Or, if you want a passphrase,
have ssh-agent start on login with exported environment variables, as
described here:
http://mah.everybody.org/docs/ssh
There are other pages on this, but that procedure works for me. You run
ssh-add to add your keys to the agent, which will manage them as you
connect to remote machines.
I think most clusters are set up with empty passphrases!
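As a sketch of the two options above (empty passphrase vs. agent-managed
keys), assuming OpenSSH and the default id_rsa key path; adjust paths
for your site:

```shell
# Option 1: key with an empty passphrase (what most clusters use).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Install the public half on each node you want to reach:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Option 2: keep a passphrase, but let ssh-agent hold the decrypted key.
# Start the agent at login (it prints the environment variables to export):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa   # prompts for the passphrase once per login
```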

Here is the procedure I used to link the Ammasso MPI to charm for
compilation of NAMD 2.6b1, which was given to me by Eric Bohm:

"I find it far simpler just to make the Ammasso mpich the first (or even
only) mpi in your path. This ensures that its libraries and binaries
will be the ones used.
(I'd just follow Eric's advice and have only the MPI you want used in
your path)

export PATH=/opt/mpicham.gcc/mpich-1.2.5/bin:$PATH (my cluster's
location for the Ammasso MPI installation; prepending puts it first)

./build charm++ mpi-linux-amd64 -O -nobs

Which gave me a completely functional charm.

Then I downloaded NAMD.

Revised Make.charm to point to my charm.

and made the NAMD build dir
./config Linux-amd64-MPI

cd Linux-amd64-MPI
make
mpirun -np 4 ./namd src/alanin

And it works."
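The key step in the procedure above is making the Ammasso mpich the
first MPI found on your PATH before building charm++ and NAMD. A minimal
sketch of doing that and verifying it (the prefix is the one from this
message; substitute your cluster's location):

```shell
# Prepend the Ammasso mpich bin directory so its mpicc/mpirun shadow
# any other MPI installed on the system.
export PATH=/opt/mpicham.gcc/mpich-1.2.5/bin:$PATH

# Sanity checks before running ./build:
#   which mpicc      -> should print a path under the prefix above
#   mpirun -version  -> should identify the Ammasso mpich build
```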

Now, Eric's procedure didn't include fftw, and he mentioned that to me.
I'll deal with that later. If you can, check the source you're using;
mine didn't have the arch/All-Unix.plugins file. Maybe it isn't needed
now, I'm not sure. I was bouncing between the Wiki for AMD64 and the
Release Notes for 2.6b1, so I may have some steps mixed up when I tried
to compile before I got Eric's procedure. Post your procedure if you
get everything working the way you'd like. Thanks,
Dow

Leandro Martínez wrote:
>Hi Kyle,
>We have a cluster similar to yours, but running fedora. Probably the
>problem is that you need to set ssh to be used without passwords
>between the nodes. We are actually using rsh in our nodes instead
>because it was easier to configure. You need to put in your
>home directory a file named .rhosts containing
>
>143.106.51.147 username
>127.0.0.1 username
>192.168.0.100 username
>192.168.0.101 username
>192.168.0.102 username
>
>

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:39:47 CST