Re: installing NAMD on opteron cluster with MPI

From: Lina Nilsson (linan_at_u.washington.edu)
Date: Tue Mar 08 2005 - 02:27:25 CST

Hey-
The cluster supposedly uses neither Scyld nor Clustermatic, but I have
never been able to get a more precise answer about the 'home-made'
solution used. I also don't really know what I should ask about as
far as checking compatibility with NAMD.
NAMD (the AMD-64 version) doesn't run on the local host either; it gives
a segmentation fault.
Any thoughts?
Lina
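A segmentation fault on the local host is usually easier to chase than the MPI failure. A minimal diagnostic sketch, assuming the binary sits in the standard build directory (the binary path and input file name here are assumptions, adjust to the actual tree):

```shell
# Hypothetical path to the freshly built binary -- adjust to your tree.
NAMD_BIN=./Linux-amd64-g++/namd2

# 1. Look for shared libraries the binary cannot resolve on this node:
if [ -x "$NAMD_BIN" ]; then
    ldd "$NAMD_BIN" | grep "not found" || true
fi

# 2. Run serially under gdb to get a backtrace at the segfault:
if [ -x "$NAMD_BIN" ]; then
    gdb -batch -ex run -ex bt --args "$NAMD_BIN" min.template.inp
fi
```

If `ldd` reports anything as "not found" on a compute node, that points at the missing-shared-library problem suggested below.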

Gengbin Zheng wrote:
>
> Hi,
>
> Is this a cluster with Scyld or Clustermatic installed? Could it be
> some missing shared library on the compute nodes?
>
> Also, try whether it can run on just one processor (the localhost).
>
> Gengbin
>
> Lina M Nilsson wrote:
>
>> Hi. We are trying to install NAMD on a Beowulf Opteron 64-bit cluster
>> with MPI, but have not gotten NAMD to install correctly or generate
>> any of the normal output files. Instead, we get the "broken pipe"
>> error message below. Does anyone have any idea what we could try? (Our
>> install procedure, which worked fine on another cluster that uses
>> Opteron 64 processors, is listed below.)
>> Thanks, Lina Nilsson
>>
>> Output message:
>>
>> # LSBATCH: User input ./script2_gi
>> ------------------------------------------------------------
>>
>> Successfully completed.
>>
>> Resource usage summary:
>>
>> CPU time : 3.41 sec.
>> Max Memory : 2 MB
>> Max Swap : 11 MB
>> Max Processes : 1
>> Max Threads : 1
>>
>> The output (if any) follows:
>>
>> nodes: 4 proc: 8
>> cp: omitting directory `/hreidar/home/werk/pechmann/talin/job'
>> cp: omitting directory `/hreidar/home/werk/pechmann/talin/NAMD'
>> starting from n417 companion nodes: n356 n351 n349
>> /usr/local/apli/mpich/bin/mpirun: line 1: 14291 Broken pipe
>> /hreidar/home/werk/pechmann/NAMD/NAMD_2.5_Source/Linux-amd64-g++//namd2
>> "min.template.inp" -p4pg /scratch/werk/pechmann/PI14081 -p4wd
>> /scratch/werk/pechmann
>>
>>
>> COMPILING PROTOCOL
>>
>>
>> I) Build fftw: --------------
>>
>> ./configure --enable-type-prefix --enable-float
>> --prefix=/asgard/home/phys/gianluca/NAMD/fftw-2.1.5/gcc
>>
>> make
>>
>> make install
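With --enable-type-prefix and --enable-float, fftw 2.1.5 installs its single-precision libraries under an "s" prefix, which is what the NAMD fftw arch file links against. A quick check that the install landed where expected (a sketch; the prefix is taken from the configure line above and is an assumption if yours differs):

```shell
# Prefix from the ./configure line above (an assumption if yours differs).
FFTW_PREFIX=/asgard/home/phys/gianluca/NAMD/fftw-2.1.5/gcc

# The prefixed single-precision libraries NAMD expects:
for lib in libsfftw.a libsrfftw.a; do
    if [ -f "$FFTW_PREFIX/lib/$lib" ]; then
        echo "found $lib"
    else
        echo "missing $FFTW_PREFIX/lib/$lib"
    fi
done
```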
>>
>>
>> II) Build charm++ -----------------
>>
>> Go to NAMD_2.5_Source and untar the newest version of charm++ there
>> (e.g. charm-5.8; delete the charm.tar delivered with the NAMD tarball!)
>>
>> tar xvfz charm-5.8.tar.gz
>>
>> ln -s charm-5.8 charm
>>
>> cd charm
>>
>> setenv PATH /usr/local/apli/mpich/bin:${PATH}
>>
>> ./build charm++ mpi-linux --incdir=/usr/local/apli/mpich/include
>> --libdir=/usr/local/apli/mpich/lib --no-shared -O -DCMK_OPTIMIZE=1
>>
>> cd ../../../../
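Before building NAMD against it, the charm++ build can be sanity-checked on its own; charm ships a megatest suite inside its source tree. A sketch, not a verified recipe: the directory below assumes the mpi-linux target built above, and the `make pgm` target is the conventional one for charm test programs:

```shell
# Standard location of megatest inside a charm mpi-linux build (assumption).
MEGATEST_DIR=charm/mpi-linux/tests/charm++/megatest

if [ -d "$MEGATEST_DIR" ]; then
    # Build and run the test program on two MPI processes.
    ( cd "$MEGATEST_DIR" && make pgm && mpirun -np 2 ./pgm )
else
    echo "charm megatest dir not found: $MEGATEST_DIR"
fi
```

If megatest itself dies with a broken pipe or segfault, the problem is in the charm++/MPI layer rather than in NAMD.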
>>
>> vi Make.charm: CHARMBASE =
>> /asgard/home/phys/gianluca/NAMD/NAMD_2.5_Source/charm
>>
>> vi arch/Linux-amd64.fftw: FFTDIR=/asgard/home/phys/gianluca/NAMD/fftw/gcc
>>
>> vi arch/Linux-amd64-g++.arch: CHARMARCH = mpi-linux CXXOPTS = -O3
>> -march=k8 -m64 -fexpensive-optimizations -ffast-math COPTS = -O3
>> -march=k8 -m64 -fexpensive-optimizations -ffast-math
>>
>>
>> ./config fftw Linux-amd64-g++
>>
>> cd Linux-amd64-g++
>> make
>>
>>
>> ----------------------------------- This didn't work, so then:
>>
>> in NAMD_2.5_Source/arch/Linux-amd64-g++.arch
>>
>> instead of: CXXOPTS = -O3 -march=k8 -m64 -fexpensive-optimizations -ffast-math
>>
>> use: CXXOPTS = -O0 -march=k8 -m64 -fexpensive-optimizations -ffast-math
>>
>> instead of: COPTS = -O3 -march=k8 -m64 -fexpensive-optimizations -ffast-math
>>
>> use: COPTS = -O0 -march=k8 -m64 -fexpensive-optimizations -ffast-math
>>
>> Also tried "-march=athlon" instead of "-march=k8", and dropped
>> "-fexpensive-optimizations -ffast-math".
>>
>> ------------------------------------ tried:
>>
>> COPTS = -O0 -march=k8 -m64
>>
>>
>> Lina Nilsson
>>
>>
>> Research Assistant
>> Department of Bioengineering
>> University of Washington, Seattle
>> p:(206) 685-4432
>> f:(206) 685-4434
>>
>>

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:40:34 CST