Re: Can anyone help me with this parallel issue

From: Morad Alawneh (alawneh_at_chem.byu.edu)
Date: Mon Jan 30 2006 - 14:35:09 CST

Thanks for your reply.

The reason I compiled NAMD with the amd64 option is that even though the cluster has Intel Xeon processors, they use the AMD64 (x86-64/EM64T) instruction set.

So do you still think I should build the Intel version? I did try the Intel compiler, but it gave me a lot of warning messages.

Thanks



Morad Alawneh

Department of Chemistry and Biochemistry

C100 BNSN, BYU

Provo, UT 84602



Sterling Paramore wrote:
The first problem I see is that you're using the AMD64 compiling options when the Xeon is an Intel processor.  I'm not sure if that will make much of a difference, but you should use the i686 options instead.  If you have the Intel compiler, you should definitely use that; I believe you can download an academic copy for free.  Here's what I did on the Xeon EM64T I have access to (since you're not using myrinet, you don't need the gm library):

CHARM++:

./build charm++ mpi-linux icc gm2 --no-shared -DCMK_OPTIMIZE=1 -I/opt/mpi/x86_64-intel-8.1/mpich-gm-1.2.6..14a/include -L/opt/mpi/x86_64-intel-8.1/mpich-gm-1.2.6..14a/lib -lgm -lmpich -lpmpich -lpthread

FFTW:

./configure --enable-float --enable-type-prefix --prefix=/home/airforce/paramore/NAMD_2.6b1_Source/fftw/linux-arl

TCL:

./configure --prefix=/home/airforce/paramore/NAMD_2.6b1_Source/tcl/linux-arl --disable-shared

NAMD:

edited arch/Linux-i686-icc.arch and arch/Linux-i686-icc-GM.arch to point to the right charm++ build

./config tcl fftw plugins Linux-i686-GM-icc
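
If you're not on myrinet, a rough ethernet-only version of the same recipe just drops the gm pieces.  Treat this as a sketch to adapt (the arch file name and CHARMARCH below are my guesses for a plain MPI build, not something I've tested on your cluster):

./build charm++ mpi-linux icc --no-shared -DCMK_OPTIMIZE=1

edit arch/Linux-i686-MPI.arch so CHARMARCH points at the mpi-linux-icc build (and swap in icc there if you want the Intel compiler for NAMD itself)

./config tcl fftw plugins Linux-i686-MPI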


-Sterling

Morad Alawneh wrote:

Dear everyone,

I recently compiled NAMD 2.6b1 to run it in parallel. I have a system of about 92,000 atoms, and when I submit my job on more than 2 processors I get worse performance than on 2 processors.
Would anyone help me figure out what the problem is?

Thanks

Cluster system:
It is a Dell 1855 Linux cluster consisting of 630 nodes. Each node is equipped with two Intel Xeon EM64T processors (3.6 GHz) and 4 GB of memory. Each node runs its own Linux kernel (Red Hat Enterprise Linux 3). All nodes are connected with Gigabit Ethernet.

Installation instructions:
_charm++ installation:_

Edit namd/Make.charm
(set CHARMBASE to the full path to charm-5.9)

Edit the file charm/src/arch/mpi-linux-amd64/conv-mach.sh to point to the right
mpicc and mpiCC (replacing mpiCC with mpicxx); use the mpich/gnu install.
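
For reference, after that edit the compiler lines in conv-mach.sh ended up looking roughly like this (variable names and paths from memory, so treat them as approximate):

CMK_CC='/opt/mpich/gnu/bin/mpicc '
CMK_CXX='/opt/mpich/gnu/bin/mpicxx '
CMK_LDXX='/opt/mpich/gnu/bin/mpicxx '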

./build charm++ mpi-linux-amd64 gcc --libdir=/opt/mpich/gnu/lib
--incdir=/opt/mpich/gnu/include --no-build-shared -O -verbose -DCMK_OPTIMIZE=1


_namd installation:_

Edit various configuration files:
namd/arch/Linux-amd64-MPI.arch (fix CHARMARCH to be mpi-linux-amd64-gcc)
namd/arch/Linux-amd64.fftw  (fix path to files, and delete all
                             -I$(HOME)/fftw/include -L$(HOME)/fftw/lib)
namd/arch/Linux-amd64.tcl   (fix path to files, and delete all
                             -I$(HOME)/tcl/include -L$(HOME)/tcl/lib)
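
For reference, after those edits the important lines look roughly like this (exact variable names differ slightly between NAMD versions, and the install paths below are just placeholders for wherever fftw and tcl were built):

Linux-amd64-MPI.arch:  CHARMARCH = mpi-linux-amd64-gcc
Linux-amd64.fftw:      FFTDIR=/ibrix/apps/biophysics/fftw/linux-amd64
Linux-amd64.tcl:       TCLDIR=/ibrix/apps/biophysics/tcl/linux-amd64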

Set up build directory and compile:
  ./config tcl fftw Linux-amd64-MPI
  cd Linux-amd64-MPI
  make
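
A quick two-process sanity check before submitting real jobs, using the tiny alanin example that ships in the NAMD src directory (adjust the relative path to wherever your build directory sits):

  /opt/mpich/gnu/bin/mpirun -np 2 ./namd2 ../src/alanin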


Job submission script:

#!/bin/bash

#PBS -l nodes=1:ppn=2,walltime=336:00:00
#PBS -N Huge_gA
#PBS -m abe
#PBS -M alawneh@chem.byu.edu

# The maximum memory allocation
let Memory=256*1024*1024
export P4_GLOBMEMSIZE=$Memory

#cd into the directory where I typed qsub
cd $PBS_O_WORKDIR

TMPDIR=/ibrix/scr/$PBS_JOBID
PROG=/ibrix/apps/biophysics/namd/Linux-amd64-MPI/namd2
ARGS="+strategy USE_GRID"
IFILE="equil1_sys.namd"
OFILE="equil1_sys.log"

# NP should always be: nodes*ppn from #PBS -l above
let NP=1*2

# Full path to mpirun
MPIRUN=/opt/mpich/gnu/bin/mpirun

# Execute the job using mpirun
$MPIRUN -machinefile $PBS_NODEFILE -np $NP $PROG $ARGS $IFILE > $OFILE

exit 0
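
For runs on more than two processors, the resource request and NP scale together; e.g. a hypothetical 8-processor run would change only these lines:

#PBS -l nodes=4:ppn=2,walltime=336:00:00
...
let NP=4*2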

-- 



Morad Alawneh

Department of Chemistry and Biochemistry

C100 BNSN, BYU

Provo, UT 84602

