Re: Compilation of Charm++ then NAMD 2.6 on Opteron with Portland compilers

From: Philip Peartree (p.peartree_at_postgrad.manchester.ac.uk)
Date: Wed Dec 12 2007 - 15:35:10 CST

Thanks,

I think I've managed to sort it out myself, but should anything go wrong I'll try that... although a little bell in my head is saying that's only particularly useful with LAM, and I'm using MPICH. That's just a gut feeling, to be honest.

The other bits about the NAMD compilation will be useful though.

Philip Peartree

----- Original Message -----
From: "lorenzo alamo" <alamo_at_ivic.ve>
To: "Philip Peartree" <p.peartree_at_postgrad.manchester.ac.uk>
Sent: 12 December 2007 20:35:58 (GMT) Europe/London
Subject: Re: namd-l: Compilation of Charm++ then NAMD 2.6 on Opteron with Portland compilers

Dear Philip,
We had the same error, and after a lot of trials we found that the most important switch is to use pthreads when building Charm++. Below we send you all the information about our system and the detailed procedure we used.

Note:
We found this clue in an old e-mail (Feb 2006) on the NAMD-l list with the subject "charm++ over MPI" by Bogdan Costescu.

Regards,
Lorenzo Alamo and Antonio Pinto
Structural Biology Department
IVIC,
Venezuela

Our system: a Beowulf cluster of dual-core 64-bit AMD Opteron processors running SUSE Linux Enterprise Server 9.2 (kernel 2.6.5-7.201-smp). We compiled LAM-MPI version 7.1.2 with Portland Group compiler version 6.0-5 and are already using it to run other MPI programs (we also have gcc version 3.3.3, but we did not use it).

TO COMPILE CHARM++
vi ~/software/namd_2.6/NAMD_2.6_Source/charm-5.9/src/arch/mpi-linux-amd64/conv-mach.sh
Change CMK_SYSLIBS="-lmpich" to CMK_SYSLIBS="", and set the native and Fortran compilers to the MPI wrappers:
CMK_NATIVE_CC="mpicc $CMK_AMD64 "
CMK_NATIVE_LD="mpicc $CMK_AMD64 "
CMK_NATIVE_CXX="mpiCC $CMK_AMD64 "
CMK_NATIVE_LDXX="mpiCC $CMK_AMD64 "
CMK_NATIVE_LIBS=""
# fortran compiler
CMK_CF77="mpif77"
CMK_CF90="mpif90"
CMK_F90LIBS="-lf90math -lfio -lU77 -lf77math "
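If you prefer to script the CMK_SYSLIBS edit rather than make it by hand in vi, a minimal sketch with sed follows; conv-mach.demo here is a stand-in file for the demonstration, not the real conv-mach.sh path from the tree above:

```shell
# Demo of the CMK_SYSLIBS edit from the walkthrough, done with sed instead of vi.
# conv-mach.demo stands in for charm-5.9/src/arch/mpi-linux-amd64/conv-mach.sh.
printf 'CMK_SYSLIBS="-lmpich"\n' > conv-mach.demo
sed -i 's/CMK_SYSLIBS="-lmpich"/CMK_SYSLIBS=""/' conv-mach.demo
cat conv-mach.demo
```

Running the same sed command against the real conv-mach.sh makes the procedure repeatable across rebuilds.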
To eliminate the warning "pgCC-Warning-Unknown switch: -rdynamic" during pgm compilation:
vi ~/software/namd_2.6/NAMD_2.6_Source/charm-5.9/src/scripts/configure
Remove -rdynamic from TRACE_LINK_FLAG="-rdynamic"
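The -rdynamic removal can also be scripted; again, configure.demo is only a stand-in for the real charm-5.9/src/scripts/configure:

```shell
# Demo: empty out TRACE_LINK_FLAG so pgCC never sees the -rdynamic switch
# it does not understand.
printf 'TRACE_LINK_FLAG="-rdynamic"\n' > configure.demo
sed -i 's/TRACE_LINK_FLAG="-rdynamic"/TRACE_LINK_FLAG=""/' configure.demo
cat configure.demo
```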

!!!! AFTER A LOT OF TRIAL AND ERROR WE FOUND THAT THE MOST IMPORTANT SWITCH IS "PTHREADS" DURING THE BUILD OF CHARM++ !!!!
./build charm++ mpi-linux-amd64 lam pthreads -O -nobs -DCMK_OPTIMIZE=1
ln -s mpi-linux-amd64-lam-pthreads mpi-linux-amd64
cd ~/software/namd_2.6/NAMD_2.6_Source/charm-5.9/mpi-linux-amd64/tests/charm++/megatest; make pgm
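The symlink step matters because Charm++ names its build directory after the full option string (mpi-linux-amd64-lam-pthreads), while NAMD's CHARMARCH setting expects plain mpi-linux-amd64. A small demonstration with an empty stand-in directory (in the real tree this is run inside charm-5.9/):

```shell
# Create a stand-in build directory and point the short name at it,
# as the walkthrough does inside charm-5.9/. -sfn makes the step repeatable.
mkdir -p mpi-linux-amd64-lam-pthreads
ln -sfn mpi-linux-amd64-lam-pthreads mpi-linux-amd64
readlink mpi-linux-amd64
```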

TO COMPILE NAMD
cd ~/software/namd_2.6/NAMD_2.6_Source
Edit various configuration files:
vi Make.charm
# Set CHARMBASE to the top level charm directory.
CHARMBASE = /home/alamo/software/namd_2.6/NAMD_2.6_Source/charm-5.9
vi arch/Linux-amd64.fftw
FFTDIR=/home/alamo/software/namd_2.6/fftw/linux-amd64
FFTINCL=-I$(FFTDIR)/include
FFTLIB=-L$(FFTDIR)/lib -lsrfftw -lsfftw
FFTFLAGS=-DNAMD_FFTW
FFT=$(FFTINCL) $(FFTFLAGS)
vi arch/Linux-amd64.tcl
TCLDIR=/home/alamo/software/namd_2.6/tcl/linux-amd64
TCLINCL=-I$(TCLDIR)/include
TCLLIB=-L$(TCLDIR)/lib -ltcl8.3 -ldl
TCLFLAGS=-DNAMD_TCL
TCL=$(TCLINCL) $(TCLFLAGS)
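Before running make, it can save time to confirm that the libraries these two arch files link against really exist. A hypothetical check, with the prefixes from this walkthrough as defaults; substitute your own install paths, and note that on some systems the Tcl library is libtcl8.3.a rather than .so:

```shell
# Assumed prefixes from the walkthrough; override via the environment if yours differ.
FFTDIR=${FFTDIR:-/home/alamo/software/namd_2.6/fftw/linux-amd64}
TCLDIR=${TCLDIR:-/home/alamo/software/namd_2.6/tcl/linux-amd64}
for f in "$FFTDIR/lib/libsrfftw.a" "$FFTDIR/lib/libsfftw.a" "$TCLDIR/lib/libtcl8.3.so"; do
  if [ -e "$f" ]; then echo "found:   $f"; else echo "MISSING: $f"; fi
done
```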
vi ~/software/namd_2.6/NAMD_2.6_Source/arch/Linux-amd64-MPI.arch
NAMD_ARCH = Linux-amd64
CHARMARCH = mpi-linux-amd64
CXX = pgCC
CXXOPTS = -O2 -Mpreprocess -fast -fastsse -tp k8-64 -mcmodel=medium -pc 64 -Minfo -Dosf_ieee
CC = pgcc
COPTS = -O2 -Mpreprocess -fast -fastsse -tp k8-64 -mcmodel=medium -pc 64 -Minfo -Dosf_ieee
Compile NAMD:
./config tcl fftw Linux-amd64-MPI;cd Linux-amd64-MPI ; make
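Putting the NAMD step together, a one-shot sketch under the same assumptions as above (source tree in ~/software/namd_2.6 and the arch files already edited); set -e stops at the first failure, so errors are not buried in the make output:

```shell
set -e
cd ~/software/namd_2.6/NAMD_2.6_Source   # assumed tree layout from this walkthrough
./config tcl fftw Linux-amd64-MPI        # generates the Linux-amd64-MPI build dir
cd Linux-amd64-MPI
make
```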

Philip Peartree wrote:

Hi,

I'm having some issues compiling Charm++/NAMD on our Opteron-based cluster. I've read somewhere that it is best to use the fastest compiler. I think I've just about got Charm++ working: it runs simplearrayhello fine, and I've compiled the megatest program. The only error I got was from pgCC saying that -rdynamic is an unknown switch. If I run on 4 procs I get the following output:

Megatest is running on 4 processors.
test 0: initiated [groupring (milind)]
test 0: completed (45.86 sec)
test 1: initiated [nodering (milind)]
test 1: completed (45.36 sec)
test 2: initiated [varsizetest (mjlang)]
test 2: completed (0.27 sec)
test 3: initiated [varraystest (milind)]
test 3: completed (0.32 sec)
test 4: initiated [groupcast (mjlang)]
test 4: completed (1.57 sec)
test 5: initiated [nodecast (milind)]
test 5: completed (0.96 sec)
test 6: initiated [synctest (mjlang)]
**ERROR: in routine alloca() there is a
stack overflow: thread 0, max 535822332KB, used 2KB, request -1082359024B

Could anyone elaborate on what's wrong here? I've heard that the megatest program tests things that are fairly crucial to NAMD's running.

Philip Peartree

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:45:41 CST