From: Gengbin Zheng (gzheng_at_ks.uiuc.edu)
Date: Tue Aug 10 2004 - 16:54:08 CDT
Hi Randy,
There is a NAMD wiki page about Myrinet at:
http://www.ks.uiuc.edu/Research/namd/wiki/index.cgi?NamdOnMyrinet
NAMD/Charm++ on the old GM version 1 used to work smoothly. Starting 
with GM 2, there is an incompatibility caused by the introduction of 
pthreads in GM. Charm++ has to switch to a pthread-based implementation 
of user-level threads (when "gm2" is given in the build command), which 
may or may not work depending on how GM is configured.
Normally, just compile Charm++ with or without the "gm2" build option; 
one of the two should work.
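As a sketch (same options as the full commands below, minus the site-specific paths; whether "gm" or "gm2" is right depends on how GM is set up on your cluster):

```shell
# Try the plain GM build first:
./build charm++ mpi-linux-ia64 ecc gm --no-shared -DCMK_OPTIMIZE=1
# If that build (or its binaries) fails under GM 2, try the
# pthread-based variant instead:
./build charm++ mpi-linux-ia64 ecc gm2 --no-shared -DCMK_OPTIMIZE=1
```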
You may also want to fix the include and library search paths. There 
are many ways to do this.
You can put everything in the build command (as in Shantenu's email 
below), for example:
./build  charm++ mpi-linux-ia64  ecc gm  --no-shared -DCMK_OPTIMIZE=1
-I/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/include
-L/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/lib/
-L/opt/gm/lib/   -lgm   -lpthread  -lmpich   -lpmpich
-lpthread
or simplify it a little, like:
./build  charm++ mpi-linux-ia64  ecc gm  --no-shared 
--basedir /usr/local/mpich/mpich-gm-1.2.5..10-intel-r2
--libdir  /opt/gm/lib/  
-DCMK_OPTIMIZE=1 -lgm -lpthread  -lmpich   -lpmpich -lpthread
Or you can modify all the compilation flags in:
charm/src/arch/mpi-linux-ia64,
specifically the files "conv-mach.sh" (for the default build) and 
"cc-ecc.sh" (used when "ecc" is given to build).
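A minimal sketch of that approach, assuming the variable names used by a typical Charm++ conv-mach.sh (CMK_INCDIR, CMK_LIBDIR, CMK_LIBS are assumptions here; inspect your copy first, since the stock file may set them differently):

```shell
# Sketch only: append site-specific include/lib search paths to the
# arch config.  The CMK_* variable names are assumptions; check your
# conv-mach.sh before editing.
ARCH=charm/src/arch/mpi-linux-ia64
mkdir -p "$ARCH" && touch "$ARCH/conv-mach.sh"  # both already exist in a real checkout
cat >> "$ARCH/conv-mach.sh" <<'EOF'
CMK_INCDIR="-I/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/include"
CMK_LIBDIR="-L/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/lib -L/opt/gm/lib"
CMK_LIBS="$CMK_LIBS -lgm -lpthread -lmpich -lpmpich"
EOF
```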
Gengbin
Shantenu Jha wrote:
>>Does anyone know if NAMD runs on Itanium2 Linux clusters?
>
>Yes.
>
>I've pasted  some notes  and the build  commands used to  compile NAMD
>(with the assistance of Matt Harvey, UCL) on the NCSA teragrid Itanium
>cluster below, but please note, before using the pasted build commands
>locally, that  in all probability  they'll need some tweaking  to deal
>with your specific platform.
>
>[I'd have posted these on the NamdWiki as well, but it isn't accepting
>edits, which may be related to recent web server problems at
>ks.uiuc.edu].
>
>>I've looked  at the FAQ, which  almost implies that  NCSA has ported
>>NAMD to their Itanium cluster.   But there are no details.  Has that
>>happened?  Who might I contact to learn more about how it was done?
>
>The NCSA Itanium cluster is part of the TeraGrid and has a prescribed
>software environment, called the Common TeraGrid Software Stack
>(CTSS). There probably isn't much point going into the details of
>CTSS, as the default environment (accessed via "softenv") is
>documented elsewhere, but it is useful to mention that the default
>compiler is Intel 8.0.
>
>I personally had a lot of difficulty getting the compiler to use the
>right version of MPICH, so that may be something to watch out for.
>Also, the TeraGrid IA64 cluster has Myrinet interconnects and I had to
>explicitly load the GM libraries (Myricom's add-on to MPICH), though I
>got away with GM rather than GM2.
>
>I used  charm 5.8  eventually, although  later I was  able to  get the
>default charm++ version that ships with NAMD 2.5 working as well.
>
>To build charm++ on the TeraGrid IA64, use the "mpi-linux-ia64 ecc gm"
>platform options. The full command used was:
>
> ./build  charm++ mpi-linux-ia64  ecc gm  --no-shared -DCMK_OPTIMIZE=1
>-I/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/include
>-L/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/lib/
>-L/opt/gm/lib/   -lgm   -lpthread  -lmpich   -lpmpich
>-lpthread
>
>There is at least one other subtlety: you can either tweak the
>Makefiles to work around a directory-naming mismatch, or (easier)
>make a softlink:
>  mpi-linux-ia64-gm -> mpi-linux-ia64-gm-ecc/
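Concretely, from the top of the charm tree the link might be created like this (directory names taken from the note above; the mkdir is only a stand-in here, since the real build creates that directory):

```shell
# Point the directory name the Makefiles expect at the one the
# build actually produced.
mkdir -p mpi-linux-ia64-gm-ecc   # stand-in; a real build creates this
ln -sfn mpi-linux-ia64-gm-ecc mpi-linux-ia64-gm
```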
>
>Regarding the simplearrayhello validation program (in pgms): although
>charm++ compiled, building the validation program still required the following:
>
>make OPTS="-L/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/lib/  -L/opt/gm/lib/ -lmpich 
>  -lpmpich -lgm -lpthread"
>
>With charm++ built and the validation programs running, you're almost there. 
>
>For NAMD I used the Linux-ia64-MPI-GM-ecc option to config ("ecc" stands
>for the Intel 64-bit compiler, "GM" for the Myrinet-specific parts). I
>didn't try the net-linux version, as we'll be using it on hundreds of
>processors rather than tens.
>
>The  Makearch file in the directory Linux-ia64-MPI-GM-ecc is,
>
>include .rootdir/Make.charm
>include .rootdir/arch/Linux-ia64-MPI-GM-ecc.arch
>CHARM = $(CHARMBASE)/$(CHARMARCH)
>NAMD_PLATFORM = $(NAMD_ARCH)-MPI-GM
>include .rootdir/arch/$(NAMD_ARCH).base
>include .rootdir/arch/$(NAMD_ARCH).tcl
>include .rootdir/arch/$(NAMD_ARCH).fftw
>include .rootdir/arch/$(NAMD_ARCH).plugins
>
>and you  can get  the required  64 bit tcl,fftw  and plugins  from the
>ks.uiuc.edu website (as mentioned in the notes.txt file).
>
>Finally I had to add the following to the generated Makefile:
>
>MPICHLIB=-L/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/lib/
>-L/opt/gm/lib/ -lmpich -lpmpich -lgm -lpthread
>
>and add $(MPICHLIB) to the link line of the namd2 target.
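Together, those two changes might look like this in the generated Makefile (a sketch only; $(OBJS) and $(LIBS) stand in for whatever variables your generated Makefile already uses on the namd2 link line):

```make
# Site-specific MPICH/GM link paths (adjust to your installation):
MPICHLIB = -L/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/lib/ \
           -L/opt/gm/lib/ -lmpich -lpmpich -lgm -lpthread

# Sketch of the namd2 target with $(MPICHLIB) added to the link line:
namd2: $(OBJS)
	$(CXX) -o namd2 $(OBJS) $(LIBS) $(MPICHLIB)
```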
>
>Don't know if that helps, but good luck!
>
>Yes, I too had hell getting all this working, but thanks to great
>support locally (Matt Harvey), we were able to get there. I'm sure the
>charm++/NAMD developers know this, but currently the barrier to
>deployment from source code is almost prohibitively high.
>
>Shantenu
>
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:38:49 CST