From: Shantenu Jha (s.jha_at_ucl.ac.uk)
Date: Tue Aug 10 2004 - 16:25:28 CDT
> Does anyone know if NAMD runs on Itanium2 Linux clusters?
Yes.
I've pasted below some notes and the build commands used to compile NAMD
(with the assistance of Matt Harvey, UCL) on the NCSA TeraGrid Itanium
cluster. Please note, before using these build commands locally, that in
all probability they'll need some tweaking for your specific platform.
[I'd have posted these on the NamdWiki as well, but it isn't accepting
edits, which may be related to recent web server problems at
ks.uiuc.edu.]
> I've looked  at the FAQ, which  almost implies that  NCSA has ported
> NAMD to their Itanium cluster.   But there are no details.  Has that
> happened?  Who might I contact to learn more about how it was done?
The NCSA Itanium cluster is part of the TeraGrid and has a prescribed
software environment, called the Common TeraGrid Software Stack (CTSS).
There probably isn't much point going into the details of CTSS, as the
default environment (accessed via "softenv") is documented elsewhere,
but it is useful to mention that the default compiler is Intel 8.0.
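For reference, a minimal sketch of driving softenv (the "+intel" key
name here is illustrative; run "softenv" to see what your site actually
defines):

  softenv                 # list the keys available on this system
  soft add +intel         # add a key for the current session only
  # for a permanent change, add the key to ~/.soft and run "resoft"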
I personally had a lot of difficulty getting the compiler to use the
right version of mpich, so that may be something to watch for. Also,
the TeraGrid IA64 cluster has Myrinet interconnects, and I had to
explicitly load the GM libraries (Myricom's add-on to mpich), though I
got away with GM rather than GM2.
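A quick way to check which mpich a build will pick up ("mpicc -show" is
a standard mpich flag; the GM-enabled mpich should appear in the -I/-L
paths it prints):

  which mpicc
  mpicc -show    # shows the underlying compiler and include/lib paths used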
I eventually used charm 5.8, although later I was able to get the
default charm++ version that ships with NAMD 2.5 working as well.
To build charm++ on the TeraGrid IA64, use the "mpi-linux-ia64 ecc gm"
platform options. The full command used was:
  ./build charm++ mpi-linux-ia64 ecc gm --no-shared -DCMK_OPTIMIZE=1 \
    -I/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/include \
    -L/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/lib/ \
    -L/opt/gm/lib/ -lgm -lpthread -lmpich -lpmpich -lpthread
There is at least one other subtlety: the charm++ build directory ends
up named differently from what the Makefiles expect. Either tweak the
Makefiles to reflect the directory naming, or (easier) make a softlink:

  mpi-linux-ia64-gm -> mpi-linux-ia64-gm-ecc/
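In other words, from the top-level charm directory (assuming the
default layout):

  ln -s mpi-linux-ia64-gm-ecc mpi-linux-ia64-gm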
Regarding the simplearrayhello validation program (in pgms): although
charm++ compiled, building the validation program still required the
following:

  make OPTS="-L/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/lib/ \
    -L/opt/gm/lib/ -lmpich -lpmpich -lgm -lpthread"
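For completeness, a sketch of building and running it (the pgms path
and the "hello" binary name are from memory and may differ in your
charm tree; on an MPI build, charmrun hands off to mpirun):

  cd pgms/charm++/simplearrayhello
  make OPTS="..."           # OPTS as above
  ./charmrun +p2 ./hello    # run the test on 2 processors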
With charm++ built and the validation program running, you're almost
there. For NAMD I used the Linux-ia64-MPI-GM-ecc option to config
("ecc" stands for the Intel 64-bit compiler, "GM" for the
Myrinet-specific parts). I didn't try the net-linux version, as we'll
be running on hundreds of processors rather than tens.
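The overall sequence was roughly as follows (the exact config arguments
differ between NAMD versions, so check the notes.txt in your source
tree):

  ./config tcl fftw plugins Linux-ia64-MPI-GM-ecc
  cd Linux-ia64-MPI-GM-ecc
  make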
The Makearch file in the directory Linux-ia64-MPI-GM-ecc is:
include .rootdir/Make.charm
include .rootdir/arch/Linux-ia64-MPI-GM-ecc.arch
CHARM = $(CHARMBASE)/$(CHARMARCH)
NAMD_PLATFORM = $(NAMD_ARCH)-MPI-GM
include .rootdir/arch/$(NAMD_ARCH).base
include .rootdir/arch/$(NAMD_ARCH).tcl
include .rootdir/arch/$(NAMD_ARCH).fftw
include .rootdir/arch/$(NAMD_ARCH).plugins
and you can get the required 64-bit Tcl, FFTW, and plugin libraries
from the ks.uiuc.edu website (as mentioned in the notes.txt file).
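Once unpacked, point the arch files at wherever you put them. A sketch
for the Tcl one, using the variable names from the NAMD 2.5 arch files
and a purely hypothetical install path:

  # arch/Linux-ia64.tcl -- adjust TCLDIR to your unpacked tcl-linux-ia64
  TCLDIR=/home/you/namd-libs/tcl-linux-ia64
  TCLINCL=-I$(TCLDIR)/include
  TCLLIB=-L$(TCLDIR)/lib -ltcl8.3 -ldl
  TCLFLAGS=-DNAMD_TCL
  TCL=$(TCLINCL) $(TCLFLAGS)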
Finally, I had to add the following to the generated Makefile:

  MPICHLIB=-L/usr/local/mpich/mpich-gm-1.2.5..10-intel-r2/lib/ \
    -L/opt/gm/lib/ -lmpich -lpmpich -lgm -lpthread

and introduce a $(MPICHLIB) in the dependency list of the namd2 target.
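Schematically, the effect you want is for those -L/-l flags to reach
the final link line. A sketch only (the real namd2 rule in the
generated Makefile is much longer):

  namd2: $(OBJS)
          $(CHARMC) -language charm++ -o namd2 $(OBJS) $(MPICHLIB)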
Don't know if that helps, but good luck!
Yes, I too had hell getting all this working, but thanks to great
support locally (Matt Harvey), we were able to get there. I'm sure the
charm++/NAMD developers know this, but currently the barrier to
deploying from source code is almost prohibitively high.
Shantenu
--
Shantenu Jha, Research Fellow
Centre for Computational Science
University College London (UCL)
http://www.realitygrid.org
s.jha_at_ucl.ac.uk
020-7679-5300
PGP Key ID 0x78F823E6