build smp version of namd with intel 10.1

From: Vlad Cojocaru
Date: Thu Aug 07 2008 - 10:44:02 CDT

Dear NAMD users,

Does anyone have experience building an SMP version of NAMD with the
Intel 10.1 compilers on the linux-amd64 architecture? I would like to
test whether it is any faster than the net tcp version I have already
compiled on our new cluster.
The bottleneck is the charm++ build, so I have also asked the Charm++
developers, but I suspect that if anybody on the NAMD list has
experience with this, I will get an answer much more quickly.

I managed to build the net-linux-amd64-smp version of charm++
successfully with gcc 4.1.2, but with intel 10.1 I get the error below.
(The error is the same for the mpi and net versions, so don't be
confused by the mpicc line in the error message; the same thing happens
with icc.)
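For reference, the charm++ build commands I am running look roughly like the following. This is only a sketch: the exact build-option names (smp, icc) are the ones I believe apply to this charm++ generation, and may be spelled differently in other versions, so check charm's own README before copying it.

```shell
# Sketch of the charm++ SMP build for 64-bit Linux (names assumed,
# not verified against every charm version).
# 'smp' requests the threaded (shared-memory) machine layer;
# 'icc' selects the Intel compilers -- drop it to fall back to gcc.
cd charm
./build charm++ net-linux-amd64 smp icc -O -DCMK_OPTIMIZE=1
```

The gcc variant of the same command (without the icc option) is the one that built cleanly for me.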

The problem with the gcc build is that the resulting namd executable is
about 30% slower than any of the Intel builds!

Best wishes

-----error with intel 10.1 when building charm++---------------------------
 ./bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. msgmgr.c
msgmgr.c(91): (col. 18) remark: LOOP WAS VECTORIZED.
./bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. cpm.c
./bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. cpthreads.c
./bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. futures.c
./bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. cldb.c
./bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. topology.C
./bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. random.c
./bin/charmc -DCMK_OPTIMIZE=1 -O3 -fPIC -c -I. debug-conv.c
(0): internal error: 0_1561

compilation aborted for debug-conv.c (code 4)
Fatal Error by charmc in directory

  Command mpicc -D_REENTRANT -I../bin/../include -D__CHARMC__=1
-I/apps/mpi/mvapich/1.0.1-2533-intel-10.1/include -c debug-conv.c -o
debug-conv.o returned error code 4
charmc exiting...
gmake[2]: *** [debug-conv.o] Error 1
gmake[2]: Leaving directory

gmake[1]: *** [converse] Error 2
gmake[1]: Leaving directory

gmake: *** [charm++] Error 2
Charm++ NOT BUILT. Either cd into
mpi-linux-amd64-pthreads-smp-mpicxx/tmp and try
to resolve the problems yourself, visit
for more information. Otherwise, email the developers at

Dr. Vlad Cojocaru
EML Research gGmbH
Schloss-Wolfsbrunnenweg 33
69118 Heidelberg
Tel: ++49-6221-533266
Fax: ++49-6221-533298
EML Research gGmbH
Amtsgericht Mannheim / HRB 337446
Managing Partner: Dr. h.c. Klaus Tschira
Scientific and Managing Director: Prof. Dr.-Ing. Andreas Reuter

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 05:21:14 CST