NAMD Wiki: NamdOnAMD64
Segfaulting on mpi-linux-amd64 in qt_block
One needs to either edit charm/src/arch/mpi-linux-amd64/conv-mach.h to add #define CMK_THREADS_USE_CONTEXT 1 or use -threads context in the build lines for namd and charm++ test programs.
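For example, the edit to charm/src/arch/mpi-linux-amd64/conv-mach.h is just one added line:

#define CMK_THREADS_USE_CONTEXT 1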
Brian Bennion
Load balancer hangs on dual-core Opteron clusters, including Cray XD1
There are issues with the Charm++ timer on dual-core Opteron and Athlon 64 systems that can cause the load balancer to hang. A good way to diagnose this is to add "outputtiming 1" and watch for an occasional negative time/step. Clocks running backwards tend to confuse things.
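The diagnostic is a one-line addition to the NAMD configuration file (the keyword is case-insensitive):

outputTiming 1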
The fix is to edit charm/src/arch/net-linux-amd64/conv-mach.h (or mpi-linux-amd64/conv-mach.h for MPI builds) and change these lines:
#define CMK_TIMER_USE_GETRUSAGE 0
#define CMK_TIMER_USE_SPECIAL 0
#define CMK_TIMER_USE_TIMES 0
#define CMK_TIMER_USE_RDTSC 1
For net-linux-amd64 it works if you move the 1 to CMK_TIMER_USE_TIMES; for mpi-linux-amd64 you probably want CMK_TIMER_USE_SPECIAL (which uses MPI_Wtime).
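For net-linux-amd64 the corrected block would therefore read:

#define CMK_TIMER_USE_GETRUSAGE 0
#define CMK_TIMER_USE_SPECIAL 0
#define CMK_TIMER_USE_TIMES 1
#define CMK_TIMER_USE_RDTSC 0

(For mpi-linux-amd64, put the 1 on CMK_TIMER_USE_SPECIAL instead and leave the others at 0.)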
This problem is present in NAMD 2.6b1 and should be fixed in 2.6b2.
I ran a quick apoa1 benchmark for a user trying to choose between Athlon 64 and Opteron:
Athlon 64 3500+ (2.2 GHz): 2.95 s/step
Opteron 148 (2.2 GHz): 2.90 s/step
So, virtually no difference. I've also measured that a dual-core Opteron 100 runs at nearly the same speed as a dual-processor machine with the same clock speed, so I'm guessing that a dual-core Opteron and an Athlon 64 X2 would have indistinguishable NAMD performance.
For NAMD 2.5, you need to download the latest Charm++ 5.8 and the NAMD source from CVS in order to compile for AMD Opteron.
Starting from 12/25/2004, the Charm++ build option "opteron" is obsolete in the CVS version of Charm++; use "net-linux-amd64" or "mpi-linux-amd64" instead:
./build charm++ net-linux-amd64
or,
./build charm++ mpi-linux-amd64
The released Charm++ 5.8 is unchanged and still uses the old build option.
To build Charm++ 5.8 for Opteron, use the "opteron" option:
./build charm++ net-linux opteron
To build NAMD, use Linux-amd64-g++:
./config ... Linux-amd64-g++
Alternatively, one can compile with MPI. To build Charm++ this way, make sure the MPI library is correctly installed and that "mpiCC" points to the correct MPI installation. If you get a compilation error about a missing <mpi.h>, you may need to tell the build script where MPI is installed using the --incdir and --libdir options (see ./build --help for more help).
First build Charm++:
./build charm++ mpi-linux opteron
and build NAMD with Linux-amd64-MPI:
./config ... Linux-amd64-MPI
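If the Charm++ build above fails with a missing <mpi.h>, the same build line can point at your MPI installation explicitly; the path below is only a placeholder for your own MPI prefix:

./build charm++ mpi-linux opteron --incdir=/path/to/mpi/include --libdir=/path/to/mpi/lib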
Build instructions for NAMD on an AMD64 Linux cluster with Myrinet (GM2) support.
Notes: This is the process I had to follow to get everything working on our Opteron cluster; you may have to customize some of the library and include paths. Also, I used proper installs of Tcl and FFTW rather than compiling my own versions. I didn't use the VMD plugins, as apparently our molecule guys don't use them on the actual cluster. :) Damon Smith - Victorian Partnership for Advanced Computing, 14-Jan-2005
In a tmp directory (~/tmp):
* #Retrieve and build the Charm++ lib
* cvs -d :pserver:checkout@charm.cs.uiuc.edu:/cvsroot co charm
* cd charm
* ./build charm++ mpi-linux-amd64 gm2 gcc --libdir=/usr/local/mpich-1.2.6-gm-path/lib/ --incdir=/usr/local/mpich-1.2.6-gm-path/include/
* #NAMD is a bit confused about charm directories, so we need a symlink here:
* ln -s mpi-linux-amd64-gm2-gcc mpi-linux-amd64
* #Retrieve and unpack NAMD into ~/tmp/namd
* #Note: you need a recent version of NAMD to get the mpi-linux-amd64 config.
* cd namd
* vi Make.charm
* #In the file Make.charm, set the CHARMBASE variable as follows:
* CHARMBASE = ${HOME}/tmp/charm
* vi arch/Linux-amd64.tcl
* #In the file Linux-amd64.tcl, set the TCLLIB and TCLFLAGS variables as follows:
* TCLLIB=-ltcl8.4 -ldl
* TCLFLAGS=-DNAMD_TCL -DUSE_NON_CONST
* #And continuing on the command line:
* ./config tcl fftw Linux-amd64-MPI gm2
* cd Linux-amd64-MPI
* vi ~/tmp/charm/mpi-linux-amd64-gm2-gcc/include/conv-mach.sh
* #In this file, conv-mach.sh, set:
* CMK_SYSLIBS="-lmpich -L/usr/gm/lib -lgm"
* make
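Once the build finishes, the resulting namd2 is an MPI binary and is launched through mpirun; the node count, machine file, and input file below are only placeholders for your own setup:

mpirun -np 4 -machinefile hosts ./namd2 myjob.namd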