NAMD Wiki: NamdOnAIX

  You are encouraged to improve NamdWiki by adding content, correcting errors, or removing spam.

Feb 12 2008, Jim Phillips

There is a bug in the IBM xlC compiler that produces bad code (crashes during startup) if WorkDistrib.C is compiled with -O3 or -qhot. Compiling this one file with -O2, or with -O3 -qnohot, fixes the problem.
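A minimal sketch of the workaround (the directory layout, include paths, and exact flags are illustrative; adjust for your own build tree):

```
# After a default build crashes at startup, recompile only WorkDistrib.C
# at -O2 and relink. Paths and flags below are illustrative only.
cd NAMD_2.6_Source/AIX-POWER-xlC      # hypothetical build directory
xlC_r -O2 -qstrict -c ../src/WorkDistrib.C -o obj/WorkDistrib.o
make namd2                            # relink using the fresh object
```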


May 28 2007, Joachim Hein (EPCC and HPCx)

There are problems when trying to build NAMD for a Power5 architecture when using early versions of IBM's xlc 8 compiler. These problems disappear with later releases. Version 08.00.0000.0013 appears ok on AIX 5.3.

When using the xlc 7 compiler we observe that executables are substantially faster (about 7%) on a 1.5 GHz Power5 system when compiled with -qarch=pwr4 -qtune=pwr4 rather than the suggested -qarch=com -qtune=pwr3 (as in the distribution). However, don't expect the resulting executables to be suitable for an IBM Power3 or older system.


Aug 07 2006, Bernd Kallies (HLRN)

The released NAMD 2.6b1 binary hangs before printing the first "Initial time" line when running with 128 or more tasks on an IBM p690 (AIX 5.2, PE 4.1). The error does not occur with smaller task counts. The test simulation is NPT (Langevin) with PME.

The error also occurs when building NAMD 2.6b1 with charm-5.9 from source with -qnooptimize, and with charm-5.9 built for lapi.

It does not occur with the most recent charm version (checked out Aug 07 2006, built by myself for mpi-sp). However, the namd-2.6b1 executable produced with this is very slow (only about half the performance obtained with the official charm-5.9).

The problem seems to be fixed in NAMD 2.6.


The released NAMD 2.6b1 binaries produce this error when run in parallel on AIX 5.3:

Info: Entering startup phase 8 with 161825 kB of memory in use.
------------- Processor 1 Exiting: Called CmiAbort ------------
Reason: CthResume: swapcontext failed.

Info: Finished startup with 171224 kB of memory in use.
------------- Processor 0 Exiting: Called CmiAbort ------------
Reason: CthResume: swapcontext failed.

Newer builds don't seem to have this problem.

-JimPhillips


Jan 06 2006, Bernd Kallies (HLRN)

Building NAMD 2.6b1 for IBM Power4 AIX 5.2, IBM XL C/C++ Enterprise Edition 7.0.0.4

In addition to the remarks of J. Hein:

1) enable large file support for 32-bit executables by adding #define _LARGE_FILES to

* src/: CollectionMaster.C, dcdlib.C
* plugins/molfile_plugin/src/dcdplugin.c

define consistent types in src/dcdlib.C:
#define NFILE_POS (off_t) 8
#define NPRIV_POS (off_t) 12
#define NSAVC_POS (off_t) 16
#define NSTEP_POS (off_t) 20

Note to the NAMD dev team: AIX compilers require _LARGE_FILES.
_LARGE_FILE is not supported.

2) fix types in the charm distribution in src/arch/mpi-sp/conv-mach:

#define CMK_TYPEDEF_INT8 long long
#define CMK_TYPEDEF_UINT8 unsigned long long

Note: this works for both 32-bit and 64-bit executables. Only the long
type differs in length between 32-bit and 64-bit mode.

3) fix link flag in the charm distribution in src/arch/mpi-sp/conv-mach.sh:

CMK_LD="mpcc_r -bmaxdata:0x80000000 -bmaxstack:0x10000000 -brtl "
CMK_LDXX="mpCC_r -bmaxdata:0x80000000 -bmaxstack:0x10000000 -brtl "

Note: this is not strictly necessary, but it is ugly to pass link flags
as compile flags. The real link flags are taken from the charm distribution.

4) fine-tune NAMD opt flags in arch/AIX-POWER-xlC.arch:

CXXOPTS = -O3 -qstrict -Q -qmaxmem=-1 -qarch=auto -qtune=auto \
          -qfloat=rsqrt:fltint -qaggrcopy=nooverlap \
          -qalias=addr:noallp:notyp:ansi
CXXNOALIASOPTS = -O3 -qstrict -Q -qmaxmem=-1 -qarch=auto -qtune=auto \
          -qfloat=rsqrt:fltint -qaggrcopy=nooverlap \
          -qalias=addr:allp:typ:ansi
COPTS = -O4 -qinlglue -Q -qmaxmem=-1 -qarch=auto -qtune=auto

Note: VACPP 6.0.0.10 will do the job with CXXOPTS = ... -qalias=addr:noallp:typ:ansi,
      whereas XLC 7 does not.

An additional note to the NAMD dev team: NAMD 2.6b1 puts unit cell info in the dcd if needed. The provided tools such as flipdcd do not recognize this.


See also NamdAtSDSC.


05 Dec 2005, Joachim Hein (EPCC)

Building NAMD 2.6b1 for IBM Power5 using AIX 5.3

The following steps lead to a successful build of NAMD 2.6b1:

- Download nightly charm++ as of 28 Nov 2005 (see: http://charm.cs.uiuc.edu/autobuild/old.2005_11_28__02_00/) and build with:

./build charm++ mpi-sp -qtune=pwr4 -O3 -qarch=pwr4 -DCMK_OPTIMIZE=1

- In the NAMD source, in the file vmdsock.C, change:

SOCKLEN_T to NAMD_SOCKLEN_T (two occurrences)

- I used the following AIX-POWER-xlC.arch make file:

############# Begin of AIX-POWER-xlC.arch ###########################
NAMD_ARCH = AIX-POWER

CXX = xlC_r -w -qstaticinline -DMEMUSAGE_USE_SBRK
CXXOPTS =  -O3 -qstrict -qaggrcopy=nooverlap -Q -qarch=pwr4 -qtune=pwr4 -qfloat=rsqrt:fltint -bmaxdata:0x80000000
CXXNOALIASOPTS =  -O3 -qaggrcopy=nooverlap -Q -qarch=pwr4 -qtune=pwr4 -qfloat=rsqrt:fltint -bmaxdata:0x80000000
CC = xlc_r -w
COPTS =  -O3 -qinlglue -qarch=pwr4 -qtune=pwr4 -bmaxdata:0x80000000

############# End of AIX-POWER-xlC.arch ###########################

The use of -DMEMUSAGE_USE_SBRK is _ABSOLUTELY_ crucial.


I hope that's it and I didn't miss anything. Good luck, Joachim


More from Joachim Hein:

I hope you are interested in this. On Wednesday we upgraded the C++ compiler on our system from vac 6 to xlc 7 (IBM changed the name to bring the C compilers in line with the xlf Fortran products).

This completely broke NAMD 2.6b1; it wouldn't even run APO-A1. As a quick fix I tried removing the global assertion -qalias=typ from all files. The resulting executable gives a speed-up of about 15% on APO-A1 and similar for the user benchmark; please see the attached plot.

Funnily, the new compiler doesn't help NAMD 2.5.

REMARK: this note has to some extent been superseded by the note above on building NAMD 2.6b1 for AIX 5.3 - JH 05/12/05


On Fri, 10 Oct 2003, Bernd Kallies wrote:

I experience a problem running NAMD 2.5 on IBM p690 (Power4) when
requesting to use the SP switch network together with the user space
communication subsystem library.

The details:

- occurs when using the precompiled IBM-SP binaries (downloaded 9.10.03
from www.ks.uiuc.edu)

- occurs when building charm++ and NAMD 2.5 from source using the mpi-sp
(charm++) and IBM-SP arch definitions as templates (xlc/6.0.0.3,
fftw-2.1.5, tcl8.3 from IBM, 32-bit, -D_LARGE_FILES, source package incl.
charm++ downloaded 1.10.03 from www.ks.uiuc.edu)

- works: namd2 -rmpool 0 -nodes 1 -procs 8

- works: namd2 -rmpool 0 -nodes 1 -procs 8 -shared_memory yes -euilib ip
          -euidevice csss

- does not work: namd2 -rmpool 0 -nodes 1 -procs 8 -shared_memory yes -euilib us
          -euidevice csss

error msg:
munmap call failed to deallocate requested memory.
ERROR: 0031-250  task 7: Terminated

Bernd later found that the primary cause of the failure with charm++ is bad error trapping in find_largest_free_region() in the source conv-core/isomalloc.c when call_mmap_anywhere fails.

Orion Sky Lawlor (of PPL) reports on October 13:

I've checked in a trivial fix for this crash--
if the anonymous mmap fails, it should continue
trying to set up isomalloc normally; and even if
isomalloc setup fails it shouldn't affect NAMD.

Thanks a lot for isolating this Charm bug, Bernd!

                                 -Orion Sky Lawlor

Another workaround is to disable isomalloc by putting this macro in charm/mpi-sp/tmp/conv-mach.h

#define CMK_NO_ISO_MALLOC   1


12/16/2003

We have been doing some timing experiments with NAMD 2.5 on AIX using the HPCx (http://www.hpcx.ac.uk/) p690 system in the UK and here are the current set of environment variables being used (PPL recommendation):

MP_SHARED_MEMORY=yes
MP_EAGER_LIMIT=32768
MP_RETRANSMIT_INTERVAL=10000

Rick Kufrin (NCSA)


5 Jan 2004

We have written a detailed performance study comparing the performance of NAMD 2.4 and 2.5 on the HPCx system. We investigated the performance for 4 different biomolecular systems ranging from 23000 to over 300000 atoms, with scaling investigated on up to 1024 processors. The report is available from our website (http://www.hpcx.ac.uk/research/hpc) as a PDF document: http://www.hpcx.ac.uk/research/hpc/technical_reports/HPCxTR0310.pdf

Joachim Hein (EPCC and HPCx)