NAMD fails with LAM (can't run example)

From: Smith, Ian (I.C.Smith_at_liverpool.ac.uk)
Date: Tue Mar 27 2007 - 10:13:54 CDT

Hi,

I'm trying to get NAMD 2.6 to work on our Linux cluster, but when I
run the bundled example under MPI with:

$ mpisub 2 ./namd2 src/alanin

it fails. The stderr is:

-----------------------------------------------------------------------------
One of the processes started by mpirun has exited with a nonzero exit
code. This typically indicates that the process finished in error.
If your process did not finish in error, be sure to include a "return
0" or "exit(0)" in your C code before exiting the application.

PID 9299 failed on node n1 (192.168.14.20) due to signal 11.
-----------------------------------------------------------------------------
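Signal 11 is a segfault. If it would help, I can try to capture a core
dump and pull a backtrace out of it, along these lines (assuming core
dumps are enabled in the shell that launches namd2, and noting the core
file may end up on the remote node):

$ ulimit -c unlimited             # allow core files to be written
$ mpisub 2 ./namd2 src/alanin     # reproduce the crash
$ gdb ./namd2 core                # core may be named core.<pid> on RH
(gdb) bt                          # stack trace at the point of the crash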

The tail of stdout is:

Info: Entering startup phase 7 with 7520 kB of memory in use.
Info: CREATING 11 COMPUTE OBJECTS
Info: NONBONDED TABLE R-SQUARED SPACING: 0.0625
Info: NONBONDED TABLE SIZE: 705 POINTS
Info: ABSOLUTE IMPRECISION IN FAST TABLE ENERGY: 6.77626e-21 AT 8.05838
Info: RELATIVE IMPRECISION IN FAST TABLE ENERGY: 1.0431e-14 AT 7.96477
Info: ABSOLUTE IMPRECISION IN FAST TABLE FORCE: 3.38813e-21 AT 7.99609
Info: RELATIVE IMPRECISION IN FAST TABLE FORCE: 4.43764e-16 AT 7.96477
Info: ABSOLUTE IMPRECISION IN VDWA TABLE ENERGY: 2.52435e-29 AT 8.05838
Info: RELATIVE IMPRECISION IN VDWA TABLE ENERGY: 5.31787e-15 AT 7.96477
Info: ABSOLUTE IMPRECISION IN VDWA TABLE FORCE: 2.52435e-29 AT 7.99609
Info: RELATIVE IMPRECISION IN VDWA TABLE FORCE: 5.18019e-16 AT 7.96477
Info: ABSOLUTE IMPRECISION IN VDWB TABLE ENERGY: 2.64698e-23 AT 8.12019
Info: RELATIVE IMPRECISION IN VDWB TABLE ENERGY: 9.10594e-16 AT 7.96477
Info: ABSOLUTE IMPRECISION IN VDWB TABLE FORCE: 3.30872e-24 AT 7.99609
Info: RELATIVE IMPRECISION IN VDWB TABLE FORCE: 2.60151e-16 AT 7.96477
Info: Entering startup phase 8 with 7520 kB of memory in use.
Info: Finished startup with 7520 kB of memory in use.
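In case the wrapper is a factor: as far as I know mpisub is just a
local front end to LAM's mpirun, so I could also launch the example
directly (hostfile below stands in for our LAM boot schema listing n1
etc.):

$ lamboot hostfile                # start the LAM daemons on the nodes
$ mpirun -np 2 ./namd2 src/alanin
$ lamhalt                         # shut the daemons down afterwards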

Either way, this doesn't seem much to go on. We are using LAM 7.1.1
under RH Enterprise Linux on Intel Xeon. Charm++ was built with

./build charm++ mpi-linux --no-shared -O -DCMK_OPTIMIZE=1

and seems to work fine on multiple processors.
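If it's useful I can also re-check the charm++ build with the bundled
megatest, roughly as follows. The path assumes a standard charm source
tree; for an mpi-linux build the resulting pgm binary is an ordinary
MPI program, so LAM's mpirun runs it directly:

$ cd tests/charm++/megatest
$ make
$ mpirun -np 2 ./pgm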

I set up NAMD using

./config tcl fftw Linux-i686-MPI
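and then built namd2 with the usual make step in the generated build
directory:

$ cd Linux-i686-MPI
$ make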

Any help would be very useful, as I'm totally stuck.

regards,

-ian.

------------------------------
Dr Ian C. Smith
e-Science Team,
University of Liverpool,
Computing Services Department
