Error when running NAMD at SDSC

From: Joshua D. Moore (jdmoore_at_unity.ncsu.edu)
Date: Sun Jun 19 2005 - 22:13:20 CDT

Hi,

I am getting an error when running NAMD on the San Diego DataStar system.

Here is my error:

ATTENTION: 0031-408 8 tasks allocated by LoadLeveler, continuing...
   0:munmap call failed to deallocate requested memory.
   1:munmap call failed to deallocate requested memory.
   2:munmap call failed to deallocate requested memory.
   3:munmap call failed to deallocate requested memory.
   4:munmap call failed to deallocate requested memory.
   5:munmap call failed to deallocate requested memory.
   6:munmap call failed to deallocate requested memory.
   7:munmap call failed to deallocate requested memory.
ERROR: 0031-250 task 1: Terminated
ERROR: 0031-250 task 0: Terminated
ERROR: 0031-250 task 2: Terminated
ERROR: 0031-250 task 3: Terminated
ERROR: 0031-250 task 4: Terminated
ERROR: 0031-250 task 5: Terminated
ERROR: 0031-250 task 6: Terminated
ERROR: 0031-250 task 7: Terminated

I have found this page
(http://www.ks.uiuc.edu/Research/namd/wiki/index.cgi?NamdOnAIX), which
states that the error is caused by "bad error trapping in
find_largest_free_region() in the source conv-core/isomalloc.c, when
call_mmap_anywhere failed."

I had to compile my own NAMD from source because I needed to change the
default Lennard-Jones combination rule from arithmetic to geometric in
order to use the OPLS force field.
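(Aside: recent NAMD releases expose a config option for exactly this, which would avoid the source patch; whether it is available in the version deployed on DataStar is an assumption on my part:

```
# NAMD config fragment (assumes a NAMD version with vdwGeometricSigma)
vdwGeometricSigma   yes   ;# geometric mean for L-J sigma, as OPLS expects
```

If that option is present, the stock prebuilt binaries on DataStar could be used instead of a custom build.)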

Can anyone help me with this error?

Thanks

Josh

PS: Here is the script I used:

#!/usr/bin/ksh

#@environment = COPY_ALL;\
#AIXTHREAD_COND_DEBUG=OFF;\
#AIXTHREAD_MUTEX_DEBUG=OFF;\
#AIXTHREAD_RWLOCK_DEBUG=OFF;\
#AIXTHREAD_SCOPE=S;\
#MP_ADAPTER_USE=dedicated;\
#MP_CPU_USE=unique;\
#MP_CSS_INTERRUPT=no;\
#MP_EAGER_LIMIT=64K;\
#MP_EUIDEVELOP=min;\
#MP_LABELIO=yes;\
#MP_POLLING_INTERVAL=100000;\
#MP_PULSE=0;\
#MP_SHARED_MEMORY=yes;\
#MP_SINGLE_THREAD=yes;\
#RT_GRQ=ON;\
#SPINLOOPTIME=0;\
#YIELDLOOPTIME=0

#@wall_clock_limit = 02:00:00
#@class = normal
#@node_usage = not_shared
#@node = 1
#@tasks_per_node = 8
#@job_type = parallel
#@network.MPI = sn_all,shared,US
#@notification = always
#@job_name = 29_1
#@output = /dsgpfs2/jdmoore/namd/29_1/my.output
#@error = /dsgpfs2/jdmoore/namd/29_1/my.error
#@initialdir = /dsgpfs2/jdmoore/namd/29_1/
#@queue

poe namd2 29wtethanol_water.conf > ethanol_29_1.out

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 05:18:50 CST