From: Erik Nordgren (nordgren_at_sas.upenn.edu)
Date: Sun Jul 10 2011 - 04:02:05 CDT
Pedro -- at a glance, I'm not sure what the problem might be... but perhaps you
could send us a "diff"-type comparison of your NAMD input files for the
smaller vs. the larger system, along with a direct comparison of the output
files from the two cases, so we can see exactly what changed.
Also, when you say you "scaled up" your system, how exactly did you do this?
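For the comparison, something like the following would do (a minimal sketch; the file names are hypothetical stand-ins for your actual .conf and .log files -- here I create two toy config fragments just to show the diff output):

```shell
# Hypothetical stand-ins for the small- and large-box NAMD configs:
printf 'cellBasisVector1 80 0 0\ncutoff 12\n'  > small_box.conf
printf 'cellBasisVector1 160 0 0\ncutoff 12\n' > large_box.conf

# A unified diff shows only the keywords that actually changed
# (diff exits nonzero when files differ, hence the "|| true"):
diff -u small_box.conf large_box.conf || true
```

The same `diff -u` on the two log files (or on their "Info:" startup lines) would show where the memory use diverges.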
On Fri, Jun 24, 2011 at 3:28 PM, Pedro Gonnet <gonnet_at_maths.ox.ac.uk> wrote:
> I'm new to NAMD and was trying to run some simulations of bulk TIP3P
> water. This worked just fine in an 8x8x8 nm cube, but when I scaled it up
> to 16x16x16 nm (387072 atoms), NAMD crashed as follows:
> Info: Startup phase 7 took 0.0791871 s, 203.889 MB of memory in use
> Info: Startup phase 8 took 0.00410999 s, 370.621 MB of memory in use
> Info: Finished startup at 7856.1 s, 370.621 MB of memory in use
> ------------- Processor 0 Exiting: Called CmiAbort ------------
> Reason: Could not malloc() 93454760 bytes--are we out of memory?
> Charm++ fatal error:
> Could not malloc() 93454760 bytes--are we out of memory?
>  Stack Traceback:
> [0:0] CmiOutOfMemory+0x3c [0x85d9a38]
> which is odd, since I have 4 GB of RAM on my local machine and the 8x8x8
> nm simulation required only a few hundred MB. I tried running the
> simulation on a machine with 64 GB with the same result (although it
> took significantly longer to fill up the memory).
> In both cases I could watch namd2 gobble up the memory in "top" before
> it crashed.
> This happens both with NAMD 2.7 for Linux-x86 and NAMD 2.8 for
> Linux-x86_64-MPI, in both cases on a single CPU (i.e., without mpirun).
> I have attached the configuration file. Since the .pdb and .psf files
> are rather large, I will only send them on request.
> Am I doing anything wrong or could this be a bug?
-- C. Erik Nordgren, Ph.D. University of Pennsylvania
This archive was generated by hypermail 2.1.6 : Mon Dec 31 2012 - 23:20:33 CST