From: David Hardy (dhardy_at_ks.uiuc.edu)
Date: Thu Jul 26 2018 - 14:33:03 CDT
Hi Gerard,
The memory allocation error seems to be due to an uninitialized variable, now fixed in the git repository. If you are getting your NAMD updates through the nightly build binaries or source releases, they will have the fix tomorrow.
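Once the new build is up, you can double-check which binary you are actually running from the build stamp NAMD prints at startup. A rough check against an existing run log (assumed here to be called run.log) would be:

   grep -m1 "Info: Built" run.log
   # the startup header's "Info: Built ..." line should show a build date of
   # Jul 27 2018 or later if the binary includes the fix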
Best regards,
Dave
--
David J. Hardy, Ph.D.
Beckman Institute
University of Illinois at Urbana-Champaign
405 N. Mathews Ave., Urbana, IL 61801
dhardy_at_ks.uiuc.edu, http://www.ks.uiuc.edu/~dhardy/

> On Jul 24, 2018, at 4:28 PM, Marcelo C. R. Melo <melomcr_at_gmail.com> wrote:
>
> Hi Gerard,
> I am looking into the issue with the data you sent me, but nothing like that came up over here. As I mentioned before, it does not look like it has anything to do with the specific QM code you are running.
> I'll let you (and the list) know when I figure this out.
> Best
> ---
> Marcelo Cardoso dos Reis Melo
> PhD Candidate
> Luthey-Schulten Group
> University of Illinois at Urbana-Champaign
> crdsdsr2_at_illinois.edu
> +1 (217) 244-5983
>
>
> On Tue, 24 Jul 2018 at 11:10, Gerard Rowe <GerardR_at_usca.edu> wrote:
> I'm continuing to work with the QM/MM feature, trying to get it to run correctly, and I found that I get the same memory allocation error when invoking MOPAC instead of ORCA. NAMD also exits with the same error message if launched with charmrun (I also built an MPI-enabled version that gave the same error, but I'm not certain I launched it correctly). The error persists with QMSimsPerNode set to 2, or if both QM regions are combined into one.
>
> I'm working with a single, dual-socket node with 48 GB of memory. Has anyone experienced this issue before? The only reference to this particular error message I can find is from years ago, when million-atom MD simulations were literally running out of memory.
>
> Thanks,
> Gerard
>
>
> From: Gerard Rowe
> Sent: Wednesday, July 18, 2018 6:18 PM
> To: namd-l_at_ks.uiuc.edu
> Subject: QM/MM with Orca: Memory allocation failed
>
> I'm having trouble getting hybrid QM/MM to run properly on my Linux machine. Every time I try to run a QM job using ORCA, NAMD dies with the message:
>
> ...
> Info: Startup phase 15 took 0.000211954 s, 194.461 MB of memory in use
> Info: Finished startup at 8.24108 s, 194.461 MB of memory in use
>
> TCL: Minimizing for 100 steps
> FATAL ERROR: Memory allocation failed on processor 0.
>
> The relevant part of the configuration file:
>
> QMSimsPerNode   1
> QMBaseDir       /dev/shm
> qmConfigLine    "! ZINDO/1 ENGRAD"
> qmConfigLine    "%output PrintLevel Mini Print\[P_Mulliken \] 1 Print\[P_AtCharges_M\] end"
> qmMult          1 6
> qmCharge        1 2.00
> qmMult          2 6
> qmCharge        2 2.00
> qmSoftware      orca
> qmExecPath      /usr/local/orca/orca_4_0_0_2_linux_x86-64/orca
>
> I have defined two QM regions with 83 atoms each. I have no problem running namd2 or ORCA by themselves. I find it hard to believe that this semiempirical job is running out of memory on a system with 48 GB of RAM (569 MB in use at the start of the job). This error occurs regardless of the number of processors.
>
> Is there some way to find out how much memory the program is attempting to allocate? Or is there a common reason this error may arise?
>
> Thanks,
> Gerard
> University of South Carolina Aiken
>
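P.S. On the question of how much memory NAMD is actually trying to use: one rough way to see the peak usage of a failing run (a generic sketch, not specific to this bug; it assumes GNU time is installed at /usr/bin/time and that the input file is called qmmm.namd) is:

   /usr/bin/time -v ./namd2 +p1 qmmm.namd > run.log 2>&1
   # "Maximum resident set size" in the report shows the peak memory the
   # process reached before the allocation failure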
This archive was generated by hypermail 2.1.6 : Mon Dec 31 2018 - 23:21:19 CST