Re: QM/MM with Orca: Memory allocation failed

From: Marcelo C. R. Melo (melomcr_at_gmail.com)
Date: Thu Jul 19 2018 - 09:51:25 CDT

Hi Gerard,

This looks more like a problem in NAMD than in ORCA, since the memory
allocation error is coming from within NAMD.

What version of NAMD are you using?

I see you are asking for one simulation per node, but you seem to have defined
two QM regions. That should not be a problem, but could you try using two
simulations per node, as in the sketch below?
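
For reference, a minimal sketch of that change (only the QMSimsPerNode line
needs to change; everything else stays as in your current configuration):

QMSimsPerNode  2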

Last but not least, could you provide your test case? (PDB, PSF, and
parameter set being used)

Best,
Marcelo

---
Marcelo Cardoso dos Reis Melo
PhD Candidate
Luthey-Schulten Group
University of Illinois at Urbana-Champaign
crdsdsr2_at_illinois.edu
+1 (217) 244-5983
On Wed, 18 Jul 2018 at 17:21, Gerard Rowe <GerardR_at_usca.edu> wrote:
> I'm having trouble getting hybrid QM/MM to run properly on my Linux
> machine.  Every time I try to run a QM job using orca, namd dies with the
> message:
>
>
> ...
> Info: Startup phase 15 took 0.000211954 s, 194.461 MB of memory in use
> Info: Finished startup at 8.24108 s, 194.461 MB of memory in use
>
> TCL: Minimizing for 100 steps
> FATAL ERROR: Memory allocation failed on processor 0.
>
> ...
>
>
> The relevant part of the configuration file:
> QMSimsPerNode 1
> QMBaseDir       /dev/shm
> qmConfigLine   "! ZINDO/1 ENGRAD"
> qmConfigLine   "%output PrintLevel Mini Print\[P_Mulliken \] 1
> Print\[P_AtCharges_M\] end"
> qmMult            1 6
> qmCharge       1 2.00
> qmMult            2 6
> qmCharge       2 2.00
> qmSoftware     orca
> qmExecPath     /usr/local/orca/orca_4_0_0_2_linux_x86-64/orca
>
> I have defined two QM regions with 83 atoms each. I have no problem
> running namd2 or orca by themselves. I find it hard to believe that this
> semiempirical job is running out of memory on a system with 48 GB of RAM
> (569 MB in use at the start of the job).  This error occurs regardless
> of the number of processors.
>
> Is there some way to find out how much memory the program is attempting to
> allocate?  Or is there a common reason this error may arise?
>
> Thanks,
> Gerard
> University of South Carolina Aiken
