Re: Problem with speed of QMMM (namd2-ORCA)

From: Herald Delis (bioinfo2021great_at_gmail.com)
Date: Thu Sep 09 2021 - 19:59:56 CDT

Hello Marcelo,

Thank you for your reply.

Q1: Is 75.000 the total number of atoms in your entire system, or in your
QM regions? How large is the QM region itself?
If you have more than a few hundred atoms in your QM region, I would expect
ORCA to take a while to calculate it.

A1: 75,000 is the total number of atoms; there are 113 atoms in the QM region.

Q2: In your NAMD execution line, you could ask for fewer nodes, like 5 or
10, and leave the rest for ORCA.
For example:
NAMD2 -> namd2 +p10 Min.conf
ORCA -> qmConfigLine "% PAL NPROCS 30 END" (from Min.conf)

A2: This works and speeds things up, but it is still showing about 52 hours
remaining; see the log excerpt and the core-count check below.

QMENERGY: 16 1.0000 -3517431.7765 -3517172.7403

(From Step 16)
Info: Writing QM charge output at step 16
Info: Writing QM position output at step 16
PRESSURE: 16 -3304.93 308.224 -262.003 312.544 -2742.12 -82.5491 -261.689 -85.8746 -2603.54
GPRESSURE: 16 -7758.39 303.504 -166.053 239.227 -6827.13 -132.006 -216.514 120.539 -6983.22
TIMING: 16 CPU: 4.09771, 0.11225/step Wall: 40290.3, 2258.54/step, 52.6993 hours remaining, 532.187500 MB of memory in use.
ENERGY: 16 8544.0808 5371.1756 4389.7518 47.8456 -255440.1271 20369.6414 1.7922 -3517172.7403 0.0000 -3733888.5801 0.0000 -3733888.5801 -3733888.5801 0.0000 -2883.5306 -7189.5819 806859.3967 -2883.5306 -7189.5819

WRITING EXTENDED SYSTEM TO RESTART FILE AT STEP 16
LINE MINIMIZER BRACKET: DX 0.000456178 0.000121109 DU -1281.69 90.8142 DUDX -5.59903e+06 2423.13 1.49124e+06
MINIMIZER RESTARTING CONJUGATE GRADIENT ALGORITHM
LINE MINIMIZER REDUCING GRADIENT FROM 1.29277e+07 TO 12927.7
WRITING COORDINATES TO DCD FILE QMMM-MinX.dcd AT STEP 16
WRITING COORDINATES TO RESTART FILE AT STEP 16
FINISHED WRITING RESTART COORDINATES
WRITING VELOCITIES TO RESTART FILE AT STEP 16
FINISHED WRITING RESTART VELOCITIES
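
As a quick core-count check on the 40-core node, using only the numbers from
this thread (and assuming one core per NAMD thread and one per ORCA PAL
process, which may be a simplification):

original: 22 (namd2 +p22) + 20 (PAL NPROCS 20) = 42 cores -> oversubscribed (> 40)
revised:  10 (namd2 +p10) + 30 (PAL NPROCS 30) = 40 cores -> exactly fills the node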

Q3: If you only have one QM region in your system, you only need to ask
for 1 QM simulation per node. This won't change your calculation.

A3: This works too, but please check my answer to Q2 above.
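
For reference, a minimal sketch of the corresponding line in Min.conf (same
keyword as in my original message below, only the value changed from 20 to 1):

# Number of simultaneous QM simulations per node (one QM region -> one is enough)
QMSimsPerNode 1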

Q4: What kind of QM calculation are you running? Semi-empirical? HF?

A4: It is a DFT calculation (BP86). The ORCA input is:

! uks BP86 def2-TZVP def2/J EnGrad TightSCF
%maxcore 3000
% PAL NPROCS 30 END
% basis
newgto C "DEF2-SVP" end
newgto H "DEF2-SVP" end
end
% scf
MaxIter 500
end
%pointcharges "qmmm_exec/QMMM-Min/0/qmmm_0.input.pntchrg"

%coords
  CTyp xyz
  Charge 3.000000
  Mult 1.000000
  Units Angs
  coords

Additionally, I am using extraBonds to keep my metal ion in place.
Will it affect the speed or the calculation in any way?
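
For context, here is a minimal sketch of the kind of extraBonds setup I mean
(the file name, atom indices, force constant, and reference distance are
placeholders, not my actual values):

# in Min.conf
extraBonds on
extraBondsFile metal_restraints.txt

# in metal_restraints.txt: "bond" lines take two 0-based atom indices,
# a force constant (kcal/mol/A^2) and a reference distance (A)
bond 1234 5678 10.0 2.10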

Thanks and regards,
Herald

On Fri, Sep 10, 2021 at 2:09 AM Marcelo C. R. Melo <melomcr_at_gmail.com>
wrote:

> Hi Herald,
>
> I may have a couple of suggestions and questions for you.
>
> 1. Is 75.000 the total number of atoms in your entire system, or in your
> QM regions? How large is the QM region itself?
> If you have more than a few hundred atoms in your QM region, I would
> expect ORCA to take a while to calculate it.
>
> 2. In your NAMD execution line, you could ask for fewer nodes, like 5 or
> 10, and leave the rest for ORCA.
> For example:
> NAMD2 -> namd2 +p10 Min.conf
> ORCA -> qmConfigLine "% PAL NPROCS 30 END" (from Min.conf)
>
> 3. If you only have one QM region in your system, you only need to ask for
> 1 QM simulations per node. This won't change your calculation.
>
> 4. What kind of QM calculation are you running? semi empirical? HF?
>
> Best,
> Marcelo
>
> On Thu, 9 Sept 2021 at 06:53, Herald Delis <bioinfo2021great_at_gmail.com>
> wrote:
>
>> Dear All,
>>
>> I am trying to run QMMM-MD using orca-namd2.
>> But it happens to be too slow, I wanted to know if I am doing something
>> wrong or is there a better way.
>> I have a CPU node with 40 cores (and 4 unused GPU), my calculations of
>> 75,000 atoms QMMM-Minimization takes 4 days.
>>
>> NAMD2 -> namd2 +p22 Min.conf
>> ORCA -> qmConfigLine "% PAL NPROCS 20 END" (from Min.conf)
>>
>> # Number of simultaneous QM simulations per node
>> QMSimsPerNode 20
>>
>> Thanks and regards,
>> Herald
>>
>>
>>
>>



This archive was generated by hypermail 2.1.6 : Fri Dec 31 2021 - 23:17:11 CST