From: Josh Vermaas (vermaasj_at_msu.edu)
Date: Wed Sep 10 2025 - 08:33:50 CDT

Hi Joel,

QM does not scale very well. The complexity of the computation grows
polynomially with the number of atoms, something like O(N^7) if I
remember correctly. So if you know how long a 32-atom ligand would take
for this calculation, your 64-atom ligand would take about 2^7 = 128x
longer. This is of course not great, but I've parameterized lignin
dimers before, and those are only a little smaller (~50 atoms), so 64
atoms is actually not that bad if you just want a charge distribution.
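To put numbers on that, here is a minimal back-of-the-envelope sketch
(plain Python; the exponent of 7 is just my rough recollection, not a
guarantee for whatever level of theory you are actually running):

    def qm_cost_ratio(n_big, n_small, exponent=7):
        # Rough cost ratio for two system sizes, assuming O(N^exponent) scaling.
        return (n_big / n_small) ** exponent

    print(qm_cost_ratio(64, 32))  # 2**7 = 128, i.e. ~128x longer than a 32-atom run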

However, looking at my logs, I only did charge optimization for those
big dimers, using the monomers (which only had ~25-30 atoms) for the
expensive dihedral scans and Hessian calculations. On the hardware I
had available, those took less than 2 days each (the longest run I can
quickly find took 1 day 17 hours). So even if everything is *perfect*,
on a system twice as big I'd expect this to run full blast for 256
days, or about 8.4 months. In the best-case scenario my timings are too
pessimistic, since I did this when I was a postdoc and CPUs have gotten
more capable since then, but even then you'd still need to wait another
3 months or so for an answer. But you'll likely also want to do
dihedral scans, which are just as expensive, so you may be better off
with another approach entirely.
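
For what it's worth, that estimate is just the same back-of-the-envelope
math applied to the ~2-day monomer Hessians (again, treat the exponent
as a rough guess rather than a hard scaling law):

    reference_days = 2    # longest monomer Hessian I found (1 day 17 hours), rounded up
    size_factor = 2       # your ~64-atom ligand vs. the ~30-atom monomers, roughly doubled
    exponent = 7          # same rough O(N^7) guess as above

    projected_days = reference_days * size_factor ** exponent
    print(projected_days, projected_days / 30.4)  # 256 days, ~8.4 months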

-Josh

On 7/11/25 1:29 PM, Joel Subach wrote:
> Hello Forum,
>
> I am running the command:
>
> orca 14C.hess.inp > 14C.hess.output.out
>
> to generate the output.out file needed for the subsequent bond and
> angle reparameterization prompted by the CGenFF penalty scores; the
> end goal is a Gromacs protein-ligand complex analysis.
>
> I am running the above command on the NSF cluster with the specs
> listed below. How long should this execution take to generate final
> results for a 64-atom ligand? (The command was started almost three
> months ago and is still running without final results yet; maybe the
> delay is due to sharing of this NSF cluster.)
>
> Attribute                 Kentucky Research Informatics Cloud (KyRIC) Large Memory Nodes
> Nodes                     5
> CPU Type                  Broadwell class
> CPU Speed                 2 GHz
> CPU Cores per Node        40
> RAM per CPU Core          75 GB
> GPU                       None
> Local Storage per Node    6,000 GB
>

-- 
Josh Vermaas
vermaasj_at_msu.edu
Assistant Professor, Plant Research Laboratory and Biochemistry and Molecular Biology
Michigan State University
vermaaslab.github.io