Date: Sun Feb 17 2019 - 15:50:41 CST
It's hard to give scientific advice, but on the technical side, you can use NAMD-multicore in a cluster and allow ORCA (with its intrinsic MPI infrastructure) to use as many nodes as you give it. As I mentioned before, you will need to write a script to pass ORCA a "machinefile", getting the list of available nodes from your cluster (which different clusters make available in different ways) and removing from the list the node where NAMD is running, to avoid the two programs fighting for the same cores (a sketch of such a script is given below, after Gerard's note). This would let you take advantage of ORCA's scalability as much as possible, depending on the available hardware and the size of the QM region.

As for the science/system choice, I'll tell you this: keep link atoms as far away as possible from regions of interest. They are probably one of the largest (if not the largest) perturbations to the QM system.

Best,
Marcelo

On Fri, 15 Feb 2019 at 02:35, Francesco Pietra <email@example.com> wrote:

The only possible way I can think of to mitigate the computational burden is shrinking the QM part by retaining, for each protein residue coordinated to the metal ions, only the charge group nearest to the metal ions. While waiting for smarter ideas from o.p., we will see whether this is enough and, if so, whether the biochemistry is still respected.

francesco

On Thu, Feb 14, 2019 at 11:34 PM Francesco Pietra <firstname.lastname@example.org> wrote:

I dare go back a few weeks, when I was advised by Marcelo that an MPI program like ORCA cannot be run within another MPI program like NAMD. Now, faced with the slowness of (very highly tuned) QM-MM on one node with my large, high (very high!) spin system (multinode NAMD nightly build, SCF OK in 174 cycles but only ENERGY 5 in five hours), I tried with four nodes, namd2/12 MPI. Note that, in order to speed up the simulation initially, I used a very low level of DFT, probably too low for a system like this.

Probably I had forgotten Marcelo's warning because my recent observation that there is no gain in going beyond 8 cores with ORCA was wrong: it was based on the small polyalanine system of the tutorial. With my large system, all 34 processes were actively doing their job, much faster than with eight processes. Suddenly I remembered Marcelo, and ssh'ing into each of the four nodes confirmed that every core was occupied by NAMD.

This long and boring story leads me to conclude that, in my hands and for the time being, common systems of biochemical interest embodying high-spin transition metals are hard to investigate by QM-MM, unless one has unlimited access to one good node. However, there are reports of such studies with multiple high-spin transition metals. How could that work have been carried out? I tried EPR-silent antiferromagnetic coupling but, with the odd number of electrons of my system, a singlet was refused by QM-MM. Because of that, broken symmetry will probably not help. Any idea?

All the best
francesco

On Thu, Feb 14, 2019 at 6:33 PM Marcelo C. R. Melo <email@example.com> wrote:

Thank you for the input, Gerard. I was about to look into that myself, since I did not understand the source of the problem.
Best

On Thu, 14 Feb 2019 at 10:39, Gerard Rowe <GerardR@usca.edu> wrote:

Francesco,
You probably already figured it out, but for others who may be reading, the error you experienced was an Orca syntax error. The %maxcore ### line is a single-line command that doesn't get terminated with "end". Only input blocks like SCF, BASIS, and PAL should be terminated with "end". It's confusing because it starts with %, but "%maxcore ####" is setting a global variable rather than initializing an input block.
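To make the distinction concrete, here is a minimal illustration (not taken from Gerard's message; the 2500/34 values are simply the ones used elsewhere in this thread) of how the two kinds of statement look in an ORCA input, and as NAMD qmConfigLine entries with the doubled percent sign:

    %maxcore 2500
    # ^ single-line global setting: no closing "end"
    %pal
      nprocs 34
    end
    # ^ block settings such as %pal are closed with "end"

and, written as lines in the NAMD configuration file:

    qmConfigLine "%%maxcore 2500"
    qmConfigLine "%%pal nprocs 34 end"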
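As for Marcelo's suggestion above of generating a machinefile for ORCA, a minimal sketch of such a wrapper is given here, assuming a SLURM cluster; the environment variable, output file name and hostname-matching logic are illustrative assumptions, not part of the tutorial:

    # Sketch: build an ORCA machinefile from a SLURM allocation, dropping the
    # node where NAMD-multicore itself runs so the two codes do not compete
    # for the same cores. Names and matching logic are assumptions.
    import os
    import socket
    import subprocess

    def write_orca_machinefile(path="orca.machinefile"):
        # Expand the compressed SLURM node list (e.g. "node[01-04]") into hostnames.
        nodelist = os.environ["SLURM_JOB_NODELIST"]
        hosts = subprocess.run(
            ["scontrol", "show", "hostnames", nodelist],
            capture_output=True, text=True, check=True
        ).stdout.split()
        # Remove the node running NAMD (gethostname() may return an FQDN,
        # hence the prefix match against the short SLURM names).
        namd_host = socket.gethostname()
        orca_hosts = [h for h in hosts if not namd_host.startswith(h)]
        with open(path, "w") as f:
            for h in orca_hosts:
                f.write(h + "\n")
        return path

    if __name__ == "__main__":
        print("wrote", write_orca_machinefile())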
From: firstname.lastname@example.org <email@example.com> on behalf of Francesco Pietra <firstname.lastname@example.org>
Sent: Thursday, February 14, 2019 9:46 AM
To: Eric Smoll; email@example.com
Subject: Re: namd-l: About restarting QM-MM simulation

On a fresh approach, a few hours later, the following interpretation
qmConfigLine "! UKS BP86 RI SV def2/J enGrad SlowConv"qmConfigLine "%%maxcore 2500 %%pal nproc 34 end"# qmConfigLine "%%pal nproc 34 end"
allows the simulation to run.

francesco
On Thu, Feb 14, 2019 at 11:17 AM Francesco Pietra <firstname.lastname@example.org> wrote:
Hi Eric, perhaps you can help me further with an error message, unclear to me, concerning the input to ORCA.
Settings in the NAMD config file for ORCA:
qmConfigLine "! UKS BP86 RI SV def2/J enGrad SlowConv"
qmConfigLine "%%maxcore 2500 end"
qmConfigLine "%%pal nproc 34 end"
qmConfigLine "%%scf MaxIter 5000 end"
qmConfigLine "%%output Printlevel Mini Print\[ P_Mulliken \] 1 Print\[P_AtCharges_M\] 1 end"
Error shown in /0/..TmpOut:
[file orca_main/maininp1.cpp, line 14628]: ERROR: expect a '$', '!', '%', '*' or '[' in the input
By commenting out the line
qmConfigLine "%%maxcore 2500 end"
the simulation starts regularly. The default mem 1024 gave rise to a warning in /0, while 2500 per core corresponds to less than 75% of the memory available on the node.

I must have been unable to interpret correctly the example furnished by ORCA, or there is something else that I don't understand. Example:

! DLPNO-CCSD(T) def2-TZVP def2-TZVP/C
%maxcore 3000
%pal
nprocs 6
end

Thanks for advice
francesco
On Wed, Feb 13, 2019 at 7:48 PM Eric Smoll <email@example.com> wrote:
You asked how to increase the number of SCF iterations above the default. Try something like this:
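A minimal sketch of the kind of qmConfigLine presumably meant here, mirroring the %scf block Francesco adopts earlier in this thread (a reconstruction, not Eric's original text):

    # raises ORCA's SCF iteration limit well above the default of 125
    qmConfigLine "%%scf MaxIter 5000 end"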
On Wed, Feb 13, 2019 at 8:34 AM Francesco Pietra <firstname.lastname@example.org> wrote:
Hello,
How can NAMD-ORCA QM-MM simulations be correctly restarted? I mean in the context of the namd.config of the Example1 tutorial:
# Output Parameters
In particular when SCF convergence was not yet reached (in favorable cases, the info says that SCF convergence is close to being reached), so that XXX_A1_out.restart was not provided.
I was also unable to find in the ORCA manual how to increase the number of SCF iterations above the default of 125.
Thanks for advice
francesco pietra

PS: I understand that the ORCA calculation can be carried out separately, but I have some reasons now to let NAMD-ORCA work together.