Re: QM/MM Parallel Computing With Orca

From: Nisler, Collin R. (nisler.1_at_buckeyemail.osu.edu)
Date: Tue Sep 12 2017 - 11:08:47 CDT

Well that explains it. I didn't see the Orca output file in that directory earlier. It contained these two messages:

WARNING: Direct SCF is incompatible with Method<>HF and Method<>DFT
  ===> : conventional SCF is chosen

WARNING: NDO methods cannot be used in parallel runs yet
  ===> : Skipping actual calculation

So it appears Orca does not yet allow semi-empirical (NDO) methods to run in parallel. I suppose I can try MOPAC, but to be fair, Orca seems reasonably fast in serial anyway. Thanks again for your help.
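Given those warnings, the serial fallback is just the same ORCA config line with the parallelism keyword removed (a sketch based on the qmConfigLine quoted further down; everything else in the NAMD input stays the same):

```
# Serial PM3 gradient calculation; no PALn keyword, so ORCA runs on one core
qmConfigLine "! PM3 ENGRAD"
```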

Collin

________________________________
From: Marcelo C. R. Melo <melomcr_at_gmail.com>
Sent: Tuesday, September 12, 2017 9:45:08 AM
To: NAMD; Nisler, Collin R.
Subject: Re: namd-l: QM/MM Parallel Computing With Orca

Hello Collin,

In your "qmBaseDir" you should find temporary files related to the (attempted) execution of ORCA. That is the first place to look for more informative error messages: NAMD doesn't parse every error ORCA can report, it just knows that ORCA didn't end as expected.

You seem to have covered all your bases with the PATH and LD_LIBRARY_PATH modifications, and your input looks fine. Without more information, and since the serial simulations run ok, I can only assume it has something to do with Stampede's control over the nodes reserved for your job. In ORCA's documentation for parallel jobs on a cluster, they make a point of creating a "node file" to indicate which hosts are available to your job (https://sites.google.com/site/orcainputlibrary/setting-up-orca).
Could you check the output files in the scratch folder and your PBS job submission file?
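As an illustration of that node-file idea, here is a hypothetical sketch (file names, host names, and slot counts are placeholders, not from this thread; check the ORCA manual for the exact machinefile name your version expects):

```shell
# Hypothetical sketch: schedulers such as PBS expose the granted hosts in
# $PBS_NODEFILE; a copy placed where ORCA's internal mpirun can see it tells
# OpenMPI which nodes it may use.

# Stand-in for $PBS_NODEFILE, for illustration only (2 hosts x 2 slots each):
printf 'node01\nnode01\nnode02\nnode02\n' > pbs_nodes.txt

# Copy it next to the QM input files in qmBaseDir (file name is an assumption):
cp pbs_nodes.txt qmmm_0.input.nodes

# The slot count should match the parallelism requested from ORCA:
NPROCS=$(wc -l < pbs_nodes.txt)
echo "Request PAL$NPROCS (or '%pal nprocs $NPROCS end') in the ORCA input"
```

The number of slots listed in the node file should agree with the PALn keyword (or the equivalent %pal block) in the qmConfigLine, otherwise OpenMPI may oversubscribe or fail to launch.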

Best,
Marcelo

---
Marcelo Cardoso dos Reis Melo
PhD Candidate
Luthey-Schulten Group
University of Illinois at Urbana-Champaign
crdsdsr2_at_illinois.edu
+1 (217) 244-5983
On 12 September 2017 at 09:12, Nisler, Collin R. <nisler.1_at_buckeyemail.osu.edu> wrote:
Hello, I am attempting to run a QM/MM simulation with Orca on the Stampede2 supercomputer, and I want to run the quantum calculation in parallel. I have installed Orca and OpenMPI as instructed on the Orca website, adding the bin and library directories of OpenMPI to my PATH and LD_LIBRARY_PATH. There is only one quantum region in my simulation. I submit the NAMD job requesting 20 nodes, and in the NAMD conf file I use the following lines for the Orca calculation:
qmForces on
qmParamPDB MmC23_ASI_QM.pdb
qmColumn beta
qmBondColumn occ
QMElecEmbed on
QMSwitching on
qmBaseDir "/scratch/04267/cnisler/QMMM/MmP15C23/QMMM/mineq-01/QMmin01"
qmConfigLine "! PM3 ENGRAD PAL8"
qmConfigLine "%%output PrintLevel Mini Print\[ P_Mulliken \] 1 Print\[P_AtCharges_M\] 1 end"
qmMult 1 1
qmCharge 1 -7
qmSoftware orca
qmExecPath /scratch/04267/cnisler/orca/orca_4_0_1_2_linux_x86-64_openmpi202/orca
QMOutStride 1
QMPositionOutStride 1
So I am requesting that Orca run in parallel with the "PAL8" keyword. However, I keep getting the rather vague error message "FATAL ERROR: Error running command for QM forces calculation." When I run this exact same simulation in serial, without the PAL8 keyword, it runs fine.
Is there something more I need to do to run Orca in parallel with the QM/MM interface? Thanks in advance.
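For reference, the PATH and LD_LIBRARY_PATH setup described above amounts to something like the following (the OpenMPI install prefix is hypothetical, not taken from this thread):

```shell
# Hypothetical OpenMPI install prefix; substitute your actual build location
export OPENMPI_HOME=/scratch/04267/cnisler/openmpi

# Make the mpirun binary and the MPI shared libraries visible to ORCA,
# which launches its parallel workers through OpenMPI at run time
export PATH="$OPENMPI_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$OPENMPI_HOME/lib:$LD_LIBRARY_PATH"
```

These exports have to be in effect in the environment of the NAMD job itself, since NAMD invokes the ORCA executable as a subprocess.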
Collin Nisler

This archive was generated by hypermail 2.1.6 : Mon Dec 31 2018 - 23:20:36 CST