Re: ORCA NAMD Parallel

From: Gerard Rowe
Date: Fri Mar 15 2019 - 09:45:30 CDT

You can give ORCA a machinefile by placing a file called MyInp.nodes in the ORCA execution directory. In your case, it might be useful to work a little scripting magic: copy the machinefile created by SLURM and remove the entries for the node being used by NAMD. You could also hard-code the machinefile if you're on a smaller cluster and know which nodes you'll be assigned.
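That copy-and-filter step could look something like this (a minimal sketch; the filenames all.nodes and MyInp.nodes, and the choice of the first allocated node as NAMD's, are assumptions for illustration — in a real job you would generate the node list with scontrol rather than hard-coding it):

```shell
# Stand-in for the node list SLURM provides; in a real job you would run:
#   scontrol show hostnames "$SLURM_JOB_NODELIST" > all.nodes
printf 'node01\nnode02\nnode03\n' > all.nodes

# Assume NAMD occupies the first allocated node (assumption for illustration)
NAMD_NODE=$(head -n 1 all.nodes)

# Keep every other node for ORCA's machinefile
grep -v "^${NAMD_NODE}$" all.nodes > MyInp.nodes
```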

This is assuming that node allocation is the source of your error. What you pasted in is a generic "Orca crashed" message. The cause of the crash is usually found immediately above that message.


From: <> on behalf of McGuire, Kelly <>
Sent: Thursday, March 14, 2019 3:51 AM
Subject: namd-l: ORCA NAMD Parallel

I am trying to use PAL8 in my qmConfigLine for ORCA QMMM, but I get this in my ORCA output:

Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[51120,1],0]
  Exit code: 127
[file orca_main/gtoint.cpp, line 137]: ORCA finished by error termination in ORCA_GTOInt

Is this likely due to ORCA and NAMD fighting for the same threads? I haven't yet figured out how to tell ORCA which nodes are available. We use SLURM.
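For context, the NAMD QM/MM settings in question look something like the excerpt below (a hypothetical sketch; the method, basis set, and paths are placeholders — PAL8 is the part that asks ORCA to spawn 8 parallel MPI processes):

```
# Hypothetical NAMD QM/MM configuration excerpt
qmForces        on
qmSoftware      "orca"
# Placeholder paths
qmExecPath      "/path/to/orca"
qmBaseDir       "/scratch/qmmm"
# PAL8 requests 8 parallel ORCA processes
qmConfigLine    "! B3LYP 6-31G* PAL8"
```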

Kelly L. McGuire

PhD Candidate


Department of Physiology and Developmental Biology

Brigham Young University

LSB 3050

Provo, UT 84602

This archive was generated by hypermail 2.1.6 : Tue Dec 31 2019 - 23:20:37 CST