Re: Using NAMD and Orca together -- a QM/MM job

From: David Baker (D.J.Baker_at_soton.ac.uk)
Date: Thu May 07 2020 - 10:19:43 CDT

Hi Pietro,

Thank you for your advice. I have now made sure that the PATH contains the location of mpirun and that the correct LD_LIBRARY_PATH is defined. It does appear that part of the simulation runs; however, the following command is aborted:

ORCA finished by error termination in GTOInt
Calling Command: mpirun -np 16 /scratch/djb1/orca_4_2_1_linux_x86-64_openmpi314/orca_gtoint_mpi /scratch/djb1/step10_qmmm_opt/0/qmmm_0.input.int.tmp /scratch/djb1/step10_qmmm_opt/0/qmmm_0.input
[file orca_tools/qcmsg.cpp, line 458]:
  .... aborting the run

This is accompanied by the following error in the Slurm output file:

sh: mpirun: command not found
[file orca_tools/qcmsg.cpp, line 458]:
  .... aborting the run

Strange, since the mpirun command is in the PATH. Do you or anyone else have any ideas, please? I note that the above command works perfectly on the command line (assuming the correct PATH and LD_LIBRARY_PATH).
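
(As a sanity check, something along these lines could go into the batch script just before NAMD is launched, to confirm what the job environment actually sees; the lines below are purely illustrative:)

echo "PATH=$PATH"
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
which mpirun || echo "mpirun not found in PATH"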

Best regards,
David

________________________________
From: Pietro Amodeo <pamodeo_at_icb.cnr.it>
Sent: 07 May 2020 13:42
To: namd-l_at_ks.uiuc.edu <namd-l_at_ks.uiuc.edu>; David Baker <D.J.Baker_at_soton.ac.uk>
Cc: owner-namd-l_at_ks.uiuc.edu <owner-namd-l_at_ks.uiuc.edu>
Subject: Re: namd-l: Using NAMD and Orca together -- a QM/MM job

Hi David,

Since different MPI implementations (or different versions of a single implementation) can, and sometimes must, coexist on a machine or cluster, standard installations usually do not create links to the MPI commands and libraries such as mpirun (and the compiler wrappers: mpicc, ...) in system-wide directories.

Thus, users or system managers have to make these commands and libraries visible themselves. This can be done in different ways, depending on the number and kind of MPI implementations installed, on how many users want to share the same configuration, and on whether different runs from a single user need different MPI implementations or versions.

A simple way is to add the MPI installation directories to PATH and LD_LIBRARY_PATH and export these variables, either in the user's login scripts (OS dependent: .bashrc in the user's home directory under many Linux distros), if only one MPI implementation and version is used, or in the NAMD+ORCA running script.

In the case of your script, you could try modifying it by:

1) changing "export PATH=$NAMD_DIR:$PATH" into: export PATH=$NAMD_DIR:/the/MPI/exe/dir:$PATH

2) adding:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/the/MPI/lib/dir
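
Putting the two changes together, the relevant part of the running script might look roughly like this (the /the/MPI/exe/dir and /the/MPI/lib/dir placeholders stand for the actual directories of the OpenMPI installation matching your ORCA build, and the NAMD launch line is only a sketch, not taken from your script):

export NAMD_DIR=/path/to/namd                             # placeholder
export PATH=$NAMD_DIR:/the/MPI/exe/dir:$PATH              # makes mpirun visible to the shells ORCA spawns
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/the/MPI/lib/dir

namd2 +p4 qmmm.namd > qmmm.log                            # illustrative launch line; adjust to your job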

Hope this helps,

Pietro
