Using NAMD and Orca together -- a QM/MM job

From: David Baker (D.J.Baker_at_soton.ac.uk)
Date: Thu May 07 2020 - 06:12:57 CDT

Hello,

First of all, I am not a NAMD expert, and so I am not au fait with a great deal of the technology. I am supporting a user who is attempting to run NAMD 2.13 in conjunction with Orca. I can run a basic NAMD job (the benchmarks, for example), and I am now trying to get a NAMD/Orca QM/MM job to work.

The user's configuration files look fine at first sight; however, something is missing. In the Slurm batch job I have loaded the software module for OpenMPI 3.1.4; nevertheless, it appears that the location of mpirun is not being communicated to Orca. In the Slurm output file I see...

sh: mpirun: command not found
[file orca_tools/qcmsg.cpp, line 458]:
  .... aborting the run

Could someone please advise me? Hopefully, it's a simple matter of adding another parameter to the charmrun command.
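
For what it's worth, one option I am looking at is charmrun's ++runscript flag, which, as I understand it, launches each namd2 process through a wrapper script, so the MPI module could be re-loaded in the environment that namd2 (and hence the Orca it spawns) inherits. A minimal sketch, assuming the module command is initialised from /etc/profile.d/modules.sh on the compute nodes (that path, the script name, and the module name are my assumptions for our site):

#!/bin/bash
# runscript.sh -- hypothetical wrapper for charmrun ++runscript.
# charmrun runs this in place of namd2, so the environment can be
# rebuilt before namd2 (and the Orca process it launches) starts.
source /etc/profile.d/modules.sh  # assumption: initialises the module command
module load openmpi/3.1.4/intel   # put mpirun back on PATH for Orca
exec "$@"                         # run namd2 with its original arguments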

Best regards,
David

#!/bin/bash
#SBATCH --job-name systemNr.txt
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 40

export NAMD_DIR=/scratch/djb1/NAMD_2.13_Linux-x86_64-ibverbs-smp
export PATH=$NAMD_DIR:$PATH

INPUT=step10_opt.inp
OUTPUT=step10_opt.out

# generate NAMD nodelist
for n in $(scontrol show hostnames "$SLURM_NODELIST"); do
  echo "host $n ++cpus $SLURM_NTASKS_PER_NODE" >> nodelist.$SLURM_JOBID
done
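
# For a single-node allocation on a node named, say, "node101" (hypothetical),
# the generated nodelist.$SLURM_JOBID should contain one line per node:
#   host node101 ++cpus 40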

# calculate total worker threads (P) and worker threads per process (PPN),
# reserving one core for the communication thread of the SMP build
PPN=$(($SLURM_NTASKS_PER_NODE - 1))
P=$(($PPN * $SLURM_NNODES))
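
# e.g. with --nodes 1 and --ntasks-per-node 40 this gives PPN=39 and P=39:
# one namd2 process per node with 39 worker threads plus 1 communication thread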

module load openmpi/3.1.4/intel

${NAMD_DIR}/charmrun ${NAMD_DIR}/namd2 ++p $P ++ppn $PPN ++nodelist nodelist.$SLURM_JOBID +setcpuaffinity $INPUT >$OUTPUT
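
If the ++runscript idea above is along the right lines, I gather the launch line would become something like this (the wrapper name is my placeholder):

${NAMD_DIR}/charmrun ++runscript ./runscript.sh ${NAMD_DIR}/namd2 ++p $P ++ppn $PPN ++nodelist nodelist.$SLURM_JOBID +setcpuaffinity $INPUT >$OUTPUT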
