Re: Running QM-MM multinode

From: Bennion, Brian
Date: Sat Feb 16 2019 - 12:03:19 CST

Maybe you already tried this, but change ntasks to 4. This may reproduce the environment of the single node and free up CPUs on the 4 nodes for ORCA.
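A sketch of the suggested header (untested, and the counts are only illustrative): one task per node for NAMD, leaving the other 35 cores of each 36-core node free for ORCA:

#SBATCH --nodes=4
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1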


On February 16, 2019 at 12:18:37 AM PST, Francesco Pietra <> wrote:
Although implied in a previous mail, the problem of running QM-MM multinode with NAMD is presented here in detail.

qmConfigLine "! UKS BP86 RI SV def2/J enGrad SlowConv"
qmConfigLine "%%maxcore 2500 %%pal nproc 34 end"
# qmConfigLine "%%pal nproc 34 end"
qmConfigLine "%%scf MaxIter 5000 end"
qmConfigLine "%%output Printlevel Mini Print\[ P_Mulliken \] 1 Print\[P_AtCharges_M\] 1 end"

namd.job (SLURM)
#SBATCH --nodes=1
#SBATCH --ntasks=36
#SBATCH --cpus-per-task=1
module load profile/archive
module load autoload openmpi/2.1.1--gnu--6.1.0
/galileo/home/userexternal/fpietra0/NAMD_Git-2018-11-22_Linux-x86_64-multicore/namd2 namd-01.conf +p1 +CmiSleepOnIdle > namd-01.log

whereby NAMD occupies only one CPU, leaving the others to ORCA. It runs fantastically well, but the number of processes proved insufficient for the large system at high spin.

With the same NAMD configuration as above:

#SBATCH --nodes=4
#SBATCH --ntasks=144
#SBATCH --cpus-per-task=1
module load profile/archive
module load autoload namd/2.12
namd2=$(which namd.mpi)
srun $namd2 namd-01.conf +p1 +CmiSleepOnIdle > namd-01.log
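One possible variant (an untested sketch, not from the original thread): constrain the NAMD job step itself to a single task via srun's per-step options, so the rest of the 4-node allocation remains free for ORCA:

srun --nodes=1 --ntasks=1 $namd2 namd-01.conf +p1 +CmiSleepOnIdle > namd-01.log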

whereby NAMD occupies all 144 cores, leaving scarce resources to ORCA.

My question is whether the multinode settings can be tuned so that NAMD occupies only a very small fraction of the hardware, in order to fulfill the alluring statement "For simulations of large systems that are distributed across several computer nodes" about hybrid QM/MM NAMD <>

I must add that, on multinode, I tried to simplify the molecular system by using only the charge groups facing the transition-metal ions (instead of the full residues used above), but it did not work: I got the error message "ORCA needs multiplicity values for all QM regions", although there is a single QM region. Probably the qm.pdb file was not constructed correctly.
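For what it's worth, NAMD's QM/MM interface takes the multiplicity and charge from the qmMult and qmCharge keywords, one "region value" pair per QM region; a minimal sketch for a single neutral singlet region (the values here are only placeholders):

qmMult "1 1"
qmCharge "1 0"

If I recall the interface correctly, the error can also appear when the PDB column named by qmColumn in qm.pdb contains more than one distinct nonzero region ID, since NAMD then counts multiple QM regions and expects a multiplicity entry for each.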

Thanks for advice
francesco pietra

This archive was generated by hypermail 2.1.6 : Tue Dec 31 2019 - 23:20:31 CST