Re: Running QM-MM MOPAC on a cluster

From: Jim Phillips (jim_at_ks.uiuc.edu)
Date: Fri Dec 14 2018 - 09:29:51 CST

The performance of a QM/MM simulation is typically limited by the QM
program, not the MD program. Do you know how many threads MOPAC is
launching? Do you need to set the MKL_NUM_THREADS environment variable?
You want the number of NAMD threads (+p#) plus the number of MOPAC threads
to be less than the number of cores on your machine.
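
For example, on a 36-core node like yours one could cap the threads MOPAC's math
library may spawn and size the NAMD thread count to leave room for them. A
minimal sketch only; the 8/24 split and the namd2 path are illustrative
assumptions, not tested values:

   export MKL_NUM_THREADS=8    # cap threads for MOPAC's MKL (illustrative value)
   /path/to/namd2 +p24 namd-01.conf > namd-01.log    # 24 NAMD + 8 MOPAC threads < 36 cores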

Jim

On Fri, 14 Dec 2018, Francesco Pietra wrote:

> Hi all
> I have resumed my attempts at finding the best settings for running NAMD QM/MM
> on a cluster, using Example 1 (polyalanine).
>
> In order to use the NAMD 2.13 multicore nightly build, I was limited to a single
> multicore node: 2 x 18-core Intel(R) Xeon(R) E5-2697 v4 @ 2.30GHz (Broadwell)
> with 128 GB RAM.
>
> Settings
> qmConfigLine "PM7 XYZ T=2M 1SCF MOZYME CUTOFF=9.0 AUX LET GRAD QMMM GEO-OK"
>
> qmExecPath "/galileo/home/userexternal/fpietra0/mopac/MOPAC2016.exe"
>
> Of course, on the cluster the simulation can't be run in /dev/shm.
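>
> For context, a minimal NAMD QM/MM block of this kind looks roughly as follows.
> This is a sketch only; the qmParamPDB name, the qmBaseDir path, and the column
> choices are placeholder assumptions modeled on the tutorial, not my exact input:
>
>     qmForces        on
>     qmParamPDB      "polyala-qm.pdb"      ;# PDB flagging QM atoms (placeholder name)
>     qmColumn        "beta"                ;# QM region read from the beta column
>     qmBondColumn    "occ"                 ;# QM/MM bonds read from the occupancy column
>     qmSoftware      "mopac"
>     qmExecPath      "/galileo/home/userexternal/fpietra0/mopac/MOPAC2016.exe"
>     qmConfigLine    "PM7 XYZ T=2M 1SCF MOZYME CUTOFF=9.0 AUX LET GRAD QMMM GEO-OK"
>     qmBaseDir       "/local/scratch/qmmm" ;# node-local scratch, not /dev/shm (placeholder)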
>
> execution line
> /galileo/home/userexternal/fpietra0/NAMD_Git-2018-11-22_Linux-x86_64-multicore/namd2
> namd-01.conf +p# > namd-01.log
>
> where # was 4, 10, 15, or 36.
>
> With either 36 or 15 cores: segmentation fault.
>
> With either 4 or 10 cores, executing the 20,000 steps of Example 1 took
> nearly two hours. According to the .ou file in folder /0, the MOPAC execution
> took 0.18 seconds.
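>
> (A back-of-envelope check, assuming the 0.18 s is per QM call: two hours is
> roughly 7,200 s, and 7,200 s / 20,000 steps is about 0.36 s per step, so
> around half of each step would be spent outside MOPAC itself, presumably on
> launching the process and on file I/O.)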
>
> My question is what I am doing wrong, since I cannot otherwise rationalize
> such disappointing performance.
>
> Thanks for advice
>
> francesco pietra
>
