Re: Running QM-MM multinode

From: Francesco Pietra (chiendarret_at_gmail.com)
Date: Sun Feb 17 2019 - 01:41:05 CST

With ntasks=4, most of the threads visible with the "top" command on each
node are kernel "rcuob/##" threads, i.e. the cores are essentially idle. On
the second, third and fourth nodes, only namd.mpi (the executable) appears,
as a single thread. ORCA does not appear at all. In fact, /0/..TmpOut shows
that ORCA received a correct input from NAMD but (it seems to me) had no
free cores on which to run.
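
For reference, the same per-node picture can be collected in one shot with
something like the following (a sketch; it assumes ssh to the allocated
compute nodes is permitted):

# list the busiest processes on every node of the current allocation
for host in $(scontrol show hostnames "$SLURM_JOB_NODELIST"); do
    echo "== $host =="
    ssh "$host" "ps -eo comm,pcpu --sort=-pcpu | head -5"
done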
francesco

On Sat, Feb 16, 2019 at 7:03 PM Bennion, Brian <bennion1_at_llnl.gov> wrote:

> Maybe you already tried this, but change ntasks to 4. This may reproduce
> the environment of the single node and free up CPUs on the 4 nodes for
> ORCA.
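>
> A minimal sketch of that layout (untested; it assumes 36-core nodes as in
> the single-node run, and that the MPI build takes one PE per MPI rank, so
> +p1 is dropped):
>
> #SBATCH --nodes=4
> #SBATCH --ntasks=4
> #SBATCH --ntasks-per-node=1
> #SBATCH --cpus-per-task=36
> # one NAMD rank per node; the remaining cores stay free for ORCA
> srun --ntasks=4 $namd2 namd-01.conf +CmiSleepOnIdle > namd-01.log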
>
>
>
> On February 16, 2019 at 12:18:37 AM PST, Francesco Pietra <chiendarret_at_gmail.com> wrote:
>
> Hello
> Although implied in a previous mail, the problem of running QM-MM
> multinode with NAMD is presented here in detail.
>
> FIRST ON ONE NODE (SOLVED)
> namd.conf
> qmConfigLine "! UKS BP86 RI SV def2/J enGrad SlowConv"
> qmConfigLine "%%maxcore 2500 %%pal nproc 34 end"
> # qmConfigLine "%%pal nproc 34 end"
> qmConfigLine "%%scf MaxIter 5000 end"
> qmConfigLine "%%output Printlevel Mini Print\[ P_Mulliken \] 1
> Print\[P_AtCharges_M\] 1 end"
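>
> For completeness, these qmConfigLine lines sit next to the usual QM/MM
> keys; schematically (paths and file names below are placeholders, not the
> actual setup):
>
> qmForces    on
> qmParamPDB  "qm.pdb"         ;# QM atoms flagged in a PDB column
> qmColumn    "beta"
> qmSoftware  "orca"
> qmExecPath  "/path/to/orca"  ;# placeholder
> qmBaseDir   "/dev/shm/NAMD"  ;# node-local scratch for the ORCA files
> qmMult      "1 1"            ;# region 1, multiplicity 1; adjust for high spin
> qmCharge    "1 0"            ;# region 1, total charge 0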
>
> namd.job (SLURM)
> #SBATCH --nodes=1
> #SBATCH --ntasks=36
> #SBATCH --cpus-per-task=1
> module load profile/archive
> module load autoload openmpi/2.1.1--gnu--6.1.0
> /galileo/home/userexternal/fpietra0/NAMD_Git-2018-11-22_Linux-x86_64-multicore/namd2 \
>     namd-01.conf +p1 +CmiSleepOnIdle > namd-01.log
>
> whereby NAMD occupies only one CPU, leaving the other 35 to ORCA (34 of
> which are requested by the %%pal nproc line above). It runs fantastically
> well, but that number of processors proved insufficient for the large
> system at high spin.
>
> MULTINODE
> namd.conf
> as above
>
> namd.job
> #SBATCH --nodes=4
> #SBATCH --ntasks=144
> #SBATCH --cpus-per-task=1
> module load profile/archive
> module load autoload namd/2.12
> namd2=$(which namd.mpi)
> srun $namd2 namd-01.conf +p1 +CmiSleepOnIdle > namd-01.log
>
> whereby NAMD occupies all 144 cores, leaving scarce resources for ORCA.
>
> My question is whether the multinode settings can be tuned so that NAMD
> occupies only a very small fraction of the hardware, in order to fulfill
> the alluring statement "For simulations of large systems that are
> distributed across several computer nodes" about Hybrid QM/MM NAMD
> <http://www.ks.uiuc.edu/Research/qmmm/>
>
> I must add that, on multinode, I tried to simplify the molecular system by
> using only the charge groups facing the transition-metal ions (instead of
> full residues as in the above work), but it did not work: I got the error
> message "ORCA needs multiplicity values for all QM regions", although
> there is a single QM region. Probably the qm.pdb file was not correctly
> constructed.
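>
> NAMD seems to count QM regions from the distinct nonzero values in the
> qmColumn of the qm.pdb, one qmMult entry per region (e.g. qmMult "1 1"
> for a single region), so a stray second value in that column would
> explain the message. A quick check, assuming the flag is in the beta
> column (PDB columns 61-66):
>
> awk '/^ATOM|^HETATM/ {print substr($0,61,6)}' qm.pdb | sort -u
>
> which should print a single nonzero value.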
>
> Thanks for advice
> francesco pietra
>
>
