Re: Fwd: Tuning QM-MM with namd-orca on one cluster node

From: Francesco Pietra (chiendarret_at_gmail.com)
Date: Fri Feb 01 2019 - 09:10:40 CST

Another special lesson. With the suggested settings and PAL8, eight cores
were engaged. The result was 47 SCF iterations in 30 min, against the
previous 14.
Thanks
francesco

On Thu, Jan 31, 2019 at 10:47 PM Jim Phillips <jim_at_ks.uiuc.edu> wrote:

>
> My guess here is that a "slot" corresponds to a task, so you would want
> ntasks=36 and cpus-per-task=1 for a pure MPI program.
>
> Given how much slower you can expect ORCA to be, you can probably just run
> NAMD with +p1 +CmiSleepOnIdle to get it out of the way.
>
> Jim
>
>
> On Thu, 31 Jan 2019, Francesco Pietra wrote:
>
> > Are PAL4 and PAL8 expecting four or eight nodes, respectively, rather
> > than cores?
>
> >> There are not enough slots available in the system to satisfy the 4
> >> slots that were requested by the application:
> >> /cineca/prod/opt/applications/orca/4.0.1/binary/bin/orca_gtoint_mpi
> >
> >
> >> Either request fewer slots for your application, or make more slots
> >> available for use.
> >
> >
> > Settings were
> > qmConfigLine "! UKS BP86 RI SV def2/J enGrad PAL4 SlowConv" (or PAL8)
> > qmConfigLine "%%output Printlevel Mini Print\[ P_Mulliken \] 1
> > Print\[P_AtCharges_M\] 1 end"
> >
> >
> > #SBATCH --nodes=1
> > #SBATCH --ntasks=1
> > #SBATCH --cpus-per-task=36
> > #SBATCH --time=00:30:00
>
>
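
For the record, a minimal sketch of how the settings discussed in this thread
might be combined in a single-node job script; the module line, configuration
file name, and log name are placeholders, not taken from the thread:

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=36          # one task per MPI "slot", per Jim's suggestion
    #SBATCH --cpus-per-task=1    # pure MPI: one core per task
    #SBATCH --time=00:30:00

    # placeholder environment setup; adjust to the local ORCA/NAMD installation
    # module load orca/4.0.1 namd

    # keep NAMD on a single core, sleeping while idle, so the remaining cores
    # stay free for the ORCA PAL8 SCF step
    namd2 +p1 +CmiSleepOnIdle qmmm.namd > qmmm.log

with the corresponding ORCA keyword line in the NAMD configuration file
(qmmm.namd above) requesting eight MPI processes:

    qmConfigLine  "! UKS BP86 RI SV def2/J enGrad PAL8 SlowConv"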
