Re: FEP slowdown

From: Brian Radak (brian.radak_at_gmail.com)
Date: Mon Apr 23 2018 - 07:34:47 CDT

Hi Francesco,

Based on our previous exchanges, I think what you actually want is:

frwd 0.00 0.20 0.05
back 0.20 0.00 -0.05

That is the region where you want the forward and backward runs to
overlap (i.e., the region where soft-core effects are dominant).
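
For reference, here is a minimal sketch of how those schedules would
look if you are driving the windows with the fep.tcl macros from the
NAMD FEP tutorial; the file names and step counts are placeholders,
not recommendations:

    # assumes fep.tcl from the NAMD distribution/tutorial is available
    source fep.tcl

    alch            on
    alchType        FEP
    alchFile        system.fep     ;# PDB flagging perturbed atoms (assumed name)
    alchCol         B
    alchOutFile     forward.fepout
    alchOutFreq     2500

    alchEquilSteps  1000000        ;# per-window equilibration (placeholder)

    # forward leg: lambda 0.00 -> 0.20 in windows of 0.05
    runFEP 0.00 0.20 0.05 6000000

    # backward leg: lambda 0.20 -> 0.00
    # runFEP 0.20 0.00 -0.05 6000000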

One very strong determinant of performance is the I/O frequency, which
is controlled by outputEnergies, DCDFreq, and alchOutFreq. I've never
been very partial to the default value of alchOutFreq (50, I think?),
which seems like an incredibly short interval to me. I would choose
something closer to 1-10 ps, although I can't promise that this will
work perfectly for your application. If you don't actually need the
full trajectories, then it is a waste of time, effort, and space to
save the DCD, so you can safely make DCDFreq rather high (or turn
trajectory output off completely by setting it to zero). Hopefully
those changes will boost performance.
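
For concreteness, with a 2 fs timestep, 1-10 ps works out to 500-5000
steps between writes. The exact values below are assumptions to adapt,
not tested recommendations:

    # I/O settings sketch; assumes a 2 fs timestep, so 2500 steps = 5 ps
    outputEnergies  2500
    alchOutFreq     2500   ;# much sparser than the default
    DCDFreq         0      ;# disable trajectory output entirely if you
                           ;# don't need the frames (per the note above)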

HTH,
BKR

On Sat, Apr 21, 2018 at 11:34 AM, Francesco Pietra <chiendarret_at_gmail.com>
wrote:

> Hello
> I am carrying out an FEP calculation for the unbound ligand
>
> frwd 0.00 0.20 0.05
>
> back 1.00 0.80 -0.05
>
>
> preeq 1,000,000
> FEP 6,000,000
>
> frwd ran at 0.002 s/step throughout and completed as expected.
>
> back also started at 0.002 s/step but slowed to 0.005 s/step at step
> 4,038,000 of the first window, and it stayed at that slower pace until
> it was killed by the wall-time limit.
>
> Question: is there any reason in the code for that slowdown, or is it
> more likely a filesystem problem? I should add that I have already
> noticed a similar slowdown on this NextScale system during FEP (on a
> single node!).
>
> Thanks for advice
>
> francesco pietra
>
