Re: Ligand atoms moving too fast with FEP

From: Brian Radak (brian.radak_at_gmail.com)
Date: Tue Feb 20 2018 - 10:21:48 CST

Correct - there is no simple restart procedure at this point.

I'm not sure what you mean by "increase lambda" - do you mean increase the
lambda increment? That is entirely dependent on the system and the kind of
transformation. I really couldn't say anything regarding your situation.

I think I would need to see your stdout log to understand this question.
Assuming a forward perturbation, you should always have alchLambda2 >
alchLambda. There should be no change in the simulation rate throughout.
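
For reference, a quick way to check the schedule that was actually run is to
look at the window headers in the alchOutFile (a sketch; this assumes the
usual fepout format, and the file name is just a placeholder):

grep "NEW FEP WINDOW" forward.fepout

Each matching line reports the LAMBDA and LAMBDA2 values of a window, so you
can confirm that LAMBDA2 > LAMBDA throughout a forward run.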

On Tue, Feb 20, 2018 at 11:10 AM, Francesco Pietra <chiendarret_at_gmail.com>
wrote:

> Hi Brian
> Thanks a lot for answering. If I understand correctly, this means there is
> still no way of going to ParseFEP from a restarted FEP.
>
> Another alternative to shorten FEP is to increase the lambda increment
> (Δλ). How large can it be while still getting reliable outputs?
>
> I still do not understand why Δλ (lambda2 - lambda) is now 0.2, while at
> the beginning it was set to 0.020. Does this mean that FEP will now run
> faster, or is it really safer to kill it immediately to save credit?
>
> thanks
>
> francesco
>
> On Tue, Feb 20, 2018 at 4:24 PM, Brian Radak <brian.radak_at_gmail.com>
> wrote:
>
>> You've hit upon some very unfortunate limitations of the automated tools
>> for running FEP simulations (we're working to fix these problems!)
>>
>> You can monitor the current lambda value by looking for resets of the
>> "alchLambda" variable, which are logged to stdout.
>>
>> One way that you can decrease the overall length of the simulations is to
>> split them into multiple runs. For example:
>>
>> runFEP 0.0 1.0 0.1 <numsteps>
>>
>> Could be divided into two jobs:
>>
>> 1) runFEP 0.0 0.5 0.1 <numsteps>
>> 2) runFEP 0.5 1.0 0.1 <numsteps>
>>
>> You would then simply concatenate the alchOutFile results before
>> submitting them to ParseFEP.
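>>
>> For example (a sketch; the file names are placeholders for whatever
>> alchOutFile names the two jobs used):
>>
>> cat forward-1.fepout forward-2.fepout > forward-all.fepout
>>
>> and then pass forward-all.fepout to ParseFEP as usual.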
>>
>> HTH,
>> BKR
>>
>>
>> On Tue, Feb 20, 2018 at 3:08 AM, Francesco Pietra <chiendarret_at_gmail.com>
>> wrote:
>>
>>> The mail escaped my control: actually my FEP is running NPT, as in the
>>> 2017 tutorial:
>>>
>>> # CONSTANT-T
>>>
>>> langevin on
>>> langevintemp 300.0
>>> langevindamping 2.0
>>>
>>>
>>> # CONSTANT-P
>>>
>>> langevinpiston on
>>> langevinpistontarget 1
>>> langevinpistonperiod 100
>>> langevinpistondecay 100
>>> langevinpistontemp 300
>>>
>>>
>>>
>>> ---------- Forwarded message ----------
>>> From: Francesco Pietra <chiendarret_at_gmail.com>
>>> Date: Tue, Feb 20, 2018 at 8:58 AM
>>> Subject: Fwd: namd-l: Ligand atoms moving too fast with FEP
>>> To: brian.radak_at_gmail.com, NAMD <namd-l_at_ks.uiuc.edu>
>>>
>>>
>>> I recognize now that my previous post was somewhat misleading.
>>>
>>> First, as I am following the 2017 tutorial, I am carrying out FEP at NVT
>>> in explicit water with CHARMM36.
>>>
>>> Second, I suspected that the previous crash, with ligand atoms moving too
>>> fast at the beginning of FEP, was due to incomplete equilibration of the
>>> system. That was the case. After longer NPT equilibration of the system,
>>> the NVT FEP has now been running for 3 h on six nodes (204 cores) at 0.1
>>> days/ns. I hope that six nodes are enough to complete the simulation, as
>>> I am limited to 24 h. Given the size of the system (81,000 atoms), more
>>> than six nodes would not help.
>>>
>>> QUESTION: It is not easy for me to tell at which stage the simulation is
>>> without reading NAMD's log line by line. Are there commands to check the
>>> progress of the simulation? As far as I know, FEP analysis requires that
>>> the simulation be completed, because of limitations of the analysis
>>> tools.
>>>
>>> thanks
>>>
>>> francesco
>>>
>>> ---------- Forwarded message ----------
>>> From: Francesco Pietra <chiendarret_at_gmail.com>
>>> Date: Fri, Feb 16, 2018 at 4:31 PM
>>> Subject: Re: namd-l: Ligand atoms moving too fast with FEP
>>> To: Brian Radak <brian.radak_at_gmail.com>
>>> Cc: namd-l <namd-l_at_ks.uiuc.edu>
>>>
>>>
>>> I meant NPT MD at ts=0.1fs, not minimization, which is insensitive to the
>>> value of ts. Actually, however, I checked that NPT with ts=1.0fs does not
>>> crash on the cluster, so I launched a one-day-long NPT run on two nodes
>>> (72 CPUs). After that, I'll also have (hopefully) a better conformation of
>>> the ligand to use as reference structure for FEP (the starting structure
>>> of the ligand is from NMR; it is a compound that we isolated long ago from
>>> corals). Clearly - as you supposed - the ligand atoms moving too fast was
>>> due to the conditions for FEP.
>>>
>>> I understand now that you meant NVT. I'll try to educate myself about
>>> NVT, with which I have no experience. I want to simulate human-body
>>> conditions as closely as possible. This is why I chose NPT.
>>>
>>> thanks
>>> francesco
>>>
>>> On Fri, Feb 16, 2018 at 2:47 PM, Brian Radak <brian.radak_at_gmail.com>
>>> wrote:
>>>
>>>> On Thu, Feb 15, 2018 at 10:47 AM, Francesco Pietra <
>>>> chiendarret_at_gmail.com> wrote:
>>>>
>>>>> Hi Brian:
>>>>>
>>>>> I generally don't recommend NpT simulations for FEP
>>>>>>
>>>>>
>>>>> I was following the tutorial, which in other quite similar cases
>>>>> worked well under NPT. In any event, as there is no membrane, which
>>>>> other type of simulation should I use?
>>>>>
>>>>
>>>> This is certainly just my opinion and not necessarily the prevailing
>>>> wisdom. My understanding is that NVT generally has about 5-10% better
>>>> performance. It should also give nearly identical results for a
>>>> well-chosen density.
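>>>>
>>>> For NVT, a config like yours would simply drop the piston section (a
>>>> sketch based on the settings you posted; the cell then stays fixed at
>>>> the pre-equilibrated density):
>>>>
>>>> langevin on
>>>> langevintemp 300.0
>>>> langevindamping 2.0
>>>> # no langevinpiston section - constant volume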
>>>>
>>>>
>>>>>
>>>>>
>>>>> I commonly make the mistake of not correctly flagging atoms in the PDB
>>>>>>
>>>>>
>>>>> I am inclined to trust your experience with FEP, and I could easily
>>>>> have mismatched those flags. But can that be corrected later?
>>>>>
>>>>
>>>> I meant forgetting to flag an atom that should be included in the
>>>> ligand. That's just such a silly mistake that I look for it first (and
>>>> checking for it has solved the issue more times than I care to admit).
>>>>
>>>>
>>>>>
>>>>> make sure that you are flagging the atoms to start at the correct,
>>>>>> fully interacting, endpoint. Assuming you are scanning alchLambda 0 -> 1,
>>>>>> the flag should be -1, not 1.
>>>>>>
>>>>>
>>>>> I am at frwd-01.namd, and the flags in filename.fep are -1.00.
>>>>>
>>>>> Thanks for answering, and I beg your pardon for anything I might have
>>>>> misunderstood from your mail.
>>>>>
>>>>> francesco
>>>>>
>>>>> PS I have just submitted to the cluster an NPT MD with the FEP system,
>>>>> with ts=0.1fs, to see what happens. Sometimes files produced on
>>>>> inexpensive GPUs do not transfer well.
>>>>>
>>>>
>>>> I'd be a bit surprised if the equilibrated inputs are just "no good"
>>>> for FEP. I guess you could minimize and randomize velocities before
>>>> doing FEP? I've personally never had a situation where the solution was
>>>> to use a smaller timestep during equilibration.
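>>>>
>>>> Something like this in the config before the FEP run, for example (a
>>>> sketch; the step count and temperature are placeholders):
>>>>
>>>> minimize 1000 ;# short re-minimization of the FEP input
>>>> reinitvels 300.0 ;# fresh Maxwell-Boltzmann velocities at 300 K
>>>> runFEP 0.0 1.0 0.1 <numsteps>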
>>>>
>>>>
>>>>
>>>>>
>>>>> On Thu, Feb 15, 2018 at 3:27 PM, Brian Radak <brian.radak_at_gmail.com>
>>>>> wrote:
>>>>>
>>>>>> I generally don't recommend NpT simulations for FEP (unless you have
>>>>>> a membrane) - the virial is goofy for dual-topology simulations,
>>>>>> though probably not catastrophically so. That's probably not the issue
>>>>>> at all, but it's something to try.
>>>>>>
>>>>>> At the risk of being slightly insulting, I commonly make the mistake
>>>>>> of not correctly flagging atoms in the PDB - maybe check the ALCH summary
>>>>>> after the PSF STRUCTURE SUMMARY? Also a silly error, but worth checking.
>>>>>> Also, make sure that you are flagging the atoms to start at the correct,
>>>>>> fully interacting, endpoint. Assuming you are scanning alchLambda 0 -> 1,
>>>>>> the flag should be -1, not 1.
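>>>>>>
>>>>>> For concreteness, the relevant config usually looks something like
>>>>>> this (a sketch; the file names are placeholders, and alchCol B assumes
>>>>>> the flags live in the beta column of the PDB):
>>>>>>
>>>>>> alch on
>>>>>> alchType FEP
>>>>>> alchFile ligand.fep ;# PDB copy with per-atom flags
>>>>>> alchCol B ;# column that holds the flags
>>>>>> alchOutFile forward.fepout
>>>>>>
>>>>>> Atoms flagged -1 start fully interacting and are decoupled as
>>>>>> alchLambda goes 0 -> 1; atoms flagged +1 appear instead.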
>>>>>>
>>>>>> HTH,
>>>>>> BKR
>>>>>>
>>>>>>
>>>>>> On Thu, Feb 15, 2018 at 3:00 AM, Francesco Pietra <
>>>>>> chiendarret_at_gmail.com> wrote:
>>>>>>
>>>>>>> Hello:
>>>>>>>
>>>>>>> While I have had no problems so far with protein-ligand FEP, FEP
>>>>>>> crashed with a new, similar ligand, parameterized for CHARMM36 at the
>>>>>>> same quality level as the other ligands.
>>>>>>>
>>>>>>> The new system had been subjected to CHARMM36 NPT MD for 1,000,000
>>>>>>> steps at ts=1.0fs on a single-node CPU-GPU box, reaching constant
>>>>>>> RMSD vs frame. With forward FEP, carried out on a NeXtScale cluster
>>>>>>> on two Broadwell nodes (72 cores) at ts=1.0fs, the simulation crashed
>>>>>>> almost immediately (within 19 s) with three atoms of the ligand
>>>>>>> moving too fast.
>>>>>>>
>>>>>>> What about resubmitting classical MD on the cluster for more than
>>>>>>> 1,000,000 steps? Or what would be better?
>>>>>>>
>>>>>>> thanks for advice
>>>>>>>
>>>>>>> francesco pietra
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>
>
