Re: metadynamics/ABF simulations using collective variables - issues with restarting the calculations

From: Giacomo Fiorin (giacomo.fiorin_at_gmail.com)
Date: Wed Nov 14 2012 - 18:26:29 CST

Hello Krishnan, use lowerWall and upperWall. Their purpose is to set the
positions of the confining walls, which can be as tight as you need. The
boundaries of the grid can be much wider.

Fix the issue of the too-tight boundaries, and your performance should
already improve.
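
For example, something along these lines (a sketch only: the atom numbers,
boundaries, wall positions and force constants are placeholders to adapt
to your system):

  colvar {
    name d1
    distance {
      group1 { atomNumbers 1 }
      group2 { atomNumbers 2 }
    }
    # wide grid boundaries, so that hills stay projected onto the grid
    lowerBoundary 2.0
    upperBoundary 20.0
    # tight confining walls, independent of the grid boundaries
    lowerWall 9.0
    upperWall 11.0
    lowerWallConstant 10.0
    upperWallConstant 10.0
  }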

The angle of such a vector would not be advisable, because its derivatives
are discontinuous when the bond vector is parallel to the Z axis. What
about using distanceZ, which is essentially the cosine of this angle
scaled by the (roughly constant) bond length?
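
For example (a sketch: the atom numbers are placeholders, and the axis
shown is in fact the default):

  colvar {
    name bondTilt
    distanceZ {
      main { atomNumbers 8 }   # second atom of the bond
      ref { atomNumbers 7 }    # first atom of the bond
      axis (0.0, 0.0, 1.0)     # reference axis, here Z
    }
  }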

On Wed, Nov 14, 2012 at 7:22 PM, Marimuthu Krishnan <
marimuthu.krishnan_at_googlemail.com> wrote:

> Hello Giacomo,
>
> Thanks. You are right. The boundaries of certain reaction coordinates in
> my calculation are too tight. This choice was deliberate. I need to keep
> the boundaries tight to restrain (or to fix/constrain) a few reaction
> coordinates. Is there any way to improve the performance with tight
> boundaries?
>
> I did many fresh runs followed by restarts. In all cases I noticed this
> slow-down, so it is reproducible.
>
> A related question: Is it possible to define the angle between a bond
> vector (connecting two atoms) and a reference axis (say, Z-axis) as a
> reaction coordinate but without using dummy atoms in ABF/metadynamics
> calculations? I would appreciate any help from experts.
>
> Krishnan
>
>
> On Tue, Nov 13, 2012 at 3:11 PM, Giacomo Fiorin <giacomo.fiorin_at_gmail.com> wrote:
>
>> Hello Krishnan, if you have a massive list of hills at the end of the
>> state file, your boundaries may be too tight. Those hills only appear
>> when the variables get too close to the boundaries of the grid.
>>
>> Make a new colvars configuration file with wider boundaries, and enable
>> rebinGrids only once (you may disable it in the later runs). This should
>> take care of the accumulated hills, if that is the problem.
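>>
>> For the first restart, something along these lines (a sketch only: the
>> colvar definition and all numbers are placeholders to adapt):
>>
>>   colvar {
>>     name d1
>>     distance {
>>       group1 { atomNumbers 1 }
>>       group2 { atomNumbers 2 }
>>     }
>>     lowerBoundary 0.0      # wider than in the previous runs
>>     upperBoundary 30.0
>>     lowerWall 9.0          # confining walls unchanged
>>     upperWall 11.0
>>     lowerWallConstant 10.0
>>     upperWallConstant 10.0
>>   }
>>
>>   metadynamics {
>>     colvars d1
>>     rebinGrids on    # only for this first restart; disable it afterwards
>>   }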
>>
>> If you still see the performance decrease, is it reproducible? That is,
>> have you tried to re-run the first job of the ABF or metadynamics
>> calculation to see whether the performance is still fast?
>>
>> Giacomo
>>
>>
>> On Tue, Nov 13, 2012 at 2:58 PM, Aron Broom <broomsday_at_gmail.com> wrote:
>>
>>> I'm not sure if it is all a result of the *.colvars.state file, but it
>>> seems like that could be the culprit. In general, what one can do is
>>> set walls for your colvars that are somewhat far (multiple hill widths)
>>> from the boundary, and this will prevent the accumulation of hills in
>>> the file. There may be some additional options to help with this in
>>> NAMD 2.9; I can't recall exactly, but there may be old list messages
>>> between myself and Giacomo Fiorin with better information. Most of
>>> those suggestions, though, are for starting a new simulation.
>>>
>>> Anyway, in essence what is happening is that when your colvar gets very
>>> near your boundary, rather than writing the hill to the grids, the hill
>>> must be written out explicitly; otherwise a significant portion of the
>>> hill would be ignored. Given enough time (i.e., a build-up of hills
>>> everywhere else), the metadynamics bias would push your system past the
>>> colvar boundary and it would become trapped there, hence the need for
>>> those hills.
>>>
>>> I'm not sure exactly how to solve the problem once you've already
>>> started, BUT, if you want to test whether this IS the problem, that is
>>> simple: just open your .colvars.state file, delete all the hills from
>>> the bottom, and try restarting your simulation. Not having the hills
>>> won't be a problem for restarting, and you should get a sense of how
>>> the performance compares. Sadly, you'll probably also see your system
>>> become trapped outside the colvar boundary.
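>>>
>>> For reference, each hill at the bottom of the state file looks roughly
>>> like this (layout written from memory, so treat it as approximate):
>>>
>>>   hill {
>>>     step 250000
>>>     weight 1.0e-02
>>>     centers 10.3
>>>     widths 0.5
>>>   }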
>>>
>>> ~Aron
>>>
>>> On Tue, Nov 13, 2012 at 2:47 PM, Marimuthu Krishnan <
>>> marimuthu.krishnan_at_googlemail.com> wrote:
>>>
>>> > Aron,
>>> >
>>> > I do not use NAMD benchmarking to evaluate the performance. I simply
>>> > use the trajectory length obtained in 24 hours as a measure of
>>> > performance.
>>> >
>>> > The *.colvars.state files are > 10 MB, and in some cases even greater
>>> > than 20 MB. The useGrids option is ON by default, and I use the
>>> > default values for hillWidth and hillWeight. Although one could vary
>>> > hillWidth and hillWeight to achieve better performance/accuracy, it
>>> > is not clear to me why restart runs perform more poorly than new runs
>>> > for the same values of hillWidth and hillWeight. Do you have some
>>> > suggestions for me to improve the performance of restart runs at this
>>> > point?
>>> >
>>> > You are correct. I see a massive list of hills at the bottom of the
>>> > *.colvars.state file.
>>> >
>>> > By the way, I am not sure whether to keep rebinGrids ON or OFF for
>>> > restart runs. I have kept it OFF (the default value). Does it play
>>> > any role in the performance?
>>> >
>>> > best,
>>> > Krishnan
>>> >
>>> >
>>> > On Tue, Nov 13, 2012 at 11:47 AM, Aron Broom <broomsday_at_gmail.com>
>>> > wrote:
>>> >
>>> >> One possibility is that the performance decrease is actually
>>> >> continuous. You should have a *.colvars.state file; how large is it?
>>> >>
>>> >> If your performance numbers come from the NAMD benchmark lines
>>> >> printed at the beginning of the run, they won't be accurate in the
>>> >> case where your *.colvars.state file becomes very large (i.e., > 10
>>> >> MB) during the course of the simulation. What you'll see in that
>>> >> case is a gradual decrease in performance over time, so make sure
>>> >> your performance numbers are based on timings taken throughout the
>>> >> simulation.
