Re: metadynamics/ABF simulations using collective variables - issues with restarting the calculations

From: Aron Broom (broomsday_at_gmail.com)
Date: Tue Nov 13 2012 - 13:58:29 CST

I'm not sure if it is all a result of the *.colvars.state file, but it
seems like that could be the culprit. In general what one can do is to set
walls for your colvars that sit somewhat far (multiple hill widths) inside
the boundaries, and this will prevent the accumulation of hills in the file.
There may be some additional options to help with this in NAMD 2.9; I can't
recall exactly, but there might be old list messages between myself and
Giacomo Fiorin that have better information. Most of those suggestions,
though, are for starting a new simulation.
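
As a rough sketch (keyword names quoted from memory of the colvars module;
the atom numbers and values below are only placeholders, not recommendations),
the walls would sit a few hill widths inside the boundaries, something like:

  colvar {
    name dist1
    # boundaries and bin width used for the metadynamics grid
    lowerBoundary 2.0
    upperBoundary 30.0
    width 0.2
    # restraining walls placed a few hill widths inside the boundaries
    lowerWall 4.0
    upperWall 28.0
    lowerWallConstant 10.0
    upperWallConstant 10.0
    distance {
      group1 { atomNumbers 1 }
      group2 { atomNumbers 100 }
    }
  }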

Anyway, in essence what is happening is that when your colvar gets very
close to a boundary, rather than writing the hill to the grids, the hill
must be stored explicitly, otherwise a significant portion of the hill would
be ignored. Given enough time (i.e. a build-up of hills everywhere else) the
metadynamics bias would push your system past the colvar boundary and it
would become trapped there, hence the need for keeping those hills.
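
For context, the metadynamics setup I'm picturing is something along these
lines (keywords from memory; the values are only examples):

  metadynamics {
    colvars dist1
    useGrids on         # default: hills are projected onto a grid
    hillWeight 0.1      # example value (kcal/mol)
    hillWidth 2.0       # example value
    newHillFrequency 1000
  }

With useGrids on, a hill deposited close to a boundary cannot be represented
fully on the grid, so it is kept explicitly in the state file instead.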

I'm not sure exactly how to solve the problem once you've already started,
BUT, if you just want to test that this IS the problem, that is simple:
open your .colvars.state file, delete all the hills from the bottom, and
try restarting your simulation. Not having the hills won't be a problem for
restarting, and you should get a sense of how the performance compares.
Sadly, you'll probably also see your system become trapped outside the
colvar boundary.
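
If it helps to know what to look for, each hill at the end of the state file
appears as a small block roughly like the following (the exact fields and
formatting may differ between colvars versions, and the numbers here are
made up):

  hill {
    step 250000
    weight 1.0e-01
    centers 12.5
    widths 2.0
  }

Those blocks are what you would delete for the test, leaving the grid data
above them intact.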

~Aron

On Tue, Nov 13, 2012 at 2:47 PM, Marimuthu Krishnan <
marimuthu.krishnan_at_googlemail.com> wrote:

> Aron,
>
> I do not use NAMD benchmarking to evaluate the performance. I simply use
> the trajectory length obtained in 24 hours as a measure of performance.
>
> The *.colvars.state files are > 10 MB, and in some cases even greater
> than 20 MB. The "usegrids" option is ON by default. I use default values
> for 'hillwidth' and 'hillweight'. Although one could vary 'hillwidth' and
> 'hillweight' to achieve better performance/accuracy, it is not clear to me
> why restart runs perform worse than new runs for the same values of
> 'hillwidth' and 'hillweight'. Do you have any suggestions to improve the
> performance of restart runs at this point?
>
> You are correct. I see a massive list of hills at the bottom of the
> *.colvars.state file.
>
> By the way, I am not sure whether to keep 'rebinGrids' ON or OFF for
> restart runs. I have kept it OFF (the default value). Does it play any role
> in performance?
>
> best,
> Krishnan
>
>
> On Tue, Nov 13, 2012 at 11:47 AM, Aron Broom <broomsday_at_gmail.com> wrote:
>
>> One possibility is that the performance decrease is actually continuous.
>> You should have a *.colvars.state file; how large is it?
>>
>> If your performance numbers come from the NAMD benchmarking lines printed
>> at the beginning of the run, they won't be accurate in the case where your
>> *.colvars.state file becomes very large (i.e. > 10 MB) during the course
>> of the simulation. What you'll see in that case is a gradual decrease in
>> performance over time, so make sure your performance numbers are based on
>> timings taken throughout the simulation.
