From: Marimuthu Krishnan (marimuthu.krishnan_at_googlemail.com)
Date: Tue Nov 13 2012 - 13:47:41 CST
I do not use NAMD benchmarking to evaluate the performance. I simply use
the trajectory length obtained in 24 hours as a measure of performance.
The *.colvars.state files are larger than 10 MB, and in some cases they even
exceed 20 MB. The "usegrids" option is ON by default, and I use the default
values for 'hillwidth' and 'hillweight'. Although one could vary 'hillwidth'
and 'hillweight' to trade performance against accuracy, it is not clear to me
why restart runs perform more poorly than new runs for the same values of
'hillwidth' and 'hillweight'. Do you have any suggestions for improving the
performance of the restart runs at this point?
You are correct: I do see a massive list of hills at the bottom of the
*.colvars.state file.
By the way, I am not sure whether to keep 'rebinGrids' ON or OFF for
restart runs. I have kept it OFF (the default value). Does it play any role
in the performance?
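For reference, a minimal sketch of the colvars options being discussed. All atom numbers, boundaries, and values here are placeholders, not recommendations; the keyword spellings follow the colvars documentation ('useGrids', 'hillWeight', etc., though parsing is case-insensitive). Note that grid-based storage requires finite boundaries on every colvar; without them (or with 'keepHills' on), each hill is kept explicitly in the state file, which can grow large and slow down restarts:

```
# Sketch of a colvars metadynamics setup (placeholder values)
colvar {
    name d1
    distance {
        group1 { atomNumbers 1 2 3 }      # placeholder atoms
        group2 { atomNumbers 10 11 12 }
    }
    lowerBoundary  2.0    # finite boundaries are needed for grids
    upperBoundary 12.0
    width          0.25   # grid bin width for this colvar
}

metadynamics {
    colvars     d1
    useGrids    on     # on by default; projects hills onto grids
    hillWeight  0.01   # hill height (placeholder value)
    hillWidth   1.0    # hill width, in units of the colvar width
    rebinGrids  off    # off by default; re-maps old grids on restart
}
```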
On Tue, Nov 13, 2012 at 11:47 AM, Aron Broom <broomsday_at_gmail.com> wrote:
> One possibility is that the performance decrease is actually continuous.
> You should have a *.colvars.state file, how large is it?
> If you're saying the performance decreases because of the NAMD
> benchmarking that happens at the beginning of the run, this won't be
> accurate in the case where your *.colvars.state file becomes very large
> (i.e. > 10MB) during the course of the simulation. What you'll see in that
> case is a gradual decrease in performance over time, so make sure your
> performance numbers are based on timings taken throughout the simulation.
> If the above ends up being the case, the problem is that too much is being
> dumped into that file. If you're doing metadynamics the "usegrids" option
> needs to be on, as its entire function is to prevent exactly this kind of
> slowdown. If you're already using grids, it may be that too many hills are
> being written to the file rather than being discretized onto the grids, and
> you can see this as a massive list of hills at the bottom of the file.
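A quick way to spot this accumulation is to count hill entries in the state file. This is only a rough sketch: it assumes hill records contain the word "hill", and the exact record format in *.colvars.state may differ between colvars versions.

```shell
# Count lines mentioning "hill" in a colvars state file (sketch;
# the exact record format may vary between colvars versions).
count_hills() {
    grep -c hill "$1"
}
```

A steadily growing count across restarts would confirm that hills are being stored explicitly rather than discretized onto the grids.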
> On Tue, Nov 13, 2012 at 11:21 AM, Jérôme Hénin <jhenin_at_ifr88.cnrs-mrs.fr> wrote:
>> Thank you for this report.
>> In principle, the restart file has the extension .colvars.state. The .old
>> extension denotes a backup file; if you only see that, it might indicate
>> some sort of I/O issue.
>> Can you send me (off-list) a colvars input file, and a series of NAMD
>> standard output logs that shows the performance decrease? I would at least
>> need that to try and diagnose the problem.
>> Finally, can you check the standard error log and see if there is
>> anything of interest in there?
>> On 13 November 2012 15:42, Marimuthu Krishnan <
>> marimuthu.krishnan_at_googlemail.com> wrote:
>>> Hello everybody,
>>> While calculating free energies using the Collective Variable Module
>>> implemented in NAMD 2.9, I noticed something unusual. Both ABF and
>>> metadynamics calculations run perfectly fine when I perform new
>>> calculations. When I try to restart my simulations (both ABF and
>>> metadynamics), I notice a 4-5 fold decrease in performance (relative to a
>>> new run), irrespective of the type of reaction coordinate (distance-based
>>> or angle-based) used. These calculations were run on Kraken (NICS), and the
>>> same number of cores was used for both the fresh and restart runs. I
>>> remember that I did not have this problem with earlier versions of
>>> NAMD. I would appreciate help from experts.
>>> A related minor issue concerns the naming of the restart file. One of the
>>> output files from ABF/metadynamics is named file_rst.colvars.state.old, but
>>> if you want to restart a run using this old restart file, NAMD 2.9 expects
>>> its name to be file_rst.old.colvars.state. It would be useful if this
>>> mismatch in names were fixed.
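Until the mismatch is fixed, one possible workaround is simply to copy the backup to the name NAMD expects before restarting. This is a sketch; the file names follow the example given in the message above:

```shell
# Sketch of a workaround for the naming mismatch described above.
# NAMD writes the backup as file_rst.colvars.state.old, but a
# restart run looks for file_rst.old.colvars.state, so copy the
# backup to the expected name (names taken from the example above).
old=file_rst.colvars.state.old
new=file_rst.old.colvars.state
if [ -f "$old" ]; then
    cp "$old" "$new"
fi
```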
> Aron Broom M.Sc
> PhD Student
> Department of Chemistry
> University of Waterloo
This archive was generated by hypermail 2.1.6 : Mon Dec 31 2012 - 23:22:15 CST