Re: Fep performance

From: matteo.rotter (matteo.rotter_at_uniud.it)
Date: Thu Sep 17 2009 - 08:43:00 CDT

Hi Floris

> Hi Matteo,
>
> have you isolated the slowdown to the FEP module? Try to rerun the same protocol (e.g. multiple 60k segments) without FEP and see if you still see a slowdown.

I have isolated the slowdown to the FEP module: the same
protocol without FEP gives better benchmark values.

> With regards to your stability issues - if your crashes are at the start of the run, include some minimisation for each lambda to make sure the change is accommodated. Also make sure you're running with the soft core vdW potential as documented for NAMD 2.7.
> best,
> Floris
>
About the stability issue: the crashes do not occur at the start of
the run, but only after thousands of steps. The simulation is preceded
by a slow heating phase. I'm using the soft-core vdW potential with the
following values:

fepVdwShiftCoeff 5
fepElecLambdaStart 0.5
fepVdwLambdaEnd 0.5
decouple on
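
For reference, one dLambda window in these runs is set up roughly as
below; the lambda / lambda2 values and step counts are only an
illustrative sketch, and the minimize line is the per-lambda
minimisation you suggest:

# one dLambda window (values illustrative)
lambda    0.1
lambda2   0.2
# short per-lambda minimisation before production
minimize  500
# 60k production steps for this window
run       60000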

Regards, Matteo

>
>
>
> ----- Original Message ----
> From: matteo.rotter <matteo.rotter_at_uniud.it>
> To: namd-l_at_ks.uiuc.edu
> Sent: Monday, 24 August, 2009 17:51:27
> Subject: namd-l: Fep performance
>
> Dear all,
>
> I use NAMD 2.7 to perform alchemical FEP simulations, and I have some questions about the performance of the program.
> Usually I perform one single long simulation divided into two parts: (1) heating of the solvated system of ~30k atoms at lambda zero, then (2) the actual FEP, incrementing the lambda values and running 60k steps for every dLambda window. The timestep is 2.0 fs.
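> In practice the lambda windows are looped inside the single config file, roughly as in this sketch (keyword names, window widths and step counts here are only illustrative):
>
> # after heating/equilibration at lambda 0, run the dLambda windows
> for {set i 0} {$i < 10} {incr i} {
>     lambda  [expr {0.1 * $i}]
>     lambda2 [expr {0.1 * ($i + 1)}]
>     run 60000
> }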
> Doing a "grep Benchmark" on the simulation output, I see that the speed is not constant: it decreases as the run progresses.
>
> Log:
>
> Info: Benchmark time: 8 CPUs 0.296415 s/step 1.71536 days/ns 60.0635 MB memory
> Info: Benchmark time: 8 CPUs 0.299998 s/step 1.7361 days/ns 60.0762 MB memory
> Info: Benchmark time: 8 CPUs 0.301172 s/step 1.74289 days/ns 60.0842 MB memory
> Info: Benchmark time: 8 CPUs 0.302582 s/step 1.75105 days/ns 60.3094 MB memory
> Info: Benchmark time: 8 CPUs 0.298734 s/step 1.72878 days/ns 60.3107 MB memory
> Info: Benchmark time: 8 CPUs 0.30036 s/step 1.73819 days/ns 60.3828 MB memory
> Info: Benchmark time: 8 CPUs 0.394402 s/step 2.28242 days/ns 48.1992 MB memory
> Info: Benchmark time: 8 CPUs 0.390341 s/step 2.25892 days/ns 48.1989 MB memory
> Info: Benchmark time: 8 CPUs 0.395859 s/step 2.29085 days/ns 48.2004 MB memory
> Info: Benchmark time: 8 CPUs 0.438661 s/step 2.53855 days/ns 61.9679 MB memory
> Info: Benchmark time: 8 CPUs 0.463273 s/step 2.68098 days/ns 61.9408 MB memory
> Info: Benchmark time: 8 CPUs 0.458809 s/step 2.65515 days/ns 61.9406 MB memory
> Info: Benchmark time: 8 CPUs 0.46636 s/step 2.69884 days/ns 64.4346 MB memory
> Info: Benchmark time: 8 CPUs 0.465821 s/step 2.69573 days/ns 64.4345 MB memory
> Info: Benchmark time: 8 CPUs 0.488558 s/step 2.8273 days/ns 64.4349 MB memory
> ....
>
> The simulation was run with the ++local option on a quad-core Xeon.
> So my first question is: why does the speed change?
>
>
> Trying to solve this issue, I searched the NAMD mailing list and found this approach: http://www.ks.uiuc.edu/Research/namd/mailing_list/namd-l/2476.html , i.e. running separate FEP simulations.
> So I tried to follow a similar approach, running separate simulations with consecutive dLambda windows: for example, the output of the dLambda 0.1 0.2 window is the input of the dLambda 0.2 0.3 window, and so on.
> The system was first equilibrated with an initial 40 ps simulation at lambda 0.
> All the values in the .conf files are the same as in the single long simulation.
> 60k steps were run for every dLambda window.
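> For clarity, each window's config restarts from the previous window's output, roughly as in this sketch (file names, lambda values and keywords shown are only illustrative):
>
> # window dLambda 0.2 -> 0.3, restarted from the 0.1 -> 0.2 window
> bincoordinates  fep_0.1-0.2.restart.coor
> binvelocities   fep_0.1-0.2.restart.vel
> extendedSystem  fep_0.1-0.2.restart.xsc
> lambda   0.2
> lambda2  0.3
> run      60000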
> What I found is that with a 2.0 fs timestep, the FEP simulations exit prematurely due to a constraint failure in the RATTLE algorithm. The benchmark lines show good values, quite similar to those at the beginning of the single long simulation.
>
> Info: Benchmark time: 8 CPUs 0.243659 s/step 1.41006 days/ns 54.2092 MB memory
> Info: Benchmark time: 8 CPUs 0.243631 s/step 1.4099 days/ns 54.2057 MB memory
> Info: Benchmark time: 8 CPUs 0.242427 s/step 1.40293 days/ns 54.435 MB memory
>
>
> I tried to solve the crashing problem by using a 1.0 fs timestep. The simulation did not exit prematurely and ran to the end without errors.
> Looking at the benchmark lines, I see that the s/step value is about the same as in the simulation with the 2.0 fs timestep.
>
> Info: Benchmark time: 8 CPUs 0.245462 s/step 2.84099 days/ns 53.2536 MB memory
> Info: Benchmark time: 8 CPUs 0.24511 s/step 2.83692 days/ns 53.6811 MB memory
> Info: Benchmark time: 8 CPUs 0.245676 s/step 2.84347 days/ns 53.3514 MB memory
>
> So the questions:
> Why does the first single simulation with a 2.0 fs timestep reach a normal end, while showing an increase in the s/step value?
> Why do the split simulations with the same input crash? And, finally, why don't the simulations crash with a 1.0 fs timestep?
>
> Thanks for your attention.
>
> Regards,
> Matteo Rotter.

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:53:17 CST