Re: Running multiple-replicas metadynamics

From: Prapasiri Pongprayoon (fsciprpo_at_ku.ac.th)
Date: Mon Nov 06 2017 - 16:35:51 CST

Hi Josh and Giacomo,

Thanks for your kind help.
I have set up the run, but there is an error. I have 5 replicas, but due to the queuing system they did not all start at the same time: 3 were running while the other 2 were waiting in the queue. The 3 running jobs ran for a few hours and then died with the error below (the other 2 jobs are still in the queue).

colvars: Error: failed to read all of the grid points from file. Possible explanations: grid parameters in the configuration (lowerBoundary, upperBoundary, width) are different from those in the file, or the file is corrupt/incomplete.
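
For context, these grid parameters are set per colvar. A minimal sketch of the relevant block (the name, atom selections, and values here are illustrative, not my exact input):

    colvar {
        name z_dist                  # illustrative name
        distanceZ {
            main { atomNumbers 1 }   # illustrative atom selections
            ref  { atomNumbers 2 }
        }
        lowerBoundary -30.0          # these three values must match the ones
        upperBoundary  30.0          # in effect when the grid files were written
        width           0.1
    }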

I searched online but have not found a solution.

My system is a drug/membrane-protein system (I want to observe drug permeation). Except for the initial position of the drug and the replicaID, everything is the same; all boundaries are identical across replicas.

I have a few questions:
1. Did the jobs die because they could not communicate with each other? All inputs work fine if I run ordinary well-tempered metadynamics.
2. Does this mean that all replicas have to run at nearly the same time? If so, is there any way around the problem if I cannot get all replicas to run simultaneously?

I also still have a problem with plain metadynamics and need your help to understand it better; this work was run before I moved to multiple-walker metadynamics.
Since I want to observe drug transport, I set up 2 colvars for metadynamics (an orientation angle and the z-distance along the pore axis). The lower and upper boundaries are taken from the physical dimensions of the system in the PDB file, with upperWallConstant/lowerWallConstant = 5 kcal/mol. While running ordinary metadynamics (two runs for two drugs), I observed that in one of my systems the drug translocated beyond the upper boundary. I suspected that the force constant was too low, so I increased the wall constants to 10 and then 20 kcal/mol, but the problem persists. Do you have any idea why this happens?
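
For reference, the walls are set with the colvar-level wall keywords inside the z-distance colvar block, roughly like this (a sketch; the positions and values are illustrative, not my exact input):

    lowerWall         -30.0    # wall positions, at or inside the grid boundaries
    upperWall          30.0
    lowerWallConstant  10.0    # kcal/mol; I have tried 5, 10, and 20
    upperWallConstant  10.0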

Any advice you give me would be appreciated.

Regards,
Prapasiri

> On Nov 5, 2017, at 11:17 PM, Giacomo Fiorin <giacomo.fiorin_at_gmail.com> wrote:
>
> Hi Prapasiri, your overall understanding (1-6) is correct. The file-based multiple-replicas infrastructure is designed to work as an extension of the single-replica workflow, by adding the options replicaID and, optionally, replicasRegistry.
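>
> As a minimal sketch (the names, paths, and values here are illustrative, not a tested input), the metadynamics block of each replica would then look something like:
>
>     metadynamics {
>         name meta_drug                    # illustrative name
>         colvars z_dist tilt_angle         # illustrative colvar names
>         hillWeight 0.1                    # illustrative hill parameters
>         newHillFrequency 1000
>         multipleReplicas on
>         replicaID rep1                    # unique string for each replica
>         replicasRegistry /shared/replicas.registry.txt   # same path for all replicas (illustrative)
>         replicasUpdateFrequency 1000      # steps between file-based exchanges of new hills
>         dumpPartialFreeEnergyFile on
>     }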
>
> Regarding your two questions:
> 1. The replicas will communicate through files, the paths of which are maintained in the registry file. For this reason, you should run the simulations on a fast file system, preferably not NFS (contact your sysadmins to find out what kind of shared filesystem connects the nodes; a quick check is sketched after this list).
> 2. The replicas should each explore their own region of space, but this is a condition that you have to check for yourself (e.g. by checking that each explores a different energy basin).
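>
> As the quick check mentioned above (assuming a Linux cluster), you can print the filesystem type from inside the run directory:
>
>     df -T /path/to/run/directory    # prints the filesystem type, e.g. nfs vs. lustre or gpfs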
>
> If you want to run a quick test, you can try the example input files at:
> https://github.com/Colvars/colvars/tree/master/namd/tests/library/011_multiple_walker_mtd
>
> Giacomo
>
>
> On Fri, Nov 3, 2017 at 7:30 PM, Prapasiri Pongprayoon <fsciprpo_at_ku.ac.th> wrote:
> Hi All,
>
> I'm new to NAMD and need your valuable help. I am now running metadynamics of drug translocation through a membrane protein. The simulations go well, but I suspect it will take a large amount of CPU time for my drug to explore all of configuration space, so I have decided to move to multiple-replicas metadynamics. Based on the manual, it seems that I need to:
>
> 1. Turn on “multipleReplicas”
> 2. Add replicaID, replicasRegistry, replicasUpdateFrequency, and dumpPartialFreeEnergyFile
>
> After going through the manual, I still don't understand the process clearly, so I would very much appreciate it if you could explain how to set up the run.
>
> From my understanding (please correct me if I'm wrong):
> 1. To run multiple-replicas metadynamics, I need a number of PDB files with different drug positions. Each system is called a "replica", and each is identified by a replicaID. I have 5 systems, each in its own folder.
> 2. I also need a single file (named in replicasRegistry) containing the paths of the "colvar.state" and "hill.traj" files that will be generated once all five are running.
> 3. Since I have 5 systems = 5 replicas (everything identical except the position of the drug in each system), I need to run them separately, each with its own .conf and colvars files. The only difference among them is the "replicaID". Is this correct?
> 4. Once all are running, they will talk to each other via the "colvar.state" and "hill.traj" files listed in "replicasRegistry"; that file just has lines showing the locations of the state and hill files.
> 5. The 5 runs will generate their own outputs and .pmf files, but the PMF obtained by each replica is produced by combining data from all 5 replicas. So 5 PMFs from 5 replicas are generated, but they are the same. Is this correct? As for the partial.pmf file, does it reflect the contribution of the individual run to the overall PMF?
> 6. To restart the runs, I just add "colvarsInput input.colvars.state" to the .conf files of all five, as in the sketch below.
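>
> For example, the restart lines in each NAMD .conf would look like this (a sketch; the file names are illustrative):
>
>     colvars on
>     colvarsConfig drug_meta.colvars    ;# same Colvars config as the initial run (illustrative name)
>     colvarsInput run1.colvars.state    ;# state file written by the previous run (illustrative name)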
>
> This is what I understand from the manual and the NAMD mailing list.
>
> If these are correct, I still have some questions:
> 1. If I have 5 replicas, I have to run them independently. How do they communicate if they don't start running at the same time?
> 2. Based on the recipe above, does each replica explore its own configuration space, with the data obtained from each replica then combined to produce the overall PMF?
>
> Is there any tutorial for multiple-replicas metadynamics that I can go through?
>
> Thanks for your help and patience in advance.
>
> Regards,
> Prapasiri
>
>
>
>
> --
> Giacomo Fiorin
> Associate Professor of Research, Temple University, Philadelphia, PA
> Contractor, National Institutes of Health, Bethesda, MD
> http://goo.gl/Q3TBQU
> https://github.com/giacomofiorin
