From: Pawel Kedzierski (pawel.kedzierski_at_pwr.edu.pl)
Date: Thu Jun 20 2019 - 10:11:19 CDT
On 18.06.2019 at 16:57, mariano spivak wrote:
> I will try the same calculation without implicit solvent since there
> is not much to gain in terms of speed in the MM part when the
> bottleneck is the QM calculation.
> Try without "GBIS on" and let me know if you still have the same
> issue; you might need to solvate the system (try the VMD solvate plugin).
> In any case, I would suggest you send me privately the files necessary
> so I can test and figure out the problem.
Yes, I tried without GBIS and with explicit solvent. I also tried with
and without electrostatic embedding, with and without extraBonds, with
different starting coordinates, different QM programs, and even with
quite different force fields (CHARMM36 and AMBER99).
Here are links to two zip files with my system. They show two of the many
setups which I have tested. The first one (implicit_CHARMM.zip) uses
ORCA 4, called by a modified runORCA.py script, to do the QM calculation.
The parameters are CHARMM36 and this one has "GBIS on".
http://mml.ch.pwr.wroc.pl/p.kedzierski/namd/implicit_CHARMM.zip
The second example was parametrized with the AMBER99 force field and solvated
with a sphere of water. The solvated part was first optimized using NAMD
with "qmForces off" and the QM part fixed (no oscillations were observed with
the MM-only minimization). Here the QM software employed is MOPAC2016, but
one can still observe the same behaviour indirectly with "grep OMENERGY:
output.log" (exactly the same energies appear every 4 steps).
http://mml.ch.pwr.wroc.pl/p.kedzierski/namd/explicit_AMBER.zip
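If it helps, the grep check above can also be scripted. Below is a minimal
sketch in Python; it only assumes that the duplicated energies appear on
lines containing the tag used in the grep above, everything else (the file
name and variable names) is illustrative:

import sys
from collections import defaultdict

# Report QM energy values that reappear in the NAMD log. The index printed
# here is just the position among matching lines, not the minimization step.
TAG = "OMENERGY:"
logname = sys.argv[1] if len(sys.argv) > 1 else "output.log"

seen = defaultdict(list)
with open(logname) as log:
    matches = [l for l in log if TAG in l]
for i, line in enumerate(matches):
    energy = line.split(TAG, 1)[1].strip()
    if seen[energy]:
        print("REPEATED ENERGY from match %d at match %d" % (seen[energy][0], i))
    seen[energy].append(i)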
In both examples you will need to adapt paths such as qmBaseDir and the
paths to the QM software.
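For reference, the repeated-coordinate check I describe in my original
message quoted below boils down to something like the following sketch
(the names and the float parsing are illustrative; the actual script is the
stock runORCA.py with this bookkeeping added, and the history is kept on
disk because the script is started anew for every QM evaluation):

import os
import pickle

EPS = 2e-6                          # NAMD writes coordinates with 6 decimals
HISTORY = "coord_history.pickle"    # persists between invocations

def read_floats(qm_input_file):
    # Collect every float on every line; the exact parsing depends on the
    # input format NAMD writes for the chosen QM software.
    vals = []
    with open(qm_input_file) as f:
        for line in f:
            for tok in line.split():
                try:
                    vals.append(float(tok))
                except ValueError:
                    pass
    return vals

def check_repeats(step, qm_input_file):
    coords = read_floats(qm_input_file)
    history = pickle.load(open(HISTORY, "rb")) if os.path.exists(HISTORY) else []
    for old_step, old in history:
        if len(old) == len(coords) and \
           all(abs(a - b) <= EPS for a, b in zip(old, coords)):
            print("REPEATED COORDS from step%06d at step %d" % (old_step, step))
    history.append((step, coords))
    pickle.dump(history, open(HISTORY, "wb"))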
Thanks,
Pawel Kędzierski
>
> Mariano Spivak, Ph.D.
>
> Theoretical and Computational Biophysics Group
>
> Beckman Institute, University of Illinois.
>
> mariano_at_ks.uiuc.edu
>
> mspivak_at_illinois.edu
>
>
> On Tue, Jun 18, 2019, 2:49 AM Pawel Kedzierski
> <pawel.kedzierski_at_pwr.edu.pl> wrote:
> Dear NAMD experts,
>
> I have adapted the script runORCA.py distributed with
> NAMD_2.13_Linux-x86_64 in the subdirectory lib/qmmm to see what is
> going on during QM/MM minimization.
> The observation is that *every few steps NAMD saves exactly the same
> coordinates for the QM program to calculate forces* (my script
> compares them with an epsilon of 2e-6, since the precision in the file
> written by NAMD is to the sixth decimal place). This behavior is consistent
> regardless of the QM program used (MOPAC, ORCA or CUSTOM), the
> minimizer (cg or velocity quenching) or any other QM options I tried
> to set in the config file. Even if I modify the forces returned from
> the QM calculations, the program keeps cycling through the same set of
> coordinates again and again.
> Identical coordinates for the QM region are not saved at every step, but
> this behavior starts to appear early on:
> REPEATED COORDS from step000003 at step 6
>
> and later I see:
> REPEATED COORDS from step000003 at step 13
> REPEATED COORDS from step000006 at step 13
> REPEATED COORDS from step000009 at step 13
>
> and at the end:
> ... cut out ..
> REPEATED COORDS from step004977 at step 5001
> REPEATED COORDS from step004980 at step 5001
> REPEATED COORDS from step004983 at step 5001
> REPEATED COORDS from step004986 at step 5001
> REPEATED COORDS from step004989 at step 5001
> REPEATED COORDS from step004992 at step 5001
> REPEATED COORDS from step004995 at step 5001
> REPEATED COORDS from step004998 at step 5001
>
> The stride is also not always 3. There is something very wrong; no
> minimization algorithm should behave like this, and it only happens
> with "qmForces on".
>
> With regards,
> Paweł Kędzierski
>