Re: Re: AMBER input and REST2 leads to crash

From: Victor Zhao (yzhao01_at_g.harvard.edu)
Date: Tue Sep 15 2020 - 17:33:54 CDT

Hi Dave,

I will compile together my files and share with you.

I’d like to add that I spoke a little too soon about my issue being /fully/ resolved. I am able to run my simulations using PSF files generated with the CHARMM-converted version of ff14SB, but I have not succeeded in running simulations from AMBER-format input files (.prmtop). Even after changing to LJcorrection off, the latter fails with a different error: “REPLICA 0 FATAL ERROR: Function get_vdw_params failed to derive Lennard-Jones sigma and epsilon from A and B values.” This is despite the same prmtop file working fine for ordinary equilibrium simulations.
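
For anyone who hits the same message: as I understand it, NAMD is trying to invert the standard AMBER relations A = 4*eps*sigma^12 and B = 4*eps*sigma^6, which gives sigma = (A/B)^(1/6) and eps = B^2/(4*A). A minimal sketch of that arithmetic (my own illustration, not NAMD’s actual code) shows where it can go wrong, e.g. for atom types whose A or B coefficients are zero:

def ab_to_sigma_eps(A, B):
    # Invert A = 4*eps*sigma**12 and B = 4*eps*sigma**6 (standard AMBER A/B form).
    if A <= 0.0 or B <= 0.0:
        # e.g. hydrogens with zeroed LJ terms: sigma/epsilon are undefined here,
        # which is the kind of input that could make such a derivation fail
        raise ValueError("cannot derive sigma/epsilon from A=%g, B=%g" % (A, B))
    sigma = (A / B) ** (1.0 / 6.0)
    eps = B * B / (4.0 * A)
    return sigma, eps

# Illustrative, approximately TIP3P-oxygen-like values:
print(ab_to_sigma_eps(581935.564, 594.825035))  # ~ (3.15 Angstrom, 0.152 kcal/mol)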

(Also, as an aside for future readers: in order to use parm14sb_all with NAMD, I had to modify the parameter files and rename the atom types 2C and 3C to C2 and C3, respectively, since 2C and 3C make NAMD think the PSF file is CHARMM-type rather than XPLOR-type. This was just to get things running. A rough sketch of the rename follows.)
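
In case it helps anyone, here is roughly how the rename can be scripted. The file names are placeholders, it assumes C2 and C3 are not already in use as type names, and the same rename has to be applied consistently anywhere else the types appear (e.g., in the PSF):

import re

# Placeholder file names; adjust to your own parameter file.
with open("parm14sb_all.prm") as f:
    text = f.read()

# Word-boundary matches so longer tokens containing 2C/3C are left alone.
text = re.sub(r"\b2C\b", "C2", text)
text = re.sub(r"\b3C\b", "C3", text)

with open("parm14sb_all_renamed.prm", "w") as f:
    f.write(text)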

Best,
Victor

> On Sep 15, 2020, at 1:26 PM, David Hardy <dhardy_at_ks.uiuc.edu> wrote:
>
> Hi Victor,
>
> This seems like a bug. Would it be possible for you to provide input files that reproduce this behavior, to help us track down the problem? You can send them off-list, either directly to me or to namd_at_ks.uiuc.edu.
>
> Thanks,
> Dave
>
> --
> David J. Hardy, Ph.D.
> Beckman Institute
> University of Illinois at Urbana-Champaign
> 405 N. Mathews Ave., Urbana, IL 61801
> dhardy_at_ks.uiuc.edu, http://www.ks.uiuc.edu/~dhardy/
>> On Sep 15, 2020, at 12:01 PM, Victor Zhao <yzhao01_at_g.harvard.edu> wrote:
>>
>> Hi all, I managed to resolve my issue.
>>
>> I went through my conf file and compared my previous, successful runs with the new ones. The conf file for the new run (using the AMBER force field) had been modified from my earlier conf files to match the recommendations at https://ambermd.org/namd/namd_amber.html and https://www.ks.uiuc.edu/Research/namd/2.9/ug/node13.html. I changed the new conf file to better match my previous ones, and I found the one change that enabled my simulation to run: LJcorrection off.
>>
>> My previous simulations used LJcorrection off, but I had LJcorrection on in the new ones. After switching it back to off, my simulations run.
>>
>>> On Sep 15, 2020, at 12:04 PM, Victor Zhao <yzhao01_at_g.harvard.edu> wrote:
>>>
>>> Hello,
>>>
>>> I tried another approach to simulating my system. I took my input coordinates and structure and created a PSF file. Then I tried launching my REST2 simulation using the CHARMM version of ff14SB downloaded from the CHARMM website. I encountered the same crash and error message:
>>>
>>> stdout/stderr:
>>> Redirecting stdout to files output/0/log_rest.0.log through 7
>>> EXCHANGE_ACCEPT 0 AND 100000
>>> EXCHANGE_ACCEPT 0 AND 100000
>>> EXCHANGE_ACCEPT 0 AND 100000
>>> EXCHANGE_ACCEPT 0 AND 100000
>>> EXCHANGE_ACCEPT 0 AND 100000
>>> EXCHANGE_ACCEPT 0 AND 100000
>>> EXCHANGE_ACCEPT 0 AND 100000
>>> EXCHANGE_ACCEPT 0 AND 100000
>>> *** Error in `/n/home00/vzhao/opt/NAMD_2.14_Linux-x86_64-netlrts-smp-CUDA/namd2': free(): invalid next size (normal): 0x0000000011c87190 ***
>>> ======= Backtrace: =========
>>> /lib64/libc.so.6(+0x81489)[0x2b5d1c7af489]
>>>
>>> output/3/log_rest.3.log shows:
>>> Info: Reading solute scaling data from file: ./DHFR_wt_apo_equild_converted_temper_helix.pdb
>>> Info: Reading solute scaling flags from column: 4
>>> [0] Stack Traceback:
>>> [0:0] libc.so.6 0x2b5d1c764207 gsignal
>>> [0:1] libc.so.6 0x2b5d1c7658f8 abort
>>> [0:2] libc.so.6 0x2b5d1c7a6d27
>>> [0:3] libc.so.6 0x2b5d1c7af489
>>> [0:4] namd2 0xba12d2
>>> [0:5] namd2 0xbe788c
>>> [0:6] namd2 0xbeb828
>>> [0:7] namd2 0xd0b9ab
>>> [0:8] namd2 0x195b678 TclInvokeStringCommand
>>> [0:9] namd2 0x195e190
>>> [0:10] namd2 0x19a0f73
>>> [0:11] namd2 0x19a6d33
>>> [0:12] namd2 0x195fde6
>>> [0:13] namd2 0x1972844
>>> [0:14] namd2 0x195e190
>>> [0:15] namd2 0x195f576
>>> [0:16] namd2 0x195fd56
>>> [0:17] namd2 0x19c1b61
>>> [0:18] namd2 0x196ed4a
>>> [0:19] namd2 0x195e190
>>> [0:20] namd2 0x195f576
>>> [0:21] namd2 0x195fd56
>>> [0:22] namd2 0x19c1b61
>>> [0:23] namd2 0x19c1d1e
>>> [0:24] namd2 0xd08580
>>> [0:25] namd2 0x511210
>>> [0:26] libc.so.6 0x2b5d1c7503d5 __libc_start_main
>>> [0:27] namd2 0x4161a5
>>>
>>> So, the problem is not that I was using AMBER-format input files. But I am not sure what the problem is.
>>>
>>> Victor
>>>
>>>> On Sep 15, 2020, at 1:53 AM, Victor Zhao <yzhao01_at_g.harvard.edu> wrote:
>>>>
>>>> Hello,
>>>>
>>>> I’ve previously run REST2 simulations with no issues using a PSF file prepared with c36m. The REST2 implementation is the one included with the NAMD distribution (lib/replica/REST2). Now, when I try to run a REST2 simulation using an AMBER .prmtop file, I get the following crash (from the stdout/stderr log):
>>>>
>>>> *** Error in `/n/home00/vzhao/opt/NAMD_2.14_Linux-x86_64-netlrts-smp-CUDA/namd2': free(): invalid next size (fast): 0x0000000013267df0 ***
>>>>
>>>> A bunch of backtrace lines follow. As can be seen in the above error message, I am running replica exchange using the netlrts-smp-CUDA version of NAMD. When I look into the individual replica log files for the crash, I see the following:
>>>>
>>>> Info: Reading solute scaling data from file: ./DHFR_wt_apo_equild_temper_helix.pdb
>>>> Info: Reading solute scaling flags from column: 4
>>>> [0] Stack Traceback:
>>>> [0:0] libc.so.6 0x2af4c0aea207 gsignal
>>>> [0:1] libc.so.6 0x2af4c0aeb8f8 abort
>>>> [0:2] libc.so.6 0x2af4c0b2cd27
>>>> [0:3] libc.so.6 0x2af4c0b35489
>>>> [0:4] namd2 0xba12d2
>>>> [0:5] namd2 0xbe788c
>>>> [0:6] namd2 0xbeb828
>>>> [0:7] namd2 0xd0b9ab
>>>> [0:8] namd2 0x195b678 TclInvokeStringCommand
>>>> [0:9] namd2 0x195e190
>>>> [0:10] namd2 0x19a0f73
>>>> [0:11] namd2 0x19a6d33
>>>> [0:12] namd2 0x195fde6
>>>> [0:13] namd2 0x1972844
>>>> [0:14] namd2 0x195e190
>>>> [0:15] namd2 0x195f576
>>>> [0:16] namd2 0x195fd56
>>>> [0:17] namd2 0x19c1b61
>>>> [0:18] namd2 0x196ed4a
>>>> [0:19] namd2 0x195e190
>>>> [0:20] namd2 0x195f576
>>>> [0:21] namd2 0x195fd56
>>>> [0:22] namd2 0x19c1b61
>>>> [0:23] namd2 0x19c1d1e
>>>> [0:24] namd2 0xd08580
>>>> [0:25] namd2 0x511210
>>>> [0:26] libc.so.6 0x2af4c0ad63d5 __libc_start_main
>>>> [0:27] namd2 0x4161a5
>>>>
>>>> Seeing these crash messages, I am not sure how to proceed. The fact that the crash occurs right after the solute scaling file is read suggests something is wrong there. I verified that my input files work in a normal (non-REST) simulation. Since my procedure to set up and run REST simulations worked previously with a PSF input, I am wondering whether the issue is that my input files are now in AMBER format (“amber yes”). Has anyone successfully run REST2 simulations using AMBER-format input files?
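>>>>
>>>> In case the setup matters: the scaling file above is just my PDB with a per-atom flag column marking which atoms get scaled. Here is roughly how such a file can be generated, as a sketch; it assumes the flags go in the occupancy column (characters 55-60 of the fixed-width PDB format), and the file and segment names are placeholders:
>>>>
>>>> def write_scaling_pdb(pdb_in, pdb_out, solute_segnames):
>>>>     # Copy the PDB, setting occupancy to 1.00 for solute atoms, 0.00 otherwise.
>>>>     with open(pdb_in) as fin, open(pdb_out, "w") as fout:
>>>>         for line in fin:
>>>>             if line.startswith(("ATOM", "HETATM")):
>>>>                 segname = line[72:76].strip()  # segid field, columns 73-76
>>>>                 flag = 1.00 if segname in solute_segnames else 0.00
>>>>                 line = line[:54] + "%6.2f" % flag + line[60:]
>>>>             fout.write(line)
>>>>
>>>> write_scaling_pdb("system.pdb", "temper_helix.pdb", {"PROT"})  # placeholder names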
>>>>
>>>> [As an aside, I encountered an issue where I could not start new NAMD simulations from crd files written with the writecrd function in VMD. Providing a PDB file via “coordinates” instead of using “ambercoor” works.]
>>>>
>>>> Thanks,
>>>> Victor
>>>
>>>
>>
>
