Re: FEP/λ-REMD Simulation error

From: ROPÓN-PALACIOS G. (biodano.geo_at_gmail.com)
Date: Mon Nov 09 2020 - 12:51:31 CST

This approach needs to run on 24 cores (the script launches namd2 with +p24 +replicas 24). Do you have 24 cores on your machine?
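
For reference, a quick way to confirm the available core count on a Linux node (a minimal sketch, assuming the standard coreutils/util-linux tools are installed):

nproc                                              # logical CPUs visible to this shell
lscpu | grep -E '^(Socket|Core|Thread|CPU\(s\))'   # sockets / cores-per-socket / threads-per-core

Also note that your log prints "Charm++: standalone mode (not using charmrun)" and "Running in Multicore mode: 24 threads", so the multicore build is what actually runs. As far as I know, replica exchange (+replicas) needs a network-capable NAMD build (e.g. netlrts or MPI) launched through charmrun; an illustrative launch, with a hypothetical netlrts install path you would adjust to your own setup, would look like:

charmrun /software/NAMD/NAMD_2.13_Linux-x86_64-netlrts/namd2 ++local +p24 +replicas 24 fep_${system}.conf --source FEP_remd_softcore.namd +stdout output_${system}/%d/job0.%d.log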

Sent from my iPhone

> On Nov 9, 2020, at 12:03 PM, Alao, John-Paul <alaooj_at_miamioh.edu> wrote:
>
> 
> Dear mailing list,
>
> I generated the configuration files for an absolute protein-ligand binding free energy calculation using CHARMM-GUI. However, the following part of the script crashes.
>
> # FEP/λ-REMD Simulation, Running 4 ns (200 ps x 20)
> cnt=0
> cntmax=20
>
> while [ ${cnt} -le ${cntmax} ]
> do
>     if [ ${cnt} -eq 0 ]
>     then
>         charmrun /software/NAMD/NAMD_2.13_Linux-x86_64-multicore/namd2 +p24 +replicas 24 fep_${system}.conf --source FEP_remd_softcore.namd +stdout output_${system}/%d/job${cnt}.%d.log
>     else
>         charmrun /software/NAMD/NAMD_2.13_Linux-x86_64-multicore/namd2 +p24 +replicas 24 restart_${cnt}.conf --source FEP_remd_softcore.namd +stdout output_${system}/%d/job${cnt}.%d.log
>     fi
>     cnt=$((cnt + 1))
> done
>
> The error message is:
>
> Running command: /software/NAMD/NAMD_2.13_Linux-x86_64-multicore/namd2 +p24 +replicas 24 fep_solv.conf --source FEP_remd_softcore.namd +stdout output_solv/%d/job0.%d.log
>
> Charm++: standalone mode (not using charmrun)
> Charm++> Running in Multicore mode: 24 threads
> ./fep.sh: line 40: 106230 Segmentation fault (core dumped) charmrun /software/NAMD/NAMD_2.13_Linux-x86_64-multicore/namd2 +p24 +replicas 24 fep_${system}.conf --source FEP_remd_softcore.namd +stdout output_${system}/%d/job${cnt}.%d.log
> Running command: /software/NAMD/NAMD_2.13_Linux-x86_64-multicore/namd2 +p24 +replicas 24 restart_1.conf --source FEP_remd_softcore.namd +stdout output_solv/%d/job1.%d.log
>
> Charm++: standalone mode (not using charmrun)
> Charm++> Running in Multicore mode: 24 threads
> ./fep.sh: line 40: 106232 Segmentation fault (core dumped) charmrun /software/NAMD/NAMD_2.13_Linux-x86_64-multicore/namd2 +p24 +replicas 24 restart_${cnt}.conf --source FEP_remd_softcore.namd +stdout output_${system}/%d/job${cnt}.%d.log
>
> Can anyone please advise on how to get around this? Thanks a lot.
