Re: Amber forcefield in namd simulation in parallel version

From: Teerapong Pirojsirikul (tpirojsi_at_ucsd.edu)
Date: Fri Dec 02 2016 - 03:30:11 CST

Deleting the SCEE and SCNB sections should do the job for NAMD 2.8; those are
the %FLAG SCEE_SCALE_FACTOR and %FLAG SCNB_SCALE_FACTOR entries, and removing
each section (its %FLAG header, %FORMAT line, and data lines) is sufficient.
I am not sure what went wrong in your case, though. In any case, try NAMD 2.9
or a later version, since it should fix this problem without any extra
handling.

Best,

On Thu, Dec 1, 2016 at 10:05 PM, Santanu Santra <santop420_at_gmail.com> wrote:

> Hi all,
> I created the prmtop file in xleap and then minimized my protein with the
> serial (CPU) version, which worked. But when I then moved on to the rescaling
> step on the HPC (parallel version), it writes a few lines to the output file
> and then fails with the error shown above.
> For reference, I am sending the configuration file and the prmtop file.
> Any suggestions would be highly appreciated.
> Thanks in advance.
>
> On Fri, Dec 2, 2016 at 10:30 AM, Santanu Santra <santop420_at_gmail.com>
> wrote:
>
> > I created the prmtop file in xleap and then minimized my protein with the
> > serial (CPU) version, which worked. But when I then moved on to the
> > rescaling step on the HPC (parallel version), it writes a few lines to the
> > output file and then fails with the following error:
> >
> > Reading parm file (protein_test.prmtop) ..
> > PARM file in AMBER 7 format
> > FATAL ERROR: Failed to read AMBER parm file!
> > [0] Stack Traceback:
> > [0:0] _Z8NAMD_diePKc+0x72 [0x4f11a2]
> > [0:1] _ZN9NamdState14configListInitEP10ConfigList+0xee4 [0x942cb4]
> > [0:2] _ZN9ScriptTcl7Tcl_runEPvP10Tcl_InterpiPPc+0x42d [0x9c28fd]
> > [0:3] TclInvokeStringCommand+0x88 [0xbc5b98]
> > [0:4] [0xbc87b7]
> > [0:5] [0xbc9bd2]
> > [0:6] Tcl_EvalEx+0x16 [0xbca3f6]
> > [0:7] Tcl_FSEvalFileEx+0x151 [0xc2c5a1]
> > [0:8] Tcl_EvalFile+0x2e [0xc2c75e]
> > [0:9] _ZN9ScriptTcl4loadEPc+0xf [0x9c166f]
> > [0:10] main+0x425 [0x4f5995]
> > [0:11] __libc_start_main+0xfd [0x36c361ed1d]
> > [0:12] [0x4f0a39]
> > --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 9 in communicator MPI_COMM_WORLD
> > with errorcode 1.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> >
> >
> > On Thu, Dec 1, 2016 at 7:28 PM, Norman Geist <norman.geist_at_uni-greifswald>

This archive was generated by hypermail 2.1.6 : Sun Dec 31 2017 - 23:20:51 CST