Re: Amber forcefield in namd simulation in parallel version

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Fri Dec 02 2016 - 01:59:20 CST

I can't see any issue with your files. Maybe try another NAMD version installed on the HPC system.
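
For reference, a minimal sketch of the AMBER-related part of a NAMD configuration for a minimization run like the one described below. The prmtop name is taken from the thread; the inpcrd name, output name, and numeric values are illustrative assumptions, not the poster's actual settings:

  # Read an AMBER topology instead of PSF/parameter files
  amber            yes
  parmfile         protein_test.prmtop
  ambercoor        protein_test.inpcrd   ;# assumed coordinate file name
  readexclusions   yes

  # Nonbonded settings usually recommended with AMBER force fields
  exclude          scaled1-4
  1-4scaling       0.833333              ;# 1/1.2, the AMBER 1-4 electrostatic scaling
  scnb             2.0
  cutoff           9.0
  switching        off

  # Output and a short minimization
  outputname       protein_test_min      ;# assumed
  temperature      0
  minimize         1000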

> -----Original Message-----
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> Behalf Of Santanu Santra
> Sent: Friday, December 2, 2016 07:06
> To: Norman Geist <norman.geist_at_uni-greifswald.de>
> Cc: namd-l_at_ks.uiuc.edu
> Subject: Re: namd-l: Amber forcefield in namd simulation in parallel version
>
> Hi all,
> I created the prmtop file in xleap and minimized my protein with the
> serial (CPU) version of NAMD, which worked. But when I tried to scale up
> on the HPC system (parallel version), it writes a few lines to the output
> file and then fails with the error quoted below.
> For reference I am sending the configuration file and the prmtop file.
> Any suggestions would be highly appreciated.
> Thanks in advance.
>
> On Fri, Dec 2, 2016 at 10:30 AM, Santanu Santra <santop420_at_gmail.com>
> wrote:
>
> > I created the prmtop file in xleap and minimized my protein with the
> > serial (CPU) version of NAMD, which worked. But when I tried to scale up
> > on the HPC system (parallel version), it writes a few lines to the output
> > file and then fails with the following error:
> >
> > Reading parm file (protein_test.prmtop) ..
> > PARM file in AMBER 7 format
> > FATAL ERROR: Failed to read AMBER parm file!
> > [0] Stack Traceback:
> > [0:0] _Z8NAMD_diePKc+0x72 [0x4f11a2]
> > [0:1] _ZN9NamdState14configListInitEP10ConfigList+0xee4 [0x942cb4]
> > [0:2] _ZN9ScriptTcl7Tcl_runEPvP10Tcl_InterpiPPc+0x42d [0x9c28fd]
> > [0:3] TclInvokeStringCommand+0x88 [0xbc5b98]
> > [0:4] [0xbc87b7]
> > [0:5] [0xbc9bd2]
> > [0:6] Tcl_EvalEx+0x16 [0xbca3f6]
> > [0:7] Tcl_FSEvalFileEx+0x151 [0xc2c5a1]
> > [0:8] Tcl_EvalFile+0x2e [0xc2c75e]
> > [0:9] _ZN9ScriptTcl4loadEPc+0xf [0x9c166f]
> > [0:10] main+0x425 [0x4f5995]
> > [0:11] __libc_start_main+0xfd [0x36c361ed1d]
> > [0:12] [0x4f0a39]
> > --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 9 in communicator MPI_COMM_WORLD
> > with errorcode 1.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> >
> >
> > On Thu, Dec 1, 2016 at 7:28 PM, Norman Geist
> > <norman.geist_at_uni-greifswald.de> wrote:
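
A "Failed to read AMBER parm file" error at the start of a run often just means the copy of the prmtop on the cluster is not what NAMD expects: a truncated transfer, a wrong path, or a format the installed binary cannot parse. A small Tcl sanity check, run with tclsh on the cluster, can rule out the first two (the file name is taken from the thread; this only catches header or truncation damage, not every possible parse problem):

  # check_prmtop.tcl - run as: tclsh check_prmtop.tcl
  set fname "protein_test.prmtop"
  set fh [open $fname r]
  set firstline [gets $fh]      ;# an AMBER 7 prmtop starts with a %VERSION record
  close $fh
  if {[string match "%VERSION*" $firstline]} {
      puts "$fname starts with %VERSION, as an AMBER 7 format prmtop should"
  } else {
      puts "$fname does not start with %VERSION - the file may be truncated or corrupted"
  }
  puts "size: [file size $fname] bytes (compare with the copy that worked in serial)"

If the header and size match the copy that worked in serial, the problem is more likely with the NAMD build on the cluster, which is consistent with the suggestion above to try another installed NAMD version.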
