Re: PME Seg Fault

From: Vibhor Agrawal (vibhora_at_g.clemson.edu)
Date: Mon Aug 10 2015 - 11:31:56 CDT

One reason could be that the system needs a bit more time to equilibrate. You
might also want to check the box size of your system.
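
For what it's worth, the PME grid has to cover the whole periodic cell, so if
the cellBasisVector lines in the conf don't match the real dimensions of the
solvated system (or the explicit PMEGridSize values are smaller than the cell
requires), atoms can land outside the grid, and that tends to show up exactly
where your traceback points (PmeRealSpace::fill_charges). A minimal sketch of
the relevant part of a NAMD conf file; the numbers below are placeholders, not
taken from your attached file:

    # Periodic cell -- must match the actual dimensions of the system
    cellBasisVector1   80.0   0.0   0.0
    cellBasisVector2    0.0  80.0   0.0
    cellBasisVector3    0.0   0.0  80.0
    cellOrigin          0.0   0.0   0.0

    # PME -- letting NAMD pick the grid from a spacing is the safest option
    PME                yes
    PMEGridSpacing     1.0
    # or set the sizes explicitly; each must cover cell length / spacing
    # PMEGridSizeX     80
    # PMEGridSizeY     80
    # PMEGridSizeZ     80

If you do set PMEGridSizeX/Y/Z by hand, double-check them against the cell
basis vectors whenever the box size changes.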

Thanks
Vibhor

On Mon, Aug 10, 2015 at 10:11 AM, Richard Overstreet <roverst_at_g.clemson.edu>
wrote:

> All,
>
> I am simulating water transport through a pore in a graphene monolayer
> and I am not sure where this seg fault is originating. I think it has
> something to do with PMEGridSize. Any help would be appreciated. My conf
> file is attached.
>
> ENERGY output at step 0:
>
>   TS                      0
>   BOND            7877.9641
>   ANGLE           5095.8520
>   DIHED              0.0000
>   IMPRP              0.0000
>   ELECT        -256270.1972
>   VDW            36460.9436
>   BOUNDARY           0.0000
>   MISC               0.0000
>   KINETIC            0.0000
>   TOTAL        -206835.4375
>   TEMP               0.0000
>   POTENTIAL    -206835.4375
>   TOTAL3       -206835.3515
>   TEMPAVG            0.0000
>   PRESSURE        -928.7153
>   GPRESSURE       -926.7003
>   VOLUME         664828.0159
>   PRESSAVG        -928.7153
>   GPRESSAVG       -926.7003
>
> OPENING EXTENDED SYSTEM TRAJECTORY FILE
> REASSIGNING VELOCITIES AT STEP 1 TO 0.001 KELVIN.
> ------------- Processor 0 Exiting: Caught Signal ------------
> Reason: Segmentation fault
> [0] Stack Traceback:
> [0:0]
> _ZN12PmeRealSpace12fill_chargesEPPfS1_RiS2_PcS3_P11PmeParticle+0x32e5
> [0xaf1b15]
> [0:1] _ZN10ComputePme6doWorkEv+0x1250 [0x7fef50]
> [0:2]
> _ZN19CkIndex_WorkDistrib29_call_enqueuePme_LocalWorkMsgEPvS0_+0xe
> [0xb7e2fe]
> [0:3] CkDeliverMessageFree+0x94 [0xc89024]
> [0:4] [0xc781fb]
> [0:5] _Z15_processHandlerPvP11CkCoreState+0x888 [0xc76348]
> [0:6] CsdScheduler+0x47d [0xe3d70d]
> [0:7] _ZN9ScriptTcl7Tcl_runEPvP10Tcl_InterpiPPc+0x2c5 [0xb36be5]
> [0:8] TclInvokeStringCommand+0x7f [0x39476359ef]
> [0:9] [0x3947636a53]
> [0:10] [0x394763751f]
> [0:11] Tcl_FSEvalFileEx+0x241 [0x39476a3e31]
> [0:12] Tcl_EvalFile+0x2f [0x39476a3f6f]
> [0:13] _ZN9ScriptTcl4loadEPc+0xf [0xb3389f]
> [0:14] main+0x3f1 [0x61aea1]
> [0:15] __libc_start_main+0xfd [0x3946a1ed5d]
> [0:16] [0x57ed19]
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 14 in communicator MPI COMMUNICATOR 3 DUP
> FROM 0
> with errorcode 1.
>
>
> -Richard
>
