Fwd: Re: Charm++ fatal error with QM-MM MOPAC

From: Francesco Pietra (chiendarret_at_gmail.com)
Date: Sun Jan 15 2017 - 02:53:17 CST

I forgot to mention that the NAMD error message can also be:

TCL: Minimizing for 100 steps
> Info: List of ranks running QM simulations: 0.
> Sun Jan 15 09:42:11 2017 Job:
> '/dev/shm/NAMD_4IEV/0/qmmm_0.input' started successfully
>
>
> MOPAC Job: "/dev/shm/NAMD_4IEV/0/qmmm_0.input" ended normally on
> Jan 15, 2017, at 09:42.
>
> ERROR: Could not find QM output file!
> FATAL ERROR: No such file or directory
> ------------- Processor 0 Exiting: Called CmiAbort ------------
> Reason: FATAL ERROR: No such file or directory
>

That occurred erratically, alternating with correct MOPAC jobs, when
launching the very same NAMD job; in the failed runs neither the .aux nor
the .out file was written in /0.
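(For reference, the /dev/shm path above is the QM scratch directory set by
qmBaseDir in my NAMD config, with something along these lines, the exact
value aside:

 qmBaseDir "/dev/shm/NAMD_4IEV"

NAMD then writes each QM job into a numbered subfolder such as /0.)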

OS: Debian GNU/Linux amd64, where the QM-MM examples furnished with NAMD
2.12 run correctly with both MOPAC and ORCA.

francesco

---------- Forwarded message ----------
From: Francesco Pietra <chiendarret_at_gmail.com>
Date: Sun, Jan 15, 2017 at 8:55 AM
Subject: Re: namd-l: Re: Charm++ fatal error with QM-MM MOPAC
To: NAMD <namd-l_at_ks.uiuc.edu>, "Marcelo C. R. Melo" <melomcr_at_gmail.com>
Cc: rcbernardi_at_ks.uiuc.edu, Till Rudack <trudack_at_ks.uiuc.edu>

Hi Marcelo:

As I wrote, MOPAC halted because MOZYME can't be used for open-shell
systems. Removing the keyword MOZYME lets MOPAC2016 work correctly with
open-shell systems. Following the complex case of my first mail, I have now
run a simpler system at multiplicity 3 and at multiplicity 7. In both cases
MOPAC ended without error messages. Please find below the multiplicity-7
case, from the /0 folder.
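First, the qmConfigLine used, which was along these lines (a sketch: my
earlier line with MOZYME and CUTOFF removed and UHF added; MS=3 corresponds
to multiplicity 7, MS=1 to multiplicity 3):

 qmConfigLine "PM7 XYZ T=2M 1SCF UHF MS=3 AUX LET GRAD QMMM GEO-OK"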

From qmmm_0.input.out:
>
>          **********************
>          *                    *
>          * JOB ENDED NORMALLY *
>          *                    *
>          **********************
>
> TOTAL JOB TIME: 10.40 SECONDS
>
> == MOPAC DONE ==
>
>

From qmmm_0.input.aux:

 BETA_M.O.SYMMETRY_LABELS[0020]=
  109A 110A 111A 112A 113A 114A 115A 116A 117A 118A
  119A 120A 121A 122A 123A 124A 125A 126A 127A 128A
 ALPHA_M.O.SYMMETRY_LABELS[0020]=
  109A 110A 111A 112A 113A 114A 115A 116A 117A 118A
  119A 120A 121A 122A 123A 124A 125A 126A 127A 128A
 ALPHA_EIGENVALUES[0020]=
  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
  0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
 BETA_EIGENVALUES[0020]=
  NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
  NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
 ALPHA_MOLECULAR_ORBITAL_OCCUPANCIES[00020]=
  1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
 BETA_MOLECULAR_ORBITAL_OCCUPANCIES[00020]=
  1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 CPU_TIME:SECONDS[1]= 10.40
 CPU_TIME:ARBITRARY_UNITS[1]= 14.79
 END OF MOPAC FILE

However, the NAMD 2.12 procedure stopped with the error message below
(note, in passing, the NaN values under BETA_EIGENVALUES above):

TCL: Minimizing for 100 steps
> Info: List of ranks running QM simulations: 0.
> Sat Jan 14 23:09:08 2017 Job: '/dev/shm/NAMD_4IEV/0/qmmm_0.input'
> started successfully
>
>
> MOPAC Job: "/dev/shm/NAMD_4IEV/0/qmmm_0.input" ended normally
> on Jan 14, 2017, at 23:09.
>
> FATAL ERROR: Error reading QM forces file. Not all data could be read!
> ------------- Processor 0 Exiting: Called CmiAbort ------------
> Reason: FATAL ERROR: Error reading QM forces file. Not all data could be
> read!
>

May I ask whether you have already applied NAMD 2.12-MOPAC to an open-shell
system (probably not, because of MOZYME)? At any rate, could you suggest
how to set up the system so that QM-MM runs? As I asked initially, what
does Charm++ expect to read? Forces in which format? Or is the above
Charm++ message a spurious error, with the problem lying elsewhere?
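My own guess, and it is only a guess, is that NAMD reads the forces back
from the GRADIENTS record of the .aux file, which in a successful run
should look something like this (illustrative values for a three-atom
fragment):

 GRADIENTS:KCAL/MOL/ANGSTROM[00009]=
  -1.234  0.567  2.890  0.012 -0.345  1.678  0.901 -2.345  0.123

If that record were absent, or filled with NaN like the BETA_EIGENVALUES
above, the read would presumably fail.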

Thanks

francesco pietra

On Tue, Jan 10, 2017 at 3:11 PM, Marcelo C. R. Melo <melomcr_at_gmail.com>
wrote:

> Hello!
>
> In NAMD, the keyword qmMult is only used when building the input for ORCA.
> When using MOPAC, all options and keywords are to be passed using the
> qmConfigLine keyword.
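> For instance, a triplet might be requested with something along these
> lines (an untested sketch, to be adapted to your system):
>
> qmConfigLine "PM7 XYZ T=2M 1SCF UHF MS=1 AUX LET GRAD QMMM GEO-OK"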
>
> Without more information, it seems that MOPAC is not accepting the
> requested spin-state. You could find more info in the output file inside
> the execution folder: "/dev/shm/NAMD_4ztt_C/0/"
> MOPAC will print its standard output and errors in the file:
> qmmm_0.input.aux
>
> There is nothing in NAMD that would check the options you passed in
> qmConfigLine and prevent you from using UHF or RHF, so there must be a
> conflict somewhere between the keywords and your specific system.
>
> I hope this could be of help,
> Marcelo
>
> ---
> Marcelo Cardoso dos Reis Melo
> PhD Candidate
> Luthey-Schulten Group
> University of Illinois at Urbana-Champaign
> crdsdsr2_at_illinois.edu
> +1 (217) 244-5983
>
> On 10 January 2017 at 01:56, Francesco Pietra <chiendarret_at_gmail.com>
> wrote:
>
>> Hello:
>>
>> PROBLEM:
>> I am trying to apply QM-MM with MOPAC2016 to a protein system with an
>> open-shell (unpaired-electron) QM part. The procedure ends without errors
>> when using the default configuration
>>
>> qmMult "1 x"
>>
>>
>> qmConfigLine "PM7 XYZ T=2M 1SCF MOZYME CUTOFF=9.0 AUX LET GRAD QMMM
>> GEO-OK"
>>
>> with "x" as the supposed total multiplicity (guessed in various modes,
>> even by supposing that electrons from different unpaired species will get
>> mixed). However any output from such a configugaration is highly suspicious
>> to my eyes.
>>
>> I tried inserting either "UHF" alone or "UHF MS=y" (with "y"
>> representing the supposed multiplicity in MOPAC's convention), commenting
>> out the "qmMult "1 x"" line or not. UHF was made either to replace 1SCF
>> or to follow it, i.e., variants along these lines:
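>> (sketches, not the exact lines)
>>
>> qmConfigLine "PM7 XYZ T=2M UHF MOZYME CUTOFF=9.0 AUX LET GRAD QMMM GEO-OK"
>> qmConfigLine "PM7 XYZ T=2M 1SCF UHF MS=y MOZYME CUTOFF=9.0 AUX LET GRAD QMMM GEO-OK"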
>>
>> In all cases, I got the error:
>>
>> MOPAC Job: "/dev/shm/NAMD_4ztt_C/0/qmmm_0.input" ended normally on Jan
>>> 9, 2017, at 22:21.
>>>
>>> FATAL ERROR: Error reading QM forces file. Not all data could be read!
>>> ------------- Processor 0 Exiting: Called CmiAbort ------------
>>> Reason: FATAL ERROR: Error reading QM forces file. Not all data could be
>>> read!
>>>
>>> Charm++ fatal error:
>>> FATAL ERROR: Error reading QM forces file. Not all data could be read!
>>>
>>
>> QUESTION:
>> I would appreciate any indication where to look for the QM forces file
>> that CHARMM++ expects. Or any other suggestion that could allow running
>> QM-MM/MOPAC under UHF.
>>
>> thanks
>>
>> francesco pietra
>>
>>
>
