Re: Re: Re: FATAL ERROR: PME offload requires exactly one CUDA device per process.

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Fri Aug 22 2014 - 08:56:32 CDT

So did you try using EXACTLY 8 cores?
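
For example, something along these lines (a rough sketch; +p sets the
thread count for the multicore build, and config.namd is just a
placeholder for your input file):

   namd2 +p8 +devices 0,1,2,3,4,5,6,7 config.namd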

Norman Geist.

> -----Original Message-----
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> Behalf Of Maxime Boissonneault
> Sent: Friday, August 22, 2014 15:23
> To: Norman Geist
> Cc: Namd Mailing List
> Subject: Re: Re: Re: namd-l: FATAL ERROR: PME offload requires exactly
> one CUDA device per process.
>
> There are 8 K20 GPUs per node.
>
> I was hoping to be able to use the multicore binary instead of the
> MPI one.
>
> Maxime
>
> On 2014-08-22 09:21, Norman Geist wrote:
> > How many GPUs do you have?
> >
> > Norman Geist.
> >
> >
> >> -----Original Message-----
> >> From: Maxime Boissonneault [mailto:maxime.boissonneault_at_calculquebec.ca]
> >> Sent: Friday, August 22, 2014 14:45
> >> To: Norman Geist
> >> Subject: Re: Re: namd-l: FATAL ERROR: PME offload requires exactly
> >> one CUDA device per process.
> >>
> >> Threads do not work.
> >>
> >> So that means it is impossible to do PME offload on multiple
> >> GPUs without MPI.
> >>
> >> Thanks,
> >>
> >> Maxime
> >>
> >> On 2014-08-22 08:44, Norman Geist wrote:
> >>> I guess the message says it all: you need to start more
> >>> processes. Maybe threads will also work, so try +ppn.
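> >>> For instance, with an SMP build, something along these lines
> >>> (an untested sketch; launcher syntax depends on the Charm++
> >>> layer, and run.namd is just a placeholder):
> >>>
> >>>    mpirun -np 8 namd2 +ppn 2 +devices 0,1,2,3,4,5,6,7 run.namd
> >>>
> >>> i.e. eight processes, one GPU each, with +ppn worker threads
> >>> per process.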
> >>>
> >>> Norman Geist.
> >>>
> >>>> -----Original Message-----
> >>>> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu]
> >>>> On Behalf Of Maxime Boissonneault
> >>>> Sent: Friday, August 22, 2014 13:40
> >>>> To: Namd Mailing List
> >>>> Subject: namd-l: FATAL ERROR: PME offload requires exactly one
> >>>> CUDA device per process.
> >>>>
> >>>> Hi,
> >>>> I'm getting this error for some computations using the multicore
> >>>> binary:
> >>>> FATAL ERROR: PME offload requires exactly one CUDA device per
> >>>> process.
> >>>> I was wondering, is this a matter of how the problem is
> >>>> configured, or just of the problem size? I understand I can use
> >>>> the MPI binary to address this and use multiple GPUs, but I'm
> >>>> wondering if I can use the multicore one (because it performs
> >>>> slightly better than the MPI one).
> >>>>
> >>>>
> >>>> Thanks,
> >>>>
> >>>>
> >>>> --
> >>>> ---------------------------------
> >>>> Maxime Boissonneault
> >>>> Computational Analyst - Calcul Québec, Université Laval
> >>>> Ph.D. in Physics
> >>
> >> --
> >> ---------------------------------
> >> Maxime Boissonneault
> >> Computational Analyst - Calcul Québec, Université Laval
> >> Ph.D. in Physics
> >
>
>
> --
> ---------------------------------
> Maxime Boissonneault
> Computational Analyst - Calcul Québec, Université Laval
> Ph.D. in Physics

