Re: AW: AW: AW: AW: FATAL ERROR: PME offload requires exactly one CUDA device per process.

From: Maxime Boissonneault (maxime.boissonneault_at_calculquebec.ca)
Date: Mon Aug 25 2014 - 07:34:34 CDT

I cannot say. I am not a NAMD user. I merely run benchmarks with samples
prepared by some of our users to determine the most efficient way to run
their samples on our cluster.

Maxime

On 2014-08-25 02:07, Norman Geist wrote:
> And do you notice any significant speedup due to the PME offload, that is,
> compared to 2.9?
> I'll need to update my GPU driver in order to try 2.10; I will maybe do it
> today.
>
> Norman Geist.
>
>> -----Original Message-----
>> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
>> behalf of Maxime Boissonneault
>> Sent: Friday, August 22, 2014 15:59
>> To: Norman Geist
>> Cc: Namd Mailing List
>> Subject: Re: AW: AW: AW: namd-l: FATAL ERROR: PME offload requires
>> exactly one CUDA device per process.
>>
>> I did. As far as I understand it, the PPN number is the number of
>> threads per *process*.
>>
>> In the multicore version, there is *one* process.
>>
>> The only way I managed to use 8 GPUs was by using the MPI version. In
>> that case, there are as many processes as there are MPI ranks, so I
>> can start it with
>> mpiexec -np 8 namd2 +ppn2 ...
>>
>> That will start 8 processes (one for each GPU), and 2 threads per
>> process.
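>>
>> Spelled out a bit more (the device list, thread counts and config file
>> name below are only placeholders), the two launch styles look roughly
>> like this:
>>
>>    # MPI-SMP build: 8 ranks, 2 threads each, one CUDA device per rank
>>    mpiexec -np 8 namd2 +ppn 2 +devices 0,1,2,3,4,5,6,7 mysim.namd
>>
>>    # multicore build: always a single process, so PME offload can only
>>    # drive one GPU
>>    namd2 +p16 +devices 0 mysim.namd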
>>
>>
>> Maxime
>>
>> On 2014-08-22 09:56, Norman Geist wrote:
>>> So did you try using EXACTLY 8 cores?
>>>
>>> Norman Geist.
>>>
>>>> -----Original Message-----
>>>> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
>>>> behalf of Maxime Boissonneault
>>>> Sent: Friday, August 22, 2014 15:23
>>>> To: Norman Geist
>>>> Cc: Namd Mailing List
>>>> Subject: Re: AW: AW: namd-l: FATAL ERROR: PME offload requires
>>>> exactly one CUDA device per process.
>>>>
>>>> There are 8 K20 GPUs per node.
>>>>
>>>> I was hoping to be able to use the multicore binary instead of the
>>>> MPI one.
>>>>
>>>> Maxime
>>>>
>>>> On 2014-08-22 09:21, Norman Geist wrote:
>>>>> How many GPUs do you have?
>>>>>
>>>>> Norman Geist.
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Maxime Boissonneault
>>>>>> [mailto:maxime.boissonneault_at_calculquebec.ca]
>>>>>> Sent: Friday, August 22, 2014 14:45
>>>>>> To: Norman Geist
>>>>>> Subject: Re: AW: namd-l: FATAL ERROR: PME offload requires exactly
>>>>>> one CUDA device per process.
>>>>>>
>>>>>> Threads do not work.
>>>>>>
>>>>>> So that means it is impossible to do PME offload on multiple GPUs
>>>>>> without MPI.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Maxime
>>>>>>
>>>>>> On 2014-08-22 08:44, Norman Geist wrote:
>>>>>>> I guess the message says it all. You need to start more
>>>>>>> processes.
>>>>>>> Maybe threads will also work, so try +ppn.
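>>>>>>> For example, something of this general form (the thread count,
>>>>>>> device list and config name here are only placeholders):
>>>>>>>
>>>>>>>    namd2 +ppn 8 +devices 0,1 mysim.namd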
>>>>>>>
>>>>>>> Norman Geist.
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu]
>>>>>>>> On behalf of Maxime Boissonneault
>>>>>>>> Sent: Friday, August 22, 2014 13:40
>>>>>>>> To: Namd Mailing List
>>>>>>>> Subject: namd-l: FATAL ERROR: PME offload requires exactly one
>>>>>>>> CUDA device per process.
>>>>>>>>
>>>>>>>> Hi,
>>>>>>>> I'm getting this error for some computations using the multicore
>>>>>>>> binary:
>>>>>>>> FATAL ERROR: PME offload requires exactly one CUDA device per
>>>>>>>> process.
>>>>>>>> I was wondering, is it a matter of how the problem is configured, or
>>>>>>>> just of the problem size? I understand I can use the MPI binary to
>>>>>>>> address that and use multiple GPUs, but I'm wondering if I can use
>>>>>>>> the multicore one (because it performs slightly better than the MPI
>>>>>>>> one).
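>>>>>>>>
>>>>>>>> For reference, the failing runs have roughly this form, i.e. a single
>>>>>>>> multicore process driving several GPUs (the thread count, device list
>>>>>>>> and config name are only placeholders):
>>>>>>>>
>>>>>>>>    namd2 +p16 +devices 0,1,2,3 mysim.namd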
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> ---------------------------------
>>>>>>>> Maxime Boissonneault
>>>>>>>> Computing analyst - Calcul Québec, Université Laval
>>>>>>>> Ph.D. in Physics
>>>>>>>
>>>>>> --
>>>>>> ---------------------------------
>>>>>> Maxime Boissonneault
>>>>>> Computing analyst - Calcul Québec, Université Laval
>>>>>> Ph.D. in Physics
>>>>>
>>>>>
>>>> --
>>>> ---------------------------------
>>>> Maxime Boissonneault
>>>> Computing analyst - Calcul Québec, Université Laval
>>>> Ph.D. in Physics
>>>
>>>
>>
>> --
>> ---------------------------------
>> Maxime Boissonneault
>> Computing analyst - Calcul Québec, Université Laval
>> Ph.D. in Physics
>
>

-- 
---------------------------------
Maxime Boissonneault
Computing analyst - Calcul Québec, Université Laval
Ph.D. in Physics
