From: David McGiven (davidmcgivenn_at_gmail.com)
Date: Tue May 17 2011 - 05:34:21 CDT
It's strange ...
If I use : namd +idlepoll [...] +devices 0 it works.
If I use : namd +idlepoll [...] +devices x it doesn't :
Fatal error on PE 2> FATAL ERROR: CUDA error on Pe 2 (alf device 0):
All CUDA devices are in prohibited mode, of compute capability 1.0,
or otherwise unusable.
I have, of course, checked the COMPUTE mode rules for the devices, and
they are set to 0 (Default). I have also set the last two GPUs to modes
1 and 2, so NAMD has GPUs in every compute mode to choose from, but it
still gives the same error.
# nvidia-smi -s
COMPUTE mode rules for GPU 0: 0
COMPUTE mode rules for GPU 1: 0
COMPUTE mode rules for GPU 2: 1
COMPUTE mode rules for GPU 3: 2
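(For reference, a GPU can be switched back to Default with something
like "nvidia-smi -g 3 -c 0", although the exact flags differ between
driver versions, so check nvidia-smi -h first.)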
Do you know what might be happening?
On 17/05/2011, at 12:05, Jesper Sørensen wrote:
> Hi David,
> If you just leave out the "+devices x,x,x" it will use any available
> GPUs on the nodes that you have requested; at least that works for us,
> and we have 3 GPUs per node. I will say that we have compiled NAMD
> with MPI and I don't know if that makes a difference.
> -----Original Message-----
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> Behalf Of David McGiven
> Sent: 17 May 2011 11:59
> To: namd-l
> Subject: namd-l: Automatic GPU selection in NAMD ?
> Dear NAMD Users,
> Is there any way to tell NAMD to use a given number of GPUs without
> specifying the device numbers?
> Normally one would do :
> namd +idlepoll [...] +devices 0,1 (for 2 GPUs) or
> namd +idlepoll [...] +devices 0,1,2,3 (for 4 GPUs)
> But I would rather not state the specific device numbers; I would
> like to just say "use 2 GPUs".
> This is because the cluster users do not know which GPUs are already
> in use and which are not.
> I know the best setup is to have one GPU per machine, and then one
> would not have this kind of problem. But it is not like that now, and
> I would like to know if someone has found a simple solution, or if
> NAMD has an option for that (I haven't found it in the documentation).
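PS - For what it's worth, below is a rough sketch of the kind of wrapper
I have in mind for the original question: it picks N idle GPUs and prints
a comma-separated list that can be passed to +devices. It is only a
sketch: it assumes an nvidia-smi new enough to support the --query
options (older drivers print a fixed-format report that would need
different parsing), and the pick_free_gpus helper is just a name made up
here.

import subprocess

def pick_free_gpus(n):
    # Map each GPU's UUID to its index (assumes --query-gpu is supported).
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,uuid", "--format=csv,noheader"])
    index_by_uuid = {}
    for line in out.decode().strip().splitlines():
        idx, uuid = [field.strip() for field in line.split(",")]
        index_by_uuid[uuid] = idx
    # UUIDs of GPUs that already have a compute process attached.
    busy = set(subprocess.check_output(
        ["nvidia-smi", "--query-compute-apps=gpu_uuid",
         "--format=csv,noheader"]).decode().split())
    # Keep only the GPUs with no compute process running on them.
    free = [idx for uuid, idx in index_by_uuid.items() if uuid not in busy]
    if len(free) < n:
        raise RuntimeError("only %d idle GPU(s) found" % len(free))
    return ",".join(sorted(free, key=int)[:n])

if __name__ == "__main__":
    print(pick_free_gpus(2))

One would then launch NAMD with something like:

namd +idlepoll [...] +devices `python pick_gpus.py`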
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:57:08 CST