From: Luis Cebamanos (luiceur_at_gmail.com)
Date: Fri Jan 14 2022 - 10:57:51 CST
Hello,
I am trying to understand how NAMD pins GPUs. I am currently running on 8
nodes, each with 40 cores and 4 GPUs:
charmrun ++verbose ++remote-shell ssh ++nodelist nodeListFile ++p 288
++ppn 9 namd2 +devices 0,1,2,3 +isomalloc_sync +setcpuaffinity +idlepoll
+pemap 1-9,11-19,21-29,31-39 +commap 0,10,20,30 stmvs.namd
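My understanding of the arithmetic behind these flags (a sketch of my own reading of the charmrun options, so please correct me if I have it wrong) is that ++p 288 total PEs at ++ppn 9 PEs per process gives 32 processes, i.e. 4 per node, one per device in +devices 0,1,2,3:

```shell
# My reading of the flags above (not authoritative):
# ++p 288 worker PEs, ++ppn 9 PEs per process -> 32 processes total;
# spread over 8 nodes that is 4 processes per node, one per GPU.
P=288; PPN=9; NODES=8
PROCS=$((P / PPN))                  # total processes launched
PROCS_PER_NODE=$((PROCS / NODES))   # processes (and GPUs used) per node
echo "$PROCS processes, $PROCS_PER_NODE per node"   # prints: 32 processes, 4 per node
```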
My nodeListFile looks like:
group main
host a10 ++cpus 40 ++shell ssh
host a11 ++cpus 40 ++shell ssh
host a12 ++cpus 40 ++shell ssh
host a13 ++cpus 40 ++shell ssh
host a14 ++cpus 40 ++shell ssh
host a15 ++cpus 40 ++shell ssh
host a16 ++cpus 40 ++shell ssh
host a17 ++cpus 40 ++shell ssh
And the stdout info I get says:
Pe 41 physical rank 5 will use CUDA device of pe 36
Pe 38 physical rank 2 will use CUDA device of pe 36
Pe 40 physical rank 4 will use CUDA device of pe 36
Pe 44 physical rank 8 will use CUDA device of pe 36
Pe 43 physical rank 7 will use CUDA device of pe 36
Pe 6 physical rank 6 will use CUDA device of pe 8
Pe 42 physical rank 6 will use CUDA device of pe 36
Pe 4 physical rank 4 will use CUDA device of pe 8
Pe 2 physical rank 2 will use CUDA device of pe 8
Pe 39 physical rank 3 will use CUDA device of pe 36
Pe 3 physical rank 3 will use CUDA device of pe 8
Pe 5 physical rank 5 will use CUDA device of pe 8
Pe 37 physical rank 1 will use CUDA device of pe 36
Pe 7 physical rank 7 will use CUDA device of pe 8
Pe 25 physical rank 25 will use CUDA device of pe 18
Pe 36 physical rank 0 binding to CUDA device 0 on a414: 'Tesla
V100-SXM2-32GB' Mem: 32510MB Rev: 7.0 PCI: 0:60:0
Pe 26 physical rank 26 will use CUDA device of pe 18
Pe 22 physical rank 22 will use CUDA device of pe 18
Pe 1 physical rank 1 will use CUDA device of pe 8
Pe 0 physical rank 0 will use CUDA device of pe 8
Pe 16 physical rank 16 will use CUDA device of pe 9
Pe 15 physical rank 15 will use CUDA device of pe 9
Pe 24 physical rank 24 will use CUDA device of pe 18
Pe 50 physical rank 14 will use CUDA device of pe 45
Pe 23 physical rank 23 will use CUDA device of pe 18
Pe 12 physical rank 12 will use CUDA device of pe 9
Pe 47 physical rank 11 will use CUDA device of pe 45
Pe 52 physical rank 16 will use CUDA device of pe 45
Pe 21 physical rank 21 will use CUDA device of pe 18
Pe 11 physical rank 11 will use CUDA device of pe 9
Pe 48 physical rank 12 will use CUDA device of pe 45
Pe 20 physical rank 20 will use CUDA device of pe 18
Pe 34 physical rank 34 will use CUDA device of pe 27
Pe 8 physical rank 8 binding to CUDA device 0 on a412: 'Tesla
V100-SXM2-32GB' Mem: 32510MB Rev: 7.0 PCI: 0:60:0
Pe 14 physical rank 14 will use CUDA device of pe 9
Pe 49 physical rank 13 will use CUDA device of pe 45
Pe 30 physical rank 30 will use CUDA device of pe 27
Pe 13 physical rank 13 will use CUDA device of pe 9
Pe 29 physical rank 29 will use CUDA device of pe 27
Pe 62 physical rank 26 will use CUDA device of pe 54
Pe 17 physical rank 17 will use CUDA device of pe 9
Pe 32 physical rank 32 will use CUDA device of pe 27
Pe 19 physical rank 19 will use CUDA device of pe 18
Pe 60 physical rank 24 will use CUDA device of pe 54
Pe 53 physical rank 17 will use CUDA device of pe 45
Pe 58 physical rank 22 will use CUDA device of pe 54
Pe 35 physical rank 35 will use CUDA device of pe 27
Pe 33 physical rank 33 will use CUDA device of pe 27
Pe 10 physical rank 10 will use CUDA device of pe 9
Pe 51 physical rank 15 will use CUDA device of pe 45
Pe 61 physical rank 25 will use CUDA device of pe 54
Pe 31 physical rank 31 will use CUDA device of pe 27
Pe 57 physical rank 21 will use CUDA device of pe 54
Pe 68 physical rank 32 will use CUDA device of pe 63
Pe 46 physical rank 10 will use CUDA device of pe 45
Pe 70 physical rank 34 will use CUDA device of pe 63
Pe 28 physical rank 28 will use CUDA device of pe 27
Pe 69 physical rank 33 will use CUDA device of pe 63
Pe 56 physical rank 20 will use CUDA device of pe 54
Pe 59 physical rank 23 will use CUDA device of pe 54
Pe 71 physical rank 35 will use CUDA device of pe 63
Pe 18 physical rank 18 binding to CUDA device 2 on a412: 'Tesla
V100-SXM2-32GB' Mem: 32510MB Rev: 7.0 PCI: 0:88:0
Pe 55 physical rank 19 will use CUDA device of pe 54
Pe 66 physical rank 30 will use CUDA device of pe 63
Pe 65 physical rank 29 will use CUDA device of pe 63
Pe 67 physical rank 31 will use CUDA device of pe 63
Pe 9 physical rank 9 binding to CUDA device 1 on a412: 'Tesla
V100-SXM2-32GB' Mem: 32510MB Rev: 7.0 PCI: 0:61:0
Pe 45 physical rank 9 binding to CUDA device 1 on a414: 'Tesla
V100-SXM2-32GB' Mem: 32510MB Rev: 7.0 PCI: 0:61:0
Pe 27 physical rank 27 binding to CUDA device 3 on a412: 'Tesla
V100-SXM2-32GB' Mem: 32510MB Rev: 7.0 PCI: 0:89:0
Pe 64 physical rank 28 will use CUDA device of pe 63
Pe 54 physical rank 18 binding to CUDA device 2 on a414: 'Tesla
V100-SXM2-32GB' Mem: 32510MB Rev: 7.0 PCI: 0:88:0
Pe 63 physical rank 27 binding to CUDA device 3 on a414: 'Tesla
V100-SXM2-32GB' Mem: 32510MB Rev: 7.0 PCI: 0:89:0
The question is: am I utilizing all the GPUs on all the nodes? From this
output I do not seem to be using more than 2 nodes' worth of GPUs, i.e.
8 GPUs. How can I use all of them?
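One way I have been checking is to count, per host, the distinct "binding to CUDA device" lines in the stdout. The heredoc below just holds a few sample lines copied from the output above; in practice I would point grep at the captured log file instead (the log file name would be whatever the stdout was redirected to):

```shell
# Count how many CUDA devices each host reports binding.
# Sample lines are copied from the NAMD stdout quoted above;
# replace the heredoc with the real captured log in practice.
grep 'binding to CUDA device' <<'EOF' | awk '{print $12}' | tr -d ':' | sort | uniq -c
Pe 36 physical rank 0 binding to CUDA device 0 on a414: 'Tesla
Pe 8 physical rank 8 binding to CUDA device 0 on a412: 'Tesla
Pe 18 physical rank 18 binding to CUDA device 2 on a412: 'Tesla
Pe 9 physical rank 9 binding to CUDA device 1 on a412: 'Tesla
Pe 45 physical rank 9 binding to CUDA device 1 on a414: 'Tesla
Pe 27 physical rank 27 binding to CUDA device 3 on a412: 'Tesla
Pe 54 physical rank 18 binding to CUDA device 2 on a414: 'Tesla
Pe 63 physical rank 27 binding to CUDA device 3 on a414: 'Tesla
EOF
```

With the sample lines this reports 4 bindings on a412 and 4 on a414, and no other hosts, which is what makes me suspect only two nodes' GPUs are in use.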
Thanks for the help
This archive was generated by hypermail 2.1.6 : Tue Dec 13 2022 - 14:32:44 CST