Re: Hybrid MPI + Multicore + Cuda build

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Thu Aug 14 2014 - 02:13:32 CDT

Yes, it's possible, but the keyword isn't "multicore", which always indicates
a single-node build, but "SMP". So you need to build your charm++ with
network+smp support ;)
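
For example (the arch strings and options here are illustrative, adjust them
for your compiler and interconnect), an MPI-based SMP charm++ build could
look like:

   ./build charm++ mpi-linux-x86_64 smp --with-production

and NAMD would then be configured against that charm++ tree with CUDA
enabled, something like:

   ./config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64-smp --with-cuda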

BUT, unfortunately NAMD treats the +devices list as per-node. Therefore
it's not possible to assign different GPUs on different nodes for the same
job. I hacked the code so that the +devices list is indexed by the
process's global rank rather than by its node-local rank. That makes it
possible to assign an individual GPU to every process, which also wouldn't
require an SMP build. I can send you the changes I made.
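
To make the run syntax concrete (hypothetical launch lines for one of your
20-core/8-GPU nodes; the flags are the standard NAMD/charm++ ones, but verify
them against your build):

   # stock SMP build: 8 ranks, 2 worker threads each, GPUs assigned per node
   # (each SMP rank also spawns a communication thread, so mind the core count)
   mpirun -np 8 namd2 +ppn 2 +devices 0,1,2,3,4,5,6,7 input.namd

   # with the global-rank patch, a non-SMP build also works:
   # rank r picks device (r mod 8) from the list
   mpirun -np 8 namd2 +devices 0,1,2,3,4,5,6,7 input.namd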

Norman Geist.

> -----Original Message-----
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> Behalf Of Maxime Boissonneault
> Sent: Wednesday, August 13, 2014 18:47
> To: namd-l_at_ks.uiuc.edu
> Subject: namd-l: Hybrid MPI + Multicore + Cuda build
>
> Hi,
> I am managing a GPU cluster. The configuration of the nodes is:
> - 8 x K20 per node
> - 2 x 10-core sockets per node.
>
> The GPUs are run in exclusive process mode to prevent one user from
> hijacking the GPU allocated to another user.
>
> I would like to be able to run NAMD with CUDA, with 1 MPI rank per GPU
> and 2 or 3 threads per rank.
>
> Is it possible to build NAMD with such hybrid support? All I see on the
> website and in the documentation is either multicore or MPI, but never
> both.
>
> Thanks,
>
>
> --
> ---------------------------------
> Maxime Boissonneault
> Analyste de calcul - Calcul Québec, Université Laval
> Ph. D. en physique


This archive was generated by hypermail 2.1.6 : Thu Dec 31 2015 - 23:21:07 CST