Re: Re: Hybrid MPI + Multicore + Cuda build

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Thu Aug 14 2014 - 08:14:30 CDT

It looks like ./build is interactive if you start it without arguments.
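
To skip the prompts you can pass everything on the command line. As a rough
sketch only (the NAMD compile target, the --with-production flag and the CUDA
path below are assumptions, adjust them for your cluster), an MPI+SMP charm++
followed by a CUDA-enabled NAMD build would look about like this:

  # in the charm++ source tree: MPI machine layer with SMP support
  ./build charm++ mpi-linux-x86_64 smp --with-production

  # in the NAMD source tree: point config at that charm arch and at CUDA
  ./config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64-smp \
      --with-cuda --cuda-prefix /usr/local/cuda
  cd Linux-x86_64-g++
  make

With such a build you start one namd2 process per MPI rank via mpirun and give
each rank its worker threads with +ppn.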

Norman Geist.

> -----Original Message-----
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> Behalf Of Maxime Boissonneault
> Sent: Thursday, August 14, 2014 14:20
> To: Norman Geist
> Cc: Namd Mailing List
> Subject: Re: Re: namd-l: Hybrid MPI + Multicore + Cuda build
>
> Hi,
> That should not be a problem: if a job uses more than one node, we
> expect it to use those nodes to their full extent. In other words, I
> don't see a use case where you would want to use two half nodes instead
> of one full node.
>
> I've been given a pointer that building charm++ with
> ./build charm++ mpi-linux-x86_64 smp ....
>
> should do the trick. Is that correct?
>
>
> Maxime
>
>
> On 2014-08-14 03:13, Norman Geist wrote:
> > Yes it's possible, but the keyword isn't "multicore", which always
> > indicates a single-node build, but "SMP". So you need to build your
> > charm++ with network+smp support ;)
> >
> > BUT, unfortunately NAMD treats the +devices list as per-node. Therefore
> > it's not possible to assign different GPUs on different nodes for the
> > same job. I hacked the code to allow the +devices list to be indexed by
> > the process's global rank rather than its local rank on the node, so it
> > is possible to assign an individual GPU to every process, which also
> > wouldn't require an SMP build. I can send you the changes I made.
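> >
> > Just to illustrate the per-node behavior (the rank and thread counts and
> > the input file name below are only examples): with a stock SMP+CUDA
> > build you would start one process per GPU on a node like yours with
> >
> >   mpirun -np 8 namd2 +ppn 2 +devices 0,1,2,3,4,5,6,7 run.namd
> >
> > and every node in a multi-node job would reuse that same +devices list.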
> >
> > Norman Geist.
> >
> >> -----Original Message-----
> >> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> >> Behalf Of Maxime Boissonneault
> >> Sent: Wednesday, August 13, 2014 18:47
> >> To: namd-l_at_ks.uiuc.edu
> >> Subject: namd-l: Hybrid MPI + Multicore + Cuda build
> >>
> >> Hi,
> >> I am managing a GPU cluster. The configuration of the nodes is:
> >> - 8 x K20 per node
> >> - 2 x 10-core sockets per node.
> >>
> >> The GPUs run in exclusive process mode to prevent one user from
> >> hijacking the GPU allocated to another user.
> >>
> >> I would like to be able to run NAMD with CUDA, with 1 MPI rank per
> >> GPU and 2 or 3 threads.
> >>
> >> Is it possible to build NAMD with such hybrid support? All I see on
> >> the website and in the documentation is either multicore or MPI,
> >> but never both.
> >>
> >>
> >>
> >> Thanks,
> >>
> >>
> >> --
> >> ---------------------------------
> >> Maxime Boissonneault
> >> Computational Analyst - Calcul Québec, Université Laval
> >> Ph.D. in Physics
> >
> >
>
>
> --
> ---------------------------------
> Maxime Boissonneault
> Computational Analyst - Calcul Québec, Université Laval
> Ph.D. in Physics

