Re: Running NAMD on Forge (CUDA)

From: Aron Broom (broomsday_at_gmail.com)
Date: Tue Jul 10 2012 - 12:21:45 CDT

If it is truly just one node, you can use the multicore-CUDA version and
avoid the MPI/charmrun machinery. Still, it boils down to much the same
thing, I think. If you do what you've done below, you are running one job
with 12 CPU cores and all of the GPUs. If you don't specify +devices, NAMD
will automatically find the available GPUs, so I think the main benefit of
specifying them is when you are running more than one job on a node and
don't want the jobs sharing GPUs.
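
For concreteness, a multicore-CUDA launch on one node could look something
like the following (just a sketch; the config file names and the core
counts are assumptions on my part, not something you must use):

  namd2 +p12 +idlepoll sim.conf

and if you wanted two jobs on the same node with the GPUs split between
them:

  namd2 +p6 +idlepoll +devices 0,1,2 jobA.conf
  namd2 +p6 +idlepoll +devices 3,4,5 jobB.conf

The multicore build is launched directly, with no charmrun, and +devices
keeps each job on its own set of GPUs.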

I'm not sure you'll see great scaling across 6 GPUs for a single job, but
it would be great if you did.

~Aron

On Tue, Jul 10, 2012 at 1:14 PM, Gianluca Interlandi <
gianluca_at_u.washington.edu> wrote:

> Hi,
>
> I have a question concerning running NAMD on a CUDA cluster.
>
> NCSA Forge has for example 6 CUDA devices and 16 CPU cores per node. If I
> want to use all 6 CUDA devices in a node, how many processes is it
> recommended to spawn? Do I need to specify "+devices"?
>
> So, if for example I want to spawn 12 processes, do I need to specify:
>
> charmrun +p12 -machinefile $PBS_NODEFILE +devices 0,1,2,3,4,5 namd2 +idlepoll
>
> Thanks,
>
> Gianluca
>
> -----------------------------------------------------
> Gianluca Interlandi, PhD gianluca_at_u.washington.edu
> +1 (206) 685 4435
> http://artemide.bioeng.washington.edu/
>
> Research Scientist at the Department of Bioengineering
> at the University of Washington, Seattle WA U.S.A.
> -----------------------------------------------------
>
>

-- 
Aron Broom M.Sc
PhD Student
Department of Chemistry
University of Waterloo
