Re: Running NAMD in MPI environment

From: Jim Phillips (jim_at_ks.uiuc.edu)
Date: Wed Feb 20 2019 - 11:04:09 CST

It appears that your cluster admins have written their own queueing
system, or at least a wrapper for whatever they are using. Without the
documentation for the "thorq" command it is impossible to give any
specific suggestions.

Depending on the simulation you want to run, you probably want to start
by running a non-MPI multicore-CUDA NAMD binary (which you can download
from the NAMD website) on a node with NVIDIA GPUs. Be sure to allocate
and use all of the available CPU cores on the node by specifying the
+p<ncores> and +pemap 0-<ncores-1> options to NAMD. Only once this works
should you attempt to run on multiple nodes, if necessary.
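
For example, as a rough sketch only (this assumes a node with 16 CPU
cores, the Linux-x86_64-multicore-CUDA build downloaded from the NAMD
website, and "ex.conf" standing in for your own config file), the
command would look something like:

   ./namd2 +p16 +pemap 0-15 ex.conf > ex.log

Here +p16 tells NAMD to run 16 worker threads and +pemap 0-15 pins them
to cores 0 through 15; adjust both to the actual core count of the node.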

Jim

On Thu, 21 Feb 2019, 김민재 wrote:

> Hi
>
> I am relatively new to NAMD, and I read the user guide on running NAMD on
> Linux clusters. However, I need clarification on a few things.
>
> So, the supercomputer system I am accessing is a heterogeneous Linux
> cluster, and it uses InfiniBand. (It also has NVIDIA and AMD GPUs, so I was
> wondering which NAMD package I should use.) I need to follow this template
> to submit a job to its queuing system:
>
> thorq --add --mode [mode] [options] executable_file [arg1 arg2 ...]
>
> In the --mode option, I would specify the job type, which would be MPI, and
> the number of nodes I would like to use. However, I am currently stumped on
> what I should type in the "executable_file" spot. The supercomputer center
> people didn't seem to be familiar with NAMD, and they told me to insert a
> binary file and type in input files afterwards. So, my question is: should I
> type in "charmrun" and then "ex.conf"? Also, as asked above, which NAMD
> package should I use? The supercomputer user manual states that I can
> use OpenCL or CUDA to write programs before running them, and that the
> system supports MPI and SnuCL.
>
> I would really appreciate help on this.
> Thanks.
>
