From: Mitchell Gleed
Date: Wed Nov 12 2014 - 12:57:16 CST

Hi Abhishek,

This question is better directed to your supercomputer administrator(s). I
haven't really used GPUs with NAMD, but my university does use SLURM for
job scheduling.

Here is a general submission script that has given me the best results for
a 1-node multicore build on the clusters I run my simulations on (with a
line added for GPUs, for your case):

#!/bin/bash
#SBATCH --time=24:00:00 # walltime
#SBATCH --ntasks=16 # number of processor cores (i.e. tasks)
#SBATCH --nodes=1 # number of nodes
#SBATCH --gres=gpu:1
#SBATCH --mem-per-cpu=1024M # memory per CPU core
#SBATCH -J "jobname" # job name

module purge
module load icc/14.0.3
${namdexec} +setcpuaffinity \
  `numactl --show | awk '/^physcpubind/ {printf "+p%d +pemap %d",(NF-1),$2; for(i=3;i<=NF;++i){printf ",%d",$i}}'` \
  $configfile > $logfile

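To make the backtick expression less opaque: on a node where numactl --show
reports, say, physcpubind: 0 1 2 3 (a hypothetical 4-core layout), the awk
one-liner prints +p4 +pemap 0,1,2,3, i.e. one NAMD worker thread per physical
core, each pinned to its own core, so the command that actually runs is:

# Equivalent expanded command on the hypothetical 4-core node above
${namdexec} +setcpuaffinity +p4 +pemap 0,1,2,3 $configfile > $logfile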

Request resources (processors, memory, etc.) in the lines labeled #SBATCH,
according to the setup of the cluster you want to run on.

You may need to load the specific compiler(s) used to compile NAMD
(in my case, icc/14.0.3). Set the variables for your NAMD configuration
file (configfile), the file to store standard output (logfile), and the
NAMD executable you are using (namdexec). I'm not sure whether the
particular launching scheme I use will work for you, so this is where
testing different options comes in, and where a supercomputing
administrator may be able to help, too.
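
For example, the variables near the top of the script might be set like this
(the paths and file names here are hypothetical, just to show the pattern):

# Hypothetical paths/names -- adjust to your own installation and inputs.
namdexec=$HOME/apps/NAMD_2.9_Linux-x86_64-multicore-CUDA/namd2
configfile=equil.conf   # your NAMD configuration file
logfile=equil.log       # where standard output will be written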

My university has a handy SLURM submission script generator you could try
out. The man pages for slurm and sbatch can be helpful, too.
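
Once you've saved the script (as, say, namd_job.sh; the name is arbitrary),
submitting the job and keeping an eye on it looks like this:

sbatch namd_job.sh    # submit; SLURM prints the assigned job ID
squeue -u $USER       # list your pending and running jobs
scancel <jobid>       # cancel a job if something goes wrong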

Good luck

On Wed, Nov 5, 2014 at 10:30 PM, Abhishek TYAGI wrote:

> Hi,
> Recently the server at my university was updated to a new configuration,
> and we have to use SLURM to run our programs. I was previously using the
> NAMD multicore-CUDA version on this GPU cluster. But as they have upgraded
> it and we need to execute our jobs using SLURM, I am not able to start
> NAMD, as I don't understand how to do it. The SLURM job script is:
> #!/bin/bash
> #SBATCH -J slurm_job #Slurm job name
> # Set the maximum runtime, uncomment if you need it
> ##SBATCH -t 72:00:00 #Maximum runtime of 72 hours
> # Enable email notifications when the job begins and ends, uncomment if you need it
> ##SBATCH --mail-user= #Update your email address
> ##SBATCH --mail-type=begin
> ##SBATCH --mail-type=end
> # Choose partition (queue), for example, choose gpu-k
> #SBATCH -p gpu-k
> # To use 2 cpu cores in a node, uncomment the statement below
> ##SBATCH -N 1 -n 2
> # To use 1 cpu core and 1 gpu device, uncomment the statement below
> ##SBATCH -N 1 -n 1 --gres=gpu:1
> # Setup runtime environment if necessary
> # Or you can source ~/.bashrc or ~/.bash_profile
> source ~/.bash_profile
> # Go to the job submission directory and run your application
> cd $HOME/apps/slurm
> ./your_gpu_application
> 1. I want to know which version of NAMD I should use for this cluster.
> 2. How do I edit the script (above) to run a NAMD job with SLURM?
> 3. I have searched but could not find a suitable answer for this.
> I know you people are experts at dealing with this kind of problem; please
> let me know how to proceed.
> Thanks in advance,
> Abhishek
