Re: [External] running NAMD with Slurm

From: Bennion, Brian
Date: Wed Jan 09 2019 - 11:57:54 CST


If you use the MPI-aware NAMD executable, all the searching for hostnames can be handled by srun, just using:

srun -N 4 -n 96 namd2 blah.....
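A minimal batch-script sketch along these lines (the partition is omitted and the module name, input file, and log file are placeholders, not taken from this thread):

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks=96
# Assumes an MPI-aware namd2 build is on PATH after the module load.
# srun obtains the node list directly from Slurm, so no nodelist
# file or charmrun invocation is needed.
module load namd
srun -N 4 -n 96 namd2 input.namd > output.log
```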


From: <> on behalf of Sharp, Kim <>
Sent: Wednesday, January 9, 2019 9:15:19 AM
To:; Seibold, Steve Allan
Subject: Re: [External] namd-l: running NAMD with Slurm

Details might differ, as I am running a different version of NAMD and the hardware is obviously different.

Here is a Slurm script we use on our cluster:

#SBATCH --mail-type=ALL
#SBATCH --partition=namd
#SBATCH --nodes=4
#SBATCH --ntasks=96
echo 'nodes: ' $SLURM_NNODES 'tasks/node: ' $SLURM_TASKS_PER_NODE 'total tasks: ' $SLURM_NTASKS
WORKDIR=/home/sharp/work/c166/
module load namd
make_namd_nodelist
charmrun ++nodelist nodelist.$SLURM_JOBID ++p $SLURM_NTASKS `which namd2` +setcpuaffinity c166_s2000.conf > sharp_3Dec2018_.log

NAMD is launched via charmrun with the
  ++nodelist hostnamefile
option.

This hostnamefile contains lines like:

host node023 ++cpus 2
host node024 ++cpus 2

for however many nodes you requested. The file is generated at the time Slurm starts your job, because only at that time does Slurm know the names of the nodes it is allocating to the job. You get this node list by executing the Slurm command

scontrol show hostnames

in your job script and capturing/reformatting the output into the nodelist file. Here we have a script, make_namd_nodelist, that does this; you can see it is executed right before the namd command.
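A hypothetical sketch of such a helper (the actual make_namd_nodelist is site-specific and not shown in the email): it reformats hostnames, one per line on stdin, into charmrun "host" lines, with 2 CPUs per host matching the example nodelist above.

```shell
#!/bin/bash
# Sketch of a make_namd_nodelist-style helper (name and details assumed,
# not from the email): turn a list of hostnames into charmrun host lines.
make_namd_nodelist() {
    # Read one hostname per line on stdin; emit "host <name> ++cpus 2".
    while read -r host; do
        [ -n "$host" ] && echo "host $host ++cpus 2"
    done
}
```

In a job script its output would be redirected into the nodelist file, e.g. `scontrol show hostnames | make_namd_nodelist > nodelist.$SLURM_JOBID`.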


On 1/9/19 10:54 AM, Seibold, Steve Allan wrote:
> Thanks Kim for your response. Here is my Slurm script as follows:
> =====================================================
> #!/bin/bash
> #SBATCH --job-name=Seibold # Job name
> #SBATCH --partition=mridata # Partition Name (Required)
> #SBATCH --mail-type=END,FAIL # Mail events (NONE, BEGIN, END, FAIL, ALL)
> #SBATCH # Where to send mail
> #SBATCH --nodes=2 --ntasks-per-node=15 --mem-per-cpu=250M --time=12:00:00
> #SBATCH --output=md7_3BP_%j.log # Standard output and error log
> pwd; hostname; date
> #module load namd/2.12_multicore
> echo "Running on $SLURM_CPUS_ON_NODE cores"
> ~/NAMD2/NAMD_2.13_Linux-x86_64/namd2 md7_3BP.namd
> ===========================================================
> Thanks, Steve

Kim Sharp, Ph.D,
Dept. of Biochemistry & Biophysics
Chair, Biochemistry & Molecular Biophysics Graduate Group
Perelman School of Medicine at the University of Pennsylvania
805A Stellar Chance
Philadelphia, PA 19104

This archive was generated by hypermail 2.1.6 : Tue Dec 31 2019 - 23:20:25 CST