Re: MPIRUN SLURM SCRIPT

From: Zeki Zeybek (zeki.zeybek_at_bilgiedu.net)
Date: Tue May 09 2017 - 10:58:29 CDT

Joshua, thank you for your detailed answer. I am now able to run the simulation as I want, and I can see the increase in performance. However, I did not use mpirun or anything; I just used the script I posted before your message, with charmrun.

The missing point is that, even though I did not use mpirun, with charmrun I was able to use 4 nodes and the cores on each of them. Does that mean I still cannot benefit from parallel work?
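
(For reference only, and purely as a sketch rather than something taken from this thread: with a network-capable (non-multicore) charm++ build of NAMD, a multi-node charmrun launch inside a SLURM job is usually driven by a nodelist file built from the nodes SLURM assigned. The file name and the use of scontrol below are assumptions, not part of the original scripts.)

# Build a charmrun nodelist from the nodes SLURM allocated to this job
echo "group main" > nodelist.$SLURM_JOB_ID
for host in $(scontrol show hostnames "$SLURM_NODELIST"); do
    echo "host $host" >> nodelist.$SLURM_JOB_ID
done

# Launch namd2 with 80 worker processes spread over those nodes
charmrun $NAMD_DIR/namd2 ++nodelist nodelist.$SLURM_JOB_ID +p80 namd_input.conf > namd_charmrun_output.log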

________________________________
From: Vermaas, Joshua <Joshua.Vermaas_at_nrel.gov>
Sent: 09 May 2017 18:48:25
To: namd-l_at_ks.uiuc.edu; Zeki Zeybek; Susmita Ghosh
Subject: Re: namd-l: MPIRUN SLURM SCRIPT

Hi Zeki,

ntasks-per-node is how many MPI processes to start on a single node; ntasks is the total number of processes running across all nodes. 4 nodes * 24 MPI processes per node = 96 cores in Susmita's example, which would give you 96 tasks. Originally, you were asking for 4 nodes to execute a serial process (ntasks=1), so the scheduler yelled at you, since that is a big waste of resources.
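
(As a concrete restatement of that arithmetic, using Susmita's numbers and nothing else, the following two ways of writing the request describe the same 96-task allocation:)

#SBATCH --nodes=4
#SBATCH --ntasks-per-node=24    # 24 MPI processes on each node

# is the same total request as

#SBATCH --nodes=4
#SBATCH --ntasks=96             # 4 nodes * 24 processes/node = 96 tasks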

If you are using mpirun, you do need to specify the number of tasks beforehand. So for you, it might be something like this:

#!/bin/bash
#SBATCH --clusters=AA
#SBATCH --account=AA
#SBATCH --partition=AA
#SBATCH --job-name=AA
#SBATCH --nodes=4
#SBATCH --ntasks=80
#SBATCH --time=120:00:00

source /truba/sw/centos6.4/comp/intel/bin/compilervars.sh intel6i4
source /truba/sw/centos6.4/lib/impi/4.1.1.036/bin64/mpivars.sh
module load centos6.4/app/namd/2.9-multicore
module load centos6.4/lib/impi/4.1.1

echo "SLURM_NODELIST $SLURM_NODELIST"
echo "NUMBER OF CORES $SLURM_NTASKS"

# single-node run: $NAMD_DIR/namd2 +p$OMP_NUM_THREADS namd_input.conf > namd_multinode_output.log

mpirun -np 80 $NAMD_DIR/namd2 namd_input.conf > namd_multinode_output.log

Unfortunately it sometimes isn't that clear. Some slurm machines require using srun instead of mpirun, and that is something that is specific to the supercomputer configuration. Susmita provided one example configuration. What I found helpful when first dealing with slurm was the manual page. NERSC has what I think is an especially good overview of common arguments and what they actually mean (http://www.nersc.gov/users/computational-systems/edison/running-jobs/batch-jobs/), although most supercomputing centers provide their own examples of how to run stuff.
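
(On machines where srun is the required launcher, the mpirun line in the script above would typically be replaced by something like the following; the exact flags are site-specific, so treat this as a sketch rather than a tested command:)

srun -n 80 $NAMD_DIR/namd2 namd_input.conf > namd_multinode_output.log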
-Josh

On 05/09/2017 05:06 AM, Zeki Zeybek wrote:

What does ntasks stand for?

________________________________
From: Susmita Ghosh <g.susmita6_at_gmail.com>
Sent: 09 May 2017 13:39:23
To: namd-l_at_ks.uiuc.edu; Zeki Zeybek
Subject: Re: namd-l: MPIRUN SLURM SCRIPT

Dear Zeki,

If you are running a single job on multiple nodes, then I think you should use "export OMP_NUM_THREADS=1". I have given an example in the following SBATCH script:

#!/bin/sh
#SBATCH -A Name
#SBATCH -N 4
#SBATCH --ntasks-per-node=24
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out
#SBATCH --time=1-02:00:00
module load slurm
### To launch an MPI job through srun, the user has to export the PMI library below ###
export I_MPI_PMI_LIBRARY=/cm/shared/apps/slurm/15.08.13/lib64/libpmi.so
export LD_LIBRARY_PATH=/home/cdac/jpeg/lib:$LD_LIBRARY_PATH
#export PATH=/home/cdac/lammps_10oct2014/bin:$PATH
export OMP_NUM_THREADS=1
time srun -n 96 namd2 eq_NVT_run1.conf > eq_NVT_run1.log

Susmita Ghosh,
Research Scholar,
Department of Physics,
IIT Guwahati, India.

On Tue, May 9, 2017 at 3:38 PM, Zeki Zeybek <zeki.zeybek_at_bilgiedu.net> wrote:

Hi!

I am trying to come up with a SLURM script for my simulation, but I have failed miserably. The point is that on the university supercomputer a single node has 20 cores. For my single job, let's say aaaa.conf, I want to use 80 cores, but to allocate that many cores I need 4 nodes (4*20=80). However, SLURM gives an error if I try to run one task on multiple nodes. How can I overcome this situation?

#!/bin/bash
#SBATCH --clusters=AA
#SBATCH --account=AA
#SBATCH --partition=AA
#SBATCH --job-name=AA
#SBATCH --nodes=4
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=80
#SBATCH --time=120:00:00

source /truba/sw/centos6.4/comp/intel/bin/compilervars.sh intel6i4
source /truba/sw/centos6.4/lib/impi/4.1.1.036/bin64/mpivars.sh
module load centos6.4/app/namd/2.9-multicore
module load centos6.4/lib/impi/4.1.1

export OMP_NUM_THREADS=20
echo "SLURM_NODELIST $SLURM_NODELIST"
echo "NUMBER OF CORES $SLURM_NTASKS"

# single-node run: $NAMD_DIR/namd2 +p$OMP_NUM_THREADS namd_input.conf > namd_multinode_output.log

# multi-node run: mpirun $NAMD_DIR/namd2 +p$OMP_NUM_THREADS namd_input.conf > namd_multinode_output.log

THE ABOVE SCRIPT GIVES AN ERROR saying that --ntasks=1 is not valid. However, if I set --ntasks=4 and --cpus-per-task=20 it works, but it does not enhance the run speed. (Note: each user can use at most 80 cores on the supercomputer.)
