From: Chris Taylor (cht_at_imipolex-g.com)
Date: Thu Apr 06 2023 - 21:14:08 CDT

I got VMD compiled against MPICH working as a parallel job submitted to SLURM. I'm using a submit script like this:

```
#!/bin/bash
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
module load gcc
module load ospray/1.8.0
module load mpich
module load vmd-mpich
mpirun /cm/shared/apps/vmd-mpich/current/vmd/vmd_LINUXAMD64 -dispdev text -e noderank-parallel-render2.tcl
```

The "vmd-mpich" module is just where I have my compiled binary and libraries stored. When it starts up it says:

```
Info) VMD for LINUXAMD64, version 1.9.4a55 (October 7, 2022)
Info) http://www.ks.uiuc.edu/Research/vmd/
Info) Email questions and bug reports to vmd_at_ks.uiuc.edu
Info) Please include this reference in published work using VMD:
Info) Humphrey, W., Dalke, A. and Schulten, K., `VMD - Visual
Info) Molecular Dynamics', J. Molec. Graphics 1996, 14.1, 33-38.
Info) -------------------------------------------------------------
Info) Initializing parallel VMD instances via MPI...
Info) Found 3 VMD MPI nodes containing a total of 12 CPUs and 0 GPUs:
Info) 0: 4 CPUs, 30.4GB (97%) free mem, 0 GPUs, Name: node001
Info) 1: 4 CPUs, 30.4GB (97%) free mem, 0 GPUs, Name: node002
Info) 2: 4 CPUs, 30.4GB (97%) free mem, 0 GPUs, Name: node003
```
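I haven't included noderank-parallel-render2.tcl here. As an illustration only, a minimal noderank-style render script (file names and the TachyonInternal renderer are placeholders, not my actual setup) might divide trajectory frames across the MPI ranks like this:

```
# Minimal sketch only -- not the actual noderank-parallel-render2.tcl.
# Each MPI rank renders every Nth frame of a shared trajectory.
set rank  [parallel noderank]
set nodes [parallel nodecount]

# Placeholder input files
mol new system.psf
mol addfile trajectory.dcd waitfor all

set nframes [molinfo top get numframes]
for {set f $rank} {$f < $nframes} {incr f $nodes} {
    animate goto $f
    # TachyonInternal is a placeholder; substitute your preferred renderer
    render TachyonInternal [format "frame.%05d.tga" $f]
}

# Wait for all ranks to finish before exiting
parallel barrier
quit
```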

When I built VMD I used this configure.options file:
```
LINUXAMD64 MPI IMD LIBOSPRAY LIBPNG ZLIB TCL PTHREADS
```
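
The build itself was just the usual VMD source build driven by that file; roughly (directory names are from my setup, and this assumes the plugin tree is already built):

```
cd vmd-1.9.4a55
./configure          # reads configure.options from this directory
cd src
make veryclean
make install
```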

Chris
cht_at_imipolex-g.com