From: René Hafner TUK (hamburge_at_physik.uni-kl.de)
Date: Sun Apr 09 2023 - 10:02:02 CDT

Hi all,

thanks for the hints; I got it working now!

The trouble was mainly in choosing the correct MPI implementation; I
could not get it to work with MPICH.

The cluster is currently a bit heterogeneous due to the transition from
EL7 to EL8, so not all modules are available on every host I can submit
jobs to.

I got it working using the Intel 2019 MPI compiler (plus GCC 9.3.0).
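
Roughly, the environment I load looks like the sketch below; the module
names are illustrative only, since they depend on the site's module tree:

```
# Module names are illustrative -- adjust to whatever your site provides:
module load gcc/9.3.0
module load intel/mpi/2019
```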

For Intel MPI to work under srun, one must additionally set the
I_MPI_PMI_LIBRARY environment variable. In my case:

export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so

Then I could successfully start VMD in MPI mode with srun:

cores=48
export VMDFORCECPUCOUNT=$cores
export WKFFORCECPUCOUNT=$cores
export RTFORCECPUCOUNT=$cores

tasks=8
time srun --ntasks-per-node=1 --x11=all -v --cpus-per-task=48 \
    -n $tasks -N $tasks -p mpi3 \
    vmd194a57MPICUDA -dispdev text -e testscript_vmd_parallel_rendering.tcl
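
For anyone who prefers a batch job over an interactive srun, a minimal
sbatch wrapper for the same run could look like the sketch below
(partition name, binary name, and library path are just my values;
adjust to your cluster, and note this is a sketch, not a tested script):

```
#!/bin/bash
#SBATCH --partition=mpi3
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=48

# Tell VMD's worker/rendering threads how many CPUs to use per task
export VMDFORCECPUCOUNT=$SLURM_CPUS_PER_TASK
export WKFFORCECPUCOUNT=$SLURM_CPUS_PER_TASK
export RTFORCECPUCOUNT=$SLURM_CPUS_PER_TASK

# Intel MPI needs to know which PMI library SLURM provides
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so

srun vmd194a57MPICUDA -dispdev text -e testscript_vmd_parallel_rendering.tcl
```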

Kind regards

René Hafner

On 4/7/2023 4:14 AM, Chris Taylor wrote:
> I got VMD compiled with mpich to work as a parallel job submitted to SLURM. I'm using a submit script like this:
>
> ```
> #!/bin/bash
> #SBATCH --nodes=3
> #SBATCH --ntasks-per-node=1
> #SBATCH --cpus-per-task=4
> module load gcc
> module load ospray/1.8.0
> module load mpich
> module load vmd-mpich
> mpirun /cm/shared/apps/vmd-mpich/current/vmd/vmd_LINUXAMD64 -dispdev text -e noderank-parallel-render2.tcl
> ```
>
> The "vmd-mpich" module is just where I have my compiled binary and libraries stored. When it starts up it says:
>
> ```
> Info) VMD for LINUXAMD64, version 1.9.4a55 (October 7, 2022)
> Info) http://www.ks.uiuc.edu/Research/vmd/
> Info) Email questions and bug reports to vmd_at_ks.uiuc.edu
> Info) Please include this reference in published work using VMD:
> Info) Humphrey, W., Dalke, A. and Schulten, K., `VMD - Visual
> Info) Molecular Dynamics', J. Molec. Graphics 1996, 14.1, 33-38.
> Info) -------------------------------------------------------------
> Info) Initializing parallel VMD instances via MPI...
> Info) Found 3 VMD MPI nodes containing a total of 12 CPUs and 0 GPUs:
> Info) 0: 4 CPUs, 30.4GB (97%) free mem, 0 GPUs, Name: node001
> Info) 1: 4 CPUs, 30.4GB (97%) free mem, 0 GPUs, Name: node002
> Info) 2: 4 CPUs, 30.4GB (97%) free mem, 0 GPUs, Name: node003
> ```
>
> When I built VMD I used this configure.options file:
> ```LINUXAMD64 MPI IMD LIBOSPRAY LIBPNG ZLIB TCL PTHREADS```
>
> Chris
> cht_at_imipolex-g.com
>

-- 
Dipl.-Phys. René Hafner
TU Kaiserslautern
Germany