
VMD Execution with MPI

When VMD has been compiled with support for MPI, it can be run in parallel on thousands of compute nodes at a time. There are presently a few noteworthy considerations that affect how it is compiled and launched, and how it behaves in a parallel environment compared with conventional interactive desktop usage.

Because MPI does not (yet) include a standardized binary interface, MPI support requires that VMD be compiled from source code on the target platform for each MPI implementation to be supported, e.g., separate builds for MPICH, OpenMPI, and/or other MPI versions. Unlike conventional platforms, for which the VMD development team provides binary distributions covering all mainstream computer and operating system combinations, prebuilt binaries are not possible in the general MPI case, so users wishing to use VMD with MPI must compile it from source code.
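As a minimal sketch of a from-source build with MPI enabled (the LINUXAMD64 target and the option list shown here are illustrative assumptions; the exact configure options depend on the platform and on the local MPI installation):

    # Illustrative VMD build with MPI enabled; adjust the target and
    # options for your system, and ensure the MPI compiler wrappers
    # are on the PATH before building.
    cd vmd-1.9.x
    ./configure LINUXAMD64 MPI TCL PTHREADS
    cd src
    make veryclean
    make

A build of this form would be repeated once per MPI implementation to be supported, e.g., one build against MPICH and another against OpenMPI.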

Some MPI implementations require special interactions with batch queueing systems or storage systems; in such cases it is necessary to modify the standard VMD launcher script to perform any extra steps or to invoke platform-specific parallel launch commands. By modifying the launch script, users can continue to use the familiar VMD launch syntax while gaining the benefits of parallel analysis with MPI. The VMD launch script has already been modified to automatically recognize cases where VMD is launched within the batch schedulers used on Cray XK and XC supercomputers such as NCSA Blue Waters, ORNL Titan, CSCS Piz Daint, and related systems, where the VMD executable must be launched using the 'aprun' or 'srun' utilities, depending on the scheduling system in use. An example batch script is sketched below.
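As a hedged illustration, a batch script on a Cray system might look like the following; the scheduler directives, node counts, and script name are site-specific placeholders rather than values from any particular machine:

    #!/bin/bash
    # Hypothetical PBS batch script; resource requests must match the
    # local site configuration.
    #PBS -l nodes=64:ppn=16
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    # The modified VMD launch script detects the batch environment and
    # invokes 'aprun' (or 'srun') itself, so the familiar launch
    # syntax still works unchanged:
    vmd -dispdev text -e analysis_script.tcl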

MPI-enabled builds of VMD can typically also be run on login nodes or interactive visualization nodes that may not be managed by the batch scheduler and/or may not support MPI, so long as MPI and the other shared libraries used on the compute nodes are also available on the login or interactive visualization nodes. To run an MPI-enabled VMD build outside of MPI, i.e., without 'mpirun', the environment variable VMDNOMPI can be set, which prevents VMD from calling any MPI APIs during the run, allowing it to behave like a normal non-MPI build. In most cases, this makes it possible to use a single VMD build on all of the different node types on a system.
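For example, in a bash shell on a login node (setting the variable to 1 is an assumption for illustration; the text above only requires that it be set):

    # Run the MPI-enabled binary as an ordinary serial VMD session,
    # without mpirun, by suppressing all MPI API calls:
    export VMDNOMPI=1
    vmd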

At present, the interactive console handling used by VMD's text interpreters conflicts with the behavior of some mainstream MPI implementations. When VMD is run in parallel using MPI, the interactive console is therefore disabled, and VMD instead reads commands only from script files specified with the "-e" command line argument.
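As a brief sketch, given a small script file, hello_mpi.tcl, using the 'parallel' Tcl subcommands (noderank, nodecount, nodename, barrier) provided by VMD's parallel scripting interface in MPI-enabled builds:

    # hello_mpi.tcl: each MPI rank reports its identity, then all
    # ranks synchronize at a barrier before exiting.
    puts "node [parallel noderank] of [parallel nodecount] on [parallel nodename]"
    parallel barrier
    quit

a parallel run on four ranks would be launched with a command along the lines of:

    mpirun -np 4 vmd -dispdev text -e hello_mpi.tcl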

Aside from the special launch behavior and the lack of an interactive text console, MPI runs of VMD support high-performance graphics, with full support for OpenGL via GLX or EGL, and ray tracing with Tachyon, OptiX, and OSPRay.
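For instance, a script passed via "-e" can ray trace a scene in batch with the built-in Tachyon renderer; the file names below are placeholders, and renderer names for GPU builds vary (e.g., TachyonLOptiXInternal in builds compiled with OptiX support):

    # render_scene.tcl: load a structure and render it with the
    # built-in Tachyon ray tracer; only rank 0 writes the image so
    # that ranks do not race on the output file.
    mol new molecule.pdb
    if {[parallel noderank] == 0} {
        render TachyonInternal scene.tga
    }
    parallel barrier
    quit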


