The parallel command enables large-scale parallel
scripting when VMD has been compiled with MPI support.
In the absence of MPI support, the parallel command is still
available, but it operates as an MPI-enabled VMD would
when run on only a single node.
The parallel command allows large analysis scripts to be
easily adapted for execution on large clusters and supercomputers,
supporting simulation, analysis, and visualization operations that
would otherwise be too computationally demanding for conventional
workstations [14,35,26,40].
- nodename: Return the hostname of the current compute node.
- noderank: Return the MPI rank of the current compute node.
- nodecount: Return the total number of MPI ranks in the
currently running VMD job.
- allgather object:
Perform a parallel allgather operation across all MPI ranks,
taking the user-defined object as the input contributed by each
calling rank.
All VMD MPI ranks must participate in the allgather operation.
- allreduce user_reduction_procedure object:
Perform a parallel reduction across all MPI ranks by calling
the user-supplied reduction procedure, passing in a user-defined
object.
All VMD MPI ranks must participate in the allreduce operation.
- barrier: Perform a barrier synchronization across all
MPI ranks in the running VMD job.
- for startcount endcount user_worker_procedure object:
Invoke the VMD parallel work scheduler to run a computation over
all MPI ranks. The VMD work scheduler uses dynamic load balancing
to assign work indices to workers, calling the user-defined
worker callback procedure once for each work item in the range
from startcount to endcount.
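The subcommands above can be combined into a typical parallel analysis script. The sketch below is illustrative only: the callback names are hypothetical, and the exact signatures assumed here (a work index plus the user data argument for the worker procedure, and two partial values for the reduction procedure) should be checked against the VMD version in use.

```
# Minimal sketch of a parallel analysis run under the VMD parallel command.

# Hypothetical reduction callback: combines two partial results.
proc sum_pair { a b } {
    return [expr {$a + $b}]
}

# Hypothetical worker callback: processes one work item (e.g., one
# trajectory frame), identified by the work index assigned by the
# dynamic load-balancing scheduler.
proc analyze_frame { i userdata } {
    puts "[parallel nodename] (rank [parallel noderank]): work item $i"
}

puts "Running on [parallel nodecount] MPI ranks"
parallel barrier                          ;# synchronize all ranks first
parallel for 0 999 analyze_frame {}       ;# scheduler assigns indices 0..999
set total [parallel allreduce sum_pair 1] ;# sum a per-rank partial value
```

Because allgather, allreduce, barrier, and for are collective operations, every MPI rank in the VMD job must execute them; a script that reaches a collective call on only some ranks will deadlock.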