From: Yinglong Miao (ylmiao.indiana_at_gmail.com)
Date: Tue Feb 10 2009 - 15:21:34 CST
Thanks for your reply!
Yes, "ssh $node vmd ..." works for running vmd on a specific node.
One note: you may need to change the working directory after logging
in to the node, so that a batch script can hold all the commands and
be executed as "ssh $node sh $script.sh".
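For illustration, the ssh pattern above could be wrapped like this (a minimal Python sketch; the node name, working directory, and script name are hypothetical placeholders):

```python
import subprocess

def remote_script_cmd(node, workdir, script):
    """Build an argv list that runs `script` on `node` from `workdir`,
    following the "ssh $node sh $script.sh" pattern described above."""
    # cd first so the batch script runs in the intended directory
    return ["ssh", node, "cd {} && sh {}".format(workdir, script)]

# hypothetical example:
cmd = remote_script_cmd("node05", "/scratch/myrun", "analysis.sh")
# subprocess.run(cmd, check=True)  # uncomment on a real cluster
```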
As for running an MPI program inside another MPI program, which is
what you are describing and what Peter also mentioned in a follow-up
message: I have called a Fortran MPI program from inside a parallel
NAMD simulation before, and it worked most of the time simply with
"exec mpirun -np $nprocessors -machinefile $machines $prog ...".
Occasionally it emits error messages like "child process exited
abnormally" when the embedded MPI program exits. As you said, this
depends on the runtime environment.
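As a sketch of how a wrapper might tolerate that abnormal-exit message rather than abort (Python's subprocess stands in for the Tcl `exec`; the real argv would be the mpirun command above):

```python
import subprocess

def run_embedded(argv):
    """Run the embedded program; report failure instead of aborting,
    since the embedded mpirun may exit abnormally even after doing
    its work, as described above."""
    try:
        subprocess.run(argv, check=True)
        return True
    except subprocess.CalledProcessError:
        return False

# demo with plain shell utilities standing in for
# "mpirun -np $nprocessors -machinefile $machines $prog ...":
run_embedded(["true"])   # clean exit
run_embedded(["false"])  # nonzero exit, caught instead of raised
```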
Axel Kohlmeyer wrote:
> On Tue, Feb 10, 2009 at 11:45 AM, Yinglong Miao <yimiao_at_indiana.edu> wrote:
>> Dear VMD/NAMD users/developers,
> dear yinglong,
>> I have been calling VMD in my NAMD simulations by using "exec vmd -dispdev text
>> -e $.tcl". To avoid memory overloading and outages on the head processor of
>> the NAMD simulation, I want to assign specific processors for running VMD.
>> The question is how to specify processors for running VMD on a cluster. Your
>> help will be greatly appreciated.
> vmd as such is (currently) not an MPI parallel program. that being said, you
> can probably still generate a nodefile with just one node name in it and run
> mpirun on it, but just doing "ssh <nodename> vmd ..." should do the same.
> following up on your previous discussion, i think that if you want to
> do something as sophisticated as what you seem to be trying to do, you
> should probably invest a little extra effort to write a "toplevel" mpi
> program that controls which job is running where. you should also pay
> attention to the fact that quite often running an exec from an MPI
> program can result in bad interactions with the MPI layer (older
> myrinet and infiniband installations suffer from that, with
> consequences ranging from crashing nodes to corrupted data). other
> machines (cray xt, blue gene) don't even support spawning a subprocess
> from a running code.
> since you don't seem to suffer from these problems right now, you
> could try a rather minimalistic approach and have the toplevel MPI
> code distribute the tasks across the available nodes by writing custom
> machinefiles, then exec mpirun on the corresponding head nodes of
> those tasks, and then call MPI_Barrier() until they are all done and
> you can go on to the next step.
> incidentally, i'm currently trying to do something with similar
> requirements, but i want to avoid the exec because i know i'll be
> running on machines where this could become a problem. what i've
> managed so far is to write a program in python that is MPI parallel
> and controls the parallel MD calculations (i run multiple of them
> simultaneously), and now i'm trying to import a special compile of
> VMD (based on patches from justin gullingsrud) directly into the
> python interpreter as well, which i want to run on another subset of
> MPI tasks so that i can run analysis in parallel to the simulation of
> the next chunk...
> hope that helps,
>> Yinglong Miao
>> Ph.D. Candidate
>> Center for Cell and Virus Theory
>> Chemistry Department, Indiana University
>> 800 E Kirkwood Ave Room C203A, Bloomington, IN 47405
>> Tel: 1-812-856-0981
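The machinefile-splitting step of the minimalistic scheme Axel outlines above could be sketched as follows (a hedged illustration only; it assumes equal-sized node groups per task, and the node names are made up):

```python
def split_machinefiles(nodes, ntasks):
    """Partition the available nodes into one group per task; each
    group would be written out as a custom machinefile for the mpirun
    exec'd on that group's head node, with an MPI_Barrier() in the
    toplevel code once all tasks report completion."""
    per_task = len(nodes) // ntasks
    return [nodes[i * per_task:(i + 1) * per_task] for i in range(ntasks)]

groups = split_machinefiles(["n01", "n02", "n03", "n04"], 2)
# the first entry of each group plays the role of that task's head node
heads = [g[0] for g in groups]
```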
--
Yinglong Miao
Ph.D. Candidate
Center for Cell and Virus Theory
Department of Chemistry, Indiana University
Address: 800 E Kirkwood Ave Room C203A, Bloomington, IN 47405
Phone: 1-812-856-0981 (office) 1-812-272-8196 (cell)
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:50:29 CST