From: Axel Kohlmeyer (akohlmey_at_cmm.chem.upenn.edu)
Date: Tue Feb 10 2009 - 15:35:46 CST
On Tue, 10 Feb 2009, Yinglong Miao wrote:
YM> Dear Axel,
YM> Thanks for your reply!
YM> Yes, it works with "ssh $node vmd ..." to run vmd on a specific node.
YM> One note: you may need to change the working directory after logging
YM> in to the node, so that a batch script can be used for all the
YM> commands and executed as "ssh $node sh $script.sh".
or you can do: ssh $node "cd $PWD; vmd ..."
YM> As for running an MPI program inside an MPI program, which you are
YM> describing and which Peter mentioned in another message, I actually
YM> tried calling a Fortran MPI program from inside a parallel NAMD
YM> simulation before, and it worked most of the time simply with
YM> "exec mpirun -np $nprocessors -machinefile $machines $prog ...".
YM> Sometimes it gives error messages
now, what i am trying to do is much cleaner and will avoid
the fork()/exec() altogether. the individual programs are
all turned into libraries and then for parallel programs i
split the original MPI_COMM_WORLD communicator into
subcommunicators and pass them as arguments and bypass the
individual calls to MPI_Init()/MPI_Finalize().
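as a rough illustration of that communicator-splitting scheme (a sketch, not the actual code; it assumes mpi4py, and the group sizes and library entry points are hypothetical):

```python
# Sketch: split MPI_COMM_WORLD into subcommunicators and hand each
# library its subcommunicator as an argument, instead of letting it
# call MPI_Init()/MPI_Finalize() itself. Assumes mpi4py; run_md and
# run_analysis are hypothetical library entry points.

def split_color(rank, n_md_ranks):
    """Pure helper: ranks 0..n_md_ranks-1 form the MD group (color 0),
    all remaining ranks form the analysis group (color 1)."""
    return 0 if rank < n_md_ranks else 1

def main():
    from mpi4py import MPI            # MPI_Init() happens once, here
    world = MPI.COMM_WORLD
    color = split_color(world.Get_rank(), n_md_ranks=4)
    sub = world.Split(color, world.Get_rank())   # MPI_Comm_split()
    # each library routine receives `sub` and never touches
    # MPI_Init()/MPI_Finalize() on its own, e.g.:
    #   run_md(sub) if color == 0 else run_analysis(sub)
    sub.Free()

# launched in the usual way, e.g.: mpirun -np 8 python toplevel.py
```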
starting an mpirun on top of an existing one is inherently
problematic and you can consider yourself very lucky that it
has worked for you. personally, i would have been too
scared of messing up the machine or my session with it. it has
happened to me with both infiniband and myrinet with a simple
CALL SYSTEM() already. N.B.: the myrinet crashes were very
long ago, though, when it was still a very new technology.
YM> like "child process exited abnormally" upon exit of the embedded mpi
YM> program. As you said, this depends on the program running environment.
YM> Axel Kohlmeyer wrote:
YM> > On Tue, Feb 10, 2009 at 11:45 AM, Yinglong Miao <yimiao_at_indiana.edu>
YM> > wrote:
YM> > > Dear VMD/NAMD users/developers,
YM> > >
YM> > dear yinglong,
YM> > > I have been calling VMD in my NAMD simulations by using
YM> > > "exec vmd -dispdev text -e $.tcl". To avoid memory overloading
YM> > > and outages on the head processor of the NAMD simulation, I want
YM> > > to assign specific processors for running VMD. The question is
YM> > > how to specify processors for running VMD on a cluster. Your
YM> > > help will be greatly appreciated.
YM> > >
YM> > vmd as such is (currently) not an MPI parallel program. that being
YM> > said, you can probably still generate a nodefile with just one
YM> > node name in it and run mpirun on it, but just doing
YM> > "ssh <nodename> vmd ..." should do the same.
YM> > following up on your previous discussion, i think that if you want
YM> > to do something as sophisticated as what you seem to be trying to
YM> > do, you should probably invest a little extra effort to write a
YM> > "toplevel" mpi program that controls which job is running where.
YM> > you should also pay attention to the fact that quite often running
YM> > an exec from an MPI program can result in bad interactions with
YM> > the MPI layer (older myrinet and infiniband installations suffer
YM> > from that, with consequences ranging from crashing nodes to
YM> > corrupt data). other machines (cray xt, blue gene) don't even
YM> > support spawning a subprocess from a running code.
YM> > since you don't seem to suffer from these problems right now, you
YM> > could try a rather minimalistic approach: have the toplevel MPI
YM> > code distribute the tasks across the available nodes by writing
YM> > custom machinefiles, then exec mpirun on the corresponding head
YM> > nodes of those tasks, and then call MPI_Barrier() until they are
YM> > all done and you can go on to the next step.
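that minimalistic scheme could look roughly like this (a sketch under assumptions: mpi4py plus a standard mpirun on the path; launch_task and the program arguments are placeholders, not a tested recipe):

```python
# Sketch: the toplevel MPI code writes one custom machinefile per
# task, execs mpirun for that task, then all controlling ranks meet
# in a barrier before the next step.
import subprocess
import tempfile

def machinefile_text(nodes):
    """Render a node list in the one-hostname-per-line format that
    mpirun -machinefile expects."""
    return "".join(node + "\n" for node in nodes)

def launch_task(nodes, prog_argv):
    """Hypothetical helper: write a machinefile for this task's nodes
    and start mpirun on them, returning the child process handle."""
    mf = tempfile.NamedTemporaryFile("w", suffix=".nodes", delete=False)
    mf.write(machinefile_text(nodes))
    mf.close()
    return subprocess.Popen(["mpirun", "-np", str(len(nodes)),
                             "-machinefile", mf.name] + list(prog_argv))

# after launching all tasks, the controlling ranks would wait for the
# children and then synchronize, e.g.:
#   for p in procs: p.wait()
#   comm.Barrier()   # MPI_Barrier() across the toplevel program
```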
YM> > incidentally, i'm currently trying to do something with similar
YM> > requirements, but i want to avoid the exec because i know i'll be
YM> > running on machines where this could become a problem. what i've
YM> > managed so far is to write a script/program in python that is MPI
YM> > parallel and controls the parallel MD calculations (i run multiple
YM> > of them simultaneously), and now i'm trying to import a special
YM> > compile of VMD (based on patches from justin gullingsrud) directly
YM> > into the python interpreter as well, which i want to run on
YM> > another subset of MPI tasks so that i can run analysis in parallel
YM> > to the simulation of the next chunk...
YM> > hope that helps,
YM> > axel.
YM> > > Thanks!
YM> > >
YM> > > --
YM> > > Yinglong Miao
YM> > > Ph.D. Candidate
YM> > > Center for Cell and Virus Theory
YM> > > Chemistry Department, Indiana University
YM> > > 800 E Kirkwood Ave Room C203A, Bloomington, IN 47405
YM> > > Tel: 1-812-856-0981
YM> > >
YM> > >
--
=======================================================================
Axel Kohlmeyer   akohlmey_at_cmm.chem.upenn.edu   http://www.cmm.upenn.edu
Center for Molecular Modeling -- University of Pennsylvania
Department of Chemistry, 231 S.34th Street, Philadelphia, PA 19104-6323
tel: 1-215-898-1582, fax: 1-215-573-6233, office-tel: 1-215-898-5425
=======================================================================
If you make something idiot-proof, the universe creates a better idiot.
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:50:29 CST