Re: how is NAMD output written in parallel mode?

From: Ravinder Abrol (abrol_at_wag.caltech.edu)
Date: Tue Dec 06 2005 - 17:30:01 CST

Hi Nicolas,
I use PBS as well. I pointed the DCDfile option in my configuration file
at the temp space of the first node on my nodelist, and it worked.
Thanks,
Ravi
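
For anyone finding this in the archive, here is a rough sketch of the kind of
PBS script this could correspond to. It is not a tested recipe: the config
file name (sim.namd), node count, processor count, and scratch path are
placeholders, and it assumes password-less ssh between nodes and that
sim.namd does not already set DCDfile.

    #!/bin/sh
    #PBS -l nodes=3:ppn=2
    # Rough sketch only -- names and paths are placeholders; adjust for
    # your cluster.
    cd $PBS_O_WORKDIR

    # Build a charmrun nodelist from the PBS nodefile. NAMD writes its
    # output files from the first host listed here.
    echo "group main" > nodelist
    awk '{print "host " $1}' $PBS_NODEFILE >> nodelist
    FIRSTNODE=`head -1 $PBS_NODEFILE`

    # Create a scratch directory in that node's local /tmp and point the
    # DCDfile option there (appended to a copy of the config file,
    # assuming the original does not already set DCDfile).
    SCRATCH=/tmp/$PBS_JOBID
    ssh $FIRSTNODE "mkdir -p $SCRATCH"
    cp sim.namd sim_local.namd
    echo "DCDfile $SCRATCH/sim.dcd" >> sim_local.namd

    ./charmrun ++nodelist nodelist +p6 namd2 sim_local.namd > sim.log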

--------------------------------------------------------------
On Mon, Dec 05, 2005 at 03:17:58PM +0100, Nicolas Sapay wrote:
> Dear Ravinder,
>
> In my case, I write outputs on the "master" node, i.e. the first node of
> the nodelist (I'm not sure it can really be called "master").
> Assume the job runs on 3 nodes: node1.myserver.edu,
> node2.myserver.edu, node3.myserver.edu. When the process is launched, a
> unique directory is created in /tmp on the master node (e.g.
> node2.myserver.edu), and the outputs are written in this directory. However,
> you will see NAMD processes on every node (1 charmrun + 1 namd2
> process per processor). These processes consume CPU and memory
> (depending on the load balancing) but no space in /tmp (except on
> node 2, of course). So you can use the /tmp of the non-master nodes if
> necessary, since nothing is written on those nodes.
>
> Cheers
>
> Nicolas Sapay
>
> PS: I'm using a PBS server to run my jobs; I'm not sure this answer
> applies in a different setup...
>
> On Mon 05/12/2005 at 09:09, Ravinder Abrol wrote:
> > Hi All,
> >
> > I am running NAMD using multiple processors and want to find out
> > how the output (e.g. the dcd file) is written. More specifically,
> > if only one processor writes the output, how do I find out which one
> > writes it, and is there a way for me to specify that?
> >
> > The reason for this question is that since we are running NAMD on a cluster
> > of processors, each of which has a temp area not readable by other processors,
> > we want to see if we can utilize this temp space till the job is finished.
> > The size of the dcd files for >10ns simulations reaches >5GB, and if we can
> > use the temp space of one processor, that will help us greatly.
> >
> > Thanks,
> > Ravi
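
As a side note for readers of the archive: a quick way to confirm what
Nicolas describes above (charmrun/namd2 processes on every node, but scratch
space consumed only on the first one) is to loop over the allocated nodes
during a run. A minimal sketch, assuming password-less ssh between nodes and
the placeholder scratch path used earlier:

    # Rough sketch: inspect each allocated node while the job runs.
    for node in `sort -u $PBS_NODEFILE`; do
        echo "=== $node ==="
        ssh $node "ps -u $USER -o pid,comm | grep -E 'charmrun|namd2'"
        ssh $node "du -sh /tmp/$PBS_JOBID 2>/dev/null || echo 'no job scratch here'"
    done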

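To address the size concern in the original question, the same hypothetical
submission script could fetch the large DCD back to shared storage only once,
at the end of the job; $FIRSTNODE and $SCRATCH are the placeholder variables
defined in the sketch above.

    # Rough sketch: copy the trajectory back from the first node's local
    # /tmp to the shared working directory once namd2 has finished, then
    # clean up the scratch directory.
    scp $FIRSTNODE:$SCRATCH/sim.dcd $PBS_O_WORKDIR/
    ssh $FIRSTNODE "rm -rf $SCRATCH"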