From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Thu Dec 23 2010 - 04:07:46 CST
On Wed, Dec 22, 2010 at 10:37 PM, Ng Yao Zong <ngyz_at_bii.a-star.edu.sg> wrote:
> Dear all,
>
> I ran NAMD2.7 on a cluster using mpirun and encountered a similar problem
> where I received multiple, repeating output. In this case, I was reassigning
> velocities:
>
> REASSIGNING VELOCITIES AT STEP 41 TO 0.082 KELVIN.
> REASSIGNING VELOCITIES AT STEP 40 TO 0.08 KELVIN.
> REASSIGNING VELOCITIES AT STEP 42 TO 0.084 KELVIN.
> REASSIGNING VELOCITIES AT STEP 43 TO 0.086 KELVIN.
> REASSIGNING VELOCITIES AT STEP 41 TO 0.082 KELVIN.
[...]
> I believe the namd compilation was originally for charmrun.
>
> 1) Do I need to recompile namd for mpirun?
yes. MPI binaries are in general machine specific and thus need to be
compiled against the MPI library installed on the local machine. a
binary built for charmrun (the "net" version) does not use MPI at all,
so mpirun has no way to make the copies it launches cooperate.
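for reference, the usual procedure follows the NAMD release notes; the
charm version and architecture names below are only examples and may
differ on your cluster:

    # inside the unpacked NAMD source tree (charm version is an example)
    cd charm-6.2.1
    ./build charm++ mpi-linux-x86_64 --with-production
    cd ..
    # point the NAMD build at the MPI-based charm architecture
    ./config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64
    cd Linux-x86_64-g++
    make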
> 2) Will the multiple output affect my results (apart from the output file)?
yes. you are not running one simulation in parallel, you are running
multiple independent copies of the same simulation. those copies all
try to write to the same output files, so all of your output is likely
to be corrupted. furthermore, you are wasting cpu time. why run the
same calculation multiple times?
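you can see the same effect with any serial program: mpirun simply
starts N copies of whatever command it is given, and only a binary that
is actually linked against that MPI library will coordinate them.
for example:

    $ mpirun -np 4 echo hello
    hello
    hello
    hello
    hello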
> 3) Is there a way to specify output from namd from only one processor/node
> to solve this problem?
yes. just run on only one processor, or compile NAMD
for your local MPI installation.
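in other words, use the launcher your current binary was built for, or
rebuild as sketched above and then use mpirun. the config file name and
processor counts below are placeholders:

    # existing charmrun ("net") build: use charmrun to run in parallel
    ./charmrun +p8 ./namd2 run.namd > run.log
    # after rebuilding against the local MPI: use mpirun
    mpirun -np 8 ./namd2 run.namd > run.log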
axel.
> Thank you for your kind advice.
>
> Regards,
>
> Andy
>
--
Dr. Axel Kohlmeyer    akohlmey_at_gmail.com    http://goo.gl/1wk0
Institute for Computational Molecular Science
Temple University, Philadelphia PA, USA.