Re: Regarding NAMD and MD simulations

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Mon Jul 07 2014 - 02:48:08 CDT

 

From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On behalf
of Joel Moniz
Sent: Monday, July 7, 2014 08:42
To: namd-l_at_ks.uiuc.edu
Subject: namd-l: Regarding NAMD and MD simulations

 

Hello everyone.

 

I'm using NAMD to benchmark a GPU cluster, and I don't have any experience
with molecular dynamics simulations. I have a few questions regarding NAMD,
as well as molecular dynamics in general:

 

That is not the best starting point for benchmarks, as meaningful
benchmarking also relies on understanding what the code is doing.

 

1. Regarding the Linux-x86_64 and the Linux-x86_64-TCP versions: I have
noticed better performance from the former (UDP) version than from the TCP
version on a Gigabit Ethernet network with 4 nodes. Is this expected?

 

Of course: TCP carries time-consuming reliability mechanisms that UDP does
not. The required integrity checks are implemented in Charm++ instead.

Please note that Gigabit Ethernet is usually insufficient for a GPU cluster.
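
For reference, the two builds are launched identically; only the downloaded
binary differs. A minimal sketch, assuming a nodelist file like the one
below (hostnames and core counts are made up for illustration):

group main
host node1 ++cpus 8
host node2 ++cpus 8
host node3 ++cpus 8
host node4 ++cpus 8

./charmrun +p32 ++nodelist ./nodelist ./namd2 benchmark.namd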

 

2. Is there any way to specify which GPU devices I want to use on each
individual node when using the ibverbs-CUDA-smp distribution, like I can
specify the number of CPUs per node with the ++cpus parameter in the
nodelist file? Right now, I can only specify one +devices parameter, which
seems to affect all the nodes, and a +devices next to each node in the
nodelist seems to have no effect.

 

No, you would need to hack the code to do that. I've done it, so I could
provide you with a patch, which means you would have to recompile.
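
For reference, a sketch of what the stock binaries do support (hostnames and
core counts are made up; +p/++ppn details vary by build): a single +devices
list on the namd2 command line, applied identically on every node:

group main
host gpu-node1 ++cpus 12
host gpu-node2 ++cpus 12

./charmrun +p24 ++nodelist ./nodelist ./namd2 +devices 0,1 config.namd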

 

3. What does the stepspercycle parameter represent? I read that it is the
number of timesteps in a cycle, and that a cycle is the interval between
pairlist generations and atom reassignments, but what exactly are pairlist
generations and atom reassignments?

 

The periodic simulation cell is divided into patches (box slices) to
distribute the work (parallelization). As the atoms move, they may migrate
between patches. The patches and pairlists are built with a safety margin,
so they do not need to be updated every simulation step, only every
stepspercycle steps. In other words, within those (e.g. 20) timesteps an
atom is not expected to leave its patch, or to come within the interaction
cutoff distance of an atom that is not already in its pairlist.
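
As a sketch of how these parameters relate in a NAMD config file (values
are illustrative, not recommendations):

cutoff 12.0 ;# interaction cutoff (Angstroms)
pairlistdist 14.0 ;# pairlists include atoms out to this distance
stepspercycle 20 ;# rebuild pairlists / migrate atoms every 20 steps

# The margin (pairlistdist - cutoff, here 2 A) must be large enough that no
# atom can move from outside pairlistdist to inside cutoff within one cycle.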

 

4. Would the stmv example be representative of a typical MD simulation run
(for example, in terms of number of particles)? Where could I find other
example simulations?

 

It's quite large (roughly one million atoms). Another example, small to
medium sized at roughly 92,000 atoms, is "apoa1".
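
Once the benchmark inputs (distributed via the NAMD website) are unpacked
next to the binaries, a minimal sketch of running them looks like this
(paths assumed for illustration):

./charmrun +p8 ./namd2 apoa1/apoa1.namd
./charmrun +p8 ./namd2 stmv/stmv.namd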

 

5. Would the performance of a simulation vary significantly across
simulations (provided parameters like numsteps, timestep, stepspercycle,
etc., as well as the number of particles, remain constant)?

 

No, not if the same compute power is used.

 

6. What are the most frequently used values (or ranges) for numsteps,
timestep and stepspercycle (though I realise that this would definitely
depend on the application)?

 

None of them should be changed for benchmarks, but you may want to use the
following.

 

Command line switches to namd2:

 

+ignoresharing

 

In the NAMD config script:

 

fullElectFrequency 4 ;# in any case

 

twoAwayX yes ;# if it brings speed

twoAwayY yes ;# if twoAwayX brought speed and this brings more
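
For concreteness, a sketch of a launch line with the switch in place
(hostnames in the nodelist, device indices, and the config file name are
made up):

./charmrun +p32 ++nodelist ./nodelist ./namd2 +ignoresharing +devices 0,1 stmv/stmv.namd

As I understand it, +ignoresharing lets the CUDA build proceed when several
processes share the same GPU rather than aborting.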

 

I'm extremely sorry if these questions are a little too basic, but I have
not been able to find any information about them, and I have no experience
in molecular dynamics.

 

Thanking you,

Joel

