From: Joel Moniz (jram802_at_yahoo.co.uk)
Date: Mon Jul 07 2014 - 01:42:04 CDT
I'm using NAMD for benchmarking a GPU cluster, and I don't have any experience with molecular dynamics simulations. I had a few questions about NAMD, as well as about molecular dynamics in general:
1. Regarding the Linux-x86_64 and the Linux-x86_64-TCP versions: I have noticed that the former (UDP) version performs better than the TCP version on a Gigabit Ethernet network with 4 nodes. Is this expected?
2. Is there any way that I could specify which GPU devices I want to use on each individual node when using the ibverbs-CUDA-smp distribution, the way I can specify the number of CPUs per node with the ++cpus parameter in the nodelist file? Right now, I can only specify one +devices parameter, which seems to affect all the nodes, and a +devices next to each node in the nodelist seems to have no effect.
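For reference, this is roughly what I have been trying (hostnames are placeholders for my actual nodes):

```shell
# nodelist file: ++cpus per host is honoured, but a per-host +devices
# here appears to be ignored
group main
 host node1 ++cpus 8
 host node2 ++cpus 8

# launch command: this +devices applies to all nodes, as far as I can tell
# charmrun namd2 +p16 ++nodelist nodelist +devices 0,1 stmv.namd
```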
3. What does the stepspercycle parameter represent? I read that this represents the number of timesteps in a cycle, and each cycle represents the number of timesteps between pairlist generations and atom reassignments, but what exactly are pairlist generations and atom reassignments?
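If I understand the documentation correctly, the relationship between timesteps and cycles would be something like this sketch (my own pseudocode, not NAMD's actual logic):

```python
# Hypothetical sketch of how stepspercycle might group timesteps into
# cycles: the pairlist is rebuilt (and atoms are reassigned to
# processors) once per cycle rather than at every timestep.
def run(numsteps, stepspercycle):
    rebuilds = 0
    for step in range(numsteps):
        if step % stepspercycle == 0:
            rebuilds += 1  # rebuild pairlist / reassign atoms
        # ... integrate one timestep here ...
    return rebuilds

# e.g. 500 steps with stepspercycle=20 would rebuild the pairlist 25 times
print(run(500, 20))
```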
4. Would the stmv example be representative of a typical MD simulation run (for example, in terms of number of particles)? Where could I find other example simulations?
5. Would the performance of a simulation vary significantly across simulations (provided parameters like numsteps, timestep, stepspercycle, etc., as well as the number of particles, remain constant)?
6. What are the most frequently used values (or ranges) for numsteps, timestep and stepspercycle (though I realise that this would definitely depend on the application)?
I'm extremely sorry if these questions are a little too basic, but I have not been able to find any information on them, and I have no experience in molecular dynamics.
This archive was generated by hypermail 2.1.6 : Thu Dec 31 2015 - 23:20:57 CST