From: Tristan Croll (tristan.croll_at_qut.edu.au)
Date: Fri Jan 24 2014 - 02:44:05 CST
I have seen similar behaviour with NAMD 2.9 on the Australian National Compute Facility cluster (Raijin) for a ~500k atom system. Scaling was good up to 384 cores, but beyond that throughput fell roughly in inverse proportion to core count out to 1024 cores. Interestingly, I'm told that when the cluster was commissioned, NAMD was tested on a much larger system (a few million atoms) out to thousands of cores with good scaling. I very rarely use resources on that scale, so it doesn't affect me personally, but as far as I'm aware they have yet to solve the problem.
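To make the described pattern concrete, here is a small sketch (with entirely hypothetical throughput numbers, not measurements from Raijin or the paper) of how one might compute parallel speedup and efficiency for a run that scales well to 384 cores and then degrades roughly inversely with core count:

```python
# Illustrative sketch only: hypothetical ns/day figures chosen to mimic the
# behaviour described above (near-linear scaling to 384 cores, then
# throughput falling roughly in inverse proportion to core count).

def efficiency(cores, ns_per_day, base_cores, base_ns_per_day):
    """Parallel efficiency relative to a baseline run."""
    speedup = ns_per_day / base_ns_per_day
    ideal_speedup = cores / base_cores
    return speedup / ideal_speedup

# Hypothetical benchmark results: cores -> ns/day
runs = {128: 2.0, 256: 3.8, 384: 5.4, 512: 4.0, 1024: 2.0}

for cores, perf in sorted(runs.items()):
    eff = efficiency(cores, perf, 128, 2.0)
    print(f"{cores:5d} cores: {perf:4.1f} ns/day, efficiency {eff:.2f}")
```

An efficiency near 1.0 indicates good strong scaling; in the hypothetical numbers above it collapses past 384 cores, which is the signature being discussed in this thread.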
From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On Behalf Of Bennion, Brian
Sent: Thursday, 23 January 2014 10:22 AM
Subject: namd-l: recent comparison of NAMD to other MD engines
Based on this recent publication http://onlinelibrary.wiley.com/doi/10.1002/jcc.23501/abstract
NAMD 2.9 stumbles compared with GROMACS and an improved version of CHARMM on a large system (465,404 atoms on 500 cores).
Any ideas as to the cause of this dramatic difference in speed between 256 and 400 cores?
This archive was generated by hypermail 2.1.6 : Thu Dec 31 2015 - 23:20:25 CST