Re: namd scale-up

From: Axel Kohlmeyer
Date: Wed Sep 04 2013 - 03:00:30 CDT

On Wed, Sep 4, 2013 at 9:43 AM, Revthi Sanker wrote:

> Dear all,
> I am running NAMD on the super cluster at my institute. My system consists
> of roughly 3 lakh atoms.

Please keep in mind that most people on this mailing list (and in the world
in general) do not know what a lakh is; it is better to talk about 300,000
atoms instead. What would you think if somebody talked to you about a system
with 2000 gross atoms?

> I am aware that the scale-up depends on the configuration of the cluster I
> am currently using, but the people at the computer center would like a
> rough estimate of the benchmark (ns/day) for a system of my size.
> If anybody is aware of the throughput for this system size, please let me
> know, as I am not sure whether what I am getting currently (2.5 ns/day on
> 8 nodes × 16 processors = 128 cores) is optimal or can be tweaked further.

The only way to find out the optimum is by doing a (strong) scaling
benchmark, i.e. running with different numbers of nodes and plotting the
resulting speedup. The performance depends not only on the hardware (CPU
type, generation, and clock rate; memory bandwidth; interconnect; BIOS
configuration, e.g. hyper-threading and turbo boost), but also on the
software (kernel, NAMD version, compiler, build configuration (SMP, MPI,
ibverbs)) and on your simulated system and input. So there is no way to
tell from the number of atoms in the system and the number of nodes/cores
alone whether your performance is good or bad.
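A minimal sketch of such a strong-scaling analysis (all benchmark numbers
below are hypothetical, invented only to illustrate the arithmetic; the
2.5 ns/day figure at 8 nodes is taken from the question above):

```python
# Strong-scaling analysis: same system (~300,000 atoms) run at several
# node counts. The ns/day values here are made-up placeholders; replace
# them with your own measured benchmark output.
nodes = [1, 2, 4, 8]
ns_per_day = [0.40, 0.78, 1.50, 2.50]  # hypothetical measurements
cores_per_node = 16

for n, perf in zip(nodes, ns_per_day):
    speedup = perf / ns_per_day[0]          # relative to the 1-node run
    efficiency = speedup / (n / nodes[0])   # parallel efficiency (1.0 = ideal)
    per_core = perf / (n * cores_per_node)  # absolute per-core performance
    print(f"{n:2d} nodes: {perf:5.2f} ns/day, "
          f"speedup {speedup:4.2f}x, efficiency {efficiency:5.1%}, "
          f"{per_core:.4f} ns/day per core")
```

Plot speedup against node count: the point where the curve bends away from
the ideal (linear) line tells you where adding more nodes stops paying off
for this system size.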

You can compare your numbers (absolute per-core performance and speedup)
to published data from other machines (even if much older). There should
be some on the NAMD home page and in the NAMD wiki.


> Thank you so much for your time :)
> Revathi.S
> M.S. Research Scholar
> Indian Institute Of Technology, Madras
> India
> _________________________________

Dr. Axel Kohlmeyer
International Centre for Theoretical Physics, Trieste. Italy.

This archive was generated by hypermail 2.1.6 : Wed Dec 31 2014 - 23:21:36 CST