Re: NAMD2.7b1 performance

From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Fri Sep 25 2009 - 09:45:53 CDT

On Fri, Sep 25, 2009 at 9:54 AM, Myunggi Yi <myunggi_at_gmail.com> wrote:
> Dear NAMD users,
>
> I am running a simulation with 67005 atoms (lipid bilayer with water).
> Using OpenMPI, InfiniBand, and 86 CPUs, I've got the following performance.
> Is this reasonable?

it is difficult to comment on the performance of an arbitrary
system, and next to impossible without any details about the hardware.

why don't you run some of the standard benchmark inputs for which
numbers on existing machines have been published? that would make it
much easier to compare the performance of your setup.
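
as a rough sketch only (the binary name and input path are assumptions
and depend on how NAMD is installed at your site), launching the apoa1
benchmark with an MPI build like yours would look something like:

  mpirun -np 86 namd2 apoa1/apoa1.namd > apoa1_86cpus.log

the "Benchmark time:" lines in that log can then be compared directly
against the numbers published for the same input on other machines.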

on top of that, it is always recommended to first run scaling tests,
i.e. run with different numbers of CPUs and nodes to establish the
points where you run with maximum efficiency and maximum speed
(which are not the same). depending on the type of CPUs, you may not
want to use all cores per node. there are some useful discussions of
the various TeraGrid machines and related hardware in the NAMD wiki
that may be helpful in that respect.
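
as an illustration only (the timings below are placeholders, not
measured numbers), a small python script along these lines turns the
s/step values from NAMD's "Benchmark time:" output at different core
counts into speedup and parallel efficiency:

# sketch: speedup and parallel efficiency from NAMD "Benchmark time:"
# s/step values measured at different core counts.
# the timings below are placeholders, not real measurements.
timings = {          # cores -> seconds per step
    8:  0.30,
    16: 0.16,
    32: 0.09,
    64: 0.05,
    86: 0.032,
}

base_cores = min(timings)          # smallest run is the reference
base_time = timings[base_cores]

print(f"{'cores':>6} {'s/step':>8} {'speedup':>8} {'efficiency':>10}")
for cores in sorted(timings):
    t = timings[cores]
    speedup = base_time / t                    # relative to reference run
    efficiency = speedup * base_cores / cores  # 1.0 = perfect scaling
    print(f"{cores:>6} {t:>8.4f} {speedup:>8.2f} {efficiency:>10.2f}")

the core count where the efficiency column starts to drop off sharply
is your maximum-efficiency point; beyond that you are only buying raw
speed at a growing cost in CPU hours.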

cheers,
   axel.
>
> Info: Benchmark time: 86 CPUs 0.0318438 s/step 0.184281 days/ns 70.0844 MB memory
> Info: Benchmark time: 86 CPUs 0.0315817 s/step 0.182764 days/ns 70.0845 MB memory
> Info: Benchmark time: 86 CPUs 0.0312832 s/step 0.181037 days/ns 70.085 MB memory
>
>
> conf file
> ++++++++++++++++++++
> exclude scaled1-4
> 1-4scaling 1.0
> cutoff 12.
> switching on
> switchdist 10.
> pairlistdist 13.5
>
> timestep 2.0
> rigidBonds all
> nonbondedFreq 1
> fullElectFrequency 2
> stepspercycle 10
>
> langevin on
> langevinDamping 1.0
> langevinTemp $temperature
> langevinHydrogen off
>
> wrapAll on
>
> PME yes
> PMEGridSizeX 92
> PMEGridSizeY 92
> PMEGridSizeZ 90
>
> useGroupPressure yes
> useFlexibleCell yes
> useConstantRatio yes
>
> langevinPiston on
> langevinPistonTarget 1.01325
> langevinPistonPeriod 200.
> langevinPistonDecay 100.
> langevinPistonTemp $temperature
> ++++++++++++++++++++++++++++
>
>
> Due to the Charm++ warning, I used the "+isomalloc_sync" option. I don't
> see any difference, though.
>
>
> Charm++> Running on MPI version: 2.0 multi-thread support: MPI_THREAD_SINGLE (max supported: MPI_THREAD_SINGLE)
> Charm warning> Randomization of stack pointer is turned on in Kernel, run 'echo 0 > /proc/sys/kernel/randomize_va_space' as root to disable it. Thread migration may not work!
> Charm++> synchronizing isomalloc memory region...
> [0] consolidated Isomalloc memory region: 0x2ba9c0000000 - 0x7ffb00000000 (88413184 megs)
> Charm++> cpu topology info is being gathered!
> Charm++> 17 unique compute nodes detected!
> Info: NAMD 2.7b1 for Linux-x86_64-MPI
> Info:
>
> --
> Best wishes,
>
> Myunggi Yi
> ==================================
> 91 Chieftan Way
> Institute of Molecular Biophysics
> Florida State University
> Tallahassee, FL 32306
>
> Office: +1-850-645-1334
>
> http://sites.google.com/site/myunggi/
> http://people.sc.fsu.edu/~myunggi/
>
>

-- 
Dr. Axel Kohlmeyer    akohlmey_at_gmail.com
Institute for Computational Molecular Science
College of Science and Technology
Temple University, Philadelphia PA, USA.
