NAMD2.7b1 performance

From: Myunggi Yi (myunggi_at_gmail.com)
Date: Fri Sep 25 2009 - 08:54:33 CDT

Dear NAMD users,

I am running a simulation with 67005 atoms (lipid bilayer with water).
Using OpenMPI, InfiniBand, and 86 CPUs, I got the following performance.
Is this reasonable?

Info: Benchmark time: 86 CPUs 0.0318438 s/step 0.184281 days/ns 70.0844 MB memory
Info: Benchmark time: 86 CPUs 0.0315817 s/step 0.182764 days/ns 70.0845 MB memory
Info: Benchmark time: 86 CPUs 0.0312832 s/step 0.181037 days/ns 70.085 MB memory

conf file
++++++++++++++++++++
exclude scaled1-4
1-4scaling 1.0
cutoff 12.
switching on
switchdist 10.
pairlistdist 13.5

timestep 2.0
rigidBonds all
nonbondedFreq 1
fullElectFrequency 2
stepspercycle 10

langevin on
langevinDamping 1.0
langevinTemp $temperature
langevinHydrogen off

wrapAll on

PME yes
PMEGridSizeX 92
PMEGridSizeY 92
PMEGridSizeZ 90

useGroupPressure yes
useFlexibleCell yes
useConstantRatio yes

langevinPiston on
langevinPistonTarget 1.01325
langevinPistonPeriod 200.
langevinPistonDecay 100.
langevinPistonTemp $temperature
++++++++++++++++++++++++++++
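[Editorial note: one thing worth checking in the PME section: the NAMD manual recommends grid dimensions whose only prime factors are 2, 3, and 5 for FFT efficiency, and 92 = 2² × 23 contains the factor 23, so 96 or 100 might run faster; this is a hedged suggestion, not a confirmed problem with this run. A small sketch (plain Python) that finds the next FFT-friendly size at or above a minimum:]

```python
# Smallest grid dimension >= n whose only prime factors are 2, 3, and 5
# (the FFT-friendly sizes the NAMD manual suggests for PMEGridSizeX/Y/Z).
def fft_friendly(n):
    m = n
    while True:
        k = m
        for p in (2, 3, 5):
            while k % p == 0:
                k //= p
        if k == 1:        # m fully factored into 2s, 3s, and 5s
            return m
        m += 1            # try the next candidate size

for size in (92, 90):
    print(size, "->", fft_friendly(size))  # 92 -> 96, 90 -> 90
```

So PMEGridSizeX/Y of 96 (or 100) could be worth benchmarking against 92; only a test run will show whether it matters here.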

Because of the Charm warning below, I used the "+isomalloc_sync" option, but I don't see any difference.

Charm++> Running on MPI version: 2.0 multi-thread support: MPI_THREAD_SINGLE (max supported: MPI_THREAD_SINGLE)
Charm warning> Randomization of stack pointer is turned on in Kernel, run 'echo 0 > /proc/sys/kernel/randomize_va_space' as root to disable it. Thread migration may not work!
Charm++> synchronizing isomalloc memory region...
[0] consolidated Isomalloc memory region: 0x2ba9c0000000 - 0x7ffb00000000 (88413184 megs)
Charm++> cpu topology info is being gathered!
Charm++> 17 unique compute nodes detected!
Charm++> cpu topology info is being gathered!
Charm++> 17 unique compute nodes detected!
Info: NAMD 2.7b1 for Linux-x86_64-MPI
Info:

-- 
Best wishes,
Myunggi Yi
==================================
91 Chieftan Way
Institute of Molecular Biophysics
Florida State University
Tallahassee, FL 32306
Office: +1-850-645-1334
http://sites.google.com/site/myunggi/
http://people.sc.fsu.edu/~myunggi/

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:51:30 CST