Re: Yet another NAMD speed concerns

From: Gianluca Interlandi (gianluca_at_u.washington.edu)
Date: Sat Apr 25 2009 - 12:05:17 CDT

Sorry, I have to correct my last e-mail. Since you are using PME, you can use:

switchdist 8.0
cutoff 10.0
pairlistdist 12.0

This should give a speedup of at least 20%. Also, I don't know whether it
makes any difference to speed, but for PMEGridSize you can use 120, since
the grid spacing (cell side length / PMEGridSize) only has to be slightly
less than 1 Å.
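
For reference, the whole nonbonded/PME section would then look like this
(just a sketch; the exclude and 1-4scaling lines are copied from your
config, and the grid size assumes your 108 x 102 x 105 Å cell below):

exclude scaled1-4
1-4scaling 1.0
switching on
switchdist 8.0
cutoff 10.0
pairlistdist 12.0

PME yes
PMEGridSizeX 120
PMEGridSizeY 120
PMEGridSizeZ 120

With that cell, the largest spacing is 108/120 = 0.9 Å, which satisfies
the "slightly less than 1 Å" rule.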

Gianluca

On Sat, 25 Apr 2009, Gianluca Interlandi wrote:

> I have a system of ca. 100000 atoms and I run on 32 CPUs with InfiniBand.
> It takes 8 hours for 1 ns. I'm not sure whether the system scales to 256
> CPUs; I haven't tried. The more CPUs you use, the more communication there
> is, and that becomes the bottleneck. The problem is latency, i.e., the
> time it takes for a message to be prepared and sent over the network. MD
> uses spatial domain decomposition, where the system is subdivided into
> domains. The more CPUs you use, the smaller each subdomain becomes, and
> the more time each CPU spends waiting for messages instead of computing.
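>
> To put rough numbers on it (a back-of-the-envelope estimate, assuming an
> even decomposition):
>
>   100000 atoms / 32 CPUs  = ~3100 atoms per CPU
>   100000 atoms / 256 CPUs =  ~390 atoms per CPU
>
> At 256 CPUs each processor has so little computation per step that it
> cannot hide the communication behind useful work.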
>
> A comment about your parameters: I think you can choose a cutoff of 12
> and also set "pairlistdist 14.0", as sketched below.
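>
> In your config that change would look like this (just a sketch, keeping
> your switchdist of 10):
>
> switching on
> switchdist 10.0
> cutoff 12.0
> pairlistdist 14.0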
>
> Gianluca
>
> On Sat, 25 Apr 2009, DimitryASuplatov wrote:
>
>> Hello,
>>
>> I am running a simulation of my protein in water (the water box extends
>> 15 angstroms from the protein). The system contains 110349 atoms in
>> total. I have compiled NAMD 2.6 on a Xeon InfiniBand cluster using the
>> amd64 installation procedure. I've also used icc and ifort with the
>> -xSSSE3 flag.
>>
>> My force field parameters are
>> =====================
>> exclude scaled1-4
>> 1-4scaling 1.0
>> cutoff 15.
>> switching on
>> switchdist 10.
>>
>>
>> My integrator parameters are
>> =====================
>> timestep 2.0 ;# 2fs/step
>> rigidBonds all ;# needed for 2fs steps
>> nonbondedFreq 1 ;# short-range nonbonded every step
>> fullElectFrequency 4 ;# full electrostatics (PME) every 4 steps = 8 fs
>> stepspercycle 20 ;# pairlists are rebuilt once per cycle
>>
>>
>> I use
>> ====
>> langevin on
>> langevinPiston on
>>
>> cellBasisVector1 108 0. 0.
>> cellBasisVector2 0. 102 0.
>> cellBasisVector3 0. 0 105.
>> cellOrigin 63.87940979 13.3578186035 32.2524185181
>>
>> PME yes
>> PMEGridSizeX 125
>> PMEGridSizeY 125
>> PMEGridSizeZ 125
>>
>> MY PROBLEM IS THAT MY SYSTEM IS TOO SLOW
>> ======================================================
>> Using 256 Xeon 3 GHz CPUs, it takes 41 hours for 10 ns
>> !!!!!!!!!!!!!!!!!!!
>> ======================================================
>>
>> I consider my system to be of a typical size. 100k atoms for 10 ns is
>> normal these days, but it is very hard to compute with NAMD even on
>> modern clusters.
>>
>> 1/ Is this speed normal for NAMD, or did I do something wrong?
>> 2/ Can I use a 4 fs timestep with these parameters?
>> 3/ What do you do to speed up your calculations?
>>
>> Thanks
>> SDA
>>
>>
>

-----------------------------------------------------
Gianluca Interlandi, PhD gianluca_at_u.washington.edu
                     +1 (206) 685 4435
                     +1 (206) 714 4303
                     http://artemide.bioeng.washington.edu/

Postdoc at the Department of Bioengineering
at the University of Washington, Seattle WA U.S.A.
-----------------------------------------------------
