Re: peculiar scaling on the GPU

From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Wed Aug 01 2012 - 01:40:56 CDT

On Wed, Aug 1, 2012 at 8:14 AM, Norman Geist
<norman.geist_at_uni-greifswald.de> wrote:
> Hi,
>
>
>
> this looks in fact a little strange. What kind of machine is that?

well, for as few as one and a half thousand atoms,
strange things *can* happen. GPUs need a *lot* of
work units to run efficiently, and with a system this
small it may be difficult to achieve a good partitioning
of the system across the tasks. using more processes
may help.
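As a rough illustration of the point above: NAMD decomposes space into patches whose edge is at least the pairlist distance (14 A in the posted config), and each patch is a unit of parallel work. The sketch below estimates the patch count for a cubic solvated box; the density value and box shape are assumptions for illustration, not the poster's actual system.

```python
# Back-of-the-envelope estimate of how many spatial patches a small
# system can be carved into. Patch edge length is at least the pairlist
# distance (14 A in the posted config); ~0.1 atoms/A^3 is a typical
# density for a solvated biomolecular system. Illustrative numbers only.

def estimate_patches(n_atoms, pairlist_dist=14.0, density=0.1):
    """Estimate patches per dimension and total patch count for a cubic box."""
    volume = n_atoms / density            # box volume in A^3
    box_edge = volume ** (1.0 / 3.0)      # cubic box edge in A
    per_dim = max(1, int(box_edge // pairlist_dist))
    return per_dim, per_dim ** 3

# ~1.5k atoms: the box edge (~25 A) barely exceeds one pairlist distance,
# so the whole system collapses into a single patch -- almost no
# independent work units to spread across CPU tasks or feed the GPU.
print(estimate_patches(1500))    # -> (1, 1)

# For comparison, a 100k-atom system yields hundreds of patches.
print(estimate_patches(100000))  # -> (7, 343)
```

With only a handful of patches, load balancing is essentially all-or-nothing, which is consistent with the abrupt jumps in the timing graph rather than smooth scaling.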

axel.

> How many CPU sockets and what cpus are inside?
>
> Is this reproducible?
>
>
>
> Norman Geist.
>
>
>
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On Behalf
> Of Ajasja Ljubetic
> Sent: Tuesday, July 31, 2012 4:07 PM
> To: namd-l
> Subject: namd-l: peculiar scaling on the GPU
>
>
>
> Dear all,
>
>
>
> I'm noticing peculiar scaling when using a GPU (GTX 560). The graph is shown
> here. If I run the calculation only on the CPUs I get excellent linear
> scaling, but when using the GPU there are huge jumps at 3 and 5 cores.
>
>
>
> What could be the reason for these jumps?
>
>
>
> I'm running NAMD 2.9 Linux-x86_64-multicore; the system is ~1.5k atoms. The
> performance-relevant parts of the conf file are below. I'm also using the
> colvars module to keep track of two dihedral angles. An example logfile is
> attached.
>
>
>
> Thanks for your help,
>
> Ajasja
>
>
>
> nonbondedFreq 1
> fullElectFrequency 1
> stepspercycle 20
>
> switching on
> longSplitting C2
> switchdist 10
> cutoff 12
> pairlistdist 14
>
> PME yes
> PMEGridSpacing 1
>

-- 
Dr. Axel Kohlmeyer  akohlmey_at_gmail.com  http://goo.gl/1wk0
International Centre for Theoretical Physics, Trieste. Italy.

This archive was generated by hypermail 2.1.6 : Tue Dec 31 2013 - 23:22:20 CST