Re: Two GPU-based workstation

From: James Starlight (jmsstarlight_at_gmail.com)
Date: Mon Nov 11 2013 - 06:55:01 CST

Norman,

1) I don't know why, but I get this error about the PBC vectors whenever I
change only the fullElectFrequency value (without changing the barostat
options, etc.).

2) Some benchmarks for a membrane receptor system (60k atoms):

using +p6 +ppn3 +devices 0,1:

Info: Benchmark time: 3 CPUs 0.0472163 s/step 0.273242 days/ns 290.363 MB
memory

using +ppn6 +devices 0,1:

Info: Benchmark time: 6 CPUs 0.0288685 s/step 0.167063 days/ns 325.297 MB
memory

However, in the second case the performance is still poor (~1 ns per 4
hours). For comparison, GROMACS with the same setup gives me twice the
performance (though it may also be possible to gain from using both GPUs
simultaneously).
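
For reference, here are the two launch lines behind those benchmarks,
written out in full (a reconstruction from the flag fragments above, with
+idlepoll and the aMD.conf name taken from the invocation quoted later in
this thread):

namd2 +idlepoll +p6 +ppn3 +devices 0,1 ./aMD.conf
namd2 +idlepoll +ppn6 +devices 0,1 ./aMD.conf

Here +p sets the total number of worker threads, +ppn the threads per
process, and +devices the list of CUDA device IDs to use.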

Does anybody else have experience with GeForce Titans?

James

2013/11/11 Norman Geist <norman.geist_at_uni-greifswald.de>

> Read something about "Intel Hyper-Threading" (HT).
>
> Yes.
>
> Norman Geist.
>
> *From:* owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] *On
> Behalf Of* James Starlight
>
> *Sent:* Monday, November 11, 2013 09:49
> *To:* Norman Geist; Namd Mailing List
>
> *Subject:* Re: namd-l: Two GPU-based workstation
>
> 2) About cores:
>
> So if my physical core count is 6 (I don't really know why Debian
> recognizes 12 cores), I should use +p6 +ppn3 +devices 0,1 to assign 3
> cores to each GPU, shouldn't I?
>
> 2013/11/11 James Starlight <jmsstarlight_at_gmail.com>
>
> Norman,
>
> Using
>
> pmegridspacing 1;
> fullElectFrequency 2;
>
> instead of
>
> PMEGridSizeX $fftx; # should be close to the cell size
> PMEGridSizeY $ffty; # fftx/y/z correspond to the CHARMM input
> PMEGridSizeZ $fftz;
> pmegridspacing 1;
>
> crashed my simulation right at the start:
>
> namd2 +idlepoll +p12 +ppn6 +devices 0,1 ./aMD.conf >>
> b2ar_p0gDiheBoostlog_20000
> ------------- Processor 0 Exiting: Called CmiAbort ------------
> Reason: FATAL ERROR: Periodic cell has become too small for original patch
> grid!
> Possible solutions are to restart from a recent checkpoint,
> increase margin, or disable useFlexibleCell for liquid simulation.
>
>
> (This time I started from the checkpoint of the previous run, which was
> performed with explicitly defined PME grid dimensions.) Does this mean I
> should restart the simulation from the beginning of the equilibration
> phase with fullElectFrequency > 1, or are there alternative solutions?
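>
> For reference, the "increase margin" workaround named in the error message
> would be a single line in the NAMD config (margin is a standard NAMD
> keyword; the value here is only an illustrative guess, not a tested
> recommendation):
>
> margin 2.0 ; # extra per-patch slack in Angstroms, so the patch
> # grid tolerates some cell shrinkage under the barostat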
>
> James
>
> 2013/11/11 Norman Geist <norman.geist_at_uni-greifswald.de>
>
> Why not simply set "pmegridspacing 1"?
>
> I usually use numcpus/numgpus cores for each GPU. And really, believe me,
> you only have 6 cores, not 12. Also, running 2 simulations on only one CPU
> socket is inefficient.
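>
> (With the 6 physical cores and 2 GPUs discussed in this thread, that rule
> gives 6/2 = 3 cores per GPU, i.e. a +p6 +ppn3 +devices 0,1 launch.)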
>
> Norman Geist.
>
> *From:* owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] *On
> Behalf Of* James Starlight
>
> *Sent:* Saturday, November 9, 2013 15:53
>
> *To:* Namd Mailing List
>
> *Subject:* Re: namd-l: Two GPU-based workstation
>
> By the way, increasing fullElectFrequency above 1 ended the simulation
> with errors about improperly set PME grid dimensions (with
> fullElectFrequency 1 I use 80 80 120 for my membrane protein simulations
> and get no errors). How should I change the PME options?
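>
> For reference, these are the two standard ways to specify the PME grid in
> NAMD (both keyword sets are documented NAMD options; the 80 80 120 values
> are the ones quoted above):
>
> PMEGridSizeX 80 ; # explicit grid dimensions,
> PMEGridSizeY 80 ; # chosen to match the cell
> PMEGridSizeZ 120
>
> PMEGridSpacing 1.0 ; # or let NAMD pick the grid from a
> # maximum spacing (in Angstroms)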
>
> My other question concerns the optimal balance between the number of CPU
> cores and each GPU. Is there some empirical relationship showing how many
> cores are needed per GPU?
>
> Given that I obtained the best performance using
> namd2 +idlepoll +p12 +devices 0 ./aMD.conf
>
> I'd like to split the CPU cores between the two available GPUs to run 2
> parallel simulations.
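>
> A minimal sketch of that setup on a 6-core machine (run1.conf, run2.conf
> and the log names are hypothetical; +devices pins each run to one GPU):
>
> namd2 +idlepoll +p3 +devices 0 ./run1.conf > run1.log &
> namd2 +idlepoll +p3 +devices 1 ./run2.conf > run2.log &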
>
> James
>
> 2013/11/8 James Starlight <jmsstarlight_at_gmail.com>
>
> Could fullElectFrequency 4 increase performance specifically in the
> dual-GPU regime?
>
> If I run two simulations, will it be enough to give each GPU 6 cores? (I
> suspect I did not get good performance in the 2-GPU regime precisely
> because of the small number of cores per GPU.)
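>
> For context, fullElectFrequency is part of NAMD's multiple-timestepping
> scheme: it sets how many basic timesteps pass between full PME
> electrostatics evaluations. A typical block looks like this (standard NAMD
> keywords; the values are common illustrative choices, not taken from this
> thread):
>
> timestep 2.0 ; # fs
> nonbondedFreq 1 ; # short-range nonbonded forces every step
> fullElectFrequency 2 ; # full PME electrostatics every 2 steps
> stepspercycle 10 ; # steps between patch reassignments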
>
> James
>
> 2013/11/7 Ajasja Ljubetič <ajasja.ljubetic_at_gmail.com>
>
> On 7 November 2013 06:32, James Starlight <jmsstarlight_at_gmail.com> wrote:
>
> I've come to the conclusion that using 2 GPUs simultaneously gives me the
> same performance as a single GPU.
>
> Yes, this is expected: for such small systems there is too little work at
> each step to scale efficiently across GPUs. You can, however, run one (or
> two, or three) independent simulations on each GPU.
>
> Regards,
>
> Ajasja

This archive was generated by hypermail 2.1.6 : Tue Dec 31 2013 - 23:23:58 CST