Re: charm++ how to handle SMP/Multicore

From: Bjoern Olausson (bjoern.olausson_at_biochemtech.uni-halle.de)
Date: Thu Aug 20 2009 - 11:41:16 CDT

On Monday 10 August 2009 00:01:48 Bjoern Olausson wrote:
> Hi all,
>
> we have a cluster built from new AMD hexacores (Six-Core AMD Opteron 2427),
> two hexacore CPUs per node. The cluster will mainly run NAMD.
>
> The 24 nodes are connected to each other via DDR InfiniBand.
>
> So now my question:
> Which option should I choose when
> 1) compiling Charm++ with MPI (MVAPICH2) to run on more than 12 cores, and
> 2) compiling a version that should perform best on a single node
> (<= 12 cores)?
>
> These are the options Charm++ provides:
> 1) single-threaded [default]
> 2) multicore (single node only)
> 3) SMP
> 4) POSIX Shared Memory
>
> Thanks a lot for any suggestions.
>
> Kind regards
> Bjoern

Okay, I finally ran all flavours of NAMD and got some help from Chao Mei.

Actually, I wasn't able to get the runtime configuration for the SMP/PXSHM
versions right: I couldn't assign the processes to the nodes so that every node
runs exactly one process with a fixed number of worker threads (one process per
node with 11 worker threads, leaving one core for the communication thread,
would have been the optimum in my case).
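
For the record, the kind of launch line I was aiming for looks roughly like the
one below. Treat it as a sketch only: the exact flag spellings (++ppn in
particular) differ between Charm++ versions and network layers, so check the
charmrun options of your own build before copying it.

  # SMP/ibverbs-smp sketch: 24 nodes x 12 cores,
  # one process per node, 11 worker threads (PEs) per process,
  # leaving the 12th core for the per-process communication thread
  charmrun ++nodelist ./nodelist +p264 ++ppn 11 ./namd2 +setcpuaffinity myconfig.namd

with a nodelist file in the usual charmrun format:

  group main
  host node01
  host node02
  (and so on, one "host" line per node)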

Anyway, here is what I figured out:

Use "multicore" as long as you do not use any interconnect.

If you need to use an interconnect, go for ConnectX DDR InfiniBand and use the
built-in IB support (ibverbs) of Charm++ 6.1.2.
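
The ibverbs flavour, again only as a rough sketch (same caveat about exact arch
and option names):

  # build Charm++ 6.1.2 with native InfiniBand (ibverbs) support
  ./build charm++ net-linux-x86_64 ibverbs --no-build-shared -O -DCMK_OPTIMIZE=1
  # non-SMP run across 24 nodes x 12 cores = 288 processes
  charmrun ++nodelist ./nodelist +p288 ./namd2 myconfig.namd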

I couldn't benchmark against mvapich 1.1 or mvapich 1.4 because I wasn't able
to compile Charm++ against them.
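
If someone else wants to give the MPI layer a try: what I attempted was roughly
the following, with mvapich's mpicxx/mpicc first in the PATH (no guarantee,
since it never built cleanly for me; check the charm source tree for the exact
arch name on 64-bit Linux):

  ./build charm++ mpi-linux-x86_64 --no-build-shared -O -DCMK_OPTIMIZE=1
  # an MPI build is then started via mpirun/mpiexec rather than charmrun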

I didn't test any of the net versions across nodes, because when you can use
InfiniBand, Ethernet becomes uninteresting :-)

But draw your own conclusions from the benchmarks:
http://olausson.de
Direct link:
http://olausson.de/component/content/article/6-Blog/57-benchmarking-different-flavours-of-namd27b1

If you need some more information, let me know.

Cheers
Bjoern

-- 
Bjoern Olausson
Martin-Luther-Universität Halle-Wittenberg 
Fachbereich Biochemie/Biotechnologie
Kurt-Mothes-Str. 3
06120 Halle/Saale
Phone: +49-345-55-24942
