Re: charmrun simulations over macpros

From: Cesar Luis Avila (cavila_at_fbqf.unt.edu.ar)
Date: Wed Jun 20 2007 - 13:30:10 CDT

Indeed, the bottleneck is using Ethernet to interconnect the nodes. Is it
Fast Ethernet or Gigabit Ethernet? Either way, I don't think a Cat6 cable
would improve your performance much. To get better scaling you should
consider upgrading your interconnect, although I don't know whether
Myrinet or InfiniBand is worth the price for just two nodes. Ethernet has
proved to be good enough for connecting up to 8 nodes. I think the
problem you are facing is that the 4 processors on each node are
competing for a single network card. I don't know whether NAMD would
take advantage of having more than one network card on each node (say,
one per CPU).
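For reference, a two-node run like this is usually launched with charmrun and a nodelist file. A minimal sketch follows; the hostnames mac1/mac2, the nodelist path, and the config file name are placeholders for your own setup:

```shell
# Contents of a nodelist file (e.g. ~/nodelist) for the Charm++ net
# build of NAMD; mac1 and mac2 stand in for the two Mac Pros' hostnames.
#   group main ++shell ssh
#     host mac1
#     host mac2

# Launch 8 worker processes (4 per node) across both machines:
charmrun namd2 +p8 ++nodelist ~/nodelist mysim.namd > mysim.log
```

With a setup like this, both NICs still carry all the inter-node traffic, so the per-node contention described above remains.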

pedro.borkowski_at_utoronto.ca wrote:
> Hey all,
>
> I was able to set up two Mac Pros (4 cores each) to run some
> simulations. They are connected over Gigabit Ethernet through a
> CAT5e crossover cable. I was able to run a simulation on all 8 cores,
> but after some benchmarking I found that a simulation on a single
> machine (4 processors) is much faster than a simulation on the two
> machines (8 processors). I looked into it and found that when
> running the simulation, I am only using up to 66% of each processor. I
> was wondering if there is any way to tell the computers to devote as
> much of the processors to the simulation as possible instead of just 66%. I
> was also wondering whether the Ethernet is the limiting factor in these
> simulations. If it is, would a Cat6 crossover cable improve the
> performance of the simulation?
> I also tried running a simulation with both computers logged out of
> their accounts instead of having someone logged in. Logging out made
> the simulation a little faster.
>
> Anyone have any ideas?
>
> Thanks
> -Pedro
>
>

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:44:51 CST