Re: NAMD distribute

From: Brian Bennion (brian_at_youkai.llnl.gov)
Date: Thu Oct 13 2005 - 16:05:59 CDT

Hello Dan,

Yes, your idea is very doable; however:

There is no security by anonymity. The Schulten group found this out last
year, although that problem was linked to other supercomputing centers and
universities. I watched my home Linux server on a DSL connection get
pounded with Windows IIS attacks within five minutes of turning it on.

You have to assume that each machine not physically under your control is
compromised and that any data it generates is suspect. (It may already be
compromised and one just doesn't know it yet.)

Just ponder what might happen if your central controlling server received
a "calculation" packet that was actually a trojan. Now someone else has
control over your entire distribution. So instead of cancer_at_home or
folding_at_home, your distributed computers are running Johntheripper_at_home.
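
To make that point concrete, here is a minimal sketch (Python, with
hypothetical field names and a made-up shared key; this is not how any
existing NAMD or BOINC server actually works) of the sort of check a
central server would need before trusting a returned "calculation" packet:
drop anything whose signature does not verify against a key issued to the
client when it registered.

    import hmac
    import hashlib
    import json

    # Hypothetical per-client secret handed out when the client registers.
    SHARED_KEY = b"per-client-secret-issued-at-registration"

    def verify_result_packet(raw_packet, signature_hex):
        """Return the decoded packet only if its HMAC-SHA256 signature checks out."""
        expected = hmac.new(SHARED_KEY, raw_packet, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature_hex):
            return None  # unsigned or tampered data never reaches the pipeline
        return json.loads(raw_packet)

    # A packet signed with the shared key is accepted; a forged one is dropped.
    packet = json.dumps({"work_unit": 42, "energy": -12345.6}).encode()
    good_sig = hmac.new(SHARED_KEY, packet, hashlib.sha256).hexdigest()
    print(verify_result_packet(packet, good_sig))   # -> decoded dict
    print(verify_result_packet(packet, "00" * 32))  # -> None (rejected)

Even then, a signature only proves who sent the data, not that the numbers
are honest; redundant work units would still be needed for that.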

It's not feasible to watch for all of this. If you are going to localize a
grid to a facility, then you're back to the same idea as buying a large
cluster that you have some (lots of) control over.

A university is the very last place I would look for unused CPU cycles.
You wouldn't find any; the machines are already running some adware or spyware...

Sorry for the cynical view, but there are bad guys out there...

Regards
Brian

On Thu, 13 Oct 2005, Dan Strahs wrote:

>
> Thanks for the responses, Michael and Brian.
>
> So what I gather is that this would be doable if:
> 1) the communication between the grid and server is not significant,
> and 2) the calculations can be secured.
>
> These seem tractable issues for the most part, even for large simulations
> a la Schulten etc. Perhaps even load balancing could be partially
> parallelized and farmed out.
>
> Security will always be a problem in large-scale distributed computing due
> to systemic hacking, but small-scale? If the grid is geographically
> localized to a manageable facility, such as a university or company, then
> security is enhanced by anonymity.
>
> Dan Strahs
>
>
> On Thu, 13 Oct 2005, Brian Bennion wrote:
>
> > Hello,
> >
> > While the idea of distributed computing sounds great and has seen some
> > success, I would never trust the results completely.
> >
> > Even the most popular venture, seti_at_home, has been plagued by problems
> > with secure data transfer. Nefarious people just love throwing random data
> > around or making trojans. Then the data gets picked up and put into a
> > publication. IMHO, the time spent trying to check and secure your data
> > stream isn't worth it and can be better spent sustaining your own machines.
> >
> > Just my 0.02 (you can't tell I work at a secure (paranoid) facility, can you? ;)
> >
> > Regards
> > Brian
> >
> >
> > On Wed, 12 Oct 2005, Dan Strahs wrote:
> >
> > >
> > > Hello:
> > >
> > > I have a bit of an odd idea, but it struck me as I walked past a bank of
> > > computers all running Folding_at_Home. Has anyone done anything towards
> > > integrating NAMD with distributed grid computing methods? While this is
> > > clearly feasible, is it useful?
> > >
> > > Perhaps a NAMD-based screen saver, working with BOINC (Berkeley Open
> > > Infrastructure for Network Computing)? Just a thought - I think I could
> > > harvest some significant CPU cycles if this pans out - the source of the
> > > CPU cycles I hope to claim is currently in the top 10% of Folding_at_Home
> > > teams.
> > >
> > > Dan Strahs
> > >
> > >
> >
> > ************************************************
> > Brian Bennion, Ph.D.
> > Bioscience Directorate
> > Lawrence Livermore National Laboratory
> > P.O. Box 808, L-448 bennion1_at_llnl.gov
> > 7000 East Avenue phone: (925) 422-5722
> > Livermore, CA 94550 fax: (925) 424-6605
> > ************************************************
> >
> >
>

************************************************
  Brian Bennion, Ph.D.
  Bioscience Directorate
  Lawrence Livermore National Laboratory
  P.O. Box 808, L-448 bennion1_at_llnl.gov
  7000 East Avenue phone: (925) 422-5722
  Livermore, CA 94550 fax: (925) 424-6605
************************************************
