Re: remd across gpus

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Mon Mar 31 2014 - 00:39:03 CDT

Hmm O_o ??

 

I’m running REMD without problems across our four dual-GPU nodes with a NAMD 2.10 nightly build from many months ago. Why should this be a problem?

 

Norman Geist.

 

From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On Behalf Of Francesco Pietra
Sent: Saturday, March 29, 2014 08:29
To: NAMD
Subject: namd-l: remd across gpus

 

Hello:

 

On Tue, May 10, 2011 at 4:29 PM, Massimiliano Porrini
<M.Porrini_at_ed.ac.uk> wrote:
> Dear Axel,
>
> Thanks for the useful reply.
>
> Only one clarification:
>
> Your word "mess" was referred only to the hybrid (gpu/cpu) simulation
> or also to the REMD across multiple GPUs ?

mostly.

> I mean, is REMD across GPUs easily doable?

i don't know. as far as i know, the NAMD implementation
is entirely based on Tcl script code using a socket interface.
that is an elegant way of implementing REMD without having
to rewrite the executable. i did something similar a while ago
in python using pypar and HOOMD and it was fun to see it
work, but somehow a bit unsatisfactory.

i know of other codes that have such features and other
multi-replica methods implemented at a lower level, and
they appear to me more elegant to use. that may be
the price to pay.

cheers,
    axel.

 

Translated into the present: we found it difficult to run T-REMD across CPUs/GPUs with either NAMD 2.9 or NAMD 2.10 (REMD hard-coded into the latter), for example on an IBM DX360M3. "Difficult" means that it failed in our hands.

cheers

francesco pietra


This archive was generated by hypermail 2.1.6 : Thu Dec 31 2015 - 23:20:39 CST