From: John Hamre (johnhamre3_at_gmail.com)
Date: Fri Mar 29 2019 - 09:38:55 CDT
Thank you, Norman!
Yes, I am having a difficult time getting the REMD example to run. I will
look into SLURM. I am using 3 GPUs and 12 cores. I can't even get the
example to work on my home CPU without CUDA; there is always some MPI
error. I copy the namd2 executable into the folder I'm working in and run
mpirun ./charmrun -n 3 ./namd2 +replicas 12 ... Thanks again.
John
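For reference, a minimal sketch of how the replica-exchange example that ships with NAMD is typically launched with an MPI build (the file names follow the example's layout; the rank and replica counts are placeholders to adjust to your setup):

    # MPI build: mpirun starts namd2 directly; charmrun is not used here.
    # The number of MPI ranks (-np) should typically be a multiple of +replicas.
    mpirun -np 12 ./namd2 +replicas 12 job0.conf +stdout output/%d/job0.%d.log

Mixing the two launchers (mpirun ./charmrun ... ./namd2) is generally not needed: charmrun is the launcher for the non-MPI network builds, while an MPI build is started by mpirun alone.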
On Fri, Mar 29, 2019 at 10:29 AM Norman Geist <
norman.geist_at_uni-greifswald.de> wrote:
> Hey there,
>
> I’m only using MPI builds of NAMD. If you have a queuing system like SLURM
> with e.g. OpenMPI (built with SLURM support), simply “mpirun namd2 in.conf” is
> enough.
>
> Otherwise, you of course need to tell NAMD (that is, OpenMPI) which resources
> it may use.
>
> I do it like that for any NAMD version, with and without GPU, from version
> 2.10 up to today's 2.13.
>
> Norman
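A minimal sketch of the two cases described above, assuming an MPI build of NAMD and OpenMPI (hosts.txt and the rank count are illustrative placeholders, not taken from this thread):

    # Case 1: under SLURM with a SLURM-aware OpenMPI, mpirun picks up the
    # allocation, so inside the batch script this is enough:
    mpirun namd2 in.conf

    # Case 2: without a queuing system, give mpirun the resources explicitly,
    # e.g. 12 ranks on the machines listed in a host file:
    mpirun -np 12 --hostfile hosts.txt namd2 in.conf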
>
> *From:* owner-namd-l_at_ks.uiuc.edu *On Behalf Of* John Hamre
> *Sent:* Friday, March 29, 2019 15:19
> *To:* NAMD List <namd-l_at_ks.uiuc.edu>
> *Subject:* namd-l: mpi with NAMD
>
> Does anyone use the mpirun command to launch their NAMD simulations?
>
> If so, can you tell me what your command line is (e.g. mpirun -n 3 ./namd2
> XXX) and which versions of MPI and NAMD you run? Thank you.
>