From: Giacomo Fiorin (giacomo.fiorin_at_gmail.com)
Date: Thu Jan 25 2018 - 08:22:24 CST
Hi Souvik, this seems connected to the compilation options. Compiling with
MPI + SMP + CUDA used to give very poor performance, although I haven't
tried it with the new CUDA kernels (2.12 and later).
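For context, a hedged sketch of the build combination being discussed (MPI + SMP + CUDA). The Charm++ arch name, CUDA prefix, and directory names are assumptions based on a typical NAMD 2.12 source tree; check your own Charm++ version and paths before using.

```shell
# Sketch only: building NAMD 2.12 with an MPI-SMP Charm++ backend plus CUDA.
# Arch names and paths are assumptions; adjust to your installation.
cd NAMD_2.12_Source

# Build Charm++ with the MPI machine layer in SMP mode
cd charm-6.7.1
./build charm++ mpi-linux-x86_64 smp --with-production
cd ..

# Configure NAMD against that Charm++ arch, with CUDA enabled
./config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64-smp \
    --with-cuda --cuda-prefix /usr/local/cuda
cd Linux-x86_64-g++
make -j8
```

With an SMP build, the launch line also changes (e.g. `+ppn` to set worker threads per process), which interacts with how many MPI ranks you pass to mpirun.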
On Thu, Jan 25, 2018 at 4:02 AM, Souvik Sinha <souvik.sinha893_at_gmail.com> wrote:
> NAMD Users,
> I am trying to run replica exchange ABF simulations on a machine with 32
> cores and 2 Tesla K40 cards. NAMD_2.12, compiled from source, is what I am
> using.
> From this earlier thread, http://www.ks.uiuc.edu/Research/namd/mailing_list/namd-l.2014-2015/2490.htm
> I found that using "twoAwayX" or "idlepoll" might help put the GPUs to
> work, but in my situation neither is making the GPUs do any work
> ("twoAwayX" is speeding up the jobs, though). The "idlepoll" switch
> generally works fine with CUDA builds of NAMD for non-replica jobs.
> From the aforesaid thread, I gather that running 4 replicas on 32 CPUs and
> 2 GPUs may not give my simulations a big boost, but I just want to check
> whether it works at all.
> The command I am running for the job is:
> mpirun -np 32 /home/sgd/program/NAMD_2.12_Source/Linux-x86_64-g++/namd2
> +idlepoll +replicas 4 $inputfile +stdout log/job0.%d.log
> My understanding is not helping me much, so any advice will be helpful.
> Thank you
> Souvik Sinha
> Research Fellow
> Bioinformatics Centre (SGD LAB)
> Bose Institute
> Contact: 033 25693275
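As a point of comparison, here is a hedged sketch of the same launch line with the GPUs listed explicitly via NAMD's +devices flag, so each replica's ranks are assigned a device rather than relying on automatic detection. The device IDs (0,1) are an assumption; verify them with nvidia-smi first.

```shell
# Sketch only: 4 replicas across 32 MPI ranks, with both K40s made visible
# to NAMD explicitly. Paths and device IDs are assumptions.
mpirun -np 32 /home/sgd/program/NAMD_2.12_Source/Linux-x86_64-g++/namd2 \
    +replicas 4 +idlepoll +devices 0,1 \
    $inputfile +stdout log/job0.%d.log
```

With +replicas 4 and -np 32, each replica gets 8 ranks, and NAMD assigns devices from the +devices list round-robin among the ranks on a node.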
--
Giacomo Fiorin
Associate Professor of Research, Temple University, Philadelphia, PA
Contractor, National Institutes of Health, Bethesda, MD
http://goo.gl/Q3TBQU
https://github.com/giacomofiorin
This archive was generated by hypermail 2.1.6 : Tue Dec 31 2019 - 23:19:37 CST