Launching namd GPU-MPI vs GPU-binary

From: Francesco Pietra (chiendarret_at_gmail.com)
Date: Sat Jul 06 2013 - 02:54:00 CDT

I would be very grateful for help with NAMD 2.9 (CUDA 4.0) on the GPUs of a
shared-memory 6-core machine with two GTX-680 cards, running a large protein
system in a water box.

(1) First, about the MPI compilation. Was

./config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64 --with-cuda --cuda-prefix /usr

correct? Or should '--charm-arch mpi-linux-x86_64' have been omitted?
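
For comparison, this is how I understand the non-MPI (multicore) CUDA build
would be configured; the charm arch name and the CUDA prefix path below are
only my guesses from the 2.9 release notes, not something I have verified:

# MPI + CUDA build (what I used above)
./config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64 --with-cuda --cuda-prefix /usr

# shared-memory (non-MPI) CUDA build, as I read the notes
./config Linux-x86_64-g++ --charm-arch multicore-linux64 --with-cuda --cuda-prefix /usr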

(2) Compiled as above, the command

charmrun $NAMD_HOME/bin/namd2 filename.conf +p6 +idlepoll

resulted in only GPU 0 being used, while GPU 1 stayed dormant (both cards had
been activated with

nvidia-smi -L
nvidia-smi -pm 1

and I checked afterwards that the namd cuda binary does use both GPUs on the
same system). The launch variants I was considering next are sketched below.
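
In case it helps to pinpoint my mistake, these are the launch variants I was
planning to try next; the +devices device list and the mpirun form are only my
reading of the notes for an MPI-CUDA build, not something I have tested yet on
this machine:

# ask explicitly for both cards (device list is my assumption)
charmrun +p6 $NAMD_HOME/bin/namd2 +idlepoll +devices 0,1 filename.conf

# or launch the MPI build directly through mpirun
mpirun -np 6 $NAMD_HOME/bin/namd2 +idlepoll +devices 0,1 filename.conf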

(3) The benchmark time (s/step) from the log file for the cuda-mpi compiled
namd was 80% longer, with less GPU memory used (59 MB), than with the namd
cuda binary running on two GPUs (198 GB total GPU memory engaged), and 70%
longer than with the namd cuda binary with only one GPU installed.

I wonder whether the above procedure (namd compilation and simulation launch)
was correct. All this is in view of using the cuda-mpi build on this machine
to prepare files for a parallel-tempering simulation on a mainframe.

Thanks
francesco pietra
