Re: Running namd on linux in cpu+gpu mode

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Sun Jan 20 2013 - 23:52:48 CST

There is a big NAMD tutorial that also includes some test simulations. As you
can see, NAMD found your GPU and seems willing to use it. But without any
input telling it what to do (simulation input such as a config file), NAMD
will of course stop with this message. The simulation config file mostly
doesn't differ from the CPU version. Most people use multiple timestepping
with fullElectFrequency 4 and nonbondedFreq 2 to increase the gain from the
GPU, since it reduces PCIe communication.
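
For example, a minimal sketch of that part of the config (the timestep value
is only an illustration; adjust it to your system):

timestep            1.0    # fs per step; illustrative value
nonbondedFreq       2      # short-range nonbonded forces every 2 steps
fullElectFrequency  4      # full electrostatics every 4 steps

You then pass the config file on the command line, e.g.
./namd2 +idlepoll +p4 mysim.conf, where mysim.conf is a placeholder name.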

Norman Geist.

> -----Original Message-----
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> Behalf Of James Starlight
> Sent: Saturday, January 19, 2013 08:20
> To: namd-l_at_ks.uiuc.edu
> Subject: namd-l: Running namd on linux in cpu+gpu mode
>
> Dear NAMD users!
>
> I have some questions about using NAMD on a 64-bit Linux
> workstation on which some other MD packages have been installed.
>
> 1) I want to use NAMD with CUDA GPU acceleration on my Core i5 +
> GTX 670 workstation.
>
> I've downloaded and installed the latest NAMD binaries, in which I
> found the libcudart.so.4 library.
>
> The main problem is that on that workstation I had already
> installed the latest CUDA 5 libraries in the /usr/local/cuda-5/ dir,
> on which some other software depends.
>
> Consequently, in my .bashrc I have
>
> export LD_LIBRARY_PATH=/usr/local/openmm/lib:/usr/local/cuda-5.0/lib64:/usr/local/cuda-5.0/lib:/usr/lib/nvidia:$LD_LIBRARY_PATH
>
> So I don't want to add /home/own/NAMD (where the CUDA .so libraries
> for NAMD are located) to it, because of a possible conflict between
> the two CUDA versions, as well as with the software that depends on
> them. Should I compile NAMD binaries against my CUDA, or are there
> other ways to solve this problem?
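>
> (One workaround I considered, just an untested sketch: prepend the
> NAMD dir only for the namd2 process itself, so that the rest of the
> system keeps seeing the cuda-5.0 libraries:
>
> LD_LIBRARY_PATH=/home/own/NAMD:$LD_LIBRARY_PATH ./namd2 +idlepoll +p4
>
> I am not sure, however, whether mixing the bundled libcudart.so.4
> with the system-wide CUDA 5 in this way is safe.)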
>
> 2) When I test the installed NAMD binaries from the NAMD dir by means
> of ./namd2 +idlepoll +p4 +devices 0
>
> I obtained
>
> Charm++: standalone mode (not using charmrun)
> Converse/Charm++ Commit ID: v6.5.0-beta1-28-gabc7887
> CharmLB> Load balancer assumes all CPUs are same.
> Charm++> Running on 1 unique compute nodes (4-way SMP).
> Charm++> cpu topology info is gathered in 0.004 seconds.
>
> Charm++> Warning: the number of SMP threads is greater than the number
> of physical cores, use +CmiNoProcForComThread runtime option.
>
> Info: NAMD CVS-2013-01-18 for Linux-x86_64-multicore-CUDA
> Info:
> Info: Please visit http://www.ks.uiuc.edu/Research/namd/
> Info: for updates, documentation, and support information.
> Info:
> Info: Please cite Phillips et al., J. Comp. Chem. 26:1781-1802 (2005)
> Info: in all publications reporting results obtained with NAMD.
> Info:
> Info: Based on Charm++/Converse 60401 for multicore-linux64-iccstatic
> Info: Built Fri Jan 18 02:25:42 CST 2013 by jim on lisboa.ks.uiuc.edu
> Info: 1 NAMD CVS-2013-01-18 Linux-x86_64-multicore-CUDA 4
> starlight own
> Info: Running on 4 processors, 1 nodes, 1 physical nodes.
> Info: CPU topology information available.
> Info: Charm++/Converse parallel runtime startup completed at 0.019299 s
> Pe 2 physical rank 2 binding to CUDA device 0 on starlight: 'GeForce
> GTX 670' Mem: 2047MB Rev: 3.0
> Pe 3 physical rank 3 will use CUDA device of pe 2
> Pe 1 physical rank 1 will use CUDA device of pe 2
> Pe 0 physical rank 0 will use CUDA device of pe 2
> FATAL ERROR: No simulation config file specified on command line.
> ------------- Processor 0 Exiting: Called CmiAbort ------------
> Reason: FATAL ERROR: No simulation config file specified on command
> line.
>
>
> Does it mean that my GPU was recognized correctly? (Actually, I want
> to run the simulation in GPU+CPU (4 cores) mode.) Could you also
> provide me with a test config file and some test system suitable for
> simulation on a GPU?
>
> Thanks for help,
>
> James
