Re: Two GPU-based workstation

From: James Starlight (jmsstarlight_at_gmail.com)
Date: Thu Oct 31 2013 - 12:24:16 CDT

Dear Namd users,

I've built my new workstation consisting of two Titans and a 6-core Intel
CPU (Linux recognizes it as a 12-core processor due to hyper-threading, but
it actually has 6 physical cores).
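
(As a quick sanity check, lscpu distinguishes the physical cores from the
hyper-threaded logical CPUs; the sample output below is illustrative, not
from my machine:

  lscpu | grep -E '^(Thread|Core|Socket)'
  # Thread(s) per core:    2   <- hyper-threading enabled
  # Core(s) per socket:    6   <- 6 physical cores
  # Socket(s):             1
)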

Then I installed the latest NVIDIA CUDA 5.5 packages (driver, toolkit, and
samples) and defined all the paths in my bash profile as well as in the
ldconfig files.
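
(For reference, a minimal sketch of that setup, assuming the toolkit is
installed under the default /usr/local/cuda-5.5; adjust to your install:

  # in ~/.bashrc
  export PATH=/usr/local/cuda-5.5/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64:$LD_LIBRARY_PATH

  # or system-wide via ldconfig:
  echo /usr/local/cuda-5.5/lib64 | sudo tee /etc/ld.so.conf.d/cuda.conf
  sudo ldconfig
)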

When I tried to launch NAMD I got an error that libcudart.so.4 was not
found (indeed, in cuda/lib and cuda/lib64 only libcudart.so.5 files are
present).

I found libcudart.so.4 only in VMD's folder, and once I added that path in
bash, NAMD worked.
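
(Concretely, "adding the path in bash" means something like the following,
where /usr/local/lib/vmd stands for whatever directory actually holds
libcudart.so.4 on your system:

  export LD_LIBRARY_PATH=/usr/local/lib/vmd:$LD_LIBRARY_PATH

If the NAMD binary distribution ships its own libcudart.so.4 next to the
namd2 executable, pointing LD_LIBRARY_PATH at the NAMD directory instead
would be cleaner.)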

Should I make some modification to replace libcudart.so.5 with
libcudart.so.4? My NAMD output is:

CharmLB> Load balancer assumes all CPUs are same.
Charm++> Running on 1 unique compute nodes (12-way SMP).
Charm++> cpu topology info is gathered in 0.001 seconds.
Info: NAMD CVS-2013-10-31 for Linux-x86_64-multicore-CUDA
Info:
Info: Please visit http://www.ks.uiuc.edu/Research/namd/
Info: for updates, documentation, and support information.
Info:
Info: Please cite Phillips et al., J. Comp. Chem. 26:1781-1802 (2005)
Info: in all publications reporting results obtained with NAMD.
Info:
Info: Based on Charm++/Converse 60500 for multicore-linux64-iccstatic
Info: Built Thu Oct 31 02:26:47 CDT 2013 by jim on lisboa.ks.uiuc.edu
Info: 1 NAMD CVS-2013-10-31 Linux-x86_64-multicore-CUDA 1 drunk_telecaster own
Info: Running on 1 processors, 1 nodes, 1 physical nodes.
Info: CPU topology information available.
Info: Charm++/Converse parallel runtime startup completed at 0.00467801 s
Did not find +devices i,j,k,... argument, using all
Pe 0 physical rank 0 binding to CUDA device 0 on drunk_telecaster: 'GeForce GTX TITAN' Mem: 6143MB Rev: 3.5
FATAL ERROR: No simulation config file specified on command line.
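
(For reference: the "+devices" line just means no explicit device list was
given, and the fatal error only means no config file was passed. With a
single PE, as in this dry run, only device 0 gets bound; to actually use
both GPUs, launch with several threads and an explicit device list, for
example (mysim.conf is a placeholder for your config file):

  ./namd2 +p12 +idlepoll +devices 0,1 mysim.conf
)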

Does this mean that both of my GPUs are ready to use?

James

2013/10/30 Norman Geist <norman.geist_at_uni-greifswald.de>

> 1 - No.
>
> 2 - Doesn't matter here in most cases.
>
> Norman Geist.
>
> *From:* owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu]
> *On Behalf Of* James Starlight
>
> *Sent:* Wednesday, October 30, 2013 14:13
> *To:* Namd Mailing List
>
> *Subject:* Re: namd-l: Two GPU-based workstation
>
> Some extra questions:
>
> 1 - Do I need special drivers to optimize dual-GPU use on Debian?
>
> 2 - Should I compile NAMD from source for optimal performance?
> Previously I've used the NAMD binaries (using 1 GPU + 4 cores of an i5).
>
> James
>
> 2013/10/29 Norman Geist <norman.geist_at_uni-greifswald.de>
>
> >Just remember, NAMD is very memory bandwidth hungry.
>
> Guess you mean PCIE bandwidth hungry?
>
> Norman Geist.
>
> *From:* owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu]
> *On Behalf Of* Ajasja Ljubetic
> *Sent:* Tuesday, October 29, 2013 14:34
> *To:* James Starlight
> *Cc:* Norman Geist; Namd Mailing List
>
> *Subject:* Re: namd-l: Two GPU-based workstation
>
> As I've said, I have a typical desktop with 6 PCIe slots and 1 Core i7 (4
> cores) + 2 GPUs + 4 RAM slots (each 4 GB or 8 GB, I don't remember exactly
> now :) ). I'd like to run simulations (water-soluble as well as membrane
> proteins, using NAMD with explicit solvent; 50k and 80k atoms
> respectively) using both GPUs simultaneously plus the CPU for one run.
>
> What other modifications to my desktop, or to the simulation parameters,
> should I take into account? Are any other special drivers needed for a
> typical Linux-based multi-GPU workstation?
>
> Make sure to pick a motherboard with two 16x-speed PCIe slots (probably
> PCIe 3.0?). Personally, I don't think you will see the scaling you desire,
> i.e., the GPUs will be underutilized. But then again, YMMV. Just remember,
> NAMD is very memory bandwidth hungry.
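>
> (One way to verify that both slots actually run at the full link width,
> with illustrative output:
>
>   nvidia-smi -q | grep -i -A 2 'Link Width'
>   #   Link Width
>   #       Max     : 16x
>   #       Current : 16x
>
> or check the "LnkSta:" lines in "sudo lspci -vv" for each GPU.)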
>
> Best regards,
>
> Ajasja
>
