Re: CUDA version status

From: Dow_Hurst (dhurst_at_mindspring.com)
Date: Fri Jan 23 2009 - 13:44:07 CST

Peter,
Thanks so much for the information! Some additional questions that occurred to me after re-reading the SC08 paper:

1. Do you know if GPU process scheduling is going to be part of the 2.7beta2 release, or will that be handled separately? Dedicating a single CPU core per GPU seems the only practical way to build out a cluster until a good GPU process scheduler becomes integrated. Showerman's paper discussed the Phoenix framework as one approach to this general problem.

2. Is DDR InfiniBand required for scaling to large node counts, or is SDR sufficient? SDR currently seems adequate, but inter-node communication increases with GPU acceleration. I guess you may not have had a chance to scale beyond the size of the QP cluster at NCSA to test this?

3. If each node needs a DDR InfiniBand PCI-Express slot plus two PCI-Express 2.0 x16 slots to support 8 cores and 8 GPUs, are there motherboards that support this in a 1U or 2U form factor? Such a configuration would require two NVIDIA Tesla S1070 units per dual-CPU, quad-core computational node. Going denser than this could make it a real problem to find commodity hardware that fits the requirements!
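For what it's worth, the stopgap implied in question 1 (one process pinned per GPU, absent a real scheduler) amounts to a round-robin mapping of each node-local process rank onto that node's GPUs. A minimal sketch in Python; the `assign_gpus` helper and the rank/host layout are hypothetical illustrations, not NAMD's actual scheduling code:

```python
def assign_gpus(ranks_to_hosts, gpus_per_node):
    """Map each global rank to a GPU id on its host, round-robin.

    ranks_to_hosts: list of (global_rank, hostname) pairs.
    Returns {global_rank: gpu_id}. Hypothetical helper, not NAMD code.
    """
    local_counts = {}   # processes seen so far on each host
    assignment = {}
    for rank, host in sorted(ranks_to_hosts):
        local_rank = local_counts.get(host, 0)
        local_counts[host] = local_rank + 1
        # Without a GPU scheduler, pin the process to device
        # (local_rank mod gpus_per_node) on its own node.
        assignment[rank] = local_rank % gpus_per_node
    return assignment

# Example: 8 ranks spread over two 4-GPU nodes -> one GPU per process
layout = [(r, "node%d" % (r // 4)) for r in range(8)]
print(assign_gpus(layout, 4))
# -> {0: 0, 1: 1, 2: 2, 3: 3, 4: 0, 5: 1, 6: 2, 7: 3}
```

With 8 cores and 8 GPUs per node (question 3), this degenerates to a one-to-one pinning, which is exactly why the one-core-per-GPU build-out looks like the only safe ratio until a scheduler handles contention.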

Best wishes,
Dow

-----Original Message-----
>From: Peter Freddolino <petefred_at_ks.uiuc.edu>
>Sent: Jan 22, 2009 4:40 PM
>To: Dow_Hurst <Dow.Hurst_at_mindspring.com>
>Cc: namd-l_at_ks.uiuc.edu
>Subject: Re: namd-l: CUDA version status
>
>Hi Dow,
>the version currently in CVS isn't usable for production runs due to a
>couple of missing features (most notably, 1-4 exclusions). A fully working
>implementation is scheduled for namd 2.7beta2; while I don't have an
>exact release date, I've been informed that there will be heart attacks
>if that hasn't been released within 3 months, so that should give a good
>rough estimate.
>Peter
>
>Dow_Hurst wrote:
>> I've found two excellent papers on NAMD/CUDA development:
>>
>> http://portal.acm.org/ft_gateway.cfm?id=1413379&type=pdf&doid2=1413370.1413379
>> http://www.ncsa.uiuc.edu/Projects/GPUcluster/qc_tr.pdf
>>
>> The papers describe the current hardware and software developments and show that the STMV and ApoA1 benchmarks run fine and deliver good speedups. A problem remains with scheduling processes reliably on GPUs, though it may be addressed by the Phoenix application development framework. So, from reading these papers, I think it is just a matter of time before NAMD/CUDA becomes available as "stable". It is not clear how long that will take, or whether NAMD 2.7 will be released with a CUDA-enabled "stable" feature set.
>>
>> Has anyone on the list tried protein systems other than the benchmarks on QP, the GPU-enabled cluster at NCSA, to see how they compare to traditional NAMD? Has anyone tried a protein in a bilayer?
>> Thanks,
>> Dow Hurst
>>
>

This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:52:17 CST