Re: Is GPU double-precision floating point performance important for NAMD?

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Fri Jun 03 2016 - 01:49:06 CDT

Hi,

 

as far as I know, NAMD doesn’t use any double precision numbers on the GPU, so your assumptions are right so far.

 

Tesla cards are meant for professional clusters, not for individual workstations. Their advantages for big cluster systems are:

1. Longer product cycle

2. Built for 24/7 use

3. Some exclusive features like GPU-Direct

4. ECC support (makes it easier to find a broken GPU among many)

Disadvantages are:

1. Expensive

2. Lower clock for the sake of stability and reliability

3. Slower due to ECC, for the sake of stability and reliability

 

So it’s common knowledge that GeForce cards are faster, as they have higher clocks and lack ECC, but they are therefore likely to die earlier. That doesn’t matter much if you only have one of them, since you could simply buy any newer card as a replacement, but it would matter if you had hundreds of them in a cluster, where you would first have to find the broken one and would then want to replace it with the same model.
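
If you want to see what your own cards report, here is a rough sketch using the plain CUDA runtime API (just an illustration; it assumes the CUDA toolkit is installed and is compiled with nvcc):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // clockRate is reported in kHz; ECCEnabled is only non-zero on
        // cards that support ECC and have it switched on (Tesla/Quadro).
        printf("GPU %d: %s, %d SMs, %.0f MHz, ECC %s\n",
               i, prop.name, prop.multiProcessorCount,
               prop.clockRate / 1000.0, prop.ECCEnabled ? "on" : "off");
    }
    return 0;
}

On a GeForce card the ECC field will always read "off", which is exactly the trade-off described above.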

 

Hope that clears up some things ;)

 

Norman Geist

 

From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On Behalf Of Mert Gür
Sent: Thursday, June 2, 2016 23:48
To: NAMD list <namd-l_at_ks.uiuc.edu>
Subject: namd-l: Is GPU double-precision floating point performance important for NAMD?

 

Dear all,

 

When I am running GPU-accelerated MD simulations in NAMD, are any double-precision floating-point calculations performed on the GPU? In other words, how much should I care about the double-precision performance of the GPU if I am running NAMD?

 

Is there any source/document that explains this extensively?

 

What double-precision floating-point performance should I be looking for in a GPU?

 

 

For example, the K80 has:

Peak double-precision floating-point performance: 1.87 TFLOPS

Peak single-precision floating-point performance: 5.6 TFLOPS

CUDA cores: 4,992

Memory size per board (GDDR5): 24 GB

 

whereas the Titan X has (if I am not mistaken):

 

Peak double-precision floating-point performance: 200 GFLOPS

Peak single-precision floating-point performance: 7 TFLOPS

CUDA cores: 3,072

 

The new GTX 1080 is said to deliver 10.7 TFLOPS of single-precision performance.

 

So, if GPU double-precision performance is not that important for NAMD:

 

Obviously the GTX 1080 and Titan X would give me better single-precision performance than the K80. Doesn't that mean I would get faster MD simulations on the GTX 1080 compared to the others?

 

If that is the case why should I select a K80?

 

 

Does double-precision performance of the GPU matter if I apply any type of bias or perform accelerated MD?

 

My understanding (from past emails on the list) is that the "ECC error correction" GPU feature is also not (that) important for NAMD. Can someone elaborate on this or point me to a link/document that I can read?

 

 

Thanks,

 

Mert
