From: Jim Phillips (jim_at_ks.uiuc.edu)
Date: Wed May 18 2011 - 17:37:45 CDT
At the beginning of the run there is probably a line like this:
Info: 59.8477 MB of memory in use based on /proc/self/stat
Would you by chance be using CUDA 4.0 drivers? We have a machine with
256GB of memory that shows 272g VIRT in top for any process using CUDA,
but machines with older drivers don't have this issue.
What I think is happening is that /proc/self/stat's vsize field includes
the entire virtual address space reserved by CUDA, which is pretty
useless for most purposes but does at least match what top reports in
its VIRT column.
Since this looks like an unavoidable property of the new CUDA drivers,
I've modified memusage.C to avoid reporting vsize in CUDA builds.
On Wed, 18 May 2011, Jesper Sørensen wrote:
> I have just been benchmarking our new cluster with GPUs, and the memory usage that NAMD prints out at the end of the simulation run is MUCH larger with GPUs than without.
> With CUDA:
> WallClock: 1246.348877 CPUTime: 1246.348877 Memory: 41306.324219 MB
> Without CUDA
> WallClock: 4025.868896 CPUTime: 4025.868896 Memory: 350.937500 MB
> I am assuming that the Memory number with CUDA is wrong - mostly because I know that we don't have that much memory in these new machines. Is it taking memory on the GFX-card into account, or what is going on?
> I've looked through the mailinglist, but I haven't been able to find anything on this issue...
> Best regards,
> Jesper Sørensen
> Dept. of Chemistry
> Aarhus University, Denmark
This archive was generated by hypermail 2.1.6 : Mon Dec 31 2012 - 23:20:17 CST