From: Jesper Sørensen (lists_at_jsx.dk)
Date: Wed May 18 2011 - 08:41:34 CDT
It is getting a bit off topic now... It is not that I don't have different
versions installed, I do, but mainly I try to stick with installing the
final versions. Modules is a very good solution; mostly we just use
different directories and reset the paths... I am not opposed to upgrading
to NAMD 2.8b2, but I don't see a reason to right now, as NAMD 2.7 has all the
features I/we need...
As far as I know/understand there wasn't a memory leak bug in NAMD 2.7 final
with CUDA, so I still haven't really figured out the answer to my original
question. If it is a bug, then of course I need to upgrade, and that is a
From: Axel Kohlmeyer [mailto:akohlmey_at_gmail.com]
Sent: 18 May 2011 15:29
To: Jesper Sørensen
Cc: Norman Geist; Namd Mailing List
Subject: Re: namd-l: CUDA simulation memory usage
2011/5/18 Jesper Sørensen <lists_at_jsx.dk>:
> Thanks Norman - I don't have a problem with an increase in memory during
the simulation, so I don't know exactly what is going on...
> I would prefer not upgrading to NAMD 2.8 until it is a "final" (or at
least not beta) version. NAMD will be shared by many users on our cluster,
so I prefer not to have test-versions out there, people can have those on
their own desktops if there are new functionalities they would like to check
out before the final release (in my opinion).
this is extremely weak reasoning.
you can just install software in a different directory and then reset the
path to it, or just use absolute paths.
...and there are even more elegant solutions.
have you ever heard of the "modules" software?
it very conveniently allows you to have multiple versions of the same
software installed concurrently.
people just use: module load namd/2.7
or: module load namd/2.8
in their job script and they get the version they want. ...and this is just
the tip of the iceberg.
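As a sketch of what this looks like in practice (the module names, core count, and file names here are illustrative assumptions, not a site-specific recipe), a batch job script would select a version like this:

```shell
#!/bin/bash
# Illustrative job-script fragment: select a NAMD version via
# environment modules instead of hard-coding an install path.
module load namd/2.7        # or: module load namd/2.8
# The loaded module puts the matching namd2 binary on $PATH:
namd2 +p8 equil.conf > equil.log
```

Users who want a different version change only the `module load` line; the rest of the script stays identical.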
> Best regards,
> -----Original Message-----
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> Behalf Of Norman Geist
> Sent: 18 May 2011 14:11
> To: 'Jesper Sørensen'
> Cc: Namd Mailing List
> Subject: Re: namd-l: CUDA simulation memory usage
> Hi Jesper,
> For me the memory consumption of the CUDA NAMD is also a bit greater than
on CPU, but I think that's quite normal. I had a problem with a constant
increase of memory during the simulation; maybe it's the same for you. For me
a newer nightly build solved the problem by fixing a memory leak in
trajectory output. Maybe try the newer NAMD 2.8b2.
> Best regards
> Norman Geist.
> -----Original Message-----
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> Behalf Of Jesper Sørensen
> Sent: Wednesday, May 18, 2011 13:15
> To: c00jsw00_at_nchc.narl.org.tw
> Cc: 'namd-l'
> Subject: RE: namd-l: CUDA simulation memory usage
> Thanks, but that doesn't really solve my problem with NAMD...
> -----Original Message-----
> From: c00jsw00_at_nchc.narl.org.tw [mailto:c00jsw00_at_nchc.narl.org.tw]
> Sent: 18 May 2011 13:09
> To: lists
> Subject: Re: namd-l: CUDA simulation memory usage
> Dear Sir,
> I used the command "nvidia-smi" to view the utilization of my GPU card.
I found that the GPU utilization under NAMD 2.8b2 was very low (~13%), but
under ACEMD it was very high (~100%).
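For reference, a common way to watch this continuously (a sketch; it assumes a Linux node with the standard `watch` utility and the NVIDIA driver tools installed):

```shell
# Refresh nvidia-smi output every second while a run is active;
# the "GPU-Util" column shows how busy each card is.
watch -n 1 nvidia-smi
```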
> -----Original message-----
> From:Jesper Sørensen <lists_at_jsx.dk>
> To:'namd-l' <namd-l_at_ks.uiuc.edu>
> Date:Wed, 18 May 2011 12:03:17 +0200
> Subject:namd-l: CUDA simulation memory usage
> I have just been benchmarking our new cluster with GPUs, and the memory
usage that NAMD prints out at the end of the simulation run is MUCH larger
with GPUs than without them.
> With CUDA:
> WallClock: 1246.348877 CPUTime: 1246.348877 Memory: 41306.324219 MB
> Without CUDA
> WallClock: 4025.868896 CPUTime: 4025.868896 Memory: 350.937500 MB
> I am assuming that the Memory number with CUDA is wrong, mostly because I
know that we don't have that much memory in these new machines. Is it taking
memory on the GFX card into account, or what is going on?
> I've looked through the mailing list, but I haven't been able to find
anything on this issue...
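One way to sanity-check the reported figure (a diagnostic sketch; the process name `namd2` is an assumption, and this must run on the compute node while the simulation is active) is to compare it against what the OS itself sees:

```shell
# Host RAM actually in use on the node, in MB:
free -m
# Resident (RSS) vs. virtual (VSZ) size of the NAMD process, in KB.
# CUDA runtimes reserve large virtual address ranges, so VSZ can be
# far larger than the memory physically in use (RSS):
ps -C namd2 -o rss=,vsz=,comm=
```

If RSS is modest while VSZ is tens of gigabytes, the large number NAMD reports is likely virtual address space reserved by the CUDA runtime rather than physical RAM actually consumed.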
> Best regards,
> Jesper Sørensen
> Dept. of Chemistry
> Aarhus University, Denmark
-- Dr. Axel Kohlmeyer akohlmey_at_gmail.com http://goo.gl/1wk0 Institute for Computational Molecular Science Temple University, Philadelphia PA, USA.
This archive was generated by hypermail 2.1.6 : Mon Dec 31 2012 - 23:20:17 CST