From: Jérôme Hénin (jhenin_at_ifr88.cnrs-mrs.fr)
Date: Mon Mar 28 2011 - 09:51:21 CDT
The colvars are still not parallelized, but:
1) there have been general improvements in NAMD that could affect this;
2) the amount of data that the colvars pass around has been reduced in
many cases: this is listed as "reduced overhead" on the features page.
It should improve scaling.

Finally, if your problems persist, send me the data needed to
reproduce your calculation and I'll have a look.
Best,
Jerome
On 28 March 2011 15:40, Wang Yi <dexterwy_at_gmail.com> wrote:
> Hi Jérôme,
> I checked the "new features" page of the 2.8b1 release. The colvars
> section doesn't mention any parallelization improvement, so I'm a bit
> doubtful. But as you suggested, I will definitely give it a shot and
> keep you all updated. Thanks again for the kind help~
> Best,
> ___________________________
> Yi (Yves) Wang
> Duke University
>
> On 28 March 2011, at 9:33 AM, Jérôme Hénin wrote:
>
> Hi Yi,
>
> There have been improvements in this respect recently. Can you try
> again with 2.8b1?
>
> Thanks,
> Jerome
>
> On 22 March 2011 18:54, Wang Yi <dexterwy_at_gmail.com> wrote:
>
> Hi Jerome,
>
> I tried turning off the colvars output. It didn't appear to change the
> performance: I got the same benchmark time of 0.5 days/ns on the same
> node with colvarsTrajFrequency set to both 0 and 100. Therefore, it
> does not seem to be an I/O issue. Here is my colvars input:
>
> colvarsTrajFrequency 100
>
> colvar {
>     name complex
>     distance {
>         group1 { atomNumbersRange 1 - 126 }
>         group2 { atomNumbersRange 127 - 156 }
>     }
> }
>
> harmonic {
>     colvars complex
>     centers 5.00
>     forceConstant 45.0
> }
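>
> (That is, a single distance colvar between the two atom groups, with a
> harmonic bias of roughly ½ k (d − 5.0)², where k = 45.0; assuming the
> default colvars units, that is kcal/mol/Å² for k and Å for d.)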
>
> Thanks.
>
> ___________________________
> Yi (Yves) Wang
> Duke University
>
> On 22 March 2011, at 12:30 PM, Jérôme Hénin wrote:
>
> Hi,
>
> This is strange indeed. Is it possible that the colvars are I/O-bound
> here? Are the colvars writing large volumes of data? Can you try
> setting colvarsTrajFrequency to 0?
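>
> For example, keeping the rest of your colvars input unchanged:
>
>   colvarsTrajFrequency 0   # 0 disables writing the .colvars.traj file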
>
> Also, you can generally request a full node even if you do not use all
> of its cores. Indeed, job queuing systems often assume this (and
> accordingly charge you for the whole node).
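>
> As an illustration (PBS/Torque syntax; other schedulers differ), a full
> 8-core node can be requested with
>
>   #PBS -l nodes=1:ppn=8
>
> while still launching NAMD on only 6 of those cores.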
>
> If the performance problem persists, you can post your colvars input
> file and we'll have a look.
>
> Best,
> Jerome
>
> 2011/3/21 Wang Yi <dexterwy_at_gmail.com>:
>
> Hi Giacomo,
>
> Thanks for your reply.
>
> The colvar applies a harmonic potential restraining about 90
> non-hydrogen atoms. It is odd because, without the colvar, I can scale
> up to 10-12 CPUs and get a benchmark time of 0.18 days/ns.
>
> Best,
> ___________________________
> Yi (Yves) Wang
> Duke University
>
> On 21 March 2011, at 4:46 PM, Giacomo Fiorin wrote:
>
> Hello Yi, does the benchmark time have the same trend without colvars
> enabled? And how many atoms did you include in the collective variables
> you defined?
>
> Giacomo
>
> ---- ----
> Dr. Giacomo Fiorin
> ICMS - Institute for Computational Molecular Science - Temple University
> 1900 N 12th Street, Philadelphia, PA 19122
> giacomo.fiorin_at_temple.edu
> ---- ----
>
> On Mon, Mar 21, 2011 at 2:49 PM, Wang Yi <dexterwy_at_gmail.com> wrote:
>
> NAMDers,
>
> Greetings from a new subscriber, Yves.
>
> I have been using the NAMD 2.7 colvars module for umbrella sampling
> simulations. The system has about 3000 solvent molecules (water and
> DMSO), and the solute is about 1500 Da. Here is what strikes me as
> strange:
>
> Say I submit the job to a node with an 8-core CPU and request 6 cores.
> The benchmark time is about 0.6-0.8 days/ns. I have also tried
> requesting 4, 5, 7, and 8 cores; in every case the benchmark time
> increased to about 2-3 days/ns. I even submitted the job to a 12-core
> node, and the benchmark time was still about 2 days/ns.
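>
> For reference, I launch the 6-core runs along these lines (run.conf
> stands in for the actual NAMD config file):
>
>   charmrun +p6 namd2 run.conf > run.log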
>
> The other observation is that if my job is using 6 cores of an 8-core
> CPU and another job is using 1 core on the same CPU, my calculation
> slows down significantly.
>
> I read in the colvars manual that the colvars calculation is not
> parallelized along with the MD simulation, and I suspect this is the
> reason. It is rather frustrating: on an 8-core CPU I can't use all 8
> cores because that is slow, but if I use 6 cores and leave the other 2
> open, someone else may take them, which eventually slows my simulation
> down as well.
>
> I'm just wondering whether anyone has had a similar experience, or
> better yet a solution. Thanks a lot~
>
> Best,
> ___________________________
> Yi (Yves) Wang
> Duke University