From: Nicholus Bhattacharjee (nicholusbhattacharjee_at_gmail.com)
Date: Tue Oct 25 2022 - 07:43:04 CDT
Hello Daipayan,
Thanks for the message. Yes, it now shows ns/day. But I have another small
problem: I have fixedAtoms on in my configuration file, and namd3 fails
when I try to run with it. I thought there might be a note about this in the
user guide, but the namd3 user guide still lists the fixedAtoms parameter.
Kindly let me know how to handle this.
P.S. It works with extraBonds but not with fixedAtoms.
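For reference, a minimal sketch of harmonic positional restraints, which
some users substitute for fixedAtoms when a GPU-resident build rejects it
(the file name and scaling value are placeholders, not a confirmed fix):

    constraints        on               ;# harmonic positional restraints
    consref            restraints.pdb   ;# reference coordinates (placeholder)
    conskfile          restraints.pdb   ;# per-atom force constants
    conskcol           B                ;# constants read from the B column
    constraintScaling  10.0             ;# stiff restraints approximate fixed atoms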
Thanks
On Tue, Oct 25, 2022, 12:16 PM Daipayan Sarkar <sdaipayan_at_gmail.com> wrote:
> Hi Nicholus,
>
>
>
> Do you have “CUDASOAintegrate on” in your NAMD configuration script? The
> same applies to the keyword margin; check
> https://www.ks.uiuc.edu/Research/namd/alpha/3.0alpha/
>
> I say this because you mentioned in your email that the namd3 log gives you
> “0.030 days/ns”. Usually, the benchmark lines in a namd3 log are in
> “ns/day”. Please check and confirm that CUDASOAintegrate is turned on. Also,
> it seems your system is already minimized, so you can perform equilibration
> dynamics from the minimization output. When using NAMD3, comment out
> minimize in your NAMD configuration script.
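> For example, a minimal sketch (the margin value is a placeholder; check the
> alpha notes linked above for what your build expects):
>
>     CUDASOAintegrate  on     ;# GPU-resident integration in NAMD3
>     margin            4      ;# placeholder value; see the alpha notes
>     # minimize 1000          ;# comment out minimization for dynamics runs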
>
>
>
> In addition, please follow the steps listed under the section “Getting
> good performance with NAMD v3” (
> https://developer.nvidia.com/blog/delivering-up-to-9x-throughput-with-namd-v3-and-a100-gpu/ ).
> Note, the performance there was benchmarked on A100 GPUs and will differ
> on V100 GPUs for a similar system size, but I expect it to be better than
> what you currently have.
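> As a concrete example, a single-GPU namd3 launch often looks like this
> (the thread count, device index, and file name are illustrative):
>
>     namd3 +p4 +devices 0 equil.namd
>
> With CUDASOAintegrate on, a small +p count is usually enough, since the
> integration runs on the GPU.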
>
>
>
> -Daipayan
>
>
>
> *From: *Nicholus Bhattacharjee <nicholusbhattacharjee_at_gmail.com>
> *Date: *Tuesday, October 25, 2022 at 4:21 AM
> *To: *Daipayan Sarkar <sdaipayan_at_gmail.com>
> *Cc: *"namd-l_at_ks.uiuc.edu" <namd-l_at_ks.uiuc.edu>, René Hafner TUK <
> hamburge_at_physik.uni-kl.de>
> *Subject: *Re: namd-l: Query regarding performance
>
>
>
> Hello René and Daipayan,
>
>
>
> I am now using namd3. The system is a little bigger, 85k atoms instead of
> the earlier 80k. I have also changed the following parameters:
>
>
>
> timestep 2 (it was 1 earlier)
>
> cutoff 12 (it was 14)
>
> pairlistdist 14 (it was 16)
>
> fullElectFrequency 2 (it was 4)
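> Put together, the block looks like this (the switchdist line is an assumed
> typical value; NAMD wants switchdist below cutoff when switching is on):
>
>     timestep            2
>     cutoff              12
>     switchdist          10   ;# assumed value; must be less than cutoff
>     pairlistdist        14
>     fullElectFrequency  2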
>
>
>
>
>
> The benchmark time in the log file shows around 0.030 days/ns, which
> should give me around 33 ns per day (1/0.030). This value for the earlier
> simulation with namd2 was around 0.080 days/ns.
>
>
>
> However, both simulations are giving me around 12 ns per day. Given the
> benchmark value for the earlier simulation, that makes sense, but I don't
> understand why I am getting the same for my current simulation.
>
>
>
> Kindly let me know if I am doing something wrong.
>
>
>
> Thank you
>
>
>
> Regards
>
>
>
> On Tue, Oct 4, 2022, 1:46 PM Daipayan Sarkar <sdaipayan_at_gmail.com> wrote:
>
> Hi Nicholus,
>
>
>
> Just to confirm: are you using any collective variables, given your
> choice of NAMD 2.14? If not, as René suggests, NAMD3 should offer
> better performance.
>
>
>
> -Daipayan
>
>
>
> *From: *"owner-namd-l_at_ks.uiuc.edu" <owner-namd-l_at_ks.uiuc.edu> on behalf
> of René Hafner TUK <hamburge_at_physik.uni-kl.de>
> *Reply-To: *"namd-l_at_ks.uiuc.edu" <namd-l_at_ks.uiuc.edu>, René Hafner TUK <
> hamburge_at_physik.uni-kl.de>
> *Date: *Tuesday, October 4, 2022 at 5:50 AM
> *To: *"namd-l_at_ks.uiuc.edu" <namd-l_at_ks.uiuc.edu>
> *Cc: *Nicholus Bhattacharjee <nicholusbhattacharjee_at_gmail.com>
> *Subject: *Re: namd-l: Query regarding performance
>
>
>
> Multiple timestep scheme:
>
>
>
> timestep           2 ;# 2 fs timestep
> nonbondedFreq      1 ;# evaluate nonbonded interactions every step
> fullElectFrequency 2 ;# evaluate full electrostatics only every 2nd step
>
> The factor of 2 may be explained by the relative power of the V100 vs. the
> P40, but I have no P40 at hand.
>
> If you want or need to run for a really long time, then consider HMR and/or
> NAMD3 if applicable; see below.
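> A minimal HMR sketch (assuming a PSF with hydrogen masses already
> repartitioned, e.g. via CHARMM-GUI; the file name is a placeholder):
>
>     structure           system_hmr.psf  ;# repartitioned PSF (placeholder)
>     timestep            4               ;# 4 fs enabled by HMR
>     rigidBonds          all             ;# constrain bonds to hydrogens
>     nonbondedFreq       1
>     fullElectFrequency  1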
>
> Regards
>
> René
>
> On 10/4/2022 11:07 AM, Nicholus Bhattacharjee wrote:
>
> Hello Rene,
>
>
>
> Thanks for the reply. It seems to be a huge difference.
>
>
>
> I am using the following for the 80k-atom system:
>
> namd 2.14 multicore CUDA
>
> CPU: 32× Xeon 6242
>
> GPU: 1× NVIDIA Tesla P40
>
> timestep 1 fs
>
>
>
> What do you mean by nonbeval, fullelecteval? Is it
>
> nonbondedFreq 1
>
> fullElectFrequency 4
>
>
>
> I am getting 12.5 ns/day
>
>
>
> By increasing the timestep to 2 fs I can get around 25 ns/day, but that is
> still less than what you mentioned.
>
>
>
> Please let me know.
>
>
>
> Thanks
>
>
>
>
> On Tue, Oct 4, 2022, 10:45 AM René Hafner TUK <hamburge_at_physik.uni-kl.de>
> wrote:
>
> Hi Nicholus,
>
> this depends on:
>
> - exact NAMD Version (2.x or 3.alphaY)
> - GPU type
> - CPU type
> - timestepping scheme
> - atom number
>
> An example on a cluster I have access to:
>
> - using NAMD 2.14 multicore CUDA (self-compiled)
> - GPU: 1× V100
> - CPU: 24 cores on XEON_SP_6126
> - timestep / nonbondedFreq / fullElectFrequency: 2-1-2
> - with ~80k atoms
>
> *Results in: ~56 ns/day (averaged over multiple runs)*
>
>
>
> Though there may be room for you to tweak (I cannot compare without knowing
> your GPU and NAMD version).
>
> Depending on the system, the simulation features you need, and the
> quantities you're interested in, you may try out HMR (
> http://www.ks.uiuc.edu/Research/namd/mailing_list/namd-l.2019-2020/0735.html)
> and/or NAMD3 (https://www.ks.uiuc.edu/Research/namd/alpha/3.0alpha/ ).
>
> Cheers
>
> René
>
>
>
>
>
> On 10/3/2022 10:13 AM, Nicholus Bhattacharjee wrote:
>
> Hello everyone,
>
>
>
> I am running a system of around 80k atoms with NAMD CUDA, using 32 CPU
> cores with 1 GPU. I am getting a performance of around 12.5 ns/day. I would
> like to know if this is fine or if I am getting bad performance.
>
>
>
> Thank you in advance.
>
>
>
> Regards
>
> --
>
> --
>
> Dipl.-Phys. René Hafner
>
> TU Kaiserslautern
>
> Germany
>
>
>