From: Miro Astore (miro.astore_at_gmail.com)
Date: Tue Oct 01 2019 - 22:46:56 CDT
---------- Forwarded message ---------
From: Miro Astore <miro.astore_at_gmail.com>
Date: Wed, 2 Oct 2019 at 11:04
Subject: Re: namd-l: namd on AWS
To: Sanjay Hari <shari_at_universalsequencing.com>
I have tried a few things and experimented with numactl. On a p3.8xlarge
I'm currently running
charmrun namd +p8 +devices 3 smd.conf | tee smd.conf.log &
and my benchmark is 0.141994 days/ns
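For anyone comparing numbers, a quick way to pull the benchmark figure out of the log is to match NAMD's "Benchmark time" lines (the line format below is assumed from a typical NAMD 2.x log; adjust if your build prints it differently):

```shell
# Print the days/ns value from each "Benchmark time" line in the log.
# Assumed line shape: Info: Benchmark time: 8 CPUs 0.0122682 s/step 0.141994 days/ns ...
awk '/Benchmark time/ {
    for (i = 1; i <= NF; i++)
        if ($(i+1) == "days/ns") print $i
}' smd.conf.log
```

Note that days/ns is inverted throughput, so lower is better; 0.141994 days/ns works out to roughly 7 ns/day (1 / 0.141994).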
On Tue, 1 Oct 2019 at 20:44, Sanjay Hari <shari_at_universalsequencing.com> wrote:
> I really like the AMI. Some details would help here, such as your run
> command, the number of processors invoked, and benchmark statistics. The GPU
> tutorial gives a good primer on benchmarking CUDA-accelerated runs.
> *From:* owner-namd-l_at_ks.uiuc.edu <owner-namd-l_at_ks.uiuc.edu> on behalf of
> Miro Astore <miro.astore_at_gmail.com>
> *Sent:* Tuesday, October 1, 2019 1:02:14 AM
> *To:* Namd Mailing List <namd-l_at_ks.uiuc.edu>
> *Subject:* namd-l: namd on AWS
> Hello everyone.
> I am trying to run the NAMD AMI package and I'm not getting very good
> performance. I've tried a few p2 and p3 node types, and I can't get
> performance comparable to my university's HPC clusters when running 2
> workloads on the same node (a ~200,000-atom system). The performance doesn't
> appear to be very different from copying over the precompiled binary from
> the website. Just wondering if anyone else has encountered this issue.
> It's completely possible I'm just not running things the correct way, so I
> would be very interested to see how you all run things.
> Best, Miro
This archive was generated by hypermail 2.1.6 : Tue Dec 31 2019 - 23:20:58 CST