RE: namd on AWS

From: Sanjay Hari (
Date: Wed Oct 02 2019 - 11:00:55 CDT

That sounds about right. I get 0.06 days/ns on a system with a quarter as many atoms, on a p2.xlarge with 4 vCPUs. I think the main advantages are cost (an overnight run on a spot instance was $3) and not having the overhead of maintaining a cluster.

From: <> On Behalf Of Miro Astore
Sent: Tuesday, October 1, 2019 11:47 PM
To: Namd Mailing List <>
Subject: Fwd: namd-l: namd on AWS

---------- Forwarded message ---------
From: Miro Astore <<>>
Date: Wed, 2 Oct 2019 at 11:04
Subject: Re: namd-l: namd on AWS
To: Sanjay Hari <<>>

Thanks Sanjay,

I have tried a few things and experimented with numactl. At the moment, on a p3.8xlarge, I'm using:

charmrun namd +p8 +devices 3 smd.conf | tee smd.conf.log &

and my benchmark is 0.141994 days/ns.
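For comparing the figures quoted in this thread, it helps to invert days/ns into ns/day; a quick sketch of the arithmetic (the two numbers are the benchmarks mentioned above, the helper name is mine):

```python
# Convert a NAMD benchmark figure from days/ns to the more intuitive ns/day.
def ns_per_day(days_per_ns: float) -> float:
    return 1.0 / days_per_ns

# Figures quoted in this thread:
print(round(ns_per_day(0.141994), 2))  # p3.8xlarge, ~200k atoms -> 7.04 ns/day
print(round(ns_per_day(0.06), 2))      # p2.xlarge, quarter the atoms -> 16.67 ns/day
```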

Best, Miro

On Tue, 1 Oct 2019 at 20:44, Sanjay Hari <<>> wrote:
I really like the AMI. Some details would help here, such as your run command, the number of processors invoked, and benchmark statistics. The GPU tutorial gives a good primer on benchmarking CUDA-accelerated runs.
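One way to collect those benchmark statistics is to pull the "Benchmark time:" lines that NAMD prints into its log and average the days/ns figures; a minimal sketch (the function name and sample log text are mine):

```python
import re

# NAMD periodically prints log lines like:
#   Info: Benchmark time: 8 CPUs 0.0201 s/step 0.142 days/ns 0 MB memory
# Average the days/ns figures to get a single benchmark number for the run.
def benchmark_days_per_ns(log_text: str) -> float:
    figures = [float(m.group(1))
               for m in re.finditer(r"Benchmark time:.*?([\d.]+) days/ns", log_text)]
    return sum(figures) / len(figures)

sample = ("Info: Benchmark time: 8 CPUs 0.0201 s/step 0.142 days/ns 0 MB memory\n"
          "Info: Benchmark time: 8 CPUs 0.0199 s/step 0.140 days/ns 0 MB memory\n")
print(round(benchmark_days_per_ns(sample), 3))  # 0.141
```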
From:<> <<>> on behalf of Miro Astore <<>>
Sent: Tuesday, October 1, 2019 1:02:14 AM
To: Namd Mailing List <<>>
Subject: namd-l: namd on AWS

Hello everyone.

I am trying to run the NAMD AMI package and I'm not getting very good performance. I've tried a few p2 and p3 node types and can't get performance comparable to my university's HPC clusters when running two workloads on the same node (~200,000-atom system). The performance doesn't appear to be very different from copying over the precompiled binary from the website. Just wondering if anyone else has encountered this issue.

It's completely possible I'm just not running things in the correct way, so I would be very interested to see how you all run things.

Best, Miro

This archive was generated by hypermail 2.1.6 : Tue Dec 31 2019 - 23:20:58 CST