Re: Re: CPU vs GPU Question

From: Rafael Bernardi (rcbernardi_at_auburn.edu)
Date: Thu Dec 10 2020 - 21:39:48 CST

Hello Kelly, going from 1 GPU to 2 GPUs, I would expect almost double the speed with NAMD3.

I am not aware of a 4-way NVLINK bridge for gaming GPUs, so I expect you will only be able to run with 2 GPUs per job. However, you will be able to run 2 jobs at a time if you have 2 pairs of NVLINKed 2080s. That is what I have in my machine, but with 3090s. As far as I know, only server GPUs can be fully connected when more than 2 GPUs are present.
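
In case it helps, you can check how the cards are wired before splitting jobs between them; something like the following, where the device numbers and config file names are just placeholders for your own setup:

   # print the GPU interconnect matrix; NVLINKed pairs show up as NV# entries
   nvidia-smi topo -m

   # run two independent NAMD3 jobs, one per NVLINKed pair
   namd3 +p2 +devices 0,1 job1.namd > job1.log &
   namd3 +p2 +devices 2,3 job2.namd > job2.log &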

Regarding CPUs, they have only a marginal influence on NAMD3's performance.
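
That is because the GPU-resident mode in the NAMD3 alpha runs the integrator on the GPU itself, leaving the host cores little to do beyond I/O and atom migration. The relevant config line, at least in the alpha builds I have used, is just:

   # enable the GPU-resident integrator (NAMD 3.0 alpha keyword);
   # with this on, 1-2 CPU cores per GPU are usually enough
   CUDASOAintegrate    on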

Best

Rafael

……………………………………………………………………...
Rafael C. Bernardi
Biophysics Cluster - Department of Physics at Auburn University
NIH Center for Macromolecular Modeling & Bioinformatics
rcbernardi_at_auburn.edu
rcbernardi_at_ks.uiuc.edu
www.ks.uiuc.edu/~rcbernardi
+1 (334) 844-4393

________________________________
From: McGuire, Kelly <mcg05004_at_byui.edu>
Sent: Thursday, December 10, 2020 7:40:46 PM
To: Rafael Bernardi <rcbernardi_at_auburn.edu>; Bennion, Brian <bennion1_at_llnl.gov>; namd-l_at_ks.uiuc.edu <namd-l_at_ks.uiuc.edu>; Gumbart, JC <gumbart_at_physics.gatech.edu>
Subject: Re: namd-l: Re: CPU vs GPU Question

Actually, we have four 2080 Ti's now on the local machine. This should give at least a 50% boost with NAMD3 and NVLINK, I would think... The CPU doesn't make much of a difference with NAMD3, correct?

Dr. Kelly L. McGuire

PhD Biophysics

Department of Physiology and Developmental Biology

Brigham Young University

LSB 3050

Provo, UT 84602

________________________________
From: Rafael Bernardi <rcbernardi_at_auburn.edu>
Sent: Thursday, December 10, 2020 6:35 PM
To: McGuire, Kelly <mcg05004_at_byui.edu>; Bennion, Brian <bennion1_at_llnl.gov>; namd-l_at_ks.uiuc.edu <namd-l_at_ks.uiuc.edu>; Gumbart, JC <gumbart_at_physics.gatech.edu>
Subject: Re: namd-l: Re: CPU vs GPU Question

In terms of scaling, I would expect something similar. In terms of speed, I believe the 3090s are much faster; my guess is at least 50% faster. But I have no data; that is just a guess.

……………………………………………………………………...
Rafael C. Bernardi
Biophysics Cluster - Department of Physics at Auburn University
NIH Center for Macromolecular Modeling & Bioinformatics
rcbernardi_at_auburn.edu
rcbernardi_at_ks.uiuc.edu
www.ks.uiuc.edu/~rcbernardi
+1 (334) 844-4393

________________________________
From: McGuire, Kelly <mcg05004_at_byui.edu>
Sent: Thursday, December 10, 2020 7:27:16 PM
To: Rafael Bernardi <rcbernardi_at_auburn.edu>; Bennion, Brian <bennion1_at_llnl.gov>; namd-l_at_ks.uiuc.edu <namd-l_at_ks.uiuc.edu>; Gumbart, JC <gumbart_at_physics.gatech.edu>
Subject: Re: namd-l: Re: CPU vs GPU Question

Thanks for all of the responses, very helpful information! Rafael, we have two 2080 Ti's and are wondering if you think using NVLINK with them would produce results with NAMD3 similar to those you show here...?

Dr. Kelly L. McGuire

PhD Biophysics

Department of Physiology and Developmental Biology

Brigham Young University

LSB 3050

Provo, UT 84602

________________________________
From: Rafael Bernardi <rcbernardi_at_auburn.edu>
Sent: Wednesday, December 9, 2020 10:36 PM
To: Bennion, Brian <bennion1_at_llnl.gov>; namd-l_at_ks.uiuc.edu <namd-l_at_ks.uiuc.edu>; Gumbart, JC <gumbart_at_physics.gatech.edu>; McGuire, Kelly <mcg05004_at_byui.edu>
Subject: Re: namd-l: Re: CPU vs GPU Question

Sorry about the mistake.

……………………………………………………………………...
Rafael C. Bernardi
Biophysics Cluster - Department of Physics at Auburn University
NIH Center for Macromolecular Modeling & Bioinformatics
rcbernardi_at_auburn.edu
rcbernardi_at_ks.uiuc.edu
www.ks.uiuc.edu/~rcbernardi
+1 (334) 844-4393

From: "Bennion, Brian" <bennion1_at_llnl.gov>
Date: Wednesday, December 9, 2020 at 11:35 PM
To: Rafael Bernardi <rcbernardi_at_auburn.edu>, "namd-l_at_ks.uiuc.edu" <namd-l_at_ks.uiuc.edu>, JC Gumbart <gumbart_at_physics.gatech.edu>, "McGuire, Kelly" <mcg05004_at_byui.edu>
Subject: Re: namd-l: Re: CPU vs GPU Question

Thank you for the info. It isn't me who is curious about single-machine timings; it was Kelly.

However, it has sparked an idea to go back and see if I can get NAMD3 running on some of our newer AMD and Intel clusters. In the past, the interconnect just never worked well with NAMD.

brian

________________________________

From: Rafael Bernardi <rcbernardi_at_auburn.edu>
Sent: Wednesday, December 9, 2020 9:26 PM
To: namd-l_at_ks.uiuc.edu <namd-l_at_ks.uiuc.edu>; Gumbart, JC <gumbart_at_physics.gatech.edu>; McGuire, Kelly <mcg05004_at_byui.edu>
Cc: Bennion, Brian <bennion1_at_llnl.gov>
Subject: Re: namd-l: Re: CPU vs GPU Question

Hello Brian,

If what you want is a node to run NAMD3, don't worry so much about which CPUs to buy. To be honest, I would just buy any AMD EPYC because of the motherboards with PCIe 4.0 support.
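
Once the machine is built, it is easy to confirm that the cards actually trained at PCIe 4.0; nvidia-smi can query the current link generation and width directly:

   # each GPU should report gen 4 and width 16 on a PCIe 4.0 EPYC board
   nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current --format=csv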

I just got a machine with 2x EPYC 7302, so 32 cores in total. I was running some benchmarks today, and using only 2 CPU cores and 2 RTX 3090s with NVLINK, I can get:

STMV (1.06M atoms)

4 fs timestep, NVE, and Amber-like parameters (8 Å cutoff, etc.) = 76 ns/day

4 fs timestep, NPT, and Amber-like parameters (8 Å cutoff, etc.) = 52 ns/day

4 fs timestep, NPT, with regular (safe) parameters (12 Å cutoff, etc.) = 30 ns/day

2 fs timestep, NPT, with regular (safe) parameters (12 Å cutoff, etc.) = 16 ns/day
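
For reference, this is roughly how the two parameter sets translate into NAMD config keywords. It is only a sketch, not my exact benchmark input, and the 4 fs timestep also assumes hydrogen mass repartitioning was applied to the PSF beforehand:

   # "Amber-like" fast set (benchmarking only):
   timestep        4.0     ;# fs, requires hydrogen mass repartitioning
   rigidBonds      all
   cutoff          8.0     ;# Angstroms
   switching       off

   # "regular (safe)" set:
   #   timestep      2.0
   #   cutoff        12.0
   #   switching     on
   #   switchdist    10.0
   #   pairlistdist  14.0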

I hope these numbers give you an idea of what you can do with a new local machine. The other nice thing is that you can have a machine with 4x 3090s, installed as 2 pairs of NVLINKed 3090s. That way you can run two simulations at the same time, each at these speeds.

Also, I wouldn't use any of the Amber-like parameters with the CHARMM force field in production dynamics. That was done just as a good way to compare the speed of NAMD vs. AMBER (the software). If you want to read more about that, JC Gumbart wrote a great paper testing these parameters for a membrane system.

This archive was generated by hypermail 2.1.6 : Fri Dec 31 2021 - 23:17:10 CST