Re: NAMD performance

From: Josh Vermaas (joshua.vermaas_at_gmail.com)
Date: Thu Mar 05 2020 - 15:03:07 CST

Don't forget to compare against multicore builds. On one node with shared
memory, those builds often win for maximum single-GPU throughput. Since you
have two GPUs on the same node, an SMP build without communication threads
may win.
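 
For example, something along these lines (core counts and file names are
placeholders, and the exact charmrun flags depend on the build you have):

  # multicore build: shared memory, no communication thread, both GPUs
  namd2 +p16 +devices 0,1 +setcpuaffinity config.namd > run_multicore.log

  # single-node SMP build: one process, worker threads set with ++ppn,
  # plus one communication thread per process
  charmrun ++local +p15 ++ppn 15 namd2 +devices 0,1 config.namd > run_smp.log
 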
Josh

On Thu, Mar 5, 2020, 10:23 AM Victor Kwan <vkwan8_at_uwo.ca> wrote:

> Hi Stefano,
>
> Since you already have a system in mind, you can compare the time it takes
> to perform a 10 ps simulation with different setups.
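>
> For instance, as a minimal sketch (assuming a 2 fs timestep; file names are
> placeholders):
>
>   run 5000          ;# in the NAMD config file: 5000 steps x 2 fs = 10 ps
>
> and then compare the performance each log reports, e.g.:
>
>   grep "Benchmark time" run_*.log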
>
> > one or both gpu, number of cores
> * NAMD 2.13 brings a large improvement in dual-GPU/single-node
> performance, and we observe almost linear scaling when going from 1 to 2
> GPUs.
> * 16 cores/GPU is sufficient; in our experience, 6-8 cores/GPU is the
> lower limit.
> * For GPU runs, hyperthreading should not affect performance.
> > pemap/commap options
> * Check the output of nvidia-smi topo -m; leaving CPU/GPU affinity at the
> default should be fine (a quick sketch follows this list).
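>
> For the affinity check, a minimal sketch (device and core numbers here are
> placeholders; core numbering differs between machines):
>
>   nvidia-smi topo -m   # shows how each GPU attaches to the CPU/NUMA domains
>   namd2 +p16 +devices 0,1 +pemap 0-15 config.namd   # explicit pinning, only if the defaults underperform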
>
>
> On Thu, Mar 5, 2020 at 10:12 AM Stefano Guglielmo <
> stefano.guglielmo_at_unito.it> wrote:
>
>> Dear NAMD users,
>> I am using a workstation with an AMD Ryzen Threadripper 2990WX 32-core
>> processor, 128 GB RAM, and two RTX 2080 Ti cards with NVLink. I would like
>> to ask for suggestions on the "best" options for running a single
>> simulation of a 200K-atom system with NAMD 2.13 (one or both GPUs, number
>> of cores, hyperthreading or not, pemap/commap options...).
>>
>> Thanks in advance for your time
>> Stefano
>>
>> --
>> Stefano GUGLIELMO PhD
>> Assistant Professor of Medicinal Chemistry
>> Department of Drug Science and Technology
>> Via P. Giuria 9
>> 10125 Turin, ITALY
>> ph. +39 (0)11 6707178
>>
>>
>>
>>
>
