Re: NAMD performance on a supercomputer with Intel Xeon Platinum 8160 and 100Gb Intel Omni-Path Full-Fat Tree

From: Giacomo Fiorin (giacomo.fiorin_at_gmail.com)
Date: Tue Nov 30 2021 - 08:33:18 CST

Something in addition to what Axel says (all of which is absolutely true,
even the counter-intuitive part about making the single-node performance
artificially slower to get through the bottom-most tier of technical
review).
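
To see why, with purely made-up numbers: if each time step spends 99 ms in
parallelizable compute and 1 ms in serial work and communication, the serial
fraction is about 1%; halve the compute time with a better-optimized binary
and that same 1 ms is now about 2% of the step, so the measured parallel
efficiency at a fixed node count drops even though every run finishes sooner.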

One possible issue to look at is how the cluster's network is utilized by
other users and applications. On a local cluster that I use, the InfiniBand
network is also used by the nodes to access data storage, and there are many
other users processing MRI, cryo-EM, or bioinformatics data (all
embarrassingly parallel by design). So the InfiniBand network is constantly
busy and does not necessarily offer very low latency for NAMD or other
message-passing applications.

Something that helped in that case was building Charm++ on top of the UCX
library instead of directly on IB verbs. I am not familiar with the details
of how UCX works, but in essence it makes better use of the network when the
ratio of compute cores to network links is high. If the cluster's staff
provides a UCX installation, try that. It wasn't easy to build, but it paid
off specifically for the runs that were communication-bound.
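
For reference, a build along these lines is roughly what is involved (a
sketch only: the exact Charm++ target and PMI option depend on your MPI stack
and launcher, so treat "ompipmix" and the compiler/arch names below as
placeholders to confirm with the cluster's staff):

   # recent Charm++ with the UCX machine layer (PMIx launcher assumed)
   ./build charm++ ucx-linux-x86_64 ompipmix --with-production -j16
   # then build NAMD against that Charm++ architecture
   ./config Linux-x86_64-g++ --charm-arch ucx-linux-x86_64-ompipmix
   cd Linux-x86_64-g++ && make -j16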

The main relevant addition in 2.15 is the AVX-512 "tiles" algorithm, which
would help on high-end Intel CPUs like those. It makes the computation part
faster, though, so it comes with the caveat that Axel mentioned.
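
If you do try 2.15, my understanding is that the AVX-512 code path requires
building with the Intel compiler and the matching arch file; the name below
is what I would look for first, but check it against the release notes of
the exact version you download:

   ./config Linux-AVX512-icc --charm-arch ucx-linux-x86_64-ompipmix
   cd Linux-AVX512-icc && make -j16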

Giacomo

On Tue, Nov 30, 2021 at 6:16 AM Axel Kohlmeyer <akohlmey_at_gmail.com> wrote:

> Actually, if you compile NAMD with better optimization than the
> system-provided executable, your parallel efficiency will go down. Please
> recall Amdahl's law: parallel efficiency is determined by the ratio of time
> spent in parallel execution to time spent in serial execution.
>
> A better-optimized executable will spend even less time computing and thus
> have proportionally more parallel overhead.
>
> To get better parallel efficiency, you have to avoid or reduce non-parallel
> operations (e.g. output, or features like Tcl scripting), make your
> computation more expensive (by increasing the cutoff or the system size),
> or make the executable slower by compiling a less optimized version.
>
> --
> Dr. Axel Kohlmeyer  akohlmey_at_gmail.com  http://goo.gl/1wk0
> College of Science & Technology, Temple University, Philadelphia PA, USA
> International Centre for Theoretical Physics, Trieste, Italy
>
> On Tue, Nov 30, 2021, 05:32 Vlad Cojocaru <
> vlad.cojocaru_at_mpi-muenster.mpg.de> wrote:
>
>> Dear all,
>>
>> We submitted a proposal to run extensive atomistic simulations with NAMD
>> of systems ranging from 500 K to 2 M atoms on a supercomputer with Intel
>> Xeon Platinum 8160 processors and a 100 Gb Intel Omni-Path full-fat-tree
>> interconnect.
>>
>> Apparently, our project may fail the technical evaluation because during
>> our tests we did not achieve 75 % parallel efficiency between 2 and 48
>> nodes (each node has 2 CPUs, 24 cores/CPU). We tested the NAMD 2.14
>> provided by default at the site, and we do not know how it was built.
>> Looking at the NAMD benchmarks available for the Frontera supercomputer
>> (quite a similar architecture, if I understand it correctly, although for
>> larger systems), it seems we should achieve much better performance and
>> parallel efficiency with NAMD 2.15 (maybe even 2.14) up to 48/64 nodes on
>> this architecture than we actually did in our tests.
>>
>> So my reasoning is that the default NAMD build was probably not carefully
>> optimized.
>>
>> I would appreciate it if anyone with experience building and optimizing
>> NAMD on such an architecture could recommend compilers, MPI libraries, or
>> configuration options for building NAMD with better performance and
>> parallel efficiency. With some clear ideas about how to optimize NAMD,
>> maybe I could make the case for our project not to fail the technical
>> evaluation.
>>
>> Thank you very much for any advice
>>
>> Best wishes
>> Vlad
>>
>>
>>
>> --
>> Vlad Cojocaru, PD (Habil.), Ph.D.
>> -----------------------------------------------
>> Project Group Leader
>> Department of Cell and Developmental Biology
>> Max Planck Institute for Molecular Biomedicine
>> Röntgenstrasse 20, 48149 Münster, Germany
>> -----------------------------------------------
>> Tel: +49-251-70365-324; Fax: +49-251-70365-399
>> Email: vlad.cojocaru[at]mpi-muenster.mpg.de
>>
>> http://www.mpi-muenster.mpg.de/43241/cojocaru
>>
>>
>>

This archive was generated by hypermail 2.1.6 : Fri Dec 31 2021 - 23:17:12 CST