Re: NAMD+PLUMED

From: Giacomo Fiorin (giacomo.fiorin_at_gmail.com)
Date: Mon Jun 04 2018 - 06:48:05 CDT

As Josh already pointed out, you are discussing codes that ignore the GPUs,
and thus will work with NAMD whether or not it uses GPUs.

To optimize the performance of the regular MD simulation, you can consult
the notes.txt file.
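
For example, a typical launch of the multicore-CUDA build looks like the
line below (the thread count, device index, and file names are placeholders
to adapt to your own hardware and inputs):

    namd2 +p28 +setcpuaffinity +devices 0 metadynamics.inp > metadynamics.log

With the GPU builds, +p should normally match the number of CPU cores
available, since all host-side work (including Colvars or PLUMED) still
runs on the CPU.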

For a calculation as simple as a 2-atom (Na-Cl) pair, there should be
essentially no additional cost from Colvars or PLUMED, and the calculation
should be just as fast as regular MD.
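
For reference, a minimal Colvars input for such a test could look like the
sketch below; the atom numbers, boundaries, hill parameters, and bias
temperature are illustrative placeholders only, not recommended values:

    colvar {
      name na_cl_dist
      width 0.2                    # grid spacing / Gaussian width (placeholder)
      lowerBoundary 2.0            # Angstrom (placeholder)
      upperBoundary 12.0
      distance {
        group1 { atomNumbers 1 }   # Na+ atom index (placeholder)
        group2 { atomNumbers 2 }   # Cl- atom index (placeholder)
      }
    }

    metadynamics {
      name meta_dist
      colvars na_cl_dist
      hillWeight 0.1               # hill height, kcal/mol (placeholder)
      newHillFrequency 500         # steps between hills (placeholder)
      wellTempered on              # omit for standard metadynamics
      biasTemperature 3000         # well-tempered bias temperature, K (placeholder)
    }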

For more complex cases (and that depends entirely on your choice of
variables), NAMD will try to compensate for the extra cost through
load balancing, but that works only if the cost of computing the collective
variables can be handled by a single CPU core.

Most variables supported by Colvars have O(N) cost, and some are
parallelized:
http://colvars.github.io/colvars-refman-namd/colvars-refman-namd.html#sec:cvc_common
For O(N^2) variables like coordNum, you should consider using the nightly
build of NAMD, because it includes a pairlist-based optimization,
contributed by Josh.
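
As an illustration, a coordNum variable using that optimization might look
like the sketch below (tolerance and pairListFrequency are the keywords
documented for coordNum; the cutoff and atom selections are placeholders):

    colvar {
      name contacts
      coordNum {
        cutoff 4.0                          # contact distance, Angstrom (placeholder)
        group1 { atomNumbersRange 1-100 }   # placeholder selections
        group2 { atomNumbersRange 101-200 }
        tolerance 1.0e-6                    # discard negligible pair contributions
        pairListFrequency 100               # rebuild the pair list every 100 steps (placeholder)
      }
    }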

Overall, it's safe to assume that losses in performance come only from
computing the collective variables: the actual metadynamics bias (like
other restraints) usually has negligible cost.

Giacomo

On Sun, Jun 3, 2018 at 6:15 AM, jing liang <jingliang2015_at_gmail.com> wrote:

> Hi,
>
> thanks for your comments, they are insightful. I am cc'ing this message to
> your emails (you may receive it twice); let me
> know if this is the common procedure, or if I should just reply to namd-l.
>
> Being more specific, I tried a ~40 Å water box with a Na+ and a Cl- ion
> and used the distance between them to get the PMF with
> metadynamics (standard and well-tempered) and Colvars. Just now, I
> realized that my script runs faster with the GPU if I use the
> following scheme:
>
> namd2 +p28 metadynamics.inp
>
> Previously, I used +p1, because I thought it was required by the NAMD GPU
> version. With the scheme above, the GPU version runs 1.6 times
> faster than the same scheme on CPUs (50,000 steps).
>
> I just wonder, under what conditions would PLUMED run faster than
> Colvars, e.g. system size, number of CVs, etc.? Or maybe
> PLUMED is better suited for CVs that are not supported by Colvars?
>
> Thanks.
>
> 2018-06-03 5:03 GMT+02:00 Vermaas, Joshua <Joshua.Vermaas_at_nrel.gov>:
>
>> Just my two cents: PLUMED and Colvars both effectively ignore the GPU
>> and only do their calculations on the CPU. The rest of the MD will
>> use GPUs as appropriate, but if your collective variable isn't cheap to
>> compute, it may well end up bottlenecking performance with both codes.
>>
>> -Josh
>>
>>
>>
>> On 2018-06-02 15:31:08-06:00 owner-namd-l_at_ks.uiuc.edu wrote:
>>
>> Hi, the use of GPUs is supported with either code, in that the
>> GPU-enabled version of NAMD can run with either Colvars, or with PLUMED if
>> you specifically compile it with NAMD.
>> The error you saw was probably unrelated to the combination of Colvars,
>> metadynamics, and GPU.
>> As for performance, you need to give some specifics regarding the
>> type of system, the type of collective variables, and the hardware you
>> plan to run on, to get useful advice.
>> Regards,
>> Giacomo
>>
>> On Sat, Jun 2, 2018 at 2:34 PM, jing liang <jingliang2015_at_gmail.com>
>> wrote:
>>>
>>> Hi,
>>> I tried to run a metadynamics simulation in NAMD using Colvars on a GPU
>>> card, but it seems that this type of simulation is not supported yet on
>>> GPUs, right?
>>> Then, I read about an alternative to Colvars called PLUMED. Do you know
>>> if NAMD+PLUMED can be run on GPUs?
>>> Compared to Colvars, does PLUMED run faster or scale better with the
>>> number of cores, for instance?
>>> Any comment is welcome.
>>>
>>
>>
>>
>> --
>> Giacomo Fiorin
>> Associate Professor of Research, Temple University, Philadelphia, PA
>> Contractor, National Institutes of Health, Bethesda, MD
>> http://goo.gl/Q3TBQU
>> https://github.com/giacomofiorin
>>
>>
>

-- 
Giacomo Fiorin
Associate Professor of Research, Temple University, Philadelphia, PA
Contractor, National Institutes of Health, Bethesda, MD
http://goo.gl/Q3TBQU
https://github.com/giacomofiorin
