Re: GBIS in NAMD 2.10 CUDA with IMD

From: Jim Phillips (jim_at_ks.uiuc.edu)
Date: Sat Feb 14 2015 - 12:18:25 CST

I tracked down the root cause of the crashes (a message race condition)
and it is fixed in the current nightly build.

Also, GBIS will likely get better GPU (or CPU) performance with the option
"twoAwayX yes" and possibly also "twoAwayY yes" and "twoAwayZ yes".
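For reference, a minimal sketch of how those lines would appear together in
the NAMD configuration file (only these three lines are added; all other
settings are unchanged):

   twoAwayX yes
   twoAwayY yes
   twoAwayZ yes

Each option splits patches in half along the named axis, producing more,
smaller patches and therefore more parallel work units, which is why it can
help GBIS performance.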

Thanks again for reporting this.

Jim

On Thu, 12 Feb 2015, Jim Phillips wrote:

>
> After further investigation, a more reliable workaround is to add
> "+mergegrids" to the command line.
>
> Jim
>
>
> On Wed, 11 Feb 2015, Tristan Croll wrote:
>
>> Hi Jim,
>>
>> Thanks for the workaround.
>>
>> Cheers,
>>
>> Tristan
>>
>> ________________________________________
>> From: Jim Phillips <jim_at_ks.uiuc.edu>
>> Sent: Thursday, 12 February 2015 2:51 AM
>> To: namd-l_at_ks.uiuc.edu; Tristan Croll
>> Subject: Re: namd-l: GBIS in NAMD 2.10 CUDA with IMD
>>
>> Hi,
>>
>> Thanks for reporting this. I can reproduce the segfault but don't have an
>> explanation other than some interference between CUDA and the IMD socket
>> connection. The following config file options seem to make it go away:
>>
>> noPatchesOnZero yes
>> noPatchesOnOne yes
>>
>> Jim
>>
>>
>> On Wed, 11 Feb 2015, Tristan Croll wrote:
>>
>>> Hi all,
>>>
>>>
>>> Apologies for cross-posting, but I'm not quite sure where this problem
>>> lies. When running an IMD simulation using NAMD 2.10-CUDA (GTX970M card,
>>> for what it's worth), I find that if I use too many CPU cores (5-6 out of
>>> 8 total available) the simulation will inevitably crash after a few
>>> hundred steps with no error message. If I reduce the number of cores to
>>> 2-3 it runs stably.
>>>
>>>
>>> Cheers,
>>>
>>>
>>> Tristan
>>>
>>
>

This archive was generated by hypermail 2.1.6 : Thu Dec 31 2015 - 23:21:39 CST