RE:

From: Vermaas, Joshua (Joshua.Vermaas_at_nrel.gov)
Date: Fri Jun 15 2018 - 04:34:48 CDT

Just curious, but what are your hardware specs? At some point, unless you are using the compressed psf (described at http://www.ks.uiuc.edu/Research/namd/wiki/?NamdMemoryReduction), you need to be able to load the system in one shot. 44 million atoms is a big system, and it could simply be a matter of running out of memory on one of the nodes, depending on how you start NAMD.
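
For reference, the workflow on that page is a two-pass setup, roughly like the sketch below (file names are placeholders for your own; the second pass needs a memory-optimized NAMD build, and the first pass is easiest on a single large-memory node):

# Pass 1 -- any standard NAMD build: read the full structure once and
# write the compressed files (here ionized.psf.inter and .inter.bin), then exit.
# "ionized.psf" / "ionized.pdb" are placeholders for your real files.
structure        ionized.psf
coordinates      ionized.pdb
paraTypeCharmm   on
parameters       ./par_all36_prot.prm
parameters       ./par_all36_na.prm
parameters       ./toppar_water_ions_namd.str
genCompressedPsf on

# Pass 2 -- memory-optimized NAMD build: same minimization config as before,
# but pointing at the compressed structure.
structure        ionized.psf.inter
useCompressedPsf on
bincoordinates   ionized.coor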

-Josh

On 2018-06-14 12:43:22-06:00 owner-namd-l_at_ks.uiuc.edu wrote:

Hi David,
I have taken out the part that doubles the patches, but I still get the same memory problem.
Do you know what else could be causing this problem?

Thank you in advance

2018-06-07 21:18 GMT+02:00 David Hardy <dhardy_at_ks.uiuc.edu>:
What is the "patch problem" that you are trying to avoid by doubling the number of patches in each dimension? Since this doubling in each dimension actually increases the number of patches by a factor of 8, doing so could be the cause of the out-of-memory error that you are experiencing.

--
David J. Hardy, Ph.D.
Beckman Institute
University of Illinois at Urbana-Champaign
405 N. Mathews Ave., Urbana, IL 61801
dhardy_at_ks.uiuc.edu, http://www.ks.uiuc.edu/~dhardy/
On Jun 7, 2018, at 6:24 AM, Laura Tiessler <lauratiesslersala_at_gmail.com> wrote:
Hi all,
I am running MD with NAMD/2.12-CrayIntel-17.08-cuda-8.0 on a system of about 44 million atoms (proteins in explicit water).
During the first minimization I get this memory error:
Reason: Could not malloc()--are we out of memory?
I am using the structure and coordinates in binary format, and this is the input file:
#
# Input Namd Configuration File.
# Protein Minimization.
#
# molecular system
usePluginIO yes
structure   ionized.js
bincoordinates ionized.coor
# force field
paratypecharmm on
parameters ./par_all36_prot.prm
parameters ./par_all36_na.prm
parameters ./toppar_water_ions_namd.str
exclude scaled1-4
1-4scaling 1.0
# Doubling the number of patches (trying to avoid patch problem)
twoAwayX yes
twoAwayY yes
twoAwayZ yes
# approximations
switching on
switchdist 10
cutoff 12
pairlistdist 13.5
# constraints
constraints on
conskfile restraint.pdb
conskcol B
consref ionized.pdb
# output files
binaryoutput yes
noPatchesOnOne yes
outputname min_solv_namd.md
# run minimization
minimization on
minimize 500
Does anyone have suggestions on how I could solve this memory problem? Has anyone here worked with large systems (>10 million atoms)?
thanks
regards
