RE: CUDA error in cuda_check_local_progress

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Tue Apr 22 2014 - 00:39:49 CDT

Is it running on the other GPUs without error? Please also check the ECC
error count on the GPUs. The Tesla M series cards are passively cooled and
often suffer from overheating, which can produce a lot of memory errors.
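The ECC counters can be inspected with nvidia-smi, for example like this (the device id 0 is a placeholder; adjust it to your machine, and note that resetting counters needs root):

```shell
# Query the volatile and aggregate ECC error counters for GPU 0
nvidia-smi -q -d ECC -i 0

# After inspecting, the volatile counters can be reset (requires root):
# nvidia-smi -p 0 -i 0
```

A growing aggregate count of double-bit errors on one card would point at failing memory on that GPU rather than a NAMD problem.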

 

Norman Geist.

 

From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On Behalf
Of Abhishek TYAGI
Sent: Thursday, April 17, 2014 11:24
To: Norman Geist
Cc: Namd Mailing List
Subject: RE: namd-l: CUDA error in cuda_check_local_progress

 

Hi,

 

I also tried running with +p1, +p2, +p3, and +p4 separately. With +p1 it
runs for a few minutes before the same output appears, yet when I run
nvidia-smi I can see that namd is still running. The final output in the
log file is as follows:

 


WRITING EXTENDED SYSTEM TO RESTART FILE AT STEP 10000
WRITING COORDINATES TO DCD FILE AT STEP 10000
WRITING COORDINATES TO RESTART FILE AT STEP 10000
FINISHED WRITING RESTART COORDINATES
WRITING VELOCITIES TO RESTART FILE AT STEP 10000
FINISHED WRITING RESTART VELOCITIES
REINITIALIZING VELOCITIES AT STEP 10000 TO 288 KELVIN.
TCL: Running for 5000 steps
ERROR: Constraint failure in RATTLE algorithm for atom 4170!
ERROR: Constraint failure; simulation has become unstable.
ERROR: Constraint failure in RATTLE algorithm for atom 4236!
ERROR: Constraint failure; simulation has become unstable.
ERROR: Exiting prematurely; see error messages above.
====================================================

WallClock: 50792.585938 CPUTime: 50638.324219 Memory: 1220.495789 MB
Program finished.

 

Could you suggest any other ways to resolve this?

 

regards

Abhi

  _____

From: Norman Geist <norman.geist_at_uni-greifswald.de>
Sent: Thursday, April 17, 2014 4:21 PM
To: Abhishek TYAGI
Cc: Namd Mailing List
Subject: RE: namd-l: CUDA error in cuda_check_local_progress

 

What GPUs are those? This error occurs, for example, if your cutoff,
pairlistdist, etc. are too large to fit into the GPU's memory. What is the
output of "nvidia-smi -q"? Maybe there are multiple GPUs where one is only
for display and therefore doesn't have enough memory. Try setting +devices
to select the GPU ids manually and see if it works with one GPU at a time.
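Selecting a single GPU explicitly might look like the following (the binary path, config file name, and device ids are placeholders for your own setup):

```shell
# Run the multicore-CUDA build on one core, pinned to GPU 0 only
./namd2 +p1 +idlepoll +devices 0 graphene_dna.conf > test_gpu0.log

# Then repeat with +devices 1, +devices 2, ... to test each GPU in turn
./namd2 +p1 +idlepoll +devices 1 graphene_dna.conf > test_gpu1.log
```

If one device id fails consistently while the others run cleanly, that narrows the problem down to a single card.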

 

 

Norman Geist.

 

From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On Behalf
Of Abhishek TYAGI
Sent: Thursday, April 17, 2014 09:41
To: namd-l_at_ks.uiuc.edu
Subject: namd-l: CUDA error in cuda_check_local_progress

 

Hi,

 

I am running a simulation of a graphene and DNA system. When running on my
CPU there is no error, but when running on a GPU cluster (NVIDIA, CUDA)
with the NAMD build available on the website
(NAMD_2.9_Linux-x86_64-multicore-CUDA.tar.gz), the following error appears
every time. I have tried changing the timestep, output frequencies, and
other settings, but I really don't understand what to do in this case:


This archive was generated by hypermail 2.1.6 : Wed Dec 31 2014 - 23:22:22 CST