Fwd: SOLVED, however: is multiple-CUDA efficiency preserved?

From: Francesco Pietra (chiendarret_at_gmail.com)
Date: Sun Mar 11 2012 - 11:36:55 CDT

By installing NAMD_CVS-2012-03-10_Linux-x86_64-multicore-CUDA I got it
working. However, the flag "++local" is now rejected, and "charmrun"
is no longer used.

Question: is multiple-CUDA (multi-GPU) efficiency on a shared-memory
machine, like this one, preserved with respect to the previous runs
under "charmrun"?
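
For reference, a launch line for the multicore-CUDA build would look
something like the sketch below (assuming the same six cores and both
GTX-580s; the "+devices 0,1" list is only an illustration, since by
default all visible GPUs are used):

$ $NAMD_HOME/bin/namd2 +p6 +idlepoll +devices 0,1 filename.conf 2>&1 | tee filename.log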

Thanks

francesco pietra

---------- Forwarded message ----------
From: Francesco Pietra <chiendarret_at_gmail.com>
Date: Sun, Mar 11, 2012 at 4:28 PM
Subject: Fwd: namd-l: CUDA error 0 attaching to node
To: NAMD <namd-l_at_ks.uiuc.edu>

Closed. Opened a new thread.
fp

---------- Forwarded message ----------
From: Francesco Pietra <chiendarret_at_gmail.com>
Date: Sat, Mar 10, 2012 at 7:32 AM
Subject: Fwd: namd-l: CUDA error 0 attaching to node
To: NAMD <namd-l_at_ks.uiuc.edu>

I forgot to add that I tried with both the AMBER and the CHARMM
(all27) force fields, in both cases also with previously proven
systems and scripts. Debian amd64 is "testing", so it might well be
that the latest upgrade introduced errors into the system. However, I
am using the precompiled NAMD, not the message-passing packages from
Debian.
francesco

---------- Forwarded message ----------
From: Francesco Pietra <chiendarret_at_gmail.com>
Date: Fri, Mar 9, 2012 at 4:12 PM
Subject: namd-l: CUDA error 0 attaching to node
To: NAMD <namd-l_at_ks.uiuc.edu>

Hello:
I was running NAMD-CUDA 2.8 (4JUN2011nb) successfully with NVIDIA
driver 280.13-1. Back to NAMD after a few months, on the same machine,
now with NVIDIA driver 295.20-1 (which matches the Debian amd64 X
server and all libraries), an attempted minimization fails:

# nvidia-smi -L       (lists the detected GPUs)
# nvidia-smi -pm 1    (enables persistence mode)

$ charmrun $NAMD_HOME/bin/namd2 ++local +p6 +idlepoll ++verbose
filename.conf 2>&1 | tee filename.log

Charmrun> charmrun started...
Charmrun> node programs all started
Charmrun> error 0 attaching to node:
Timeout waiting for node-program to connect
Charmrun> adding client 0: "127.0.0.1", IP:127.0.0.1
Charmrun> adding client 1: "127.0.0.1", IP:127.0.0.1
Charmrun> adding client 2: "127.0.0.1", IP:127.0.0.1
Charmrun> adding client 3: "127.0.0.1", IP:127.0.0.1
Charmrun> adding client 4: "127.0.0.1", IP:127.0.0.1
Charmrun> adding client 5: "127.0.0.1", IP:127.0.0.1
Charmrun> Charmrun = 127.0.0.1, port = 41824
Charmrun> start 0 node program on localhost.
Charmrun> start 1 node program on localhost.
Charmrun> start 2 node program on localhost.
Charmrun> start 3 node program on localhost.
Charmrun> start 4 node program on localhost.
Charmrun> start 5 node program on localhost.
Charmrun> Waiting for 0-th client to connect.

Hardware

Gigabyte Technology Co., Ltd. GA-890FXA-UD5/GA-890FXA-UD5, BIOS F6 11/24/2010

 AMD Phenom(tm) II X6 1075T Processor (6 cpu cores) (version 2.20.00)

16GB RAM

Two GTX-580

Kernel log:

Scanning NUMA topology in Northbridge 24
[    0.000000] No NUMA configuration found

(Should NUMA be activated? It was not when running in parallel in the
past.)
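
For what it's worth, one way to inspect the NUMA layout the kernel
sees (a sketch, assuming the numactl package is installed) is:

$ numactl --hardware

On a single-socket board like this one, a single memory node is
expected, which would be consistent with the "No NUMA configuration
found" message.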

Does the failure have to do with the new NVIDIA driver version, or
with something else?
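
For reference, the driver version actually loaded by the kernel can be
double-checked with:

$ cat /proc/driver/nvidia/version

(a standard file exposed by the NVIDIA kernel module).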

Thanks for advice

francesco pietra
