run NAMD command in GPU

From: Adrian (
Date: Tue Jul 16 2013 - 14:05:17 CDT


We have a workstation with 12 CPUs and 4 GPUs and we are trying to run
NAMD simulations on it, but we are not sure we are actually using the GPUs.

Here is the first part of a log file from a run on the workstation.

Charm++> scheduler running in netpoll mode.
Charm++> Running on 1 unique compute nodes (12-way SMP).
Charm++> cpu topology info is gathered in 0.007 seconds.
Info: NAMD 2.8 for Linux-x86_64-CUDA
Info: Please visit
Info: for updates, documentation, and support information.
Info: Please cite Phillips et al., J. Comp. Chem. 26:1781-1802 (2005)
Info: in all publications reporting results obtained with NAMD.
Info: Based on Charm++/Converse 60303 for net-linux-x86_64-iccstatic
Info: Built Sat May 28 11:30:15 CDT 2011 by jim on
Info: 1 NAMD 2.8 Linux-x86_64-CUDA 4 Guest
Info: Running on 4 processors, 4 nodes, 1 physical nodes.
Info: CPU topology information available.
Info: Charm++/Converse parallel runtime startup completed at 0.00935698 s
Pe 3 physical rank 3 binding to CUDA device 3 on 'Tesla C2070' Mem: 4095MB Rev: 2.0
Pe 1 physical rank 1 binding to CUDA device 1 on 'Tesla C2070' Mem: 4095MB Rev: 2.0
Did not find +devices i,j,k,... argument, using all
Pe 0 physical rank 0 binding to CUDA device 0 on 'Tesla C2070' Mem: 4095MB Rev: 2.0
Pe 2 physical rank 2 binding to CUDA device 2 on 'Tesla C2070' Mem: 4095MB Rev: 2.0
Info: 1.63564 MB of memory in use based on CmiMemoryUsage
Info: Configuration file is
Info: Changed directory to /home/Guest/Adrian/asyn/ProductionNa15

What is the correct command to run NAMD on the workstation?

We are currently using one similar to this:

charmrun ++local +p4 namd2 +idlepoll [configuration file] > [log file]
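For reference, the log above notes "Did not find +devices i,j,k,... argument, using all", so NAMD bound all four GPUs automatically. The devices can also be listed explicitly with the +devices flag; a sketch, with placeholder configuration and log file names:

```shell
# Run NAMD locally on 4 processes, one per GPU (placeholder file names).
# +idlepoll keeps CPUs polling for GPU results; +devices pins CUDA devices 0-3.
charmrun ++local +p4 namd2 +idlepoll +devices 0,1,2,3 production.conf > production.log
```

The "Pe N ... binding to CUDA device N on 'Tesla C2070'" lines in the log indicate the GPUs are in fact being used.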

Thanks for your time!
Adrian Palacios

This archive was generated by hypermail 2.1.6 : Wed Dec 31 2014 - 23:21:26 CST