NAMD Wiki: NamdOnLSF


If your NAMD binary was built on an MPI build of Charm++, ignore the following and run NAMD like any other MPI program on your cluster. The instructions below apply only to NAMD binaries that are not built on MPI and that require the charmrun program to launch parallel runs. If your charmrun is a script that calls mpirun, you may likewise ignore the following.
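For comparison, an MPI-built NAMD binary is submitted as an ordinary MPI job, with no charmrun or nodelist involved. Below is a minimal sketch; the job name, queue name, CPU count, paths, and the exact mpirun invocation are assumptions that vary by site and MPI library (some sites provide a wrapper such as mpirun.lsf instead):

#!/bin/tcsh
#BSUB -J namd_mpi_job
#BSUB -q queue_name
#BSUB -n 32
#_mpirun flags differ between MPI libraries; adjust for your site
mpirun -np 32 /path/to/namd2 my_namd_file.conf >& my_namd_file.log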

These instructions apply to Platform Computing's LSF scheduler (http://www.platform.com/Products/Platform.LSF.Family/).

Here is a sample NAMD job submission script for an LSF queuing system [on a 128-node cluster of dual-processor, dual-core EM64T PowerEdge 1850 machines running Rocks 4.0].


#!/bin/tcsh
#_All #BSUB options must appear before the first shell command;
#_LSF stops scanning for them at the first non-comment line.
#BSUB -J your_job_name
#_select the desired queue
#BSUB -q queue_name
#_no need for output/error files
#BSUB -o /dev/null -e /dev/null
#_send e-mail notification when the job finishes
#BSUB -N
#BSUB -u your_email_address
#_set the total number of CPUs requested (integer)
#BSUB -n total_number_of_CPUs_requested
#_define the basename of your NAMD files
set file = my_namd_file
#_cd into your working directory
cd your_working_directory
#_set the directory containing the NAMD executables
set bindir = directory_with_namd_executables
#_create the nodelist file in the working directory from $LSB_HOSTS;
#_$LSB_HOSTS lists each allocated host once per slot, so counting its
#_entries gives Ncpu, which charmrun needs to start namd2 (see below)
set Ncpu = 0
set nodefile = nodelist
echo group main > $nodefile
foreach node ( $LSB_HOSTS )
    echo host $node >> $nodefile
    @ Ncpu = $Ncpu + 1
end
#_finally, start the NAMD job
$bindir/charmrun ++remote-shell ssh ++nodelist $nodefile +p${Ncpu} $bindir/namd2 ${file}.conf >& ${file}.log
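
Submit the script by redirecting it into bsub, so that the embedded #BSUB directives are read:

bsub < your_script_name

For example, if LSF allocated two slots each on hosts compute-0-1 and compute-0-2 (hypothetical hostnames), $LSB_HOSTS would list each host once per slot and the generated nodelist file would be:

group main
host compute-0-1
host compute-0-1
host compute-0-2
host compute-0-2

so charmrun starts one namd2 process per allocated slot (+p4 in this case).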