From: Peter Freddolino (petefred_at_ks.uiuc.edu)
Date: Sun Mar 29 2009 - 09:42:38 CDT
Hi,
it would be helpful to know the dimensions of your system and the total
number of atoms. Are you using a timestep and multiple-timestepping
parameters equivalent to those in the GROMACS simulations you're
comparing to?
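For reference, the relevant NAMD config lines look something like the
following (this is just the common 2 fs rigid-bond setup, not a
recommendation for your particular system):

    timestep            2.0    ;# integration step in fs
    rigidBonds          all    ;# constrain bonds to H, needed for 2 fs
    nonbondedFreq       1      ;# short-range nonbonded every step
    fullElectFrequency  2      ;# full electrostatics every 2 steps
    stepspercycle       10     ;# pairlist rebuild cycle

If GROMACS is running with a longer effective timestep than your NAMD
job, that alone accounts for a big chunk of the difference.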
Also, you're using the wrong compilation target -- you've compiled for
itanium, and thus have probably missed out on some compilation flags
that would significantly improve your performance. You want to be using
Linux-x86_64 as a base architecture. You can probably use more
aggressive compilation options since you're running on xeons -- adding
-xT -DNAMD_SSE
to the FLOATOPTS in your .arch file should help. Are you using namd 2.7b1?
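Concretely, a rebuild along these lines is what I have in mind (a
sketch only: the mvapich include path is copied from your original
commands, and the exact NAMD arch name may differ slightly in your
tree, so check the arch/ directory):

    ./build charm++ mpi-linux-x86_64 icc ifort --no-shared -O -DCMK_OPTIMIZE=1 -I/usr/lib/mvapich-intel-x86_64/include/
    ./config tcl fftw Linux-x86_64-icc

with -xT -DNAMD_SSE appended to the FLOATOPTS line of the
corresponding .arch file before running make.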
Also... why are you running without PME? Aside from the fact that long
range electrostatics are generally important in proteins, in my
experience MD water models don't do well with shifted electrostatics...
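Turning PME on is only a couple of lines in the config file, for
example (the 1 A grid spacing is just a common choice, adjust as you
see fit):

    PME             on
    PMEGridSpacing  1.0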
Best,
Peter
DimitryASuplatov wrote:
> Hello,
>
> I am simulating a system with a 700 aa protein in a water box (15
> angstroms of padding around the protein), using an electrostatics
> switching distance of 10 A and a cutoff of 15 A. PME is off. The
> estimated speed on 64 Intel Xeon 3 GHz cores is 0.44 days/ns, about
> 4 days for 10 ns. I was using the Intel compilers, mvapich, and the
> following options:
> CHARMM: ./build charm++ mpi-linux-ia64 icc ifort --no-shared -O
> -DCMK_OPTIMIZE=1 -I/usr/lib/mvapich-intel-x86_64/include/
> NAMD: ./config tcl fftw Linux-ia64-MPI-GM-icc
> and I use mpirun -np 64 namd2 <config> to run the program
>
> I was using GROMACS previously and found it to be about 10 times faster.
>
> My question is:
> is this a typical speed for a system of this size, or do I have a
> problem with my cluster or my installation?
>
> Thanks
> SDA