From: Carsten Olbrich (ocarsten_at_googlemail.com)
Date: Mon Jun 18 2007 - 17:12:37 CDT
Victor,
I compiled charm++ first with the option net-linux-amd64 - I chose
amd64 because ia64 stands for the Intel Itanium processors, which in
my opinion are not comparable with the Xeons (they use a different
instruction set). Unfortunately there is no build option specifically
for this kind of processor.
OK, then I understood your comment correctly: when I built NAMD I
compiled charm++ first too, as described in the "notes.txt".
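
For reference, the sequence looked roughly like this (a sketch from
memory; the exact arguments are the ones given in notes.txt, and the
compiler options may differ on your machine):

   # inside the charm-5.9 directory shipped with the NAMD source
   ./build charm++ net-linux-amd64

   # then, from the NAMD source directory, after pointing Make.charm
   # at the charm++ build above
   ./config tcl fftw Linux-amd64-g++
   cd Linux-amd64-g++
   make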
Carsten
On 6/18/07, Victor Ovchinnikov <ovchinnv_at_mit.edu> wrote:
> Carsten,
>
> The log file you included indicates you are using a NAMD version built
> for amd64, but you say you ran on a Xeon cluster. When you recompiled,
> did you also recompile charm++, and, if so, with what options (the
> README file in the charm-5.9 directory has, for example, the
> net-linux-ia64 option, which sounds like what you need)?
>
> By my last comment I meant that you compile charm++ first and then
> compile & link NAMD against it, as discussed in the NAMD installation
> instructions.
>
> GL,
> Victor
>
> On Mon, 2007-06-18 at 20:34 +0200, Carsten Olbrich wrote:
> > Thanks,
> >
> > I will check this...
> >
> > There is just one "problem" with the TCP version: for my system (about
> > 80000 atoms) the TCP version is about a quarter of a day per ns slower
> > than the UDP version, which makes it only as fast as the 32-bit NAMD
> > binary.
> >
> > With "built on top of a different compilation of charm++" do you mean
> > a mpi-compilation or something else?
> >
> > Carsten
> >
> > On 6/18/07, Victor Ovchinnikov <ovchinnv_at_mit.edu> wrote:
> > > Hi,
> > >
> > > I've had problems using TCL forces on an 8-node 64-bit Opteron cluster
> > > using the non-TCP binary; when I switched to the TCP binary, the
> > > calculations ran fine. Maybe another binary (i.e. built on top of a
> > > different compilation of charm++) would work for you?
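> > >
> > > (If you build NAMD yourself, the TCP variant is, as far as I recall,
> > > just charm++ built with the "tcp" option added, e.g.
> > >
> > >    ./build charm++ net-linux-amd64 tcp
> > >
> > > with NAMD then compiled against that charm++ build as usual.)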
> > >
> > > Victor
> > >
> > > On Mon, 2007-06-18 at 18:04 +0200, Carsten Olbrich wrote:
> > > > Dear users,
> > > >
> > > > I use the binaries from the NAMD download site and apply tclforces to my molecule.
> > > > When I use the 32-bit binary on our cluster (quad-core Intel Xeon
> > > > processors) the job runs fine.
> > > > But when I use the 64-bit binary, I get an error after a while
> > > > (200000-1000000 steps).
> > > > Sometimes the error is "Atoms moving too fast", other times "Stray
> > > > PME grid charges detected!" or "Bad global exclusion count"...
> > > > The same thing happens if I compile the binary myself.
> > > > Is there a workaround, or any idea what the reason might be?
> > > >
> > > > Thanks for your help...
> > > > Carsten
> > > >
> > > > A sample output:
> > > > Info: NAMD 2.6 for Linux-amd64
> > > > Info:
> > > > Info: Please visit http://www.ks.uiuc.edu/Research/namd/
> > > > Info: and send feedback or bug reports to namd_at_ks.uiuc.edu
> > > > Info:
> > > > Info: Please cite Phillips et al., J. Comp. Chem. 26:1781-1802 (2005)
> > > > Info: in all publications reporting results obtained with NAMD.
> > > > Info:
> > > > Info: Based on Charm++/Converse 50900 for net-linux-amd64-iccstatic
> > > > Info: Built Wed Aug 30 12:54:51 CDT 2006 by jim on belfast.ks.uiuc.edu
> > > > Info: 1 NAMD 2.6 Linux-amd64 24 node23 colbrich
> > > > Info: Running on 24 processors.
> > > > Info: 7612 kB of memory in use.
> > > > Info: Memory usage based on mallinfo
> > > > Info: Configuration file is Conf_pull_dna.conf
> > > > TCL: Suspending until startup complete.
> > > > Info: EXTENDED SYSTEM FILE out_eq_xy_dna.restart.xsc
> > > > Info: SIMULATION PARAMETERS:
> > > > Info: TIMESTEP 1
> > > > Info: NUMBER OF STEPS 0
> > > > Info: STEPS PER CYCLE 20
> > > > Info: PERIODIC CELL BASIS 1 68.2539 0 0
> > > > Info: PERIODIC CELL BASIS 2 0 165.76 0
> > > > Info: PERIODIC CELL BASIS 3 0 0 48.7528
> > > > Info: PERIODIC CELL CENTER 0 0 10
> > > > Info: WRAPPING ALL CLUSTERS AROUND PERIODIC BOUNDARIES ON OUTPUT.
> > > > Info: WRAPPING TO IMAGE NEAREST TO PERIODIC CELL CENTER.
> > > > Info: LOAD BALANCE STRATEGY Other
> > > > Info: LDB PERIOD 4000 steps
> > > > Info: FIRST LDB TIMESTEP 100
> > > > Info: LDB BACKGROUND SCALING 1
> > > > Info: HOM BACKGROUND SCALING 1
> > > > Info: PME BACKGROUND SCALING 1
> > > > Info: MAX SELF PARTITIONS 50
> > > > Info: MAX PAIR PARTITIONS 20
> > > > Info: SELF PARTITION ATOMS 125
> > > > Info: PAIR PARTITION ATOMS 200
> > > > Info: PAIR2 PARTITION ATOMS 400
> > > > Info: MIN ATOMS PER PATCH 100
> > > > Info: VELOCITY FILE out_eq_xy_dna.restart.vel
> > > > Info: CENTER OF MASS MOVING INITIALLY? NO
> > > > Info: DIELECTRIC 1
> > > > Info: EXCLUDE SCALED ONE-FOUR
> > > > Info: 1-4 SCALE FACTOR 1
> > > > Info: DCD FILENAME out_pull_dna.dcd
> > > > Info: DCD FREQUENCY 5000
> > > > Info: DCD FIRST STEP 5000
> > > > Info: DCD FILE WILL CONTAIN UNIT CELL DATA
> > > > Info: XST FILENAME out_pull_dna.xst
> > > > Info: XST FREQUENCY 10000
> > > > Info: NO VELOCITY DCD OUTPUT
> > > > Info: OUTPUT FILENAME out_pull_dna
> > > > Info: BINARY OUTPUT FILES WILL BE USED
> > > > Info: RESTART FILENAME out_pull_dna.restart
> > > > Info: RESTART FREQUENCY 10000
> > > > Info: BINARY RESTART FILES WILL BE USED
> > > > Info: SWITCHING ACTIVE
> > > > Info: SWITCHING ON 10
> > > > Info: SWITCHING OFF 12
> > > > Info: PAIRLIST DISTANCE 13.5
> > > > Info: PAIRLIST SHRINK RATE 0.01
> > > > Info: PAIRLIST GROW RATE 0.01
> > > > Info: PAIRLIST TRIGGER 0.3
> > > > Info: PAIRLISTS PER CYCLE 2
> > > > Info: PAIRLISTS ENABLED
> > > > Info: MARGIN 5
> > > > Info: HYDROGEN GROUP CUTOFF 2.5
> > > > Info: PATCH DIMENSION 21
> > > > Info: ENERGY OUTPUT STEPS 5000
> > > > Info: CROSSTERM ENERGY INCLUDED IN DIHEDRAL
> > > > Info: TIMING OUTPUT STEPS 10000
> > > > Info: PRESSURE OUTPUT STEPS 5000
> > > > Info: TCL GLOBAL FORCES ACTIVE
> > > > Info: TCL GLOBAL FORCES SCRIPT Force_pull_dna.tcl
> > > > Info: LANGEVIN DYNAMICS ACTIVE
> > > > Info: LANGEVIN TEMPERATURE 310
> > > > Info: LANGEVIN DAMPING COEFFICIENT IS 5 INVERSE PS
> > > > Info: LANGEVIN DYNAMICS NOT APPLIED TO HYDROGENS
> > > > Info: PARTICLE MESH EWALD (PME) ACTIVE
> > > > Info: PME TOLERANCE 1e-06
> > > > Info: PME EWALD COEFFICIENT 0.257952
> > > > Info: PME INTERPOLATION ORDER 4
> > > > Info: PME GRID DIMENSIONS 81 180 64
> > > > Info: PME MAXIMUM GRID SPACING 1.5
> > > > Info: Attempting to read FFTW data from FFTW_NAMD_2.6_Linux-amd64.txt
> > > > Info: Optimizing 6 FFT steps. 1... 2... 3... 4... 5... 6... Done.
> > > > Info: Writing FFTW data to FFTW_NAMD_2.6_Linux-amd64.txt
> > > > Info: FULL ELECTROSTATIC EVALUATION FREQUENCY 4
> > > > Info: USING VERLET I (r-RESPA) MTS SCHEME.
> > > > Info: C1 SPLITTING OF LONG RANGE ELECTROSTATICS
> > > > Info: PLACING ATOMS IN PATCHES BY HYDROGEN GROUPS
> > > > Info: NONBONDED FORCES EVALUATED EVERY 2 STEPS
> > > > Info: RANDOM NUMBER SEED 1182151013
> > > > Info: USE HYDROGEN BONDS? NO
> > > > Info: COORDINATE PDB dna_xy.pdb
> > > > Info: STRUCTURE FILE dna_xy.psf
> > > > Info: PARAMETER file: CHARMM format!
> > > > Info: PARAMETERS par_all27_prot_lipid_na.inp
> > > > Info: USING ARITHMETIC MEAN TO COMBINE L-J SIGMA PARAMETERS
> > > > Info: BINARY COORDINATES out_eq_xy_dna.restart.coor
> > > >
> > > > Warning: DUPLICATE ANGLE ENTRY FOR CPH1-NR1-CPH2
> > > > PREVIOUS VALUES k=130 theta0=107.5 k_ub=0 r_ub=0
> > > > USING VALUES k=130 theta0=107 k_ub=0 r_ub=0
> > > > Info: SUMMARY OF PARAMETERS:
> > > > Info: 299 BONDS
> > > > Info: 729 ANGLES
> > > > Info: 1145 DIHEDRAL
> > > > Info: 84 IMPROPER
> > > > Info: 0 CROSSTERM
> > > > Info: 161 VDW
> > > > Info: 0 VDW_PAIRS
> > > > Info: ****************************
> > > > Info: STRUCTURE SUMMARY:
> > > > Info: 55933 ATOMS
> > > > Info: 37689 BONDS
> > > > Info: 18164 ANGLES
> > > > Info: 0 DIHEDRALS
> > > > Info: 0 IMPROPERS
> > > > Info: 0 CROSSTERMS
> > > > Info: 0 EXCLUSIONS
> > > > Info: 167799 DEGREES OF FREEDOM
> > > > Info: 19143 HYDROGEN GROUPS
> > > > Info: TOTAL MASS = 344144 amu
> > > > Info: TOTAL CHARGE = 2.8424e-06 e
> > > > Info: *****************************
> > > > Info: Entering startup phase 0 with 21864 kB of memory in use.
> > > > Info: Entering startup phase 1 with 21864 kB of memory in use.
> > > > Info: Entering startup phase 2 with 30292 kB of memory in use.
> > > > Info: Entering startup phase 3 with 30292 kB of memory in use.
> > > > Info: PATCH GRID IS 3 (PERIODIC) BY 7 (PERIODIC) BY 2 (PERIODIC)
> > > > Info: REMOVING COM VELOCITY -0.0263814 0.0100015 0.00261442
> > > > Info: LARGEST PATCH (31) HAS 1460 ATOMS
> > > > Info: CREATING 9644 COMPUTE OBJECTS
> > > > Info: Entering startup phase 4 with 38448 kB of memory in use.
> > > > Info: PME using 21 and 23 processors for FFT and reciprocal sum.
> > > > Info: PME GRID LOCATIONS: 1 2 3 4 5 6 7 8 9 10 ...
> > > > Info: PME TRANS LOCATIONS: 1 2 3 4 5 6 7 8 9 10 ...
> > > > Info: Entering startup phase 5 with 38448 kB of memory in use.
> > > > Info: Entering startup phase 6 with 38448 kB of memory in use.
> > > > Measuring processor speeds... Done.
> > > > Info: Entering startup phase 7 with 38448 kB of memory in use.
> > > > Info: CREATING 9644 COMPUTE OBJECTS
> > > > Info: NONBONDED TABLE R-SQUARED SPACING: 0.0625
> > > > Info: NONBONDED TABLE SIZE: 769 POINTS
> > > > Info: Entering startup phase 8 with 40012 kB of memory in use.
> > > > Info: Finished startup with 40012 kB of memory in use.
> > > > TCL: Running for 10000000 steps
> > >
> > >
> >
> >
> > --
> > With best regards / Mit freundlichen Grüßen
> >
> > Carsten Olbrich
> >
> > -------------------------------------------------------------------------
> > Carsten Olbrich
> > Jacobs University Bremen *
> > Phone: (+49) 421 200 3222
> > Fax: (+49) 421 200 3229
> > Campus Ring 1 (Room III.49b)
> > 28759 Bremen
> > Germany
> > * formerly International University Bremen
> > -------------------------------------------------------------------------
> >
>
>
--
With best regards / Mit freundlichen Grüßen

Carsten Olbrich

-------------------------------------------------------------------------
Carsten Olbrich
Jacobs University Bremen *
Phone: (+49) 421 200 3222
Fax: (+49) 421 200 3229
Campus Ring 1 (Room III.49b)
28759 Bremen
Germany
* formerly International University Bremen
-------------------------------------------------------------------------