When NAMD runs, important information on the progress of the simulation is written to standard output, appearing on the console unless output from NAMD is redirected to a file. This output is designed to be easy for both humans and programs to interpret.
At the beginning of the file, the version of NAMD being used is presented, along with other useful information, some of which is reported back to the developers over the network to help us track NAMD's popularity.
Info: NAMD 2.5b2ss03 for Linux-i686-Clustermatic
Info:
Info: Please visit http://www.ks.uiuc.edu/Research/namd/
Info: and send feedback or bug reports to namd@ks.uiuc.edu
Info:
Info: Please cite James C. Phillips, Rosemary Braun, Wei Wang, James Gumbart,
Info: Emad Tajkhorshid, Elizabeth Villa, Christophe Chipot, Robert D. Skeel,
Info: Laxmikant Kale, and Klaus Schulten. Scalable molecular dynamics with
Info: NAMD. Journal of Computational Chemistry, 26:1781-1802, 2005.
Info: in all publications reporting results obtained with NAMD.
Info:
Info: Built Fri May 30 13:09:06 CDT 2003 by jim on umbriel
Info: Sending usage information to NAMD developers via UDP.
Info: Sent data is: 1 NAMD 2.5b2ss03 Linux-i686-Clustermatic 47 umbriel jim
Info: Running on 47 processors.
This is followed by a long list of the various configuration options and summaries of information extracted from the parameter and structure files. This is mostly self-explanatory and just restates the config file options, or the defaults for options not specified.
NAMD goes through several phases to set up data structures for the actual simulation, reporting memory usage (for the first process only!). Patches are the cells that NAMD uses to spread atoms across a parallel simulation, although the size of the patch grid is independent of the number of processors being used.
Info: Entering startup phase 0 with 6668 kB of memory in use.
Info: Entering startup phase 1 with 6668 kB of memory in use.
Info: Entering startup phase 2 with 10212 kB of memory in use.
Info: Entering startup phase 3 with 10212 kB of memory in use.
Info: PATCH GRID IS 4 (PERIODIC) BY 4 (PERIODIC) BY 4 (PERIODIC)
Info: REMOVING COM VELOCITY -0.0765505 0.00822415 -0.00180344
Info: LARGEST PATCH (31) HAS 408 ATOMS
Info: Entering startup phase 4 with 13540 kB of memory in use.
Info: Entering startup phase 5 with 13540 kB of memory in use.
Info: Entering startup phase 6 with 13540 kB of memory in use.
Info: Entering startup phase 7 with 13540 kB of memory in use.
Info: COULOMB TABLE R-SQUARED SPACING: 0.0625
Info: COULOMB TABLE SIZE: 705 POINTS
Info: NONZERO IMPRECISION IN COULOMB TABLE: 4.03897e-28 (692) 1.00974e-27 (692)
Info: NONZERO IMPRECISION IN COULOMB TABLE: 2.5411e-21 (701) 5.92923e-21 (701)
Info: Entering startup phase 8 with 13540 kB of memory in use.
Info: Finished startup with 13540 kB of memory in use.
Energies, along with temperatures and pressures, are printed on a single line with a unique prefix for easy processing with utilities such as grep and awk. An ETITLE line is printed once for every ten ENERGY lines. The fields, however, are spaced so as to be easy to scan on an 80-column display, as seen below:
ETITLE:      TS           BOND          ANGLE          DIHED          IMPRP
             ELECT            VDW       BOUNDARY           MISC        KINETIC
             TOTAL           TEMP         TOTAL2         TOTAL3        TEMPAVG
          PRESSURE      GPRESSURE         VOLUME       PRESSAVG     GPRESSAVG

ENERGY:    1000         0.0000         0.0000         0.0000         0.0000
       -97022.1848      9595.3175         0.0000         0.0000     14319.5268
       -73107.3405       300.2464    -73076.6148    -73084.1411       297.7598
         -626.5205      -636.6638    240716.1374      -616.5673      -616.6619
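Because each ENERGY line lists its values in the same order as the field names on the preceding ETITLE line, the two can be paired by splitting on whitespace. The sketch below is one minimal way to do this in Python; the parse_energy helper is an illustration, not part of NAMD.

```python
def parse_energy(etitle_line, energy_line):
    """Pair ETITLE field names with ENERGY values from NAMD stdout.

    Both lines begin with a prefix token ("ETITLE:" / "ENERGY:"),
    which is dropped before zipping names with values.
    """
    names = etitle_line.split()[1:]
    values = [float(v) for v in energy_line.split()[1:]]
    return dict(zip(names, values))

# Example lines taken from the output shown above (joined onto one line each).
etitle = ("ETITLE: TS BOND ANGLE DIHED IMPRP ELECT VDW BOUNDARY MISC KINETIC "
          "TOTAL TEMP TOTAL2 TOTAL3 TEMPAVG PRESSURE GPRESSURE VOLUME "
          "PRESSAVG GPRESSAVG")
energy = ("ENERGY: 1000 0.0000 0.0000 0.0000 0.0000 -97022.1848 9595.3175 "
          "0.0000 0.0000 14319.5268 -73107.3405 300.2464 -73076.6148 "
          "-73084.1411 297.7598 -626.5205 -636.6638 240716.1374 "
          "-616.5673 -616.6619")

record = parse_energy(etitle, energy)
# record["ELECT"] is the electrostatic energy at timestep 1000.
```

In a real script one would read the output file line by line, remember the most recent ETITLE line, and apply this pairing to every ENERGY line encountered.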
Energy values are in kcal/mol. This example is from a simulation of pure, rigid water, so the BOND, ANGLE, DIHED, and IMPRP terms are all zero. BOUNDARY energy is from spherical boundary conditions and harmonic restraints, while MISC energy is from external electric fields and various steering forces. TOTAL is the sum of the various potential energies and the KINETIC energy. TOTAL2 uses a slightly different kinetic energy that is better conserved during equilibration in a constant energy ensemble. TOTAL3 is another variation with much smaller short-time fluctuations that is also adjusted to have the same running average as TOTAL2. Defects in constant energy simulations are much easier to spot in TOTAL3 than in TOTAL or TOTAL2.
Pressure values are in bar. Temperature values are in Kelvin. PRESSURE is the pressure calculated based on individual atoms, while GPRESSURE incorporates hydrogen atoms into the heavier atoms to which they are bonded, producing smaller fluctuations. The TEMPAVG, PRESSAVG, and GPRESSAVG are the average of temperature and pressure values since the previous ENERGY output; for the first step in the simulation they will be identical to TEMP, PRESSURE, and GPRESSURE.
To estimate NAMD's performance for a long simulation, look for the ``Benchmark time'' lines printed in the first several hundred steps. These are a measure of NAMD's performance after startup and initial load balancing have been completed; this number should be very close to the long-time average.
Info: Benchmark time: 47 CPUs 0.0475851 s/step 0.275377 days/ns 13540 kB memory
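The days/ns figure on a benchmark line makes it straightforward to estimate the wall-clock time a production run will need. The following sketch parses the example line above and projects the cost of a hypothetical 10 ns run; the regular expression and helper name are assumptions for illustration.

```python
import re

def parse_benchmark(line):
    """Extract CPU count, s/step, and days/ns from a NAMD benchmark line."""
    m = re.search(r"Benchmark time: (\d+) CPUs ([\d.]+) s/step ([\d.]+) days/ns",
                  line)
    return int(m.group(1)), float(m.group(2)), float(m.group(3))

line = ("Info: Benchmark time: 47 CPUs 0.0475851 s/step "
        "0.275377 days/ns 13540 kB memory")
cpus, s_per_step, days_per_ns = parse_benchmark(line)

# Projected wall-clock hours for a hypothetical 10 ns production run.
target_ns = 10.0
est_hours = days_per_ns * target_ns * 24.0
```

Since benchmark lines are printed after load balancing has settled, averaging the last benchmark reported usually gives the most realistic projection.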
Periodic performance output, if enabled, shows the CPU and wallclock time used by the first processor (not the aggregate of all processors in the simulation). The hours remaining are estimated based on performance since the last TIMING output.
TIMING: 1000 CPU: 18.35, 0.01831/step Wall: 50.1581, 0.0499508/step, 6.92374 hours remaining, 14244 kB of memory in use.
Lines such as these are printed whenever file output occurs.
OPENING COORDINATE DCD FILE
WRITING COORDINATES TO DCD FILE AT STEP 1000
It is important to search the NAMD output for Warning and ERROR lines, which report unusual and possibly erroneous conditions. In the example below, the pairlist distance is too short or the cycle length is too long, possibly leading to reduced performance.
Warning: Pairlistdist is too small for 1 patches during timestep 17.
Warning: Pairlists partially disabled; reduced performance likely.
Warning: 20 pairlist warnings since previous energy output.
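Since Warning and ERROR lines use fixed prefixes, scanning a log for them is a one-liner. A minimal sketch (the flag_problems helper and the sample log list are illustrative, not part of NAMD):

```python
def flag_problems(lines):
    """Return all lines that NAMD flagged as warnings or errors."""
    return [ln for ln in lines if ln.startswith(("Warning:", "ERROR:"))]

# Hypothetical excerpt of captured NAMD standard output.
log = [
    "Info: Running on 47 processors.",
    "Warning: Pairlistdist is too small for 1 patches during timestep 17.",
    "ENERGY:    1000         0.0000         0.0000  ...",
    "Warning: Pairlists partially disabled; reduced performance likely.",
]
problems = flag_problems(log)
```

The equivalent at the command line would be something like grep -E '^(Warning|ERROR):' on the redirected output file.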
It is foolish to ignore warnings you do not understand!