How are the virial-related variables passed from GPU to CPU memory?

From: 张驭洲 (zhangyuzhou15_at_mails.ucas.edu.cn)
Date: Tue Feb 04 2020 - 04:13:21 CST

Dear NAMD users,

I want to know if there is any version of NAMD that can run on AMD GPUs.

I'm trying to port NAMD 2.13 to ROCm 2.9.6. The ported code now runs on an AMD MI50. I tested it with the standard f1atpase benchmark and got correct results for positions, velocities, and energies. However, the pressure-related outputs PRESSURE, GPRESSURE, PRESSAVG, and GPRESSAVG are wrong. In fact, only those four pressure-related values are wrong; all other quantities, including VOLUME, ELECT, and VDW, are correct, which confuses me.

I found that in the Controller::receivePressure function, implemented in src/Controller.C, the received virial_nbond is wrong while intVirial_nbond is correct. However, I can't find where the virial results are copied from GPU to CPU memory. (A minimal sketch of the kind of device-to-host copy I am looking for follows the outputs below.) I printed the two variables in both the original CUDA version and my ported version:

Original CUDA version at step 0:

Info: virial_nbond = 982056 -711.075 1983.2 -900.045 977007 -299.555 947.141 -682.239 982602

intVirial_nbond = 968096 -1591.12 -2630.27 -1053.01 960311 -1195.62 -964.738 -3044.46 965481
Warning: Energy evaluation is expensive, increase outputEnergies to improve performance.
ETITLE: TS BOND ANGLE DIHED IMPRP ELECT VDW BOUNDARY MISC KINETIC TOTAL TEMP POTENTIAL TOTAL3 TEMPAVG PRESSURE GPRESSURE VOLUME PRESSAVG GPRESSAVG

ENERGY: 0 108419.6846 81372.9795 16181.2942 1688.6761 -1334014.0739 111247.0192 0.0000 0.0000 289897.8780 -725206.5424 296.9606 -1015104.4204 -722630.4999 296.9606 272.0237 115.2403 3104392.8098 272.0237 115.2403

My ported version at step 0:

Info: virial_nbond = -4.87909e+08 -4.29996e+09 2633.77 -1.49149e+09 1.07192e+06 7.44905e+09 8.50603e+09 -8.58081e+07 982601

intVirial_nbond = 968096 -1591.12 -2630.25 -1053 960311 -1195.61 -964.74 -3044.46 965481
Warning: Energy evaluation is expensive, increase outputEnergies to improve performance.
ETITLE: TS BOND ANGLE DIHED IMPRP ELECT VDW BOUNDARY MISC KINETIC TOTAL TEMP POTENTIAL TOTAL3 TEMPAVG PRESSURE GPRESSURE VOLUME PRESSAVG GPRESSAVG

ENERGY: 0 108419.6843 81373.0073 16181.2946 1688.6762 -1334013.7631 111246.8476 0.0000 0.0000 289897.8780 -725206.3751 296.9606 -1015104.2531 -722630.3327 296.9606 -3647391.5234 -3647548.3033 3104392.8098 -3647391.5234 -3647548.3033
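To make clear what I mean by "passed from GPU to CPU memory", here is a minimal, self-contained CUDA sketch of the general pattern I expect to find somewhere in the nonbonded code path: the 3x3 virial accumulated into a device buffer and then copied back to the host before it enters the reductions. This is only an illustration, not NAMD's actual code; the names accumulateVirial, d_virial, and h_virial are made up, and the per-thread contribution is a placeholder rather than real pairwise-force data. In the HIP port the copy would be the hipMemcpy equivalent.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: each thread adds its 3x3 virial contribution into a
// global buffer. In real code the contribution would come from pairwise
// forces; here it is a placeholder so the example is self-contained.
__global__ void accumulateVirial(double *d_virial /* xx xy xz yx yy yz zx zy zz */) {
  for (int c = 0; c < 9; c++) {
    double contrib = 0.001 * (threadIdx.x + 1); // placeholder contribution
    atomicAdd(&d_virial[c], contrib);           // double atomicAdd needs compute capability >= 6.0
  }
}

int main() {
  double h_virial[9];
  double *d_virial;
  cudaMalloc(&d_virial, 9 * sizeof(double));
  cudaMemset(d_virial, 0, 9 * sizeof(double));

  accumulateVirial<<<4, 128>>>(d_virial);

  // The device-to-host transfer: after this point the host-side code (and,
  // in NAMD, eventually Controller::receivePressure) would see the values.
  cudaMemcpy(h_virial, d_virial, 9 * sizeof(double), cudaMemcpyDeviceToHost);

  printf("virial = %g %g %g  %g %g %g  %g %g %g\n",
         h_virial[0], h_virial[1], h_virial[2],
         h_virial[3], h_virial[4], h_virial[5],
         h_virial[6], h_virial[7], h_virial[8]);

  cudaFree(d_virial);
  return 0;
}

If anyone can point me to where the corresponding copy (or reduction submission) for virial_nbond happens in the real code, I could print the buffer right after it and narrow down where my HIP port goes wrong.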

The input parameters are:

coordinates f1wations_xplor.pdb
structure f1wations_xplor.psf
BinCoordinates f1wations-pme-nve-200ps.coor
BinVelocities f1wations-pme-nve-200ps.vel
ExtendedSystem f1wations-pme-nve-200ps.xsc
parameters IONS27_par_all22_prot_na.inp
paratypecharmm on
outputname /usr/local/test/f1atpase/results/GPU
binaryoutput no

#fixed atom params
fixedAtoms on
fixedAtomsFile f1_pme_fixed_3.pdb
fixedAtomsCol X

# force field parameters
exclude scaled1-4
1-4scaling 1.0
cutoff 11
switching On
switchdist 9
pairlistdist 13

# integrator params
timestep 1.0
numsteps 500
nonbondedFreq 1
PME yes
PMEGridSizeX 192
PMEGridSizeY 144
PMEGridSizeZ 144
fullElectFrequency 4
stepspercycle 20

# output
outputEnergies 1
outputTiming 20

#GPU
bondedCUDA 255
usePMECUDA off
PMEoffload off

Any help is highly appreciated!

Sincerely,

Zhang
