From: Francesco Pietra (chiendarret_at_gmail.com)
Date: Sat Mar 29 2014 - 02:38:58 CDT
May I ask whether there are plans to enable running accelMD on GPUs, on a
single node or on clusters? That is, plans to calculate the boost energies on
the GPU.
It seems that other codes are already able to do so, at much greater
performance. Or is the real world different from the advertisements?
I am not referring to the size of the system to be simulated; I assume that a
GPU cluster would be advantageous for large systems.
This archive was generated by hypermail 2.1.6 : Wed Dec 31 2014 - 23:22:15 CST