From: V.Ovchinnikov (ovchinnv_at_MIT.EDU)
Date: Tue Apr 15 2008 - 17:19:43 CDT
I noticed very little difference in performance between NAMD running in
parallel with Charm++ and NAMD running in parallel with Open MPI. MPICH1
was definitely slower for me. I have an 8-CPU Xeon system.
I don't know what kind of simulations your users will be running, but 2
CPUs isn't much at all for proteins in explicit water.
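For the local-machine case being discussed, a build against the bundled (non-MPI) net layer of Charm++ is usually the simplest route. A rough sketch of the steps, assuming a standard NAMD 2.x source tree on Linux-x86_64 (directory names and the config target may differ for your version):

```shell
# Sketch of a from-source build, assuming the NAMD source release
# with the bundled Charm++ tree; adjust names for your version.

# 1. Build Charm++ with the net (non-MPI) communication layer:
cd charm-*
./build charm++ net-linux-x86_64 --with-production
cd ..

# 2. Configure and build NAMD against that Charm++:
./config Linux-x86_64-g++ --charm-arch net-linux-x86_64
cd Linux-x86_64-g++
make

# 3. Run on all 8 cores of the local machine:
./charmrun +p8 ++local ./namd2 mysim.conf
```

The same net-layer binary can later be launched across multiple hosts via a charmrun nodelist file, so you would not necessarily need to recompile to distribute the workload.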
On Tue, 2008-04-15 at 16:26 -0400, Corenflos, Steven Charles wrote:
> I ran into some trouble getting NAMD to run in parallel with Charm++ when I initially installed from the binaries. Now I'm attempting to compile from source.
> The target machine is a dual-processor quad-core Xeon 64-bit computer running 64-bit Ubuntu Linux.
> Considering the processing power of this computer, I'd imagine most simulations are going to run on the local machine. We may attempt to spread processing over some of our other servers as well, but that would come later.
> That said, what is the best option to compile NAMD with? In particular, I'm curious whether I should be using MPI or not if it's going to run on the local machine. The documentation says to use it if the system has a good MPI implementation, but I have no idea how good the implementation for Linux on this architecture is or isn't. Should I use the smp option? Or would I be best served going ahead and using the TCP option to avoid having to recompile if we later want to distribute the workload onto multiple servers?
> Thank you for your advice.
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:49:23 CST