Re:

From: Jim Phillips (jim_at_ks.uiuc.edu)
Date: Tue Aug 26 2014 - 10:38:36 CDT

I strongly recommend using a net-linux-x86_64-ibverbs-smp Charm++ build
(optionally with icc or iccstatic) for running CUDA on multiple cluster
nodes, and binaries for this platform are available for download. You can
use "charmrun ++mpiexec" to launch these binaries through the standard
mpiexec script, so they are no less convenient than the MPI version.
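For example, a launch would look roughly like this (the +p/+ppn values are
only illustrative and should match your node and core counts, and
myconfig.namd is a placeholder for your own config file):

  charmrun ++mpiexec +p16 ./namd2 +ppn 8 +setcpuaffinity myconfig.namd

charmrun will invoke the mpiexec found in your path; if your queueing
system needs a different launcher you can substitute one with
++remote-shell.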

Basically, the CUDA version benefits greatly from shared memory within a
node, and the MPI SMP Charm++ port is limited by the MPI library much more
than the non-SMP port is. At this point there are better Charm++ machine
layers than MPI for pretty much every network.
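If it helps, the build for your case would look roughly like the sketch
below (your own tcl, fftw, and CUDA paths; icc vs. iccstatic is your
choice; the remaining steps are the same as in your list):

  cd charm-6.6.0-rc4
  ./build charm++ net-linux-x86_64 ibverbs smp icc --with-production

  cd ..
  ./config Linux-x86_64-icc --charm-base charm-6.6.0-rc4 \
    --charm-arch net-linux-x86_64-ibverbs-smp \
    --with-tcl --tcl-prefix ~/tcl-threaded --with-fftw --fftw-prefix ~/fftw \
    --with-cuda --cuda-prefix /public/opt/cuda-6.5 \
    --cuda-gencode arch=compute_35,code=sm_35 \
    --cuda-gencode arch=compute_35,code=compute_35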

Jim

On Tue, 26 Aug 2014, ukulililixl wrote:

> When I compile NAMD with mpiicpc (compiler icpc) and CUDA 6.5, it builds
> with no error messages, but when I run a simulation it finishes startup
> and then fails with "Segmentation fault (core dumped)":
>
> Info: Startup phase 10 took 0.032253 s, 162.555 MB of memory in use
> Info: Startup phase 11 took 8.60691e-05 s, 162.555 MB of memory in use
> Info: Startup phase 12 took 0.00061202 s, 163.301 MB of memory in use
> Info: Finished startup at 6.82453 s, 163.301 MB of memory in use
>
> Segmentation fault (core dumped)
>
>
> But if I build without CUDA, the program runs.
> If I compile NAMD with mpicxx/gcc and CUDA, the program also runs.
>
> Here are my build steps:
> tar -xvf NAMD_CVS-2014-08-25_Source.tar.gz
>
> cd NAMD_CVS-2014-08-25_Source
>
> tar -xvf charm-6.6.0-rc4.tar
>
> cd charm-6.6.0-rc4
>
> env MPICXX=mpiicpc ./build charm++ mpi-linux-x86_64 --with-production
>
> ./config Linux-x86_64-icc --charm-base charm-6.6.0-rc4 --charm-arch
> mpi-linux-x86_64 --with-tcl --tcl-prefix ~/tcl-threaded/ --with-fftw
> --fftw-prefix ~/fftw --with-cuda --cuda-prefix /public/opt/cuda-6.5
> --cuda-gencode arch=compute_35,code=sm_35 --cuda-gencode
> arch=compute_35,code=compute_35
>
> gmake -j16
>
> Can anyone help me?
>

This archive was generated by hypermail 2.1.6 : Thu Dec 31 2015 - 23:21:10 CST