Re: Re: charm++ MPI build errors on LSF Cluster: "Can not compile a MPI program"

From: Norman Geist (norman.geist_at_uni-greifswald.de)
Date: Mon Dec 02 2013 - 05:27:37 CST

Frank,

if nobody else here can confirm that charm++ works with PMPI, you might
want to have a look at charmrun's ++mpiexec option
[http://charm.cs.illinois.edu/manuals/html/charm++/C.html]. That way you
could possibly bypass the problem: keep a plain (non-MPI) charm++ build and
still launch through your MPI environment.
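
A rough, untested sketch of what I mean (the binary name "namd2", the
config file "myrun.conf" and "mpirun" as launcher are only placeholders
for your setup): build a non-MPI charm++, link NAMD against it, and let
charmrun hand the startup over to your cluster's MPI launcher:

  # non-MPI charm++ build, no MPI compiler wrappers involved
  ./build charm++ net-linux-x86_64 --with-production

  # start NAMD through the MPI job launcher instead of rsh/ssh;
  # ++remote-shell names the launcher if it is not called "mpiexec"
  ./charmrun +p64 ++mpiexec ++remote-shell mpirun ./namd2 myrun.conf

As far as I remember the launcher must accept "-n <procs>"; otherwise a
small wrapper script around it is needed.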

Good luck.

Norman Geist.

> -----Original Message-----
> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> Behalf Of Frank Thommen
> Sent: Monday, December 2, 2013 11:08 AM
> To: Namd Mailing List
> Cc: Norman Geist
> Subject: Re: Re: namd-l: charm++ MPI build errors on LSF Cluster: "Can
> not compile a MPI program"
>
> Hi Norman,
>
> we need this to run MPI jobs on the cluster. The MPI compilers - yes,
> I'm aware that these are just wrappers - basically work. The MPI
> installation as such is working fine. We have compiled several other
> software packages using it and these applications are running fine on
> the cluster.
>
> PMPI is the only MPI implementation which is really integrated into the
> LSF queueing system. We might use other MPI implementations if there is
> no other way, but submitting cluster jobs will then become less
> straightforward and sometimes quite cumbersome for the users. So the
> idea is to have MPI software compiled for PMPI.
>
> frank
>
>
> On 02.12.13 09:05, Norman Geist wrote:
> > Hi Frank,
> >
> > just out of interest, is there a special reason why you want an MPI
> > build instead of a simpler charm++ or precompiled build? That would
> > actually save you from this trouble. If you really need your own MPI
> > build of NAMD, you should first make sure that your MPI installation
> > works at all. Please note that "mpicc", "mpic++" etc. are only wrappers
> > that call the original compiler the MPI was built with, adding the
> > libraries and flags required to compile MPI code. Therefore the
> > compilers underneath these wrappers need to work, too. As far as I
> > understand, it is also not guaranteed that every MPI integrates cleanly
> > with charm++, so I suggest trying another, more common MPI
> > distribution; OpenMPI is known to work fine. From the log I can't see
> > an obvious error regarding user input or usage, which points to some
> > incompatibility between your MPI and charm++.
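> >
> > A quick way to check the wrappers themselves (purely a sketch; the
> > paths and the mpirun call below are just examples for your Platform
> > MPI install, adjust as needed):
> >
> > cat > mpitest.cpp <<'EOF'
> > // minimal MPI check: init, print own rank, finalize
> > #include <mpi.h>
> > #include <cstdio>
> > int main(int argc, char **argv) {
> >     MPI_Init(&argc, &argv);
> >     int rank;
> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >     std::printf("hello from rank %d\n", rank);
> >     MPI_Finalize();
> >     return 0;
> > }
> > EOF
> > /opt/platform_mpi/bin/mpiCC mpitest.cpp -o mpitest
> > /opt/platform_mpi/bin/mpirun -np 2 ./mpitest
> >
> > If this already fails, the problem is in the MPI installation itself
> > and not in charm++.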
> >
> > Norman Geist.
> >
> >
> >> -----Original Message-----
> >> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On
> >> Behalf Of Frank Thommen
> >> Sent: Saturday, November 30, 2013 9:52 PM
> >> To: namd-l_at_ks.uiuc.edu
> >> Subject: namd-l: charm++ MPI build errors on LSF Cluster: "Can not
> >> compile a MPI program"
> >>
> >> Hi,
> >>
> >> I'm currently having problems building charm++ (6.4.0) with MPI for
> >> NAMD 2.9 on a CentOS 6.2 cluster running the LSF queueing system.
> >> As MPI implementation we are using PMPI (Platform MPI), which was
> >> delivered with the cluster:
> >>
> >> # which mpiCC
> >> /opt/platform_mpi/bin/mpiCC
> >> # mpiCC --version
> >> g++ (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)
> >> Copyright (C) 2010 Free Software Foundation, Inc.
> >> This is free software; see the source for copying conditions. There is NO
> >> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> >>
> >> #
> >>
> >> When running the charm++ build command (`env
> >> MPICXX=/opt/platform_mpi/bin/mpiCC ./build charm++ mpi-linux-x86_64
> >> --with-production`), the configure step reports the following checks
> >> as failing:
> >>
> >> ===============================
> >> checking "whether __int64 works"... "no"
> >> checking "whether ucontext uses uc_regs"... "no"
> >> checking "whether ucontext has pointer (v_regs) of vector type"...
> "no"
> >> checking "whether GCC IA64 assembly works"... "no"
> >> checking "whether PPC assembly works"... "no"
> >> ===============================
> >>
> >>
> >> And finally exits with an error:
> >>
> >> ===============================
> >> [...]
> >> Error: Can not compile a MPI program
> >> *** Please find detailed output in charmconfig.out ***
> >> gmake[1]: *** [conv-autoconfig.h] Error 1
> >> gmake[1]: Leaving directory
> >> `/g/software/linux/pack/namd-2.9/SRC/NAMD_2.9_Source/charm-6.4.0/mpi-linux-x86_64/tmp'
> >> gmake: *** [headers] Error 2
> >> [...]
> >> ===============================
> >>
> >> (The complete build output can be seen on http://pastebin.com/nBS62eWm)
> >>
> >>
> >> In tmp/charmconfig.out the above checks seem to relate to the
> >> following errors:
> >>
> >> ===============================
> >> test.cpp:2: error: '__int64' does not name a type
> >> test.cpp:6: error: 'struct mcontext_t' has no member named 'uc_regs'
> >> test.cpp:2: error: expected constructor, destructor, or type
> >> conversion before '*' token
> >> test.cpp:4: Error: invalid character '=' in operand 1
> >> test.cpp:4: Error: no such instruction: `eieio'
> >> ===============================
> >>
> >> (The complete content of tmp/charmconfig.out can be found on
> >> http://pastebin.com/abuXNHKW)
> >>
> >>
> >> Any ideas about what is going wrong and/or how this can be fixed
> >> are highly appreciated.
> >>
> >> Thank you very much in advance
> >> Frank
> >
>
>
> --
> Frank Thommen - Structures IT Management and Support - EMBL Heidelberg
> structures-it_at_embl-heidelberg.de - +49 6221 387 8353


This archive was generated by hypermail 2.1.6 : Wed Dec 31 2014 - 23:21:57 CST