
Thread: Cross-platform dev Qs (autoconf, etc) for new, open-source project

  1. Cross-platform dev Qs (autoconf, etc) for new, open-source project

    Summary:

    I'm managing a newly-open-sourced project, and I'm looking to
    accomplish these goals:

    1) Ensure the source packages can build on all systems
    2) Ensure the application (binary) packages run on all supported
    systems
    3) Ensure libraries we deliver integrate properly with other software
    projects

    Any feedback or guidance from this community would be greatly
    appreciated. I provide further details, as well as more-specific
    questions, below.


    Details:

    I manage a significant, C++-based software project that is on the
    verge of being released as open source. Up until now, my group had
    tight control over the systems/environments (mostly Windows-MinGW,
    RHEL, Fedora, and Debian to this point) for which we built and
    tested our software. No longer; we now believe we have to support
    most of the world's systems, representing a set of very
    heterogeneous environments, and that's not trivial.

    I'm looking for feedback and guidance on what tools and paradigms are
    available to my C++ based, open-source project to best ensure
    consistent cross-platform build and execution capability. I provide
    my research findings and general understanding of the issues below.
    Thanks in advance for any help.

    (I plan on posting this note in a couple different email lists,
    forums, and newsgroups. Please forgive me if my post is not
    appropriate for your community. For what it's worth, I have found it
    difficult reaching a conclusion for this
    cross-platform-software-distribution issue, and I think it best to
    query several different communities involved with this stuff and get
    their combined take on the matter.)

    I break this issue down into these goals:

    1) Ensure the source packages can build on all systems
    2) Ensure the application (binary) packages run on all supported
    systems
    3) Ensure libraries we deliver integrate properly with other software
    projects

    I write more details, including associated questions, for each one of
    these goals in 3 different sections of my note below.

    I'm looking for the best and/or most-accepted ways to solve these
    problems. I've done a little research (coupled with my many years of
    user/developer experience with similar systems facing the same
    problems), and the following notes reflect what I've come up with thus
    far.

    For what it's worth: our project has taken extreme care to create
    portable and modular C++ code and to use only the most-common and
    highly-portable libraries. We feel that our problem lies not in
    making our code more portable, but rather making our build process and
    our binary-distribution systems more portable.

    ------------------------------------------------------------------
    ---- 1) Ensure the source packages can build on all systems ------
    ------------------------------------------------------------------

    The target stakeholder for this problem seems to be one of:

    a) a user looking to build binaries from source, or
    b) a developer looking to modify the source for some reason, or
    c) some combination thereof.

    While this stakeholder set may be more knowledgeable and experienced
    than a general "binary-only" user, I still believe an automated system
    needs to check the build environment to make sure it's suitable to
    build my software package.

    This task appears to be centered on 2 basic issues:

    * Check that a proper compiler-and-linker system exists with all the
    appropriate system headers and libraries
    * Check availability and compatibility of "external" libraries and
    their headers

    Am I missing anything here?
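
    To make those two checks concrete, here is roughly what I imagine
    them looking like with a tool like autoconf (which I discuss more
    below). This is a sketch only; I haven't written it yet, and the
    header/library names are just examples pulled from our dependency
    list:

        dnl sketch of a configure.ac -- untested
        AC_INIT([myproject], [1.0])
        AC_PROG_CXX                     dnl compiler-and-linker check
        AC_LANG([C++])
        AC_CHECK_HEADERS([boost/shared_ptr.hpp openssl/ssl.h], [],
                         [AC_MSG_ERROR([required header missing])])
        AC_CHECK_LIB([ssl], [SSL_library_init], [],
                     [AC_MSG_ERROR([OpenSSL not found])])
        AC_OUTPUT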

    For what it's worth: the external libraries (besides "standard" system
    libraries) that we currently use include but may not be limited to the
    following: various Boost-C++ libraries, OpenSSL, BZip2, libpqxx,
    Xerces-C, and optionally ACE, libcurl, and xmlrpc-c.

    Up until now, we were including each library (and its associated
    header files) from the above toolset in a Subversion-controlled
    "external" directory for each platform (eg, one for MinGW, Fedora,
    RHEL, Debian, etc). However, we're finding that some Boost libs don't
    work for all Debian 3.1 systems, etc. I suspect we are going to run
    into this problem more and more over time, and as such we need to let
    the system in question provide the library that's compatible with said
    system (in the aforementioned case: let Debian's apt download/build
    the right Boost library). Is my understanding correct?

    My project also uses an extensive and modular GNU-make Makefile system
    based on a core Makefile we authored that builds rules dynamically (by
    heavily leveraging the $(eval) function in make) based on
    per-application "input" Makefiles. Further, we do not hard-code lists
    of source files in our Makefiles, for we auto-find the source files in
    each application or library subdirectory on the fly; therefore, when
    one adds or removes source-code files to our repos (we use Subversion,
    although it may not matter that much), we require no changes to the
    Makefiles. This system has served us well, and we're not inclined to
    move away from this system unless absolutely necessary. (And if we
    want to support single-source control of all build processes, even
    with non-MinGW Windows systems, we might have to move away from this
    to something like bakefile or CMake...but more on this in a minute.)
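
    For reference, the core of that Makefile trick looks roughly like
    this (a stripped-down sketch, not our actual Makefile; the directory
    layout and variable names here are made up):

        # find per-application sources, then stamp out rules via $(eval)
        APPS := $(notdir $(wildcard apps/*))

        define APP_template
        $(1)_SRCS := $$(wildcard apps/$(1)/*.cpp)
        $(1)_OBJS := $$($(1)_SRCS:.cpp=.o)

        bin/$(1): $$($(1)_OBJS)
                $$(CXX) $$(LDFLAGS) -o $$@ $$^ $$(LIBS)
        endef

        $(foreach app,$(APPS),$(eval $(call APP_template,$(app))))

    (The recipe line must start with a tab, as usual.)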

    However, I doubt the Makefile system will be robust enough to handle
    the nuances of truly cross-platform builds; maybe that's an
    understatement. The tried-and-true tool to address this seems to be
    autoconf, and I'm currently gearing myself up to author some
    autoconf-based control files. However, autoconf does not appear to
    address non-MinGW Windows environments. For that reason, my project
    is currently supporting only MinGW in Windows environments. However,
    I'd like to be able to "single-source control" the build process for
    non-autoconf-supported systems like VisualStudio systems (and to a
    lesser extent CodeBlocks, Dev-C++, etc...although they are a safer bet
    to read GNU-make Makefiles) in the future. Bakefile is the only thing
    that I've seen that supports this approach so far. CMake might, but
    I'm not sure about Visual Studio support; further, CMake requires
    that all my developer-users change their usage patterns (from
    './configure && make && make install') and build and use
    CMake...and I'm not yet inclined to change this paradigm.

    A note about autoconf: I'm hoping it provides a *supplement* to my
    existing Makefile system, instead of replacing such system with new,
    auto-generated makefiles, etc. (For this and possibly other reasons
    I'm steering clear of using automake, as per experiences others have
    reported). I'm a control
    freak about my build-control process, and I don't want some automated
    tool specifying what my build rules and dependencies are. Rather, I
    want autoconf (or some tool that replaces it) to simply make sure that
    the build environment (on said machine) is sufficient and then set the
    make variables accordingly as inputs to my existing make/Makefile
    process. Is this the way it works...or at least can work...with
    autoconf? Another way to ask this: can autoconf essentially be made a
    "slave" to the Makefile.in file?

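    To illustrate what I mean, the arrangement I'm hoping for looks
    roughly like this (a sketch of intent, not anything I've written yet;
    the file and variable names are invented):

        dnl configure.ac: detect things, export results, nothing more
        AC_INIT([myproject], [1.0])
        AC_PROG_CXX
        AC_CHECK_LIB([pqxx], [main], [PQXX_LIBS=-lpqxx])
        AC_SUBST([PQXX_LIBS])
        AC_CONFIG_FILES([Makefile])  dnl copied verbatim from my Makefile.in
        AC_OUTPUT

        # Makefile.in: my existing hand-written Makefile, plus @VAR@ slots
        CXX       = @CXX@
        CXXFLAGS  = @CXXFLAGS@
        PQXX_LIBS = @PQXX_LIBS@
        # ...all of my existing rules follow, untouched...

    If all configure ends up doing is filling in those @...@ slots, I'm
    happy.
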
    ------------------------------------------------------------------
    ------ 2) Ensure the app packages run on all supported systems ---
    ------------------------------------------------------------------

    The target stakeholder here seems to be: A "binary-only" user that
    simply needs to install the package and have it run, no fuss, no muss.

    This seems to boil down to addressing this problem (or set of problems):
    making sure that all the shared-object/DLL library and other binary
    dependencies (like having a PostgreSQL or MySQL system installed if
    the system stores data in a database like ours does) are satisfied for
    the target operating system (and can be referenced in the appropriate
    binary/library "lookup" paths), and, if said
    libraries/binaries/software cannot be found, auto-downloading and
    installing them as necessary.

    Mechanisms like Fedora's yum, RHEL's up2date (hopefully I didn't get
    those mixed up), Debian's apt, *BSD's pkg, Solaris' pkg, etc, all seem
    to handle this with more or less sufficient capability. (Eg, an .rpm
    ".spec" file will map out the necessary rpm dependencies on a
    Fedora/RHEL system; I sketch a fragment after the questions below.)
    Assuming they do, a couple of questions:

    * Is there ever a time when I should package the binary libraries or
    binaries for external products (eg, OpenSSL, PostgreSQL) for my
    project's binary distribution?
    * Do tools exist where I can write one "control spec" for all the
    above auto-package toolsets instead of having to write a different
    spec for each of .rpm, .deb, BSD-pkg, etc?
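
    To illustrate the .spec dependency mapping mentioned above, a
    hypothetical fragment (the package names are guesses at what we
    would actually need):

        # excerpt from a hypothetical myproject.spec
        Requires:      openssl, postgresql-libs, xerces-c
        BuildRequires: gcc-c++, openssl-devel, boost-devel, xerces-c-devel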

    Separately: what does one do about Windows? What about other
    "non-Unix" operating systems like VMS?

    Do .msi-like installation packages do the work of Debian's apt, in
    that said system will automatically download missing library
    dependencies? I highly doubt it. Must my project include all the
    binary packages for my external libs/bins (eg, OpenSSL, PostgreSQL) in
    these distributions? (For what it's worth, our project currently
    supports MinGW-based Windows distributions of our stuff.)

    An additional note/question about "common" Linux binaries:

    For Linux flavors of my binary distribution, I'd like to be able to
    support a "build once, support many" paradigm. ie, if I could make
    one binary to support many different Linux distributions (Fedora,
    Debian, RHEL, SuSE, Mandriva/Mandrake, etc) and many different versions
    of each distribution (FC 1, 2, 3, 4, 5), I feel like I could save
    myself a lot of headache running different builds on all these
    platforms. At the very least, I'd like to not have to make a
    different build for every single Fedora flavor.

    Alas, I suspect this boils down to the question of which "native,
    Linux-system" libraries my apps/libraries depend on. Is there
    implicit kernel-level dependence in this scenario? Is it more than
    libc/libstdc++/etc? Where can I read more about this? This is the area
    for which I'm least experienced and knowledgeable.
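
    For what it's worth, the only concrete way I currently know to even
    see these dependencies is to inspect a built binary, eg (the paths
    here are illustrative):

        # run-time library dependencies of a built binary
        $ ldd ./myapp
        # which versioned glibc symbols it pulls in
        $ objdump -T ./myapp | grep GLIBC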

    To be clear: I still want to make separate, "native," .rpm and .deb
    packages (and any other packages I need to support for Linux distros),
    but if they could all use the same underlying binary...and we could do
    this reliably and consistently with no problems...it would seem to
    make our life a lot easier. I see mature projects (like CMake) use
    one binary tarball (again, I'm not saying I'm going to distribute in a
    tarball-only fashion) for all Linux distros, so I'm hopeful this can
    be done.

    ------------------------------------------------------------------
    -- 3) Ensure our libraries integrate w/ other software projects --
    ------------------------------------------------------------------

    We want to provide the core functionality of our project's technology
    as an "embed-able" library (either in static or
    dynamic-shared-object/dll fashion).

    The libtool section of the autoconf manual, quoted below, appears to
    sum up the issues surrounding this...I guess. I have to take
    libtool's word for it, for I have yet to experience the
    library-distribution problems in our project that libtool says will
    happen. But if this helps, then we'll do it. Do any alternatives
    exist?
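
    From my (admittedly second-hand) understanding, driving libtool from
    our Makefiles would look roughly like this (a sketch only; I haven't
    tried it, and the file/library names are invented):

        # compile, then link shared and static flavors via libtool
        libtool --mode=compile g++ -O2 -c core.cpp
        libtool --mode=link g++ -o libmycore.la core.lo \
            -rpath /usr/local/lib -version-info 1:0:0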

    But what do we do in non-MinGW Windows environments (assuming libtool
    works with MinGW-based systems at all)? Is this where Windows'
    dlltool and similar tools come in? Can anyone point me to good
    reading for this on the web or elsewhere?

    The libtool quote as promised:

    "2.2 Libtool

    Very often, one wants to build not only programs, but libraries, so
    that other programs can benefit from the fruits of your labor.
    Ideally, one would like to produce shared (dynamically linked)
    libraries, which can be used by multiple programs without duplication
    on disk or in memory and can be updated independently of the linked
    programs. Producing shared libraries portably, however, is the stuff
    of nightmares; each system has its own incompatible tools, compiler
    flags, and magic incantations. Fortunately, GNU provides a solution:
    Libtool.

    Libtool handles all the requirements of building shared libraries for
    you, and at this time seems to be the only way to do so with any
    portability. It also handles many other headaches, such as: the
    interaction of Makefile rules with the variable suffixes of shared
    libraries, linking reliably with shared libraries before they are
    installed by the superuser, and supplying a consistent versioning
    system (so that different versions of a library can be installed or
    upgraded without breaking binary compatibility)."


    -Matt

    --
    Remove the "downwithspammers-" text to email me.

  2. Re: Cross-platform dev Qs (autoconf, etc) for new, open-source project

    Matt wrote:

    > A note about autoconf: I'm hoping it provides a *supplement* to my
    > existing Makefile system, instead of replacing such system with new,
    > auto-generated makefiles, etc. (For this and possibly other reasons
    > I'm steering clear of using automake, as per experiences others have
    > reported). I'm a control
    > freak about my build-control process, and I don't want some automated
    > tool specifying what my build rules and dependencies are. Rather, I
    > want autoconf (or some tool that replaces it) to simply make sure that
    > the build environment (on said machine) is sufficient and then set the
    > make variables accordingly as inputs to my existing make/Makefile
    > process. Is this the way it works...or at least can work...with
    > autoconf? Another way to ask this: can autoconf essentially be made a
    > "slave" to the Makefile.in file?


    I share your concerns about autoconf. I have especially been bothered
    about its dependency on a Unix-compatible (Bourne) shell, but also that
    I need to write my customized tests in the M4 scripting language. So,
    although I like the feature-checking concept of autoconf, I do not like
    its implementation.

    Some months ago I started developing an alternative to autoconf. This
    alternative will only be available to C (or C++) projects. The basic
    idea is to use C, rather than a combination of M4 and Bourne shell, to
    check for features.

    Your detection code will look like this:

    int Configure(void)
    {
        /* probe for headers, types, and functions */
        CheckHeader("sys/types.h");
        CheckType("wchar_t", "stdlib.h");
        CheckFunction("strcasecmp");

        /* optional package, enabled with --with-zlib */
        if (IsPackage("zlib", "yes")) {
            CheckLibrary("z", "gzopen");
        }

        return 0;
    }

    I have not made this an official open-source project yet, but my
    working prototype (tested on Windows and several Unices) is available
    to any impatient soul out there who can figure out my email address.
    The prototype is not a complete replacement for autoconf; at the moment
    it only produces the config.h include file.
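
    The generated config.h is meant to be consumed the same way an
    autoconf-generated one is; assuming the conventional HAVE_* macro
    names, something like:

        /* consuming the generated config.h
           (conventional HAVE_* names assumed) */
        #include "config.h"

        #ifdef HAVE_STRCASECMP
            /* call strcasecmp() directly */
        #else
            /* fall back to a local implementation */
        #endif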

    --
    mail1dotstofanetdotdk

  3. Re: Cross-platform dev Qs (autoconf, etc) for new, open-source project

    In comp.os.ms-windows.programmer.misc Matt wrote:

    > [ ... ]
    > A note about autoconf: I'm hoping it provides a *supplement* to my
    > existing Makefile system, instead of replacing such system with new,
    > auto-generated makefiles, etc.


    I think you're reasonably safe with autoconf. The Makefile it
    generates is just a copy of a Makefile.in written by you, with
    some variables substituted. The Makefile.in doesn't have to
    follow any particular convention, other than the name and
    syntax of the substitution variables.

    The same cannot be said of auto*make*.

    --
    pa at panix dot com
