// file      : doc/packaging.cli
// license   : MIT; see accompanying LICENSE file

"\name=build2-packaging-guide"
"\subject=toolchain"
"\title=Packaging Guide"

// NOTES
//
// - Maximum line is 70 characters.
//
// - In guideline titles (do/don't) omit a/the.
//
// @@ Close the issue in WISHLIST.

"
\h0#preface|Preface|

This document provides guidelines for converting third-party C/C++ projects to the \c{build2} build system and making them available as packages from \l{https://cppget.org cppget.org}, the \c{build2} community's central package repository.

For additional information, including documentation for individual \c{build2} toolchain components, man pages, HOWTOs, etc., refer to the project \l{https://build2.org/doc.xhtml Documentation} page.

\N|This document is a work in progress and is incomplete.|

\h1#intro|Introduction|

@@ Assume read through toolchain introduction and build system introduction. Also, ideally, have some experience using \c{build2} in your own projects.

The aim of this guide is to ease the conversion of third-party C/C++ projects to the \c{build2} build system and their publication to the \l{https://cppget.org cppget.org} package repository by codifying the best practices and techniques. By following the presented guidelines you also make it easier for others to review your work and to help with the ongoing maintenance.

The primary focus of this guide are existing C/C++ projects that use a different build system and that are maintained by a third party, which we will refer to as \i{upstream}. Unless upstream is willing to incorporate support for \c{build2} directly into their repository, such projects are normally packaged for \c{build2} in a separate \c{git} repository under the \l{https://github.com/build2-packaging github.com/build2-packaging} organization. Note, however, that many of the presented guidelines are also applicable when converting your own projects (that is, where you are the upstream) as well as projects that use languages other than C or C++.
Most C/C++ packages that are published to \l{https://cppget.org cppget.org} are either libraries or executables (projects that provide both are normally split into several packages), with libraries being in the strong majority. Libraries are also generally more difficult to build correctly. As a result, this guide uses libraries as a baseline. In most cases, a library-specific step is easily distinguished as such and can be skipped when dealing with executables. And in cases where a more nuanced change is required, a note will be provided.

At the high level, packaging a third-party project involves the following steps:

\ol|

\li|Create the \c{git} repository and import upstream source code.|

\li|Generate \c{buildfile} templates that match upstream layout.|

\li|Tweak the generated \c{buildfiles} to match upstream build.|

\li|Test using the \l{https://ci.cppget.org \c{build2} CI service}.|

\li|Publish the package to \l{https://cppget.org cppget.org}.|

|

Once this process is completed and the package is published, new releases normally require a small amount of work provided there are no drastic changes in the upstream layout or build. The sequence of steps for a new release would typically look like this:

\ol|

\li|Add new and/or remove old upstream source code, if any.|

\li|Tweak \c{buildfiles} to match changes to upstream build, if any.|

\li|Test using the \l{https://ci.cppget.org \c{build2} CI service}.|

\li|Publish the package to \l{https://cppget.org cppget.org}.|

|

While packaging a simple library or executable is relatively straightforward, the C and C++ languages and their ecosystem are famous for a large amount of variance in the platforms, compilers, and build systems used. This leads to what appears to be an endless list of special considerations that are applicable in certain, more complex cases. As a result, the presented guidelines are divided into four chapters:

The \l{#core Core Guidelines} cover steps that are applicable to all or most packaging efforts.
As mentioned earlier, these steps will assume packaging a library but they should be easy to adapt to executables. This chapter is followed by \l{#dont-do What Not to Do}, which covers the common packaging mistakes and omissions. These are unfortunately relatively common because experience with other build systems often does not translate directly to \c{build2} and some techniques (such as header-only libraries) are discouraged. The last two chapters are \l{#howto HOWTO} and \l{#faq FAQ}, which cover the above-mentioned long list of special considerations that are only applicable in certain cases and answer frequent packaging-related questions, respectively.

@@ Purpose of notes to provide rationale.

Besides the presented guidelines you may also find the existing packages in \l{https://github.com/build2-packaging github.com/build2-packaging} a good source of example material. The repositories pinned to the front page are the recommended starting point.

\h#intro-term|Terminology|

upstream
upstream repository
project
package (third-party project)
package \c{git} repository
multi-package repository

\h1#core|Core Guidelines|

\h#core-repo|Setup the package repository|

This section covers the creation of the package \c{git} repository and the importation of the upstream source code.

\h2#core-repo-exists|Check if package repository already exists|

Before deciding to package a third-party project you have presumably checked on \l{https://cppget.org cppget.org} if someone has already packaged it.
There are several other places that make sense to check as well:

\ul|

\li|\l{https://queue.cppget.org queue.cppget.org} contains packages that have been submitted but not yet published.|

\li|\l{https://queue.stage.build2.org queue.stage.build2.org} contains packages that have been submitted but can only be published after the next release of the \c{build2} toolchain (see \l{#faq-publish-stage Where to publish if package requires staged toolchain?} for background).|

\li|\l{https://github.com/build2-packaging github.com/build2-packaging} contains all the third-party package repositories. Someone could already be working on the package but hasn't finished it yet.|

\li|\l{https://github.com/build2-packaging/WISHLIST/issues github.com/build2-packaging/WISHLIST} contains, as issues, projects that people wish were packaged. These may contain offers to collaborate or announcements of ongoing work.||

In all these cases you should be able to locate the package \c{git} repository and/or connect with others in order to collaborate on the packaging work. If the existing effort looks abandoned (for example, there hasn't been any progress for a while and the existing maintainer doesn't respond) and you would like to take over the package, \l{https://build2.org/community.xhtml#help get in touch}.

\h2#core-repo-name|Use upstream repository name as package repository name|

It is almost always best to use the upstream repository name as the package repository name. If there is no upstream repository (for example, because the project doesn't use a version control system), the name used in the source archive distribution would be the natural fallback.
\N|See \l{#core-package-name Decide on the package name} for the complete picture on choosing names.|

\h2#core-repo-create|Create package repository in personal workspace|

For a third-party project, the end result that we are aiming for is a package repository under the \l{https://github.com/build2-packaging github.com/build2-packaging} organization.

\N|We require all the third-party projects that are published to \l{https://cppget.org cppget.org} to be under the \l{https://github.com/build2-packaging github.com/build2-packaging} organization in order to ensure some continuity in case the original maintainer loses interest, etc. You will still be the owner of the repository and by hosting your packaging efforts under this organization (as opposed to, say, your personal workspace) you make it easier for others to discover your work and to contribute to the package maintenance. Note that this requirement does not apply to your own projects (that is, where you are the upstream), where the \c{build2} support is normally part of the upstream repository. Finally, a note on the use of \c{git} and GitHub: if for some reason you are unable to use either, \l{https://build2.org/community.xhtml#help get in touch} to discuss alternatives.|

However, the recommended approach is to start with a repository in your personal workspace and then, when it is ready or in a reasonably stable shape, transfer it to \l{https://github.com/build2-packaging github.com/build2-packaging}. This gives you the freedom to make destructive changes to the repository (including deleting it and starting over) during the initial packaging work. It also removes the pressure to perform: you can give it a try and if things turn out more difficult than you expected, you can just drop the repository.
\N|For repositories under \l{https://github.com/build2-packaging github.com/build2-packaging} the \c{master}/\c{main} branch is protected: it cannot be deleted and its commit history cannot be overwritten with a forced push.|

\N|While you can use any name for a repository under the personal workspace, under \l{https://github.com/build2-packaging github.com/build2-packaging} it should follow the \l{#core-repo-name Use upstream repository name as package repository name} guideline. In particular, there should be no prefixes like \c{build2-} or suffixes like \c{-package}. If the repository under your personal workspace does not follow this guideline, you should rename it before transferring it to the \l{https://github.com/build2-packaging github.com/build2-packaging} organization.|

There is one potential problem with this approach: it is possible that several people start working on the same third-party project without being aware of each other's efforts. If the project you are packaging is relatively small and you don't expect it to take more than a day or two, then this is probably not worth worrying about. For bigger projects, however, it makes sense to announce your work by creating (or updating) the corresponding issue in \l{https://github.com/build2-packaging/WISHLIST github.com/build2-packaging/WISHLIST}.

To put it all together, the recommended sequence of actions for this step:

\ol|

\li|Create a new empty repository under your personal workspace from the GitHub UI.
Don't automatically add any files (\c{README}, \c{LICENSE}, etc).|

\li|Set the repository description in the GitHub UI to the \c{build2 package for <name>} line, where \c{<name>} is the project name.|

\li|Clone the repository to your machine.||

\N|Since this is your personal repository, you can do the initial work directly in \c{master}/\c{main} or in a separate branch, it's up to you.|

As a running example, let's assume we want to package a library called \c{foo} whose upstream repository is at \c{https://github.com/<upstream>/foo.git}. We have created its package repository at \c{https://github.com/<user>/foo.git} (with the \c{build2 package for foo} description) and can now clone it:

\
$ git clone https://github.com/<user>/foo.git
\

\h2#core-repo-init|Initialize package repository with \c{bdep new --type empty}|

Change to the root directory of the package repository that you have cloned in the previous step and run (continuing with our \c{foo} example):

\
$ cd foo
$ bdep new --type empty
$ tree .
./
├── .gitattributes
├── .gitignore
├── README.md
└── repositories.manifest
\

This command creates a number of files in the root of the repository:

\dl|

\li|\n\c{README.md}\n

This is the project \c{README}. We will discuss the recommended content for this file later.|

\li|\n\c{repositories.manifest}\n

This file specifies the repositories from which this project will obtain its dependencies (see \l{intro#guide-add-remove-deps Adding and Removing Dependencies}). If the project you are packaging has no dependencies, then you can safely remove this file (it's easy to add later if this changes). And for projects that do have dependencies we will discuss the appropriate changes to this file later.|

\li|\n\c{.gitattributes} and \c{.gitignore}\n

These are the \c{git} infrastructure files for the repository. You shouldn't normally need to change anything in them at this stage (see the comments inside for details).||

Next add and commit these files:

\
$ cd foo/  # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Initialize repository\"
\

\N|In these guidelines we will be using the package repository setup that is capable of having multiple packages. This is recommended even for upstream projects that only provide a single package because it gives us the flexibility of adding new packages at a later stage without having to perform a major restructuring of our repository. Note also that upstream providing multiple packages is not the only reason we may end up having multiple \c{build2} packages. Another common reason is factoring tests into a separate package due to a dependency on a testing framework (see \l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md How do I handle tests that have extra dependencies?} for background and details). While upstream adding new packages may not be very common, upstream deciding to use a testing framework is a lot more plausible.

The only notable drawback of using a multi-package setup with a single package is the extra subdirectory for the package and a few extra files (such as \c{packages.manifest} that lists the packages) in the root of the repository. If you are certain that the project that you are converting is unlikely to have multiple packages (for example, because you are the upstream) or need extra dependencies for its tests (a reasonable assumption for a C project), then you could instead go with the single-package repository where the repository root is the package root. See \l{bdep-new(1)} for details on how to initialize such a repository. In this guide, however, we will continue to assume a multi-package repository setup.|

\h2#core-repo-submodule|Add upstream repository as \c{git} submodule|

If the third-party project is available from a \c{git} repository, then the recommended approach is to use the \c{git} submodule mechanism to make the upstream source code available inside the package repository, customarily in a subdirectory called \c{upstream/}.
\N|While \c{git} submodules receive much criticism, in our case we use them exactly as intended: to select and track specific (release) commits of an external project. As a result, there is nothing tricky about their use for our purpose and all the relevant commands will be provided and explained, in case you are not familiar with this \c{git} mechanism.|

Given the upstream repository URL, to add it as a submodule, run the following command from the package repository root (continuing with our \c{foo} example):

\
$ git submodule add https://github.com/<upstream>/foo.git upstream
\

\N|You should prefer \c{https://} over \c{git://} for the upstream repository URL since the \c{git://} protocol may not be accessible from all networks. Naturally, never use a URL that requires authentication, for example, SSH.|

Besides the repository URL, you also need the commit of the upstream release which you will be packaging. It is common practice to tag releases so the upstream tags would be the first place to check. Failing that, you can always use the commit id.

Assuming the upstream release tag you are interested in is called \c{vX.Y.Z}, to update the \c{upstream} submodule to point to this release commit, run the following commands:

\
$ cd upstream
$ git checkout vX.Y.Z
$ cd ..
\

Then add and commit these changes:

\
$ cd foo/  # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Add upstream submodule\"
\

Now we have all the upstream source code for the release that we are interested in available in the \c{upstream/} subdirectory of our repository.
The plan is to then use symbolic links (symlinks) to non-invasively overlay the \c{build2} files (\c{buildfile}, \c{manifest}, etc) with the upstream source code, if necessary adjusting upstream structure to split it into multiple packages and/or to better align with the source/output layouts recommended by \c{build2} (see \l{https://build2.org/article/symlinks.xhtml Using Symlinks in \c{build2} Projects} for background and rationale). But before we can start adding symlinks to the upstream source (and other files like \c{README}, \c{LICENSE}, etc), we want to generate the \c{buildfile} templates that match the upstream source code layout. This is the subject of the next section.

\N|While on UNIX-like operating systems symlinks are in widespread use, on Windows it's a niche feature that unfortunately could be cumbersome to use (see \l{https://build2.org/article/symlinks.xhtml#windows Symlinks and Windows} for details). However, the flexibility afforded by symlinks when packaging third-party projects is unmatched by any other mechanism and we therefore use them despite the potentially sub-optimal experience on Windows.|

\h#core-package|Create package and generate \c{buildfile} templates|

This section covers the addition of the package to the repository we have prepared in the previous steps and the generation of the \c{buildfile} templates that match the upstream source code layout.

\h2#core-package-name|Decide on the package name|

While choosing the package repository name was pretty straightforward, things get less clear cut when it comes to the package name.

\N|If you need a refresher on the distinction between projects and packages, see \l{#intro-term Terminology}.|

Picking a name for a package that provides an executable is still relatively straightforward: you should use the upstream name (which is usually the same as the upstream project name) unless there is a good reason to deviate.
One recommended place to check before deciding on a name is the \l{https://packages.debian.org Debian package repository}. If their package name differs from upstream, then there is likely a good reason for that and it is worth trying to understand what it is.

\N|Tip: when trying to find the corresponding Debian package, search for the executable file name in the package contents if you cannot find the package by its upstream name. Also consider searching in the \c{unstable} distribution in addition to \c{testing} for newer packages.|

Picking a name for a package that provides a library is where things can get more complicated. While all the recommendations that have been listed for executables apply equally to libraries, there are additional considerations.

In \c{build2} we recommend (but do not require) that new library projects use a name that starts with \c{lib} in order to easily distinguish them from executables and avoid any potential clashes in the future (see \l{intro#proj-struct Canonical Project Structure} for details).

To illustrate the problem, consider the \c{zstd} project which provides a library and an executable. In the upstream repository both are part of the same codebase that doesn't try to separate them into packages so that, for example, the library could be used without downloading and building the executable. In \c{build2}, however, we do need to split them into two separate packages and both packages cannot be called \c{zstd}. So we call them \c{zstd} and \c{libzstd}.

\N|If you are familiar with the Debian package naming policy, you will undoubtedly recognize this approach. In Debian all the library packages (with very few exceptions) start with the \c{lib} prefix.
So when searching for an upstream name in the \l{https://packages.debian.org Debian package repository}, make sure to prefix it with \c{lib} (unless it already starts with this prefix, of course).|

This brings the question of what to do about third-party libraries: should we add the \c{lib} prefix to the package name if it's not already there? Unfortunately, there is no clear cut answer and whichever decision you make, there will be drawbacks. Specifically, if you add the \c{lib} prefix, the main drawback is that the package name now deviates from the upstream name and if the project maintainer ever decides to add \c{build2} support to the upstream repository, there could be substantial friction. On the other hand, if you don't add the \c{lib} prefix, then you will always run the risk of a future clash with an executable name. And, as was illustrated with the \c{zstd} example, a late addition of an executable won't necessarily cause any issues for upstream.

As a result, we don't have a hard requirement for the \c{lib} prefix unless there is already an executable that would cause a clash (this applies even if it's not being packaged yet or is provided by an unrelated project). If you don't have a strong preference, we recommend that you add the \c{lib} prefix (unless it is already there). In particular, this will free you from having to check for any potential clashes. See \l{https://github.com/build2/HOWTO/blob/master/entries/name-packages-in-project.md How should I name packages when packaging third-party projects?} for additional background and details.

To build some intuition for choosing package names, let's consider several real examples.
We start with executables:

\
upstream    |upstream       |Debian      |build2 package |build2
project name|executable name|package name|repository name|package name
------------+---------------+------------+---------------+------------
byacc        byacc           byacc        byacc           byacc
sqlite       sqlite3         sqlite3      sqlite          sqlite3
vim          xxd             xxd          xxd             xxd
OpenBSD      m4              -            openbsd-m4      openbsd-m4
qtbase 5     moc             qtbase5-\    Qt5             Qt5Moc
                             dev-tools
qtbase 6     moc             qt6-base-\   Qt6             Qt6Moc
                             dev-tools
\

The examples are arranged from the most straightforward naming to the least. The last two examples show that sometimes, after carefully considering upstream naming, you nevertheless have no choice but to ignore it and forge your own path.

Next let's look at library examples. Notice that some use the same \c{build2} package repository name as the executables above. That means they are part of the same multi-package repository.

\
upstream    |upstream       |Debian      |build2 package |build2
project name|library name   |package name|repository name|package name
------------+---------------+------------+---------------+------------
libevent     libevent        libevent     libevent        libevent
brotli       brotli          libbrotli    brotli          libbrotli
zlib         zlib            zlib         zlib            libz
sqlite       libsqlite3      libsqlite3   sqlite          libsqlite3
libsig\      libsigc++       libsigc++    libsig\         libsigc++
cplusplus                                 cplusplus
qtbase 5     QtCore          qtbase5-dev  Qt5             libQt5Core
qtbase 6     QtCore          qt6-base-dev Qt6             libQt6Core
\

If an upstream project is just a single library, then the project name is normally the same as the library name (but there are exceptions, like \c{libsigcplusplus} in the above table). However, when looking at an upstream repository that contains multiple components (libraries and/or executables, like \c{qtbase} in the above example), it may not be immediately obvious what the upstream's library names are. In such cases, the corresponding Debian packages can really help clarify the situation. Failing that, look into the existing build system.
In particular, if it generates the \c{pkg-config} file, then the name of this file is usually the upstream library name.

\N|Looking at the names of the library binaries is less helpful because on UNIX-like systems they must start with the \c{lib} prefix. And on Windows the names of library binaries often embed extra information (static/import, debug/release, etc) and may not correspond directly to the library name.|

And, speaking of multiple components, if you realize the upstream project provides multiple libraries and/or executables, then you need to decide whether to split them into separate \c{build2} packages and, if so, how. Here, again, the corresponding Debian packages can be a good starting point. Note, however, that in this case we often deviate from their split, especially when it comes to libraries. For example, \c{libevent} shown in the above table provides several libraries (\c{libevent-core}, \c{libevent-extra}, etc) and in Debian it is actually split into several binary packages along these lines. In \c{build2}, however, there is a single package that provides all these libraries with everything except \c{libevent-core} being optional. An example which shows the decision made in a different direction would be the Boost libraries: in Debian all the header-only Boost libraries are bundled into a single package while in \c{build2} they are all separate packages.

The overall criteria here can be stated as follows: if a small family of libraries provides complementary functionality (like \c{libevent}), then we put them all into a single package, usually making the additional functionality optional. However, if the libraries are independent (like Boost) or provide alternative rather than complementary functionality (for example, like the different backends in \c{imgui}), then we make them separate packages. Note that we never bundle an executable and a (public) library in a single package.
Note also that while it's a good idea to decide on the package split and all the package names upfront to avoid surprises later, you don't have to actually provide all the packages right away. For example, if upstream provides a library and an executable (like \c{zstd}), you can start with the library and the executable package can be added later (potentially by someone else).

Admittedly, the recommendations in this section are all a bit fuzzy and one can choose different names or different package splits that could all seem reasonable. If you are unsure how to split the upstream project or what names to use, \l{https://build2.org/community.xhtml#help get in touch} to discuss the alternatives. It can be quite painful to change these things after you have completed the remaining packaging steps.

Continuing with our \c{foo} example, we will follow the recommendation and call the library package \c{libfoo}.

\h2#core-package-struct|Decide on the package source code layout|

Another aspect we need to decide on is the source code layout inside the package. Here we want to stay as close to the upstream layout as possible unless there are valid reasons to deviate. This has the best chance of giving us a build without any compile errors since the header inclusion in the project can be sensitive to this layout. This also makes it easier for upstream to adopt the \c{build2} build.

Sometimes, however, there are good reasons for deviating from upstream, especially in cases where upstream is clearly following bad practices, for example, including generically-named public headers without the library name as a subdirectory prefix. If you do decide to change the layout, it's usually less disruptive (to the build) to rearrange things at the outer levels than at the inner. For example, it should normally be possible to move/rename the top-level \c{tests/} directory or to place the library source files into a subdirectory.
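To make such an outer-level rearrangement concrete, here is a minimal shell sketch (all directory and file names are hypothetical) that exposes an upstream \c{test/} directory as \c{tests/} in the package using a relative symlink, without touching anything under \c{upstream/}:

```shell
# Hypothetical layout: the upstream submodule keeps its tests in
# test/ but we want the directory to be called tests/ in the
# package.
mkdir -p repo/upstream/test repo/libfoo
echo 'int main () {}' > repo/upstream/test/basics.cpp

# A renamed relative symlink rearranges the outer level without
# modifying the upstream source code.
ln -s ../upstream/test repo/libfoo/tests

ls repo/libfoo/tests/
```

Because only the outer level is renamed, any header inclusion paths inside the source files are left untouched, which is why this kind of change is normally safe for the build.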
Our overall plan for the package is to create the initial layout and \c{buildfile} templates automatically using \l{bdep-new(1)} in the \c{--package} mode, then tweak the \c{buildfiles} if necessary, and finally \"fill\" the package with upstream source code using symlinks. The main rationale for using \l{bdep-new(1)} instead of doing everything by hand is that there are many nuances in getting the build right and the auto-generated \c{buildfiles} have had years of refinement and fine-tuning. The familiar structure also makes it easier for others to understand your build, for example, while reviewing your package submission.

The \l{bdep-new(1)} command supports a wide variety of \l{bdep-new.xhtml#src-layout source layouts}. While it may take a bit of time to understand the customization points necessary to achieve the desired layout for your first package, this will pay off in spades when you work on converting subsequent packages.

And so the focus of the following several steps is to iteratively discover the \l{bdep-new(1)} command line that best approximates the upstream layout. The recommended procedure is as follows:

\ol|

\li|\nStudy the upstream source layout and existing build system.|

\li|\nCraft and execute the \l{bdep-new(1)} command line necessary to achieve the upstream layout.|

\li|\nStudy the auto-generated \c{buildfiles} for things that don't fit and need to change. But don't rush to start manually editing the result. First get an overview of the required changes and then check if it's possible to achieve these changes automatically using one of the \l{bdep-new(1)} sub-options. If that's the case, delete the package subdirectory and restart from step #2.||

This and the following two sections discuss each of these steps in more detail and also look at some examples.

The first step above is to study the upstream project in order to understand where the various parts are (headers, sources, etc.) and how they are built.
Things that can help here include:

\ul|

\li|Read through the existing build system definitions.|

\li|Try to build the project using the existing build system.|

\li|Try to install the project using the existing build system.|

\li|Look into the Debian package contents to see if there are any differences with regards to the installation locations.||

For libraries, the first key piece of information we need to find is how the public headers are included and where they are installed. The two common \i{good} practices are to either include the public headers with the library name as a subdirectory, for example, \c{#include\ <foo/util.h>}, or to include the library name into each public header name, for example, \c{#include\ <foo_util.h>} or \c{#include\ <foo.h>} (in the last example the header name is the library name itself, which is also fairly common).

Unfortunately, there is also a fairly common \i{bad} practice: having generically named headers (such as \c{util.h}) included without the library name as a subdirectory.

\N|The reason this is a bad practice is that libraries that have such headers cannot coexist, neither in the same build nor when installed. See \l{intro#proj-struct Canonical Project Structure} for background and details. See \l{#howto-bad-inclusion-practice How do I deal with bad header inclusion practice} if you encounter such a case.|

Where should we look to get this information? While the library source files sound like a natural place, oftentimes they include their own headers with the \c{\"\"} style inclusion, either because the headers are in the same directory or because the library build arranges for them to be found this way with additional header search paths. As a result, a better place to look could be the library's examples and/or tests. Some libraries also describe which headers they provide and how to include them in their documentation.

The way public headers are included normally determines where they are installed.
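The following shell sketch makes this mapping concrete (it uses a local staging directory instead of \c{/usr} and hypothetical library and header names) and also shows why the generically-named-header practice mentioned above breaks down on installation:

```shell
# Headers included with a library subdirectory (e.g. <foo/util.h>)
# install into that subdirectory and can coexist with another
# library's headers of the same name.
mkdir -p stage/usr/include/foo stage/usr/include/bar
echo '// foo' > stage/usr/include/foo/util.h
echo '// bar' > stage/usr/include/bar/util.h

# Generically named headers included without a subdirectory prefix
# install directly into the include directory and clobber each
# other.
echo '// foo' > stage/usr/include/util.h
echo '// bar' > stage/usr/include/util.h  # Overwrites foo's header.

ls stage/usr/include/
```

After the last step only one flat \c{util.h} survives, while the subdirectory-prefixed copies are both intact.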
If they are included with a subdirectory, then they are normally installed into the same subdirectory in, say, \c{/usr/include/}. Continuing with the above example, a header that is included as \c{<foo/util.h>} would normally be installed as \c{/usr/include/foo/util.h}. On the other hand, if the library name is part of the header name, then the headers are usually (but not always) installed directly into, say, \c{/usr/include/}, for example, as \c{/usr/include/foo_util.h}.

\N|While these are the commonly used installation schemes, there are deviations. In particular, in both cases upstream may choose to add an additional subdirectory when installing (so in the above examples we would instead end up with, say, \c{/usr/include/foo_v1/foo/util.h} and \c{/usr/include/foo_v1/sub/foo_util.h}). See \l{#howto-extra-header-install-subdir How do I handle extra header installation subdirectory} if you encounter such a case.|

The inclusion scheme would normally be recreated in the upstream source code layout. In particular, if upstream includes public headers with a subdirectory prefix, then this subdirectory would normally also be present in the upstream layout so that such a header can be included from the upstream codebase directly.

As an example, let's say we determined that the public headers of \c{libfoo} are included with the \c{foo/} subdirectory, such as \c{<foo/util.hpp>}. One of the typical upstream layouts for such a library would look like this:

\
$ tree upstream/
upstream/
├── include/
│   └── foo/
│       └── util.hpp
└── src/
    ├── priv.hpp
    └── util.cpp
\

Notice how the \c{util.hpp} header is in the \c{foo/} subdirectory rather than in \c{include/} directly.

The second key piece of information we need to find is whether and, if so, how the public headers and sources are split. For instance, in the above example, we can see that public headers go into \c{include/} while sources and private headers go into \c{src/}.
But they could also be combined in the same directory, for example, as in the following layout:

\
upstream/
└── foo/
    ├── priv.hpp
    ├── util.cpp
    └── util.hpp
\

\N|In multi-package projects, for example, those that provide both a library and an executable, you would also want to understand how the sources are split between the packages.|

If the headers and sources are split into different directories, then the source directory may or may not have the inclusion subdirectory, similar to the header directory. In the above split layout the \c{src/} directory doesn't contain the inclusion subdirectory (\c{foo/}) while the following layout does:

\
upstream/
├── include/
│   └── foo/
│       └── util.hpp
└── src/
    └── foo/
        ├── priv.hpp
        └── util.cpp
\

With the understanding of these key properties of the upstream layout you should be in a good position to start crafting the \l{bdep-new(1)} command line that recreates it.

\N|The \c{bdep-new} documentation uses a slightly more general terminology compared to what we used in the previous section in order to also be applicable to projects that use modules instead of headers. Specifically, the inclusion subdirectory (\c{foo/}) is called \i{source subdirectory} while the header directory (\c{include/}) and source directory (\c{src/}) are called \i{header prefix} and \i{source prefix}, respectively.|

\h2#core-package-craft-cmd|Craft \c{bdep new} command line to create package|

The recommended procedure for this step is to read through the \c{bdep-new}'s \l{bdep-new.xhtml#src-layout SOURCE LAYOUT} section (which contains a large number of examples) while experimenting with various options in an attempt to create the desired layout. If the layout you've got isn't quite right yet, simply remove the package directory along with the \c{packages.manifest} file and try again.
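While experimenting, it may help to keep in mind how the \c{bdep-new} terminology mentioned earlier maps onto our example split layout (annotations added for illustration):

\
upstream/
├── include/     # header prefix
│   └── foo/     # source subdirectory
│       └── util.hpp
└── src/         # source prefix (no source subdirectory)
    ├── priv.hpp
    └── util.cpp
\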
Let's illustrate this approach on the original example of the split layout:

\
upstream/
├── include/
│   └── foo/
│       └── util.hpp
└── src/
    ├── priv.hpp
    └── util.cpp
\

We know it's split, so let's start with that and see what we get. Remember, our \c{foo} package repository that we have cloned and initialized earlier looks like this:

\
$ tree foo/
foo/
├── .gitattributes
├── .gitignore
├── README.md
└── repositories.manifest
\

Now we create the \c{libfoo} package inside:

\
$ cd foo

$ bdep new --package --lang c++ --type lib,split libfoo

$ tree libfoo/
libfoo/
├── include/
│   └── libfoo/
│       └── foo.hxx
└── src/
    └── libfoo/
        └── foo.cxx
\

The outer structure looks right, but inside \c{include/} and \c{src/} things are a bit off. Specifically, the source subdirectory should be \c{foo/}, not \c{libfoo/}, there shouldn't be one inside \c{src/}, and the file extensions don't match upstream. All this can be easily tweaked, however:

\
$ rm -r libfoo/ packages.manifest

$ bdep new --package \
  --lang c++,cpp \
  --type lib,split,subdir=foo,no-subdir-source \
  libfoo

$ tree libfoo/
libfoo/
├── include/
│   └── foo/
│       └── foo.hpp
└── src/
    └── foo.cpp
\

The other \c{bdep-new} sub-options (see the \l{bdep-new(1)} man page for the complete list) that you will likely want to use when packaging a third-party project include:

\dl|

\li|\n\cb{no-version}

Omit the auto-generated version header. Usually upstream will provide its own equivalent of this functionality.

\N|Note that even if upstream doesn't provide any version information, it's not a good idea to try to rectify this by providing your own version header since upstream may add it in a future version and you may end up with a conflict. Instead, work with the project maintainer to rectify this in upstream.||

\li|\n\cb{no-symexport}\n\cb{auto-symexport}

The \c{no-symexport} sub-option suppresses the generation of the DLL symbol exporting header. This is an appropriate option if upstream provides its own symbol exporting arrangements.
The \c{auto-symexport} sub-option enables automatic DLL symbol exporting support (see \l{b#cc-auto-symexport Automatic DLL Symbol Exporting} for background). This is an appropriate option if upstream relies on similar support in the existing build system. It is also recommended that you give this functionality a try even if upstream does not support building shared libraries on Windows.|

\li|\n\cb{binless}

Create a header-only library. See \l{#dont-header-only Don't make library header-only if it can be compiled} and \l{https://github.com/build2/HOWTO/blob/master/entries/make-header-only-library.md How do I make a header-only C/C++ library?}|

\li|\n\cb{buildfile-in-prefix}

Place header/source \c{buildfiles} into the header/source prefix directory instead of the source subdirectory. To illustrate the difference, compare these two auto-generated layouts paying attention to the location of \c{buildfiles}:

\
$ bdep new ... --type lib,split,subdir=foo libfoo

$ tree libfoo/
libfoo/
├── include/
│   └── foo/
│       ├── buildfile
│       └── foo.hpp
└── src/
    └── foo/
        ├── buildfile
        └── foo.cpp
\

\
$ bdep new ... --type lib,split,subdir=foo,buildfile-in-prefix libfoo

$ tree libfoo/
libfoo/
├── include/
│   ├── foo/
│   │   └── foo.hpp
│   └── buildfile
└── src/
    ├── foo/
    │   └── foo.cpp
    └── buildfile
\

Note that this sub-option only makes sense if we have the header and/or source prefixes (\c{include/} and \c{src/} in our case) as well as the source subdirectory (\c{foo/} in our case).

Why would we want to do this? The main reason is to be able to symlink the entire upstream directories rather than individual files. In the first listing, the generated \c{buildfiles} are inside the \c{foo/} subdirectories which means we cannot just symlink \c{foo/} from upstream. With a large number of files to symlink, this can be such a strong motivation that it may make sense to invent a source subdirectory in the source prefix even if upstream doesn't have one.
See \l{#dont-main-target-root-buildfile Don't build your main targets in the root \c{buildfile}} for details on this technique.

Another reason we may want to move \c{buildfiles} to the prefix is to be able to handle upstream projects that have multiple source subdirectories. While this situation is not very common in the header prefix, it can be encountered in the source prefix of more complex projects, where upstream wishes to organize the source files into components.||

Continuing with our \c{libfoo} example, assuming upstream provides its own symbol exporting, the final \c{bdep-new} command line would be:

\
$ bdep new --package \
  --lang c++,cpp \
  --type lib,split,subdir=foo,no-subdir-source,no-version,no-symexport \
  libfoo
\

\h2#core-package-review|Review and test auto-generated \c{buildfile} templates|

Let's get a more complete view of what got generated by the final \c{bdep-new} command line from the previous section:

\
$ tree libfoo/
libfoo/
├── build/
│   └── ...
├── include/
│   └── foo/
│       ├── buildfile
│       └── foo.hpp
├── src/
│   ├── buildfile
│   └── foo.cpp
├── tests/
│   ├── build/
│   │   └── ...
│   ├── basics/
│   │   ├── buildfile
│   │   └── driver.cpp
│   └── buildfile
├── buildfile
├── manifest
└── README.md
\

Once the overall layout looks right, the next step is to take a closer look at the generated \c{buildfiles} to make sure that overall they match the upstream build. Of particular interest are the header and source directory \c{buildfiles} (\c{libfoo/include/foo/buildfile} and \c{libfoo/src/buildfile} in the above listing) which define how the library is built and installed. Here we are focusing on the macro-level differences that are easier to change by tweaking the \c{bdep-new} command line rather than manually.
For example, if we look at the generated source directory \c{buildfile} and realize it builds a \i{binful} library (that is, a library that includes source files and therefore produces library binaries) while the upstream library is header-only, it is much easier to fix this by re-running \c{bdep-new} with the \c{binless} sub-option than by changing the \c{buildfiles} manually.

\N|Don't be tempted to start making manual changes at this stage even if you cannot see anything else that can be fixed with a \c{bdep-new} re-run. This is still a dry-run and we will recreate the package one more time in the following section before starting manual adjustments.|

Besides examining the generated \c{buildfiles}, it's also a good idea to build, test, and install the generated package to make sure everything ends up where you expected and matches upstream where necessary. In particular, make sure public headers are installed into the same location as upstream.

\N|The \c{bdep-new}-generated library is a simple \"Hello, World!\" example that can nevertheless be built, tested, and installed. The idea here is to verify it matches upstream using the generated source files before replacing them with the upstream source file symlinks.|

Note that at this stage it's easiest to build, test, and install in source directly, sidestepping the \c{bdep} initialization of the package (which you would have to de-initialize before you can re-run \c{bdep-new}). Continuing with the above example, the recommended sequence of commands would be:

\
$ cd libfoo

$ b update
$ b test
$ b install config.install.root=/tmp/install
$ b clean
\

Let's also briefly discuss other subdirectories and files found in the \c{bdep-new}-generated \c{libfoo} package. The \c{build/} subdirectory is the standard \c{build2} place for project-wide build system information (see \l{b#intro-proj-struct Project Structure} for details). We will look closer at its contents in the following sections.
In the root directory of our package we find the root \c{buildfile} and package \c{manifest}. We will be tweaking both in the following steps. There is also \c{README.md} which we will replace with the upstream symlink.

The \c{tests/} subdirectory is the standard \c{build2} tests subproject (see \l{b#intro-operations-test Testing} for details). While you can suppress its generation with the \c{no-tests} \c{bdep-new} sub-option, we recommend that you keep it and use it as a starting point for porting upstream tests or, if upstream doesn't provide any, for a basic \"smoke test\" (@@ ref HOWTO).

\N|You can easily add/remove/rename this \c{tests/} subproject. The only place where it is mentioned explicitly and where you will need to make changes is the root \c{buildfile}. In particular, if upstream provides examples that you wish to port, it is recommended that you use a copy of the generated \c{tests/} subproject as a starting point (not forgetting to add the corresponding entry in the root \c{buildfile}).|

\h2#core-package-create|Create final package|

If you are satisfied with the \c{bdep-new} command line and there are no more automatic adjustments you can squeeze out of it, then it's time to re-run \c{bdep-new} one last time to create the final package.

\N|While redoing this step later will require more effort, especially if you've made manual modifications to \c{buildfile} and \c{manifest}, nothing is set in stone and it can be done again by simply removing the package directory and removing (or editing, if you have multiple packages and only want to redo some of them) \c{packages.manifest} and starting over.|

This time, however, we will do things a bit differently in order to take advantage of some additional automation offered by \c{bdep-new}. If the package directory already exists and contains certain files, \c{bdep-new} can take this into account when generating the root \c{buildfile} and package \c{manifest}.
In particular, it will try to guess the license from the \c{LICENSE} file and extract the summary from \c{README.md} and use this information in \c{manifest}.

\N|If the file names or formats used by upstream don't match those recognized by \c{bdep-new} or if an attempt to extract the information is unsuccessful, then for now simply omit the corresponding files from the package directory and add them later manually. Specifically, for \c{README}, \c{bdep-new} only recognizes \c{README.md}. For license files, \c{bdep-new} recognizes \c{LICENSE}, \c{LICENSE.txt}, \c{LICENSE.md}, \c{COPYING}, and \c{UNLICENSE}.

@@ TODO: PACKAGE-README.md and README-PACKAGE.md (and below)

|

Continuing with our \c{libfoo} example and assuming upstream provides the \c{README.md} and \c{LICENSE} files, we first manually create the package directory, then add the symlinks, and finally run \c{bdep-new} (notice that we have omitted the package name from the \c{bdep-new} command line since we are running from inside the package directory):

\
$ cd foo/      # Change to the package repository root.

$ rm -r libfoo/ packages.manifest

$ mkdir libfoo/
$ cd libfoo/   # Change to the package root.

$ ln -s ../upstream/README.md ./
$ ln -s ../upstream/LICENSE ./

$ bdep new --package \
  --lang c++,cpp \
  --type lib,split,subdir=foo,no-subdir-source,no-version,no-symexport
\

If auto-detection succeeds, then you should see the \c{summary} and \c{license} values automatically populated in \c{manifest} and the symlinked files listed in the root \c{buildfile}.

\h2#core-package-adjust-version|Adjust package version|

While adjusting the \c{bdep-new}-generated code is the subject of the following sections, one tweak that we want to make right away is to change the package version in the \c{manifest} file.
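For reference, the relevant part of such a freshly generated \c{manifest} could look along these lines (the \c{summary} and \c{license} values are hypothetical and depend on what was auto-detected from upstream's files):

\
: 1
name: libfoo
version: 0.1.0-a.0.z
summary: foo C++ library
license: MIT
\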
In this guide we will assume the upstream package uses semver (semantic version) or semver-like (that is, has three version components) and will rely on the \i{continuous versioning} feature of \c{build2} to make sure that each commit in our package repository has a distinct version (see \l{intro#guide-versioning-releasing Versioning and Release Management} for background). \N|If upstream does not use semver, then see \l{https://github.com/build2/HOWTO/blob/master/entries/handle-projects-which-dont-use-semver.md How do I handle projects that don't use semantic versioning?} and \l{https://github.com/build2/HOWTO/blob/master/entries/handle-projects-which-dont-use-version.md How do I handle projects that don't use versions at all?} for available options. If you decide to use the non-semver upstream version as is, then you will have to forgo \i{continuous versioning} as well as the use of \l{bdep-release(1)} for release management. The rest of the guide, however, will still apply. In particular, you will still be able to use \l{bdep-ci(1)} and \l{bdep-publish(1)} with a bit of extra effort.| The overall plan to implement continous versioning is to start with a pre-release snapshot of the upsream version, keep it like that while we are adjusting the \c{bdep-new}-generated package and committing our changes (at which point we get distinct snapshot versions), and finally, when the package is ready to publish, change to the final upstream version with the help of \l{bdep-release(1)}. Specifically, if the upstream version is \c{\i{X}.\i{Y}.\i{Z}}, then we start with the \c{\i{X}.\i{Y}.\i{Z}-a.0.z} pre-release snapshot. Let's see how this works for our \c{libfoo} example. Say, the upstream version that we are packaging is \c{2.1.0}. This means we start with \c{2.1.0-a.0.z}. 
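To give a rough sense of how continuous versioning will play out for our package (the timestamp and commit id below are, of course, made up for illustration), its version will evolve along these lines:

\
2.1.0-a.0.z                            # snapshot placeholder in manifest
2.1.0-a.0.20240528123456.abcdef123456  # version of a particular commit
2.1.0                                  # final released version
\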
\N|Naturally, the upstream version that we are using should correspond to the commit of the \c{upstream} submodule we have added on the \l{#core-repo-submodule Add upstream repository as \c{git} submodule} step.|

Next we edit the \c{manifest} file in the \c{libfoo} package and change the \c{version} value to read:

\
version: 2.1.0-a.0.z
\

Let's also commit this initial state of the package for easier rollbacks:

\
$ cd foo/  # Change to the package repository root.

$ git add .
$ git status
$ git commit -m \"Initialize package\"
\

\h#core-fill|Fill package with source code and add dependencies|

With the package skeleton ready, the next steps are to fill it with upstream source code, add dependencies, and make any necessary manual adjustments to the generated \c{buildfiles}, \c{manifest}, etc.

If we do this all at once, however, it can be hard to pinpoint the cause of build failures. For example, if we convert both the library and its tests right away and something doesn't work, it can be hard to determine whether the mistake is in the library or in the tests. As a result, we are going to split this work into a sequence of smaller steps that incrementally replace the \c{bdep-new}-generated code with upstream while allowing us to test each change individually. We will also commit the changes on each step for easy rollbacks. Specifically, the overall plan is as follows:

\ol|

\li|Initialize (\c{bdep-init}) the package in one or more build configurations.|

\li|Add dependencies, if any.|

\li|Fill the library with upstream source code.|

\li|Adjust project-wide and source subdirectory \c{buildfiles}.|

\li|Make a smoke test for the library.|

\li|Replace the smoke test with upstream tests.|

\li|Tweak root \c{buildfile} and \c{manifest}.|

\li|Test the result using the CI service. @@ Actually doing it as ready.|

|

The first three steps are the subject of this section with the following sections covering the rest of the plan.
\N|As you become more experienced with packaging third-party projects for \c{build2} it may make sense to start combining or omitting some steps, especially for simpler libraries. For example, if you see that a library comes with a simple test that shouldn't cause any complications, then you could omit the smoke test.|

\h2#core-fill-init|Initialize package in build configurations|

Before we start making any changes to the \c{bdep-new}-generated files, let's initialize the package in at least one build configuration so that we are able to build and test our changes (see \l{intro#guide Getting Started Guide} for background on the \c{bdep}-based development workflow). Continuing with our \c{libfoo} example from the earlier steps:

\
$ cd foo/  # Change to the package repository root.

$ bdep init -C ../foo-gcc @gcc cc config.cxx=g++
\

Let's build and test the \c{bdep-new}-generated package to make sure everything is in order:

\
$ bdep update
$ bdep test
$ bdep clean
\

You can create additional configurations, for example, if you have access to several compilers. For instance, to create a build configuration for Clang:

\
$ bdep init -C ../foo-clang @clang cc config.cxx=clang++
\

If you would like to perform a certain operation on all the build configurations, pass the \c{-a|--all} flag to \c{bdep}:

\
$ bdep update -a
$ bdep test -a
$ bdep clean -a
\

Let's also verify that the resulting package repository is clean (doesn't have any uncommitted or untracked files):

\
$ git status
\

\h2#core-fill-depend|Add dependencies|

If the upstream project has any dependencies, now is a good time to specify them so that when we attempt to build the upstream source code, they are already present.

Identifying whether the upstream project has dependencies is not always easy. The natural first places to check are the documentation and the existing build system. Sometimes projects also bundle their dependencies with the project source code (also called vendoring).
So it makes sense to look around the upstream repository for anything that looks like bundled dependencies. Normally we would need to \"unbundle\" such dependencies when converting to \c{build2} by instead specifying a dependency on an external package.

\N|While there are several reasons we insist on unbundling of dependencies, the main one is that bundling can cause multiple, potentially conflicting copies of the same dependency to exist in the build. This can cause subtle build failures that are hard to understand and to track down.|

One particularly common case to check for is bundling of the testing framework, such as Catch2, by C++ projects. If you have identified that the upstream tests depend on a testing framework (whether bundled or not), see \l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md How do I handle tests that have extra dependencies?}

If you have concluded that the upstream project doesn't have any dependencies, then you can remove \c{repositories.manifest} from the package repository root (unless you have already done so), commit this change, and skip the rest of this section.

And if you are still reading, then we assume you have a list of dependencies you need to add, preferably with their minimum required versions. If you could not identify the minimum required version for a dependency, then you can fall back to the latest available version, as will be described in a moment.

With the list of dependencies in hand, the next step is to determine whether they are already available as \c{build2} packages. For that, head over to \l{https://cppget.org cppget.org} and search for each dependency.

If you are unable to find a package for a dependency, then it means it hasn't been packaged for \c{build2} yet. Check the places mentioned in the \l{#core-repo-exists Check if package repository already exists} step to see if perhaps someone is already working on the package.
If not and the dependency is not optional, then the only way forward is to first package the dependency.

If you do find a package for a dependency, then note the section of the repository (\c{stable}, \c{testing}, etc; see \l{intro#guide-repositories Package Repositories} for background) from which the minimum required version of the package is available. If you were unable to identify the minimum required version, then note the latest version available from the \c{stable} section.

Given the list of repository sections, edit the \c{repositories.manifest} file in the package repository root and uncomment the entry for \c{cppget.org}:

\
:
role: prerequisite
location: https://pkg.cppget.org/1/stable
#trust: ...
\

Next, replace \c{stable} at the end of the \c{location} value with the least stable section from your list. For example, if your list contains \c{stable}, \c{testing}, and \c{beta}, then you need \c{beta} (the sections form a hierarchy so that \c{beta} includes \c{testing} which in turn includes \c{stable}).

\N|If you wish, you can also uncomment the \c{trust} value and replace \c{...} with the \l{https://cppget.org/?about repository fingerprint}. This way you won't be prompted to confirm the repository authenticity on first fetch. See \l{intro#guide-add-remove-deps Adding and Removing Dependencies} for details.|

Once this is done, edit \c{manifest} in the package root and add the \c{depends} value for each dependency. See \l{intro#guide-add-remove-deps Adding and Removing Dependencies} for background. In particular, here you will use the minimum required version (or the latest available) to form a version constraint. Which constraint operator to use will depend on the dependency's versioning policies. If the dependency uses semver, then a \c{^}-based constraint is a sensible default.

As an example, let's say our \c{libfoo} depends on \c{libz}, \c{libasio}, and \c{libsqlite3}.
To specify these dependencies we would add the following entries to its \c{manifest}:

\
depends: libz ^1.2.0
depends: libasio ^1.28.0
depends: libsqlite3 ^3.39.4
\

With all the dependencies specified, now let's synchronize the state of the build configurations with our changes by running \l{bdep-sync(1)} from the package repository root:

\
$ bdep sync -a
\

\N|If you have any build-time dependencies (see \l{intro#guide-build-time-linked Build-Time Dependencies and Linked Configurations} for background), then you will get a warning about the corresponding \c{config.import.*} variable being unused and therefore dropped. This is because we haven't yet added the corresponding \c{import} directives to our \c{buildfiles}. For now you can ignore this warning and we will fix it later, when we adjust the generated \c{buildfiles}.|

This command should first fetch the metadata for the repository we specified in \c{repositories.manifest} and then fetch, unpack, and configure each dependency that we specified in \c{manifest}. We can examine the resulting state, including the version of each dependency, with \l{bdep-status(1)}:

\
$ bdep status -ai
\

The last step for this section is to commit our changes:

\
$ cd foo/  # Change to the package repository root.

$ git add .
$ git status
$ git commit -m \"Add dependencies\"
\

\h2#core-fill-source|Fill with upstream source code|

Now we are ready to begin replacing the \c{bdep-new}-generated files with upstream source code symlinks and we start with the library's header and source files. Continuing with our \c{libfoo} example, this is what we currently have (notice that \c{LICENSE} and \c{README.md} are already symlinks to upstream):

\
$ cd foo/  # Change to the package repository root.

$ tree libfoo/
libfoo/
├── build/
│   └── ...
├── include/
│   └── foo/
│       ├── buildfile
│       └── foo.hpp
├── src/
│   ├── buildfile
│   └── foo.cpp
├── tests/
│   └── ...
├── LICENSE -> ../upstream/LICENSE
├── README.md -> ../upstream/README.md
├── buildfile
└── manifest
\

Now we replace the generated \c{include/foo/foo.hpp} with the library's real headers and \c{src/foo.cpp} with its real source files:

\
$ cd libfoo/  # Change to the package root.

$ cd include/foo/
$ rm foo.hpp
$ ln -s ../../../upstream/include/foo/*.hpp ./
$ cd -

$ cd src
$ rm foo.cpp
$ ln -s ../../upstream/src/*.cpp ./
$ cd -

$ tree libfoo/
libfoo/
├── build/
│   └── ...
├── include/
│   └── foo/
│       ├── buildfile
│       ├── core.hpp -> ../../../upstream/include/foo/core.hpp
│       └── util.hpp -> ../../../upstream/include/foo/util.hpp
├── src/
│   ├── buildfile
│   ├── core.cpp -> ../../upstream/src/core.cpp
│   └── util.cpp -> ../../upstream/src/util.cpp
├── tests/
│   └── ...
└── ...
\

Note that the wildcards used above may not be enough in all situations and it's a good idea to manually examine the relevant upstream directories and make sure nothing is missing. Specifically, look out for:

\ul|

\li|Headers/sources with other extensions, for example, C, Objective-C, etc.|

\li|Other files that may be needed, for example, \c{.def}, \c{config.h.in}, etc.|

\li|Subdirectories that contain more header/source files.||

If upstream contains subdirectories with additional header/source files, then you can symlink entire subdirectories instead of doing it file by file. For example, let's say \c{libfoo}'s upstream source directory contains the \c{impl/} subdirectory with additional source files:

\
$ cd src
$ ln -s ../../upstream/src/impl ./
$ cd -

$ tree libfoo/
libfoo/
├── build/
│   └── ...
├── include/
│   └── ...
├── src/
│   ├── impl/ -> ../../upstream/src/impl/
│   │   ├── bar.cpp
│   │   └── baz.cpp
│   ├── buildfile
│   ├── core.cpp -> ../../upstream/src/core.cpp
│   └── util.cpp -> ../../upstream/src/util.cpp
├── tests/
│   └── ...
└── ...
\

Wouldn't it be nice if we could symlink the entire top-level subdirectories (\c{include/foo/} and \c{src/} in our case) instead of symlinking individual files?
As discussed in \l{#core-package-craft-cmd Craft \c{bdep new} command line to create package}, we can, but we will need to change the package layout. Specifically, we will need to move the \c{buildfiles} out of the source subdirectories with the help of the \c{buildfile-in-prefix} sub-option of \c{bdep-new}. In the above case, we will need to invent a source subdirectory in \c{src/}.

Whether this is a worthwhile change largely depends on how many files you have to symlink individually. If it's just a handful, then it's probably not worth the complication, especially if you have to invent source subdirectories. On the other hand, if you are looking at symlinking hundreds of files, changing the layout makes perfect sense.

\N|One minor drawback of symlinking entire directories is that you cannot easily patch individual upstream files (see \l{#howto-patch-upstream-source How do I patch upstream source code}). You will also need to explicitly list such directories as symlinks in \c{.gitattributes} if you want your package to be usable from the \c{git} repository on Windows. See \l{https://build2.org/article/symlinks.xhtml#windows Symlinks and Windows} for details.|

We won't be able to test this change yet because to make things build we will most likely also need to tweak the generated \c{buildfiles}, which is the subject of the next section. However, it still makes sense to commit our changes to make rollbacks easier:

\
$ cd foo/  # Change to the package repository root.

$ git add .
$ git status
$ git commit -m \"Add upstream source symlinks\"
\

\h#core-adjust-build|Adjust project-wide and source \c{buildfiles}|

With source code and dependencies added, the next step is to adjust the generated \c{buildfiles} that build the library. This involves two places: the project-wide build system files in \c{build/} and the source subdirectory \c{buildfiles} (in \c{include/} and \c{src/} for our \c{libfoo} example).
\h2#core-adjust-build-wide|Review project-wide build system files in \c{build/}|

We start with reviewing the files in the \c{build/} subdirectory of our package, where you will find three files: \c{bootstrap.build}, \c{root.build}, and \c{export.build}. To recap, the first two contain the project-wide build system setup (see \l{b#intro-proj-struct Project Structure} for details) while the last is an export stub that facilitates the importation of targets from our package (see \l{b#intro-import Target Importation} for details).

Normally you don't need to change anything in \c{bootstrap.build} \- all it does is specify the build system project name and load a standard set of core build system modules. Likewise, \c{export.build} is good as generated unless you need to do something special, like exporting targets from different subdirectories of your package. While \c{root.build} is also often good as is, situations where you may need to tweak it are not uncommon and include:

\ul|

\li|Loading additional build system modules.

For example, if your package makes use of Objective-C/C++ (see \l{b#c-objc Objective-C Compilation} and \l{b#cxx-objcxx Objective-C++ Compilation}) or Assembler (see \l{b#c-as-cpp Assembler with C Preprocessor Compilation}), then \c{root.build} would be the natural place to load the corresponding modules.

\N|If your package uses a mixture of C and C++, then it's recommended to set this up using the \c{--lang} sub-option of \c{bdep-new} rather than manually. For example:

\
$ bdep new --lang c++,c ...
\

||

\li|Specifying package configuration variables.

If upstream provides the ability to configure their code, for example to enable optional features, then you may want to translate this to \c{build2} configuration variables, which must be specified in \c{root.build} (see \l{b#proj-config Project Configuration} for background and details). Note that you don't need to add all the configuration variables right away.
Instead, you could first handle the \"core\" functionality which doesn't require any configuration and then add the configuration variables one by one while also making the corresponding changes in \c{buildfiles}.

\N|One type of configuration that you should normally not expose when packaging for \c{build2} is support for both header-only and compiled modes. See \l{#dont-header-only Don't make library header-only if it can be compiled}.|||

Also, in C++ projects, if you don't have any inline or template files, then you can drop the assignment of the file extension for the \c{ixx} and \c{txx} target types, respectively.

If you have added any configuration variables and would like to use non-default values for some of them in your build, then you will need to reconfigure the package. For example, let's say we have added the \c{config.libfoo.debug} variable to our \c{libfoo} package which enables additional debugging facilities in the library. This is how we can reconfigure all our builds to enable this functionality:

\
$ bdep sync -a config.libfoo.debug=true
\

If you have made any changes, commit them (similar to the previous step, we cannot test things just yet):

\
$ cd foo/  # Change to the package repository root.

$ git add .
$ git status
$ git commit -m \"Adjust project-wide build system files\"
\

\h2#core-adjust-build-src|Adjust source subdirectory \c{buildfiles}|

The next step we need to perform before we can build our library is to adjust its \c{buildfiles}. These \c{buildfiles} are found in the source subdirectory or, if we used the \c{buildfile-in-prefix} sub-option, in the prefix directory. There will be two \c{buildfiles} if we use the split layout (\c{split} sub-option) or a single \c{buildfile} in the combined layout. The single \c{buildfile} in the combined layout contains essentially the same definitions as the split \c{buildfiles} but combined into one and with some minor simplifications that this allows.
So here we will assume the split layout and will continue with our \c{libfoo} from the previous sections. To recap, here is the layout we've got with the \c{buildfiles} of interest found in \c{include/foo/} and in \c{src/}:

\
libfoo/
├── build/
│   └── ...
├── include/
│   └── foo/
│       ├── buildfile
│       ├── core.hpp -> ../../../upstream/include/foo/core.hpp
│       └── util.hpp -> ../../../upstream/include/foo/util.hpp
├── src/
│   ├── buildfile
│   ├── core.cpp -> ../../upstream/src/core.cpp
│   └── util.cpp -> ../../upstream/src/util.cpp
├── tests/
│   └── ...
└── ...
\

\h2#core-adjust-build-src-header|Adjust header \c{buildfile}|

The \c{buildfile} in \c{include/foo/} is pretty simple:

\N|The \c{buildfile} in your package may look slightly different depending on the exact \c{bdep-new} sub-options used. However, all the relevant definitions discussed below should still be easily recognizable.|

\
pub_hdrs = {hxx ixx txx}{**}

./: $pub_hdrs

# Install into the foo/ subdirectory of, say, /usr/include/
# recreating subdirectories.
#
{hxx ixx txx}{*}:
{
  install         = include/foo/
  install.subdirs = true
}
\

Normally the only change that you would make to this \c{buildfile} is to adjust the installation location of headers (see \l{b#intro-operations-install Installing} for background). In particular, if our headers were included without the \c{ } prefix but instead contained the library name in their names (for example, \c{foo_util.hpp}), then the installation setup would instead look like this:

\
# Install directly into, say, /usr/include/ recreating subdirectories.
#
{hxx ixx txx}{*}:
{
  install         = include/
  install.subdirs = true
}
\

If the library doesn't have any headers in subdirectories, you can drop the \c{install.subdirs} variable:

\
# Install into the foo/ subdirectory of, say, /usr/include/.
#
{hxx ixx txx}{*}: install = include/foo/
\

\N|In the combined layout, the installation-related definitions are at the end of the combined \c{buildfile}.|

See also \l{#howto-extra-header-install-subdir How do I handle extra header installation subdirectory}.

\h2#core-adjust-build-src-source|Adjust source \c{buildfile}: overview|

Next is the \c{buildfile} in \c{src/}:

\N|Again, the \c{buildfile} in your package may look slightly different depending on the exact \c{bdep-new} sub-options used. However, all the relevant definitions discussed below should still be easily recognizable.

For a binless (header-only) library, this \c{buildfile} will contain only a small subset of the definitions shown below. See \l{https://github.com/build2/HOWTO/blob/master/entries/make-header-only-library.md How do I make a header-only C/C++ library?} for additional considerations when packaging header-only libraries.|

\
intf_libs = # Interface dependencies.
impl_libs = # Implementation dependencies.
#import xxxx_libs += libhello%lib{hello}

# Public headers.
#
pub = [dir_path] ../include/foo/

include $pub

pub_hdrs = $($pub/ pub_hdrs)

lib{foo}: $pub/{$pub_hdrs}

# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx ixx txx cxx}{**} $impl_libs $intf_libs

# Build options.
#
out_pfx_inc = [dir_path] $out_root/include/
src_pfx_inc = [dir_path] $src_root/include/
out_pfx_src = [dir_path] $out_root/src/
src_pfx_src = [dir_path] $src_root/src/

cxx.poptions =+ \"-I$out_pfx_src\" \"-I$src_pfx_src\" \
               \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"

#{hbmia obja}{*}: cxx.poptions += -DFOO_STATIC_BUILD
#{hbmis objs}{*}: cxx.poptions += -DFOO_SHARED_BUILD

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"
  cxx.export.libs = $intf_libs
}

#liba{foo}: cxx.export.poptions += -DFOO_STATIC
#libs{foo}: cxx.export.poptions += -DFOO_SHARED

# For pre-releases use the complete version to make sure they cannot
# be used in place of another pre-release or the final version. See
# the version module for details on the version.* variable values.
#
if $version.pre_release
  lib{foo}: bin.lib.version = \"-$version.project_id\"
else
  lib{foo}: bin.lib.version = \"-$version.major.$version.minor\"

# Don't install private headers.
#
{hxx ixx txx}{*}: install = false
\

\h2#core-adjust-build-src-source-clean|Adjust source \c{buildfile}: cleanup|

As a first step, let's remove all the definitions that we don't need in our library. The two common pieces of functionality that are often not needed are support for auto-generated headers (such as \c{config.h} generated from \c{config.h.in}) and dependencies on other libraries.

If you don't have any auto-generated headers, then remove all the assignments and expansions of the \c{out_pfx_inc} and \c{out_pfx_src} variables. Here is what the relevant lines in the above \c{buildfile} should look like after this change:

\
# Build options.
#
src_pfx_inc = [dir_path] $src_root/include/
src_pfx_src = [dir_path] $src_root/src/

cxx.poptions =+ \"-I$src_pfx_src\" \"-I$src_pfx_inc\"

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$src_pfx_inc\"
}
\

\N|If you do have auto-generated headers, then in the split layout you can remove \c{out_pfx_inc} if you only have private auto-generated headers and \c{out_pfx_src} if you only have public.|

\N|In the combined layout the single \c{buildfile} does not set the \c{*_pfx_*} variables. Instead it uses the \c{src_root} and \c{out_root} variables directly. For example:

\
# Build options.
#
cxx.poptions =+ \"-I$out_root\" \"-I$src_root\"

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_root\" \"-I$src_root\"
}
\

To remove support for auto-generated headers in the combined \c{buildfile}, simply remove the corresponding \c{out_root} expansions:

\
# Build options.
#
cxx.poptions =+ \"-I$src_root\"

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$src_root\"
}
\

If you only have private auto-generated headers, then only remove the expansion from \c{cxx.export.poptions}.|

If you don't have any dependencies, then remove all the assignments and expansions of the \c{intf_libs} and \c{impl_libs} variables. That is, the following lines in the original \c{buildfile}:

\
intf_libs = # Interface dependencies.
impl_libs = # Implementation dependencies.
#import xxxx_libs += libhello%lib{hello}

# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx ixx txx cxx}{**} $impl_libs $intf_libs

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"
  cxx.export.libs = $intf_libs
}
\

Become just these:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx ixx txx cxx}{**}

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"
}
\

\h2#core-adjust-build-src-source-dep|Adjust source \c{buildfile}: dependencies|

If you do have dependencies, then let's handle them now.

\N|Here we will assume dependencies on other libraries, which is the common case. If you have dependencies on executables, for example, source code generators, see \l{intro#guide-build-time-linked Build-Time Dependencies and Linked Configurations} on how to handle that.

In this case you will also need to reconfigure your package after adding the corresponding \c{import} directives in order to re-acquire the previously dropped \c{config.import.*} values. Make sure to also pass any configuration variables you specified in \l{#core-adjust-build-wide Review project-wide build system files in \c{build/}}.
For example:

\
$ bdep sync -a --disfigure config.libfoo.debug=true
\

|

For each library that your package depends on (and which you have added to \c{manifest} on the \l{#core-fill-depend Add dependencies} step), you need to first determine whether it's an interface or implementation dependency and then import it into either the \c{intf_libs} or \c{impl_libs} variable, respectively. See \l{b#intro-lib Library Exportation and Versioning} for background on the interface vs implementation distinction. But as a quick rule of thumb, if your library includes a header from the dependency library in one of its public headers, then it's an interface dependency. Otherwise, it's an implementation dependency.

Continuing with our \c{libfoo} example, as we have established in \l{#core-fill-depend Add dependencies}, it depends on \c{libasio}, \c{libz}, and \c{libsqlite3}. Let's say we've determined that \c{libasio} is an interface dependency because it's included from \c{include/foo/core.hpp} while the other two are implementation dependencies because they are only included from \c{src/}. Here is how we would change our \c{buildfile} to import them:

\
intf_libs = # Interface dependencies.
impl_libs = # Implementation dependencies.

import intf_libs += libasio%lib{asio}
import impl_libs += libz%lib{z}
import impl_libs += libsqlite3%lib{sqlite3}
\

And you can tidy this a bit further if you would like:

\
import intf_libs  = libasio%lib{asio}
import impl_libs  = libz%lib{z}
import impl_libs += libsqlite3%lib{sqlite3}
\

\N|If you don't have any implementation or interface dependencies, you can remove the assignment and all the expansions of the corresponding \c{*_libs} variable.|

Note also that system libraries like \c{-lm}, \c{-ldl} on UNIX or \c{advapi32.lib}, \c{ws2_32.lib} on Windows should not be imported. Instead, they should be listed in the \c{c.libs} or \c{cxx.libs} variables.
See \l{https://github.com/build2/HOWTO/blob/master/entries/link-system-library.md How do I link a system library} for details.

\h2#core-adjust-build-src-source-pub|Adjust source \c{buildfile}: public headers|

With the unnecessary parts of the \c{buildfile} cleaned up and dependencies handled, let's discuss the common changes to the remaining definitions, going from top to bottom. We start with the public headers block:

\
# Public headers.
#
pub = [dir_path] ../include/foo/

include $pub

pub_hdrs = $($pub/ pub_hdrs)

lib{foo}: $pub/{$pub_hdrs}
\

This block gets hold of the list of public headers and makes them prerequisites of the library. Normally you shouldn't need to make any changes here. If you need to exclude some headers, it should be done in the \c{buildfile} in the \c{include/} directory.

\N|In the combined layout the single \c{buildfile} does not have such code. Instead, all headers are covered by the wildcard pattern in the following block.|

\h2#core-adjust-build-src-source-src|Adjust source \c{buildfile}: sources, private headers|

The next block deals with sources, private headers, and dependencies, if any:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx ixx txx cxx}{**} $impl_libs $intf_libs
\

By default it will list all the relevant files as prerequisites of the library, starting from the directory of the \c{buildfile} and including all the subdirectories, recursively (see \l{b#name-patterns Name Patterns} for background on wildcard patterns).

If your C++ package doesn't have any inline or template files, then you can remove the \c{ixx} and \c{txx} target types, respectively (which is parallel to the change made in \c{root.build}; see \l{#core-adjust-build-wide Review project-wide build system files in \c{build/}}). For example:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx cxx}{**} $impl_libs $intf_libs
\

The other common change to this block is the exclusion of certain files or making them conditionally included. As an example, let's say in our \c{libfoo} the source subdirectory contains a bunch of \c{*-test.cpp} files which are unit tests and should not be listed as prerequisites of a library. Here is how we can exclude them:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx cxx}{** -**-test} $impl_libs $intf_libs
\

Let's also assume our \c{libfoo} contains \c{impl-win32.cpp} and \c{impl-posix.cpp} which provide alternative implementations of the same functionality for Windows and POSIX and should only be included as prerequisites on the respective platforms. Here is how we can handle that:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx cxx}{** -impl-win32 -impl-posix -**-test}
lib{foo}: cxx{impl-win32}: include = ($cxx.target.class == 'windows')
lib{foo}: cxx{impl-posix}: include = ($cxx.target.class != 'windows')
lib{foo}: $impl_libs $intf_libs
\

There are two nuances in the above example worth highlighting. Firstly, we have to exclude the files from the wildcard pattern before we can conditionally include them. Secondly, we have to always link libraries last. In particular, the following is a shorter but incorrect version of the above:

\
lib{foo}: {hxx cxx}{** -impl-win32 -impl-posix -**-test} \
          $impl_libs $intf_libs
lib{foo}: cxx{impl-win32}: include = ($cxx.target.class == 'windows')
lib{foo}: cxx{impl-posix}: include = ($cxx.target.class != 'windows')
\

\N|You may also be tempted to use the \c{if} directive instead of the \c{include} variable for conditional prerequisites. For example:

\
if ($cxx.target.class == 'windows')
  lib{foo}: cxx{impl-win32}
else
  lib{foo}: cxx{impl-posix}
\

This would also be incorrect.
For background and details, see \l{https://github.com/build2/HOWTO/blob/master/entries/keep-build-graph-config-independent.md How do I keep the build graph configuration-independent?}|

\h2#core-adjust-build-src-source-opt|Adjust source \c{buildfile}: build and export options|

The next two blocks are the build and export options, which we will discuss together:

\
# Build options.
#
out_pfx_inc = [dir_path] $out_root/include/
src_pfx_inc = [dir_path] $src_root/include/
out_pfx_src = [dir_path] $out_root/src/
src_pfx_src = [dir_path] $src_root/src/

cxx.poptions =+ \"-I$out_pfx_src\" \"-I$src_pfx_src\" \
               \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"

#{hbmia obja}{*}: cxx.poptions += -DFOO_STATIC_BUILD
#{hbmis objs}{*}: cxx.poptions += -DFOO_SHARED_BUILD

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"
  cxx.export.libs = $intf_libs
}

#liba{foo}: cxx.export.poptions += -DFOO_STATIC
#libs{foo}: cxx.export.poptions += -DFOO_SHARED
\

The build options are in effect when the library itself is being built and the exported options are propagated to the library consumers (see \l{b#intro-lib Library Exportation and Versioning} for background on exported options). For now we will ignore the commented out lines that add \c{-DFOO_STATIC*} and \c{-DFOO_SHARED*} macros \- they are for symbol exporting and we will discuss this topic separately.

If the library you are packaging only uses portable APIs, then chances are you won't need to change anything here. On the other hand, if it does anything platform-specific, then you will most likely need to add some options here.

As discussed in the \l{b#intro-dirs-scopes Output Directories and Scopes} section of the build system introduction, there are a number of variables that are used to specify compilation and linking options, such as \c{*.poptions} (\c{cxx.poptions} in the above example), \c{*.coptions}, etc.
The below table shows all of them with their rough \c{make} equivalents in the third column:

\
*.poptions  preprocess        CPPFLAGS
*.coptions  compile           CFLAGS/CXXFLAGS
*.loptions  link              LDFLAGS
*.aoptions  archive           ARFLAGS
*.libs      system libraries  LIBS/LDLIBS
\

The recommended approach here is to study the upstream build system definition and copy custom compile/link options to the appropriate \c{build2} variables. Note, however, that doing it thoughtlessly/faithfully by copying all the options may not always be a good idea. See \l{https://github.com/build2/HOWTO/blob/master/entries/compile-options-in-buildfile.md Which C/C++ compile/link options are OK to specify in a project's buildfile?} for the guidelines.

Also, oftentimes, such custom options must only be specified for certain target platforms or when using a certain compiler. While \c{build2} provides a large amount of information to identify the build configuration as well as more advanced \c{buildfile} language mechanisms (such as \l{b#intro-switch Pattern Matching (\c{switch})}) to make sense of it, this is a large topic for which we refer you to \l{b The \c{build2} Build System} manual. Additionally, \l{https://github.com/build2-packaging github.com/build2-packaging} now contains a large number of packages that you can study and search for examples.

Let's also consider a representative example based on our \c{libfoo} to get a sense of what this normally looks like as well as to highlight a few nuances. Let's assume our \c{libfoo} requires either the \c{FOO_POSIX} or \c{FOO_WIN32} macro to be defined during the build in order to identify the target platform. Additionally, extra features can be enabled by defining \c{FOO_EXTRAS} both during the build and for consumption (so this macro must also be exported). Next, this library requires the \c{-fno-strict-aliasing} compile option for the GCC-class compilers (GCC, Clang, etc). Finally, we need to link \c{pthread} on POSIX and \c{ws2_32.lib} on Windows.
This is how we would work all this into the above fragment:

\
# Build options.
#
out_pfx_inc = [dir_path] $out_root/include/
src_pfx_inc = [dir_path] $src_root/include/
out_pfx_src = [dir_path] $out_root/src/
src_pfx_src = [dir_path] $src_root/src/

cxx.poptions =+ \"-I$out_pfx_src\" \"-I$src_pfx_src\" \
               \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"

cxx.poptions += -DFOO_EXTRAS

if ($cxx.target.class == 'windows')
  cxx.poptions += -DFOO_WIN32
else
  cxx.poptions += -DFOO_POSIX

#{hbmia obja}{*}: cxx.poptions += -DFOO_STATIC_BUILD
#{hbmis objs}{*}: cxx.poptions += -DFOO_SHARED_BUILD

if ($cxx.class == 'gcc')
  cxx.coptions += -fno-strict-aliasing

switch $cxx.target.class, $cxx.target.system
{
  case 'windows', 'mingw32'
    cxx.libs += -lws2_32

  case 'windows'
    cxx.libs += ws2_32.lib

  default
    cxx.libs += -pthread
}

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\" -DFOO_EXTRAS
  cxx.export.libs = $intf_libs
}

#liba{foo}: cxx.export.poptions += -DFOO_STATIC
#libs{foo}: cxx.export.poptions += -DFOO_SHARED
\

There are a few nuances in the above code worth keeping in mind. Firstly, notice that we append (rather than assign) to all the non-export variables (\c{*.poptions}, \c{*.coptions}, \c{*.libs}). This is because they may already contain some values specified by the user with their \c{config.*.*} counterparts. On the other hand, the \c{*.export.*} variables are assigned.

Secondly, the order in which we append to the variables is important for the value to accumulate correctly. You want to first append all the scope-level values, then target type/pattern-specific, and finally any target-specific; that is, from more general to more specific (see \l{b#intro-lang Buildfile Language} for background). To illustrate this point, let's say in our \c{libfoo}, the \c{FOO_POSIX} or \c{FOO_WIN32} macro is only necessary when compiling \c{util.cpp}.
Below would be the correct order of assigning to \c{cxx.poptions}:

\
cxx.poptions =+ \"-I$out_pfx_src\" \"-I$src_pfx_src\" \
               \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"

cxx.poptions += -DFOO_EXTRAS

#{hbmia obja}{*}: cxx.poptions += -DFOO_STATIC_BUILD
#{hbmis objs}{*}: cxx.poptions += -DFOO_SHARED_BUILD

if ($cxx.target.class == 'windows')
  {obja objs}{util}: cxx.poptions += -DFOO_WIN32
else
  {obja objs}{util}: cxx.poptions += -DFOO_POSIX
\

\N|Note that target-specific \c{*.poptions} and \c{*.coptions} must be specified on the object file targets while \c{*.loptions} and \c{*.libs} \- on the library or executable targets.|

\h2#core-adjust-build-src-source-sym|Adjust source \c{buildfile}: symbol exporting|

Let's now turn to a special sub-topic of the build and export options that relates to the shared library symbol exporting. To recap, a shared library on Windows must explicitly specify the symbols (functions and global data) that it wishes to make accessible to its users. This can be achieved in three different ways: The library can explicitly mark in its source code the names whose symbols should be exported. Alternatively, the library can provide a \c{.def} file to the linker that lists the symbols to be exported. Finally, the library can request automatic exporting of all symbols, which is the default semantics on non-Windows platforms. Note that the last two approaches only work for exporting functions, not data, unless extra steps are taken by the library users.

Let's discuss each of these approaches in the reverse order, that is, starting with the automatic symbol exporting. The automatic symbol exporting is implemented in \c{build2} by generating a \c{.def} file that exports all the relevant symbols. It requires a few additional definitions in our \c{buildfile} as described in \l{b#cc-auto-symexport Automatic DLL Symbol Exporting}. You can automatically generate the necessary setup with the \c{auto-symexport} \c{bdep-new} sub-option.
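Whether generated automatically or written by hand, a \c{.def} (module-definition) file is a plain text file that simply names the symbols to be exported. For illustration, a minimal hypothetical \c{foo.def} for our \c{libfoo} might look like this (the exported function names are made up):

\
EXPORTS
  foo_init
  foo_deinit
\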
Using a custom \c{.def} file to export symbols is fairly straightforward: simply list it as a prerequisite of the library and it will be automatically passed to the linker. For example:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx cxx}{**} $impl_libs $intf_libs def{foo}
\

The last approach is to explicitly specify in the source code which symbols must be exported by marking the corresponding declarations with \c{__declspec(dllexport)} during the library build and \c{__declspec(dllimport)} during the library use. This is commonly achieved with a macro, customarily called \c{*_EXPORT} or \c{*_API}, which is defined to one of the above specifiers based on whether a static or shared library is being built or consumed, which, in turn, is also normally signalled with a few more macros, such as \c{*_BUILD_DLL} and \c{*_USE_STATIC}.

In \c{build2} you can explicitly signal any of the four situations by uncommenting and adjusting the following four lines in the build and export options blocks:

\
# Build options.
#
...

#{hbmia obja}{*}: cxx.poptions += -DFOO_STATIC_BUILD
#{hbmis objs}{*}: cxx.poptions += -DFOO_SHARED_BUILD

# Export options.
#
...

#liba{foo}: cxx.export.poptions += -DFOO_STATIC
#libs{foo}: cxx.export.poptions += -DFOO_SHARED
\

As an example, let's assume our \c{libfoo} defines in one of its headers the \c{FOO_EXPORT} macro based on the \c{FOO_BUILD_DLL} (shared library is being built) and \c{FOO_USE_STATIC} (static library is being used) macros that it expects to be appropriately defined by the build system. This is how we would modify the above fragment to handle this setup:

\
# Build options.
#
...

{hbmis objs}{*}: cxx.poptions += -DFOO_BUILD_DLL

# Export options.
#
...
liba{foo}: cxx.export.poptions += -DFOO_USE_STATIC
\

\h2#core-adjust-build-src-source-ver|Adjust source \c{buildfile}: shared library version|

The final few lines in the above \c{buildfile} deal with shared library binary (ABI) versioning:

\
# For pre-releases use the complete version to make sure they cannot
# be used in place of another pre-release or the final version. See
# the version module for details on the version.* variable values.
#
if $version.pre_release
  lib{foo}: bin.lib.version = \"-$version.project_id\"
else
  lib{foo}: bin.lib.version = \"-$version.major.$version.minor\"
\

The \c{bdep-new}-generated setup arranges for the platform-independent versioning where the package's major and minor version components are embedded into the shared library binary name (and \c{soname}) under the assumption that only patch versions are ABI-compatible. The two situations where you would want to change this are when the above assumption does not hold and/or when the upstream provides platform-specific shared library versions which you would like to re-create in your \c{build2} build. See \l{b#intro-lib Library Exportation and Versioning} for background and details.

\h2#core-adjust-build-src-source-ext|Adjust source \c{buildfile}: extra requirements|

The changes discussed so far should be sufficient to handle a typical library that is written in C and/or C++ and is able to handle platform differences with the preprocessor and compile/link options. However, sooner or later you will run into a more complex library that may use additional languages, require more elaborate platform detection, or use additional functionality, such as support for source code generators. The below list provides pointers to resources that cover the more commonly encountered additional requirements.
\ul|

\li|\l{b#module-in The \c{in} build system module}

Use to process \c{config.h.in} (or other \c{.in} files) that don't require Autoconf-style platform probing (\c{HAVE_*} options).|

\li|\l{https://github.com/build2/libbuild2-autoconf The \c{autoconf} build system module}

Use to process \c{config.h.in} (or their CMake/Meson variants) that require Autoconf-style platform probing (\c{HAVE_*} options).|

\li|\l{b#c-objc Objective-C Compilation} and \l{b#cxx-objcxx Objective-C++ Compilation}

Use to compile Objective-C (\c{.m}) or Objective-C++ (\c{.mm}) source files.|

\li|\l{b#c-as-cpp Assembler with C Preprocessor Compilation}

Use to compile Assembler with C Preprocessor (\c{.S}) source files.|

\li|\l{b#intro-unit-test Implementing Unit Testing}

Use if upstream has tests (normally unit tests) in the source subdirectory.|

\li|\l{intro#guide-build-time-linked Build-Time Dependencies and Linked Configurations}

Use if upstream relies on source code generators, such as \l{https://cppget.org/reflex \c{lex}} and \l{https://cppget.org/byacc \c{yacc}}.|

\li|\l{https://github.com/build2/HOWTO/ The \c{build2} HOWTO}

See the \c{build2} HOWTO article collection for more unusual requirements.||

\h2#core-adjust-build-test|Test library build|

At this point our library should be ready to build, at least in theory. While we cannot build and test the entire package before adjusting the generated \c{tests/} subproject (the subject of the next step), we can try to build just the library and, if it has any unit tests in the source subdirectory, even run some tests.

\N|If the library is header-only, there won't be anything to build unless there are unit tests. Still, you may want to continue with this exercise to detect any syntactic mistakes in the \c{buildfiles}, etc.|

To build only a specific subdirectory of our package we use the build system directly (continuing with our \c{libfoo} example):

\
$ cd libfoo/src/  # Change to the source subdirectory.
$ b update
\

If there are any issues, try to fix them and then build again. Once the library builds and if you have unit tests, you can try to run them:

\
$ b test
\

Once the library builds, it makes sense to commit our changes for easier rollbacks:

\
$ cd foo/  # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Adjust source subdirectory buildfiles\"
\

\h#core-test-smoke|Make smoke test|

With the library build sorted, we need tests to make sure it is actually functional. As \l{#core-fill discussed earlier}, it is recommended to start with a simple smoke test, make sure that works, and then replace it with upstream tests. However, if upstream tests look simple enough, you can skip the smoke test. For example, if upstream has all its tests in a single source file and its build doesn't look too complicated, then you can just use that source file in place of the smoke test.

\N|If upstream has no tests, then the smoke test will have to stay. A library can only be published if it has at least one test. It is also recommended to have the smoke test if upstream tests are in a separate package. See \l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md How do I handle tests that have extra dependencies?} for background and details.|

To recap, the \c{bdep-new}-generated \c{tests/} subdirectory looks like this (continuing with our \c{libfoo} example):

\
libfoo/
├── ...
└── tests/
    ├── build/
    │   ├── bootstrap.build
    │   └── root.build
    ├── basics/
    │   ├── driver.cpp
    │   └── buildfile
    └── buildfile
\

The \c{tests/} subdirectory is a build system subproject, meaning that it can be built independently, for example, to test the installed version of the library (see \l{b#intro-operations-test Testing} for background). In particular, this means it has the \c{build/} subdirectory with project-wide build system files the same as our library.
Then there is the \c{basics/} subdirectory which contains the generated test and which is what we will be turning into a smoke test. The subproject root \c{buildfile} rarely needs changing.

\h2#core-test-smoke-build-wide|Review project-wide build system files in \c{tests/build/}|

Review the generated \c{bootstrap.build} and \c{root.build} (there will be no \c{export.build}) similar to \l{#core-adjust-build-wide Review project-wide build system files in \c{build/}}. Here the only change you would normally make is in \c{root.build} to drop the assignment of extensions for target types that are not used in tests.

\h2#core-test-smoke-adjust|Convert generated test to library smoke test|

The \c{basics/} subdirectory contains the \c{driver.cpp} source file that implements the test and \c{buildfile} that builds it. You can rename both the test directory (\c{basics/}) and the source file \c{driver.cpp}, for example, if you are going with the upstream tests directly. You can also add more tests by simply copying \c{basics/}.

The purpose of a smoke test is to make sure the library's public headers can be included (including in the installed case, no pun intended), it can be linked, and its basic functionality works. To achieve this, we modify \c{driver.cpp} to include the library's main headers and call a few functions. For example, if the library has the init/deinit type of functions, those are good candidates to call. If the library is not header-only, make sure that the smoke test calls at least one non-inline/template function to test symbol exporting.
\N|Make sure that your test includes the library's public headers the same way as would be used by the library users.|

Continuing with our \c{libfoo} example, this is what its smoke test might look like:

\
#include <foo/core.hpp>
#include <foo/util.hpp>

#undef NDEBUG
#include <cassert>

int main ()
{
  foo::context* c (foo::init (0 /* flags */));
  assert (c != nullptr);
  foo::deinit (c);
}
\

\N|The C/C++ \c{assert()} macro is often adequate for simple tests and does not require extra dependencies. But see \l{https://github.com/build2/HOWTO/blob/master/entries/use-assert-in-tests.md How do I correctly use C/C++ assert() in tests?}|

The test \c{buildfile} is pretty simple:

\
import libs = libfoo%lib{foo}

exe{driver}: {hxx ixx txx cxx}{**} $libs testscript{**}
\

If you have adjusted the library target name (\c{lib{foo\}}) in the source subdirectory \c{buildfile}, then you will need to make the corresponding change in the \c{import} directive here. You may also want to tidy it up by removing unused prerequisite types. For example:

\
import libs = libfoo%lib{foo}

exe{driver}: {hxx cxx}{**} $libs
\

\h2#core-test-smoke-locally|Test locally|

With the smoke test ready, we can finally do some end-to-end testing of our library build. We will start with doing some local testing to catch basic mistakes and then do the full CI to detect any platform/compiler-specific issues.

First let's run the test in the default build configuration by invoking the build system directly:

\
$ cd libfoo/tests/  # Change to the tests/ subproject.
$ b test
\

If there are any issues (compile/link errors, test failures), try to address them and re-run the test. Once the default configuration builds and passes the tests, you can do the same for all the build configurations, in case you have \l{#core-fill-init initialized} your library in several:

\
$ bdep test -a
\

\h2#core-test-smoke-locally-install|Test locally: installation|

Once this works, let's test the installed version of the library.
In particular, this makes sure that the public headers are installed in a way that is compatible with how they are included by our test (and would be included by the users of our library). To test this we first install the library into some temporary directory:

\
$ cd libfoo/  # Change to the package root.
$ b install config.install.root=/tmp/install
\

Next we build just the \c{tests/} subproject arranging for it to find the installed library:

\
$ cd libfoo/  # Change to the package root.
$ b test: tests/@/tmp/libfoo-tests-out/ \
  config.cc.loptions=-L/tmp/install/lib \
  config.bin.rpath=/tmp/install/lib
\

\N|The equivalent MSVC command line would be:

\
> b install config.install.root=c:\tmp\install
> b test: tests\@c:\tmp\libfoo-tests-out\^
  config.cc.loptions=/LIBPATH:c:\tmp\install\lib
\

|

\N|It is a good idea to look over the installed files and make sure there is nothing unexpected, for example, missing or extraneous files.|

Once done testing the installed case, let's clean things up:

\
$ rm -r /tmp/install /tmp/libfoo-tests-out
\

\h2#core-test-smoke-locally-dist|Test locally: distribution|

Another special case worth testing is the preparation of the source distribution (see \l{b#intro-operations-dist Distributing} for background). This, in particular, is how your package will be turned into the source archive for publishing to \l{https://cppget.org cppget.org}. Here we are primarily looking for missing files. As a bonus, this will also allow us to test the in source build.

First we distribute our package to some temporary directory:

\
$ cd libfoo/  # Change to the package root.
$ b dist config.dist.root=/tmp/dist config.dist.uncommitted=true
\

The result will be in the \c{/tmp/dist/libfoo- /} directory which should resemble our \c{libfoo/} package but without files like \c{.gitignore}.
Next we build and test the distribution in source:

\
$ cd /tmp/dist/libfoo-<version>/
$ b configure config.cxx=g++
$ b update
$ b test
\

\N|If your package has dependencies that you import in your
\c{buildfile}, then the above \c{configure} operation will most
likely fail because such dependencies cannot be found (it may succeed
if they are available as system-installed). The error message will
suggest specifying the location of each dependency with the
corresponding \c{config.import.*} variable. You can fix this by
setting each such \c{config.import.*} variable to the location of the
build configuration \l{#core-fill-init created by \c{bdep}} which
should contain all the necessary dependencies. Simply re-run the
\c{configure} operation until you have discovered and specified all
the necessary \c{config.import.*} variables, for example:

\
$ b configure config.cxx=g++ \
    config.import.libz=.../foo-gcc \
    config.import.libasio=.../foo-gcc \
    config.import.libsqlite3=.../foo-gcc
\

|

\N|It is a good idea to look over the distributed files and make sure
there is nothing missing or extraneous.|

Once done testing the distribution, let's clean things up:

\
$ rm -r /tmp/dist
\

\h2#core-test-smoke-ci|Commit and test with CI|

With local testing complete, let's commit our changes and submit a
remote CI job to test our library on all the major platforms and with
all the major compilers:

\
$ cd foo/  # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Add smoke test\"
$ git push
$ bdep ci
\

The result of the \l{bdep-ci(1)} command is a link where you can see
the status of the builds. If any fail, view the logs to determine the
cause, try to fix it, commit your fix, and CI again.

\N|It is possible that upstream does not support some platforms or
compilers. For example, it's common for smaller projects not to
bother with supporting \"secondary\" compilers, such as MinGW GCC on
Windows or Homebrew GCC on Mac OS.
If upstream expressly does not support some platform or compiler,
it's probably not worth spending time and energy trying to support it
in the package. Most likely it will require changes to the upstream
source code and that is best done upstream rather than in the package
(see \l{#dont-fix-upstream Don't try to fix upstream issues in the
package} for background). In this case you would want to exclude
these platforms/compilers from the CI builds using the
\l{bpkg#manifest-package-builds \c{builds} package \c{manifest}
value}.

The other common cause of a failed build is a newer version of a
compiler or platform that breaks upstream. In this case there are
three options: ideally, you would want to fix this upstream and have
a new version released. Failing that, you may want to patch the
upstream code to fix the issues, especially if this is one of the
major platforms and/or primary compilers (see
\l{#howto-patch-upstream-source How do I patch upstream source code}
for details). Finally, you can just leave the build failing with the
expectation that it will be fixed in the next upstream version. Note
that in this case you should not exclude the failing build from CI.|

\h#core-test-upstream|Replace smoke test with upstream tests|

With the smoke test working we can now proceed with replacing it with
the upstream tests.

\h2#core-test-upstream-understand|Understand how upstream tests work|

While there are some commonalities in how C/C++ libraries are
normally built, when it comes to tests there is unfortunately little
common ground in how they are arranged, built, and executed. As a
result, the first step in dealing with upstream tests is to study the
existing build system and try to understand how they work.
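One practical way to study the upstream tests is to build and run
them with the upstream build system and observe what gets compiled
and executed. As a hedged sketch, for a hypothetical CMake-based
upstream (the option names shown are common CMake/CTest conventions
and may not apply to the project at hand):

\
$ cd upstream
$ cmake -S . -B /tmp/upstream-build -DBUILD_TESTING=ON
$ cmake --build /tmp/upstream-build
$ ctest --test-dir /tmp/upstream-build -V
\

The verbose \c{ctest} output shows which test executables exist and
how they are invoked, which answers most of the questions below.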
To get you started, below are some of the questions you would likely
need answered before you can proceed:

\ul|

\li|\b{Are upstream tests unit tests or integration tests?}

While the distinction is often fuzzy, for our purposes the key
differentiator between unit and integration tests is which API they
use: integration tests only use the library's public API while unit
tests need access to the implementation details.

Normally (but not always), unit tests will reside next to the library
source code since they need access to more than just the library
binary (individual object files, utility libraries, etc). Integration
tests, on the other hand, are normally (but again not always) placed
into a separate subdirectory, usually called \c{tests} or \c{test}.

If the library has unit tests, then refer to \l{b#intro-unit-test
Implementing Unit Testing} for background on how to handle them in
\c{build2}. If the library has integration tests, then use them to
replace (or complement) the smoke test. If the library has unit tests
but no integration tests, then it's recommended to keep the smoke
test since that's the only way the library will be tested via its
public API.|

\li|\b{Do upstream tests use an external testing framework?}

Oftentimes a C++ library will use an external testing framework to
implement tests. Popular choices include \l{https://cppget.org/catch2
\c{catch2}}, \l{https://cppget.org/gtest \c{gtest}},
\l{https://cppget.org/doctest \c{doctest}}, and
\l{https://cppget.org/libboost-test \c{libboost-test}}.

If a library uses such an external testing framework, then it is
recommended to factor the tests into a separate package in order to
avoid making the library package depend on the testing framework
(which is only required during testing). See
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md
How do I handle tests that have extra dependencies?} for details.
\N|Sometimes you will find that upstream bundles the source code of
the testing framework with their tests. This is especially common
with \c{catch2}. If that's the case, it is strongly recommended that
you \"unbundle\" it by making it a proper external dependency.||

\li|\b{Are upstream tests in a single or multiple executables?}

It's not unusual for libraries to have a single test executable that
runs all the test cases. This is especially common if a C++ testing
framework is used. In this case it is natural to replace the contents
of the smoke test with the upstream source code, potentially renaming
the test subdirectory (\c{basics/}) to better match upstream naming.

If upstream has multiple test executables, then they could all be in
a single test subdirectory (potentially reusing some common bits) or
spread over multiple subdirectories. In both cases it's a good idea
to follow the upstream structure unless you have good reasons to
deviate. In the former case (all executables in the same
subdirectory), you can re-purpose the smoke test subdirectory. In the
latter case (each executable in a separate subdirectory) you can make
copies of the smoke test subdirectory.|

\li|\b{Are upstream tests well behaved?}

Unfortunately it's not uncommon for upstream tests to misbehave, for
example, by writing diagnostics to \c{stdout} instead of \c{stderr},
creating temporary files without cleaning them up, or assuming the
presence of input files in the current working directory. For details
on how to deal with such situations see
\l{https://github.com/build2/HOWTO/blob/master/entries/sanitize-test-execution.md
How do I sanitize the execution of my tests?}||

\h2#core-test-upstream-convert|Convert smoke test to upstream tests|

Once you have a good grasp of how upstream tests work, convert or
replace the smoke test with the upstream tests. If upstream has
multiple test executables, you may want to deal with one test at a
time, making sure that it passes before moving to the next one.
It's normally a good idea to use the smoke test \c{buildfile} as a starting point for upstream tests. To recap, the smoke test \c{buildfile} for our \c{libfoo} example ended up looking like this: \ import libs = libfoo%lib{foo} exe{driver}: {hxx cxx}{**} $libs \ At a minimum you will most likely need to change the name of the executable to match upstream. If you need to build multiple executables in the same directory, then it's probably best to get rid of the name pattern for the source files and specify the prerequisite names explicitly, for example: \ import libs = libfoo%lib{foo} ./: exe{test1}: cxx{test1} $libs ./: exe{test2}: cxx{test2} $libs \ If you have a large number of such test executables, then a \c{for}-loop might be a more scalable option: \ import libs = libfoo%lib{foo} for src: cxx{test*} ./: exe{$name($src)}: $src $libs \ \h2#core-test-upstream-locally|Test locally| With the upstream tests ready, we re-do the same end-to-end testing as we did with the smoke test: \l{#core-test-smoke-locally Test locally}\n \l{#core-test-smoke-locally-install Test locally: installation}\n \l{#core-test-smoke-locally-dist Test locally: distribution}\n \h2#core-test-upstream-ci|Commit and test with CI| With local testing complete, we commit our changes and submit a remote CI job. This step is similar to what \l{#core-test-smoke-ci we did for the smoke test} but this time we are using the upstream tests: \ $ cd foo/ # Change to the package repository root. $ git add . $ git status $ git commit -m \"Add upstream tests\" $ git push $ bdep ci \ \h#core-examples-banchmarks|Add upstream examples, benchmarks, if any| If the upstream project provides examples and/or benchmarks and you wish to add them to the \c{build2} build (which is not strictly necessary for the \c{build2} package to be usable), then now is a good time to do that. 
As was mentioned in \l{#core-package-review Review and test
auto-generated \c{buildfile} templates}, the recommended approach is
to copy the \c{tests/} subproject (potentially from the commit
history, before the smoke test was replaced with the upstream tests)
and use that as a starting point for examples and/or benchmarks. Just
do not forget to add the corresponding entry in the root
\c{buildfile}.

Once that is done, follow the same steps as in \l{#core-test-upstream
Replace smoke test with upstream tests} to add upstream
examples/benchmarks and test the result.

\h#core-root|Adjust root \c{buildfile} and \c{manifest}|

The last few files that we need to review and potentially adjust are
the root \c{buildfile} and package \c{manifest}.

\h2#core-root-buildfile|Adjust root \c{buildfile}|

The main function of the root \c{buildfile} is to pull in all the
subdirectories that need building plus list targets that are usually
found in the root directory of a project, typically \c{README.md},
\c{LICENSE}, etc.

This is what the generated root \c{buildfile} looks like for our
\c{libfoo} project assuming we have symlinked \c{README.md} and
\c{LICENSE} from upstream on the \l{#core-package-create Create final
package} step:

@@ PACKAGE-README.md?

\
./: {*/ -build/} doc{README.md} legal{LICENSE} manifest

# Don't install tests.
#
tests/: install = false
\

If the upstream project provides any other documentation (change log,
news, etc) or legal files (list of authors, code of conduct, etc),
then you may want to symlink and list them as the \c{doc{\}} and
\c{legal{\}} prerequisites, respectively.

\N|One file you don't need to list is \c{INSTALL} (or equivalent)
which normally contains the installation instructions for the
upstream build system.
In the \c{build2} package the \c{PACKAGE-README.md} file serves this
purpose.|

\h2#core-root-buildfile-doc|Adjust root \c{buildfile}: other subdirectories|

If the upstream project has other subdirectories that make sense to
include in the \c{build2} package, then now is a good time to take
care of that. The most common such case is extra documentation
(besides the root \c{README}), typically in a subdirectory called
\c{doc/}, \c{docs/}, or \c{documentation/}.

The typical procedure for handling such subdirectories is to symlink
the relevant files (or the entire subdirectory) and then list the
files as prerequisites. For this last step, there are two options: we
can list the files directly in the root \c{buildfile} or we can
create a separate \c{buildfile} in the subdirectory. Let's examine
both approaches using our \c{libfoo} as an example. Assume that
upstream \c{libfoo} contains the \c{docs/} subdirectory with
additional \c{*.md} files that document its API. It would make sense
to include them in the \c{build2} package.

Listing the subdirectory files directly in the root \c{buildfile}
works best for simple cases, where you have a bunch of static files
that don't require any customizations, such as to their installation
location. In this case we can symlink the entire \c{docs/}
subdirectory:

\
$ cd libfoo/  # Change to the package root.
$ ln -s ../upstream/docs ./
\

The adjustments to the root \c{buildfile} are pretty straightforward:
we exclude the \c{docs/} subdirectory (since it has no \c{buildfile})
and list the \c{*.md} files as prerequisites using the \c{doc{\}}
target type (which, in particular, makes sure they are installed into
the appropriate location):

\
./: {*/ -build/ -docs/} \
    doc{README.md} docs/doc{*.md} \
    legal{LICENSE} manifest
\

The alternative approach (create a separate \c{buildfile}) is a good
choice if things are more complicated than that.
Let's say we need to adjust the installation location of the files in
\c{docs/} because there is another \c{README.md} that would conflict
with the root one when installed into the same location. This time we
cannot symlink the top-level \c{docs/} subdirectory (because we need
to place a \c{buildfile} there). The two options here are to either
symlink the individual files or introduce another subdirectory level
inside \c{docs/} (which is the same approach as discussed in
\l{#dont-main-target-root-buildfile Don't build your main targets in
the root \c{buildfile}}). Let's illustrate both sub-cases.

Symlinking individual files works best when you don't expect the set
of files to change often. For example, if \c{docs/} contains a man
page and its HTML rendering, then it's unlikely this set will change.
On the other hand, if \c{docs/} contains a manual split into an
\c{.md} file per chapter, then there is a good chance this set of
files will fluctuate between releases.

Continuing with our \c{libfoo} example, this is how we symlink the
individual \c{*.md} files in \c{docs/}:

\
$ cd libfoo/  # Change to the package root.
$ mkdir docs
$ cd docs/
$ ln -s ../../upstream/docs/*.md ./
\

Then write a new \c{buildfile} in \c{docs/}:

\
./: doc{*.md}

# Install the documentation in docs/ into the manual/ subdirectory
# of, say, /usr/share/doc/libfoo/ since we cannot install both its
# and root README.md into the same location.
#
doc{*.md}: install = doc/manual/
\

Note that we don't need to make any changes to the root \c{buildfile}
since this subdirectory will automatically get picked up by the
\c{{*/\ -build/\}} name pattern that we have there.

Let's now look at the alternative arrangement with another
subdirectory level inside \c{docs/}. Here we achieve the same result
but in a slightly different way.
Specifically, we call the subdirectory \c{manual/} and install
recreating subdirectories (see \l{b#intro-operations-install
Installing} for background):

\
$ cd libfoo/  # Change to the package root.
$ mkdir -p docs/manual
$ cd docs/manual/
$ ln -s ../../../upstream/docs/*.md ./
\

And the corresponding \c{buildfile} in \c{docs/}:

\
./: doc{**.md}

# Install the documentation in docs/ into, say,
# /usr/share/doc/libfoo/ recreating subdirectories.
#
doc{*}:
{
  install = doc/
  install.subdirs = true
}
\

\h2#core-root-buildfile-commit|Adjust root \c{buildfile}: commit and test|

Once all the adjustments to the root \c{buildfile} are made, it makes
sense to test it locally (this time from the root of the package),
commit our changes, and test with CI:

\
$ cd libfoo/  # Change to the package root.
$ b test
$ bdep test -a
\

If you had to add any extra files to the root \c{buildfile} or add a
\c{buildfile} in extra subdirectories, then it also makes sense to
test installation (\l{#core-test-smoke-locally-install Test locally:
installation}) and preparation of the source distribution
(\l{#core-test-smoke-locally-dist Test locally: distribution}) and
make sure the extra files end up in the right places.

Then commit our changes and CI:

\
$ cd foo/  # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Adjust root buildfile\"
$ git push
$ bdep ci
\

\h2#core-root-manifest|Adjust \c{manifest}|

The last file we need to look over is the package's \c{manifest}.
Here is what it typically looks like, using our \c{libfoo} as an
example:

@@ TODO: regenerate with final version.

\
: 1
name: libfoo
version: 2.1.0-a.0.z
language: c++
project: foo
summary: C++ library implementing secure Foo protocol
license: MIT ; MIT License.
description-file: README.md
url: https://example.org/foo
email: boris@codesynthesis.com
#build-error-email: boris@codesynthesis.com
depends: * build2 >= 0.16.0
depends: * bpkg >= 0.16.0
#depends: libhello ^1.0.0
\

You can find the description of these and other package \c{manifest}
values in the \l{bpkg#manifest-package Package Manifest} section of
the \l{bpkg The \c{build2} Package Manager} manual.

In the above listing the values that we likely need to adjust are
\c{summary} and \c{license} (@@ see sections below), unless correctly
auto-detected by \c{bdep-new} on the \l{#core-package-create Create
final package} step, as well as \c{url}, \c{email}, @@ TODO: update
with final list (\c{package-*}, etc), also make links.

While you may be tempted to also adjust the \c{version} value, don't,
since this will be done automatically by \l{bdep-release(1)} later.

@@ Should we give quick recommendations for url, email, etc and also
an example for libfoo?

You may also want to add the following values in certain cases:

\dl|

\li|\l{bpkg#manifest-package-changes \cb{changes-file}}

If you have added a news or change log file to the root \c{buildfile}
(see \l{#core-root-buildfile Adjust root buildfile}), then it also
makes sense to list it in the \c{manifest}. For example:

\
changes-file: NEWS
\

|

\li|\l{bpkg#manifest-package-topics \cb{topics}}

Package topics. For example:

\
topics: network protocol, network security
\

\N|If the upstream project is hosted on GitHub or similar, then you
can usually copy the topics from the upstream repository
description.||

\li|\l{bpkg#manifest-package-doc-url \cb{doc-url}}\n
\l{bpkg#manifest-package-src-url \cb{src-url}}

Documentation and source code URLs. For example:

\
doc-url: https://example.org/foo/doc/
src-url: https://github.com/.../foo
\

||

\h2#core-root-manifest-license|Adjust \c{manifest}: \c{license}|

For \c{license}, use the \l{https://spdx.org/licenses/ SPDX license
ID} if at all possible.
If multiple licenses are involved, use an SPDX license expression.
See the
\l{https://build2.org/bpkg/doc/build2-package-manager-manual.xhtml#manifest-package-license
\c{license} manifest value} documentation for details, including the
list of the SPDX IDs for the commonly used licenses.

\h2#core-root-manifest-summary|Adjust \c{manifest}: \c{summary}|

For \c{summary} use a brief description of the functionality provided
by the package. Less than 70 characters is a good target to aim for.
Don't capitalize subsequent words unless they are proper nouns and
omit the trailing dot. For example:

\
summary: Vim xxd hexdump utility
\

Omit weasel words such as \"modern\", \"simple\", \"fast\",
\"small\", etc., since they don't convey anything specific. Omit
\"header-only\" or \"single-header\" for C/C++ libraries since, at
least in the context of \c{build2}, it does not imply any advantage.

If upstream does not offer a sensible summary, the following template
is recommended for libraries:

\
summary: <functionality> C library
summary: <functionality> C++ library
\

For example:

\
summary: Event notification C library
summary: Validating XML parsing and serialization C++ library
\

If the project consists of multiple packages it may be tempting to
describe each package in terms of the overall project name, for
example:

\
summary: libigl's core module
\

This doesn't give the user any clue about what functionality is
provided unless they find out what \c{libigl} is about.
Better: \ summary: Geometry processing C++ library, core module \ If you follow the above pattern, then to produce a summary for external tests or examples packages simply add \"tests\" or \"examples\" at the end, for example: \ summary: Event notification C library tests summary: Geometry processing C++ library, core module examples \ \h2#core-root-manifest-commit|Adjust \c{manifest}: commit and test| Once all the adjustments to the \c{manifest} are made, it makes sense to test it locally (this time from the root of the package), commit our changes, and test with CI: \ $ cd libfoo/ # Change to the package root. $ b test $ bdep test -a \ Then commit our changes and CI: \ $ cd foo/ # Change to the package repository root. $ git add . $ git status $ git commit -m \"Adjust manifest\" $ git push $ bdep ci \ \h#core-release-publish|Release and publish| Once all the adjustments are in and everything is tested, we can finally release the final version of the package as well as publish it to \l{https://cppget.org cppget.org}. Both of these steps are automated with the corresponding \c{bdep} commands. \h2#core-release-publish-release|Release final version| As you may recall, our package currently has a pre-release snapshot version of the upstream version (see \l{#core-package-adjust-version Adjust package version}). Once all the changes are in, we can change to the final upstream version, in a sense signalling that this package version is ready. The recommended way to do this is with the \l{bdep-release(1)} command (see \l{intro#guide-versioning-releasing Versioning and Release Management} for background). Besides replacing the \c{version} value in the package \c{manifest} file, it also commits this change, tags it with the \c{v\i{X}.\i{Y}.\i{Z}} tag, and can be instructed to push the changes (or show the \c{git} command to do so). 
This command also by default \"opens\" the next development version,
which is something that we normally want for our own projects but not
when we package a third-party one (since we cannot predict which
version upstream will release next). So we disable this
functionality. For example:

\
$ cd foo/  # Change to the package repository root.
$ bdep release --no-open --show-push
\

Then review the commit made by \c{bdep-release} and push the changes
by copying the command that it printed:

\
$ git diff HEAD~1
$ git push ...
\

\N|If something is wrong and you need to undo this commit, don't
forget to also remove the tag. Note also that once you have pushed
your changes, you cannot undo the commit. Instead, you will need to
make a revision. See \l{#core-version-management Version management}
for background and details.|

\h2#core-release-publish-publish|Publish released version|

Once the version is released we can publish the package to
\l{https://cppget.org cppget.org} with the \l{bdep-publish(1)}
command (see \l{intro#guide-versioning-releasing Versioning and
Release Management} for background):

\
$ cd foo/  # Change to the package repository root.
$ bdep publish
\

The \c{bdep-publish} command prepares the source distribution of your
package, uploads the resulting archive to the package repository, and
prints a link to the package submission in the queue. Open this link
in the browser and check that there are no surprises in the build
results (they should match the earlier CI results) or in the
displayed package information (\c{README.md}, etc).

\N|While there should normally be no discrepancies in the build
results compared to our earlier CI submissions, the way the packages
are built on CI and in the package repository is not exactly the
same. Specifically, CI builds them from \c{git} while the package
repository \- from the submitted package archives.
If there are differences, it's almost always due to issues in the
preparation of the source distribution (see
\l{#core-test-smoke-locally-dist Test locally: distribution}).|

If everything looks good, then you are done: the package submission
will be reviewed and, if there are no problems, moved to
\l{https://cppget.org cppget.org}. If there are problems, then an
issue will be created in the package repository with the review
feedback. In this case you will need to
\l{#core-version-management-new-revision release and publish a
version revision} to address these problems. But in both cases you
should first read through \l{#core-version-management Package version
management} to understand the recommended \"version lifecycle\" of a
third-party package.

\h#core-version-management|Package version management|

Once we have pushed the release commit, in order to preserve
continuous versioning, no further changes should be made to the
package without also changing its version.

\N|More precisely, you can make and commit changes without changing
the version provided they don't affect the package. For example, you
may keep a \c{TODO} file in the root of your repository which is not
part of any package. Updating such a file without changing the
version is ok since the package remains unchanged.|

While in our own projects we can change versions as we see fit, with
third-party projects the versions are dictated by upstream and as a
result we are limited in what we can use to fix issues in the package
itself. It may be tempting (and maybe even conceptually correct) to
release a patch version for our own fixes, however, we will be in
trouble if upstream later releases the same patch version but with a
different set of changes (plus the users of our package may wonder
where this version came from). As a result, we should only change the
major, minor, or patch components of the package version in response
to the corresponding upstream releases.
For fixes to the package itself we should instead use version
revisions.

\N|Because a revision replaces the existing version, we should try to
limit revision changes to bug fixes and preferably only to the
package \"infrastructure\" (\c{buildfiles}, \c{manifest}, etc). Fixes
to upstream source code should be limited to critical bugs and
preferably be backported from upstream. To put it another way,
changes in a revision should have an even more limited scope than a
patch release.|

Based on this, the recommended \"version lifecycle\" for a
third-party package is as follows:

\ol|

\li|After a release (the \l{#core-release-publish-release Release
final version} step above), for example, version \c{2.1.0}, the
package enters a \"revision phase\" where we can release revisions
(\c{2.1.0+1}, \c{2.1.0+2}, etc) to address any issues in the package.
See @@ for details.|

\li|When a new upstream version is released, for example version
\c{2.2.0}, and we wish to upgrade our package to this version, we
switch to its pre-release snapshot version (\c{2.2.0-a.0.z}) the same
as we did on the \l{#core-package-adjust-version Adjust package
version} step initially. See @@ for details.|

\li|Once we are done upgrading to the new upstream version, we
release the final version just like on the
\l{#core-release-publish-release Release final version} step
initially.||

Note also that in the above example, once we have switched to
\c{2.2.0-a.0.z}, we cannot go back and release another revision or
patch version for \c{2.1.0} on the current branch. Instead, we will
need to create a separate branch for the \c{2.1.Z} release series and
make a revision or patch version there. See @@ for details.

\h2#core-version-management-new-revision|New revision|

As discussed in \l{#core-version-management Package version
management}, we release revisions to fix issues in the package
\"infrastructure\" (\c{buildfiles}, \c{manifest}, etc) as well as
critical bugs in upstream source code.
In the revision phase of the package version lifecycle (i.e., when
the version does not end with \c{-a.0.z}), every commit must be
accompanied by a revision increment to maintain continuous
versioning. As a result, each revision release commit also contains
the changes in this revision. Below is a typical workflow for
releasing and publishing a revision:

\
$ # make changes
$ # test locally
$ git add .
$ bdep release --revision --show-push
$ # review commit
$ git push ...
$ # test with CI
$ bdep publish
\

Customarily, the revision commit message has the \c{\"Release version
X.Y.Z+R\"} summary as generated by \c{bdep-release} followed by the
description of changes, organized into a list if there are several.
For example:

\
Release version 2.1.0+1

- Don't compile port/strlcpy.c on Linux if GNU libc is 2.38 or
  newer since it now provides the strl*() functions.

- Switch to using -pthread instead of -D_REENTRANT/-lpthread.
\

\N|The fact that all the changes must be in a single commit is
another reason to avoid substantial changes in revisions.|

Note also that you can make multiple commits while developing and
testing the changes for a revision in a separate branch. However,
once they are ready for a release, they need to be squashed into a
single commit. The \l{bdep-release(1)} command provides the
\c{--amend} and \c{--squash} options to automate this. For example,
here is what a workflow with a separate branch might look like:

\
$ git checkout -b wip-2.1.0+1
$ # make strl*() changes
$ # test locally
$ git commit -a -m \"Omit port/strlcpy.c if glibc 2.38 or newer\"
$ git push -u
$ # test with CI
$ # make pthread changes
$ # test locally
$ git commit -a -m \"Switch to using -pthread\"
$ git push
$ # test with CI
$ git checkout master
$ git merge --ff-only wip-2.1.0+1
$ bdep release --revision --show-push --amend --squash 2
$ # review commit
$ # test locally
$ git push ...
$ # test with CI
$ bdep publish
\

\h2#core-version-management-new-version|New version|

As discussed in \l{#core-version-management Package version
management}, we release new versions in response to the corresponding
upstream releases.

The amount of work required to upgrade a package to a new upstream
version depends on the extent of changes in the new version.

On one extreme you may have a patch release which fixes a couple of
bugs in the upstream source code without any changes to the set of
source files, upstream build system, etc. In such cases, upgrading a
package is a simple matter of creating a new work branch, pointing
the \c{upstream} \c{git} submodule to the new release, running tests,
and releasing and publishing a new package version. @@ make list with
links. @@ Open the version.

On the other extreme you may have a new major upstream release which
is essentially a from-scratch rewrite with a new source code layout,
different upstream build system, etc. In such cases it may be easier
to likewise start from scratch. Specifically, create a new work
branch, point the \c{upstream} \c{git} submodule to the new release
@@ link, delete the existing package, and continue from
\l{#core-package Create package and generate \c{buildfile}
templates}.

Most of the time, however, it will be something in between where you
may need to tweak a few things here and there, such as adding
symlinks to new source files (or removing old ones), tweaking the
\c{buildfiles} to reflect changes in the upstream build system, etc.
The following sections provide a checklist-like sequence of steps
that can be used to review upstream changes, with links to the
relevant earlier sections in case adjustments are required.

\h2#core-version-management-new-version-branch|New version: create new work branch|

When upgrading a package to a new upstream version it's recommended
to do this in a new work branch which, upon completion, is merged
into \c{master}.
For example, if the new upstream version is \c{2.2.0}: \ $ git checkout -b wip-2.2.0 \ \h2#core-version-management-new-version-open|New version: open new version| This step corresponds to \l{#core-package-adjust-version Adjust package version} during the initial packaging. Here we can make use of the \c{bdep-release} command to automatically open the new version and make the corresponding commit. For example, if the new upstream version is \c{2.2.0}: \ $ bdep release --open --no-push --open-base 2.2.0 \ \h2#core-version-management-new-version-submodule|New version: update \c{upstream} submodule| This step corresponds to \l{#core-repo-submodule Add upstream repository as \c{git} submodule} during the initial packaging. Here we need to update the submodule to point to the upstream commit that corresponds to the new version. For example, if the upstream release tag we are interested in is called \c{v2.2.0}, to update the \c{upstream} submodule to point to this release commit, run the following command: \ $ cd upstream $ git checkout v2.2.0 $ cd .. $ git add . $ git status $ git commit -m \"Update upstream submodule to 2.2.0\" \ \h2#core-version-management-new-version-review|New version: review upstream changes| At this point it's a good idea to get an overview of the upstream changes between the two releases in order to determine which adjustments are likely to be required in the \c{build2} package. We can use the \c{upstream} submodule for that, which contains the change history we need. One way to get an overview of changes between the releases is to use a graphical repository browser such as \c{gitk} and view a cumulative \c{diff} of changes between the two versions. 
For example, assuming the latest packaged version is tagged
\c{v2.1.0} and the new version is tagged \c{v2.2.0}:

\
$ cd upstream
$ gitk&
\

Then scroll down and click on the commit tagged \c{v2.1.0}, scroll up
and right-click on the commit tagged \c{v2.2.0}, and select the
\"Diff this -> selected\" menu item. This will display a cumulative
set of changes between the two upstream versions. Look through them
for the following types of changes:

\ul|

\li|Changes to the source code layout.|

\li|New source files being added or removed.|

\li|Changes to the upstream build system.||

@@ Maybe in the initial instructions makes sense to identify and note
   the point where to merge the branch to master if working in a
   branch (e.g., because of a new version).

@@ Add item to review issues in case good opportunity to fix any.

\h2#core-version-management-old-series|New version/revision in old release series|

As discussed in \l{#core-version-management Package version
management}, if we have already switched to the next upstream version
in the \c{master} branch, we cannot go back and release a new version
or a revision for an older release series on the same branch.
Instead, we need to create a separate, long-lived branch for this
work.

As an example, let's say we need to release another revision or a
patch version for an already released \c{2.1.0} while our \c{master}
branch has already moved on to \c{2.2.0}. In this case we create a
new branch, called \c{2.1}, to continue with the \c{2.1.Z} release
series. The starting point of this branch should be the latest
released version/revision in the \c{2.1} series. Let's say in our
case it is \c{2.1.0+2}, meaning we have released two revisions for
\c{2.1.0} on the \c{master} branch before upgrading to \c{2.2.0}.
Therefore we use the \c{v2.1.0+2} release tag to start the \c{2.1}
branch:

\
$ git checkout -b 2.1 v2.1.0+2
\

Once this is done, we continue with the same steps as in
\l{#core-version-management-new-revision New revision} or
\l{#core-version-management-new-version New version} except that we
never merge this branch to \c{master}. If we ever need to release
another revision or version in this release series, then we continue
using this branch. In a sense, this branch becomes the equivalent of
the \c{master} branch for this release series and you should treat it
as such (once published, never delete it, rewrite its history, etc.).

\N|It is less likely but possible that you may need to release a new
minor version in an old release series. For example, the \c{master}
branch may have moved on to \c{3.0.0} and you want to release
\c{2.2.0} after the already released \c{2.1.0}. In this case it makes
sense to call the branch \c{2} since it corresponds to the \c{2.Y.Z}
release series. If you already have the \c{2.1} branch, then it makes
sense to rename it to \c{2}.|

@@ Enforce continuous versioning?

@@ When do we transfer the repository to build2-packaging? Should not
   publish until then.

@@ GH issue #?? has some notes.

========

@@ Add example of propagating config.libfoo.debug to macro on build
   options?

@@ Note on library metadata where talk about configuration. Also
   about autoconf.

@@ Use of the version module and non-semver versions? Links to HOWTO
   entries!

@@ The 'Don't write buildfiles by hand entry' is now mostly
   duplicate/redundant.

======================================================================

\h1#dont-do|What Not to Do|

@@ Reorder.

\h#dont-fix-upstream|Don't try to fix upstream issues in the package|

@@ TODO

- support officially unsupported platforms/compiler
- suppress warnings

Any deviation from upstream makes the package more difficult to
maintain. If your package makes a large number of changes to
upstream, releasing a new version will require a lot of work.
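To illustrate (a hypothetical sketch, not taken from any real
package), it may be tempting to silence upstream warnings in the
package \c{buildfile} like this:

\
# Don't do this: report the warnings to upstream instead.
#
if ($cxx.class == 'gnu')
  cxx.coptions += -Wno-deprecated-declarations
\

Such tweaks accumulate and have to be re-verified on every upstream
release; the proper place to fix warnings is the upstream source
code.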
\h#dont-from-scratch|Don't write \c{buildfiles} from scratch, use \c{bdep-new}|

Unless you have good reasons not to, create the initial project
layout automatically using \l{bdep-new(1)}, then tweak it if
necessary and fill with upstream source code. The main rationale here
is that there are many nuances in getting the build right and the
auto-generated \c{buildfiles} have had years of refinement and
fine-tuning. The familiar structure also makes it easier for others
to understand your build, for example while reviewing your package
submission.

The \l{bdep-new(1)} command supports a wide variety of
\l{bdep-new.xhtml#src-layout source layouts}. While it may take a bit
of time to understand the customization points necessary to achieve
the desired layout for your first package, this will pay off in
spades when you work on converting subsequent packages.

The recommended sequence of steps is as follows:

\ol|

\li|Study the upstream source layout. We want to stay as close to
upstream as possible since this has the best chance of producing an
issue-free result (see \l{#dont-change-upstream Don't change upstream
source code layout unnecessarily} for details).|

\li|Craft and execute the \l{bdep-new(1)} command line necessary to
achieve the upstream layout.|

\li|Study the auto-generated \c{buildfiles} for things that don't fit
and need to change. But don't rush to start manually editing the
result. First get an overview of the required changes and then check
if it's possible to achieve these changes automatically using one of
the \l{bdep-new(1)} sub-options. For example, if you see that the
generated project assumes the wrong C++ file extensions, these can be
changed with the \c{--lang|-l} sub-options.|

\li|Once you have squeezed as much as possible out of
\l{bdep-new(1)}, it's time for manual customizations.
These would normally include:

\ul|

\li|Replace generated source code with upstream, normally as symlinks
from the \c{upstream/} \c{git} submodule.|

\li|Tweak the source subdirectory \c{buildfile} that builds the main
target (library, executable).|

\li|Add tests and, if necessary, examples.|

\li|Tweak \c{manifest} (in particular the \c{version}, \c{summary},
and \c{license} values).|

\li|Fill in \c{README.md}.|||

|

\h#dont-change-upstream|Don't change upstream source code layout unnecessarily|

It's a good idea to stay as close to the upstream's source code
layout as possible. For background and rationale, see
\l{#core-package-struct Decide on the package source code layout}.

\h#dont-forget-update-manifest|Don't forget to update \c{manifest} values|

After \l{#dont-from-scratch generating the project template with
\c{bdep-new}}, don't forget to update at least the key values in the
generated \c{manifest}:
\l{#dont-forget-update-manifest-version \c{version}},
\l{#dont-forget-update-manifest-license \c{license}}, and
\l{#dont-forget-update-manifest-summary \c{summary}}.

\h2#dont-forget-update-manifest-version|Don't forget to update \c{manifest} value \c{version}|

For \c{version}, use the upstream version directly if it is semver
(or semver-like, that is, has three version components). Otherwise,
see \l{https://github.com/build2/HOWTO/blob/master/entries/handle-projects-which-dont-use-semver.md
How do I handle projects that don't use semantic versioning?} and
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-projects-which-dont-use-version.md
How do I handle projects that don't use versions at all?}

\h2#dont-forget-update-manifest-license|Don't forget to update \c{manifest} value \c{license}|

\h2#dont-forget-update-manifest-summary|Don't forget to update \c{manifest} value \c{summary}|

\h#dont-header-only|Don't make library header-only if it can be compiled|

Some libraries offer two alternative modes: header-only and compiled.
Unless there are good reasons not to, a \c{build2} build of such a
library should use the compiled mode.

\N|Some libraries use the term \i{precompiled} to describe the
non-header-only mode. We don't recommend using this term in the
\c{build2} build since it has a strong association with precompiled
headers and can therefore be confusing. Instead, use the term
\i{compiled}.|

The main rationale here is that a library would not be offering a
compiled mode if there were no benefits (usually faster compile times
for the library consumers) and there is no reason not to take
advantage of it in the \c{build2} build.

There are, however, reasons why a compiled mode cannot be used, the
most common of which are:

\ul|

\li|The compiled mode is not well maintained/tested by upstream and
therefore offers an inferior user experience.|

\li|The compiled mode does not work on some platforms, usually
Windows due to the lack of symbol export support (but see
\l{b#cc-auto-symexport Automatic DLL Symbol Exporting}).|

\li|Use of the compiled version of the library requires changes to
the library consumers, for example, inclusion of different headers.|

|

If a compiled mode cannot always be used, then it may be tempting to
support both modes, potentially making the mode user-configurable.
Unless there are strong reasons to, you should resist this temptation
and, if the compiled mode is not universally usable, then use the
header-only mode everywhere. The main rationale here is that
variability adds complexity which makes the result more prone to
bugs, more difficult to use, and harder to review and maintain. If
you really want to have the compiled mode, then the right way to do
it is to work with upstream to fix any issues that prevent its use in
\c{build2}.
There are, however, reasons why supporting both modes may be needed,
the most common of which are:

\ul|

\li|The library is widely used in both modes but switching from one
mode to the other requires changes to the library consumers (for
example, inclusion of different headers). In this case supporting
only one mode would mean not supporting a large number of library
consumers.|

\li|The library consists of a large number of independent components
and it's common for applications to only use a small subset of them.
On the other hand, compiling all of them in the compiled mode takes a
substantial amount of time. (Note that this can also be addressed by
making the presence of optional components user-configurable.)|

|

\h#dont-main-target-root-buildfile|Don't build your main targets in the root \c{buildfile}|

It may be tempting to have your main targets (libraries, executables)
in the root \c{buildfile}, especially if it allows you to symlink
entire directories from \c{upstream/} (which is not possible if you
have to have a \c{buildfile} inside). However, this is a bad idea
except for the simplest projects.

Firstly, this quickly gets messy since you have to combine managing
\c{README}, \c{LICENSE}, etc., and subdirectories with your main
target builds. But, more importantly, this means that when your main
target is imported (and thus the \c{buildfile} that defines this
target must be loaded), your entire project will be loaded, including
any \c{tests/} and \c{examples/} subprojects, which is wasteful.

If you want to continue symlinking entire directories from
\c{upstream/} but without moving everything to the root
\c{buildfile}, the recommended approach is to simply add another
subdirectory level. Let's look at a few concrete examples to
illustrate the technique (see \l{#core-package-struct Decide on the
package source code layout} for background on the terminology used).
Here is the directory structure of a package which uses a combined
layout (no header/source split) and where everything is in the root
\c{buildfile}:

\
libigl-core/
├── igl/ -> upstream/igl/
├── tests/
└── buildfile  # Defines lib{igl-core}.
\

And here is the alternative structure where we have added the extra
\c{libigl-core} subdirectory with its own \c{buildfile}:

\
libigl-core/
├── libigl-core/
│   ├── igl/ -> ../upstream/igl/
│   └── buildfile  # Defines lib{igl-core}.
├── tests/
└── buildfile
\

Below is the \c{bdep-new} invocation that can be used to
automatically create this alternative structure (see
\l{#core-package-craft-cmd Craft \c{bdep\ new} command line to create
package} for background and \l{bdep-new(1)} for details):

\
$ bdep new \
  --type lib,prefix=libigl-core,subdir=igl,buildfile-in-prefix \
  libigl-core
\

Let's also look at an example of a split layout, which may require
slightly different \c{bdep-new} sub-options to achieve the same
result. Here is the layout which matches upstream exactly:

\
$ bdep new --type lib,split,subdir=foo,no-subdir-source libfoo
$ tree libfoo
libfoo/
├── include/
│   └── foo/
│       ├── buildfile
│       └── ...
└── src/
    ├── buildfile
    └── ...
\

However, with this layout we will not be able to symlink the entire
\c{include/foo/} and \c{src/} subdirectories because there are
\c{buildfiles} inside (which may tempt you to just move everything to
the root \c{buildfile}). To fix this we can move the \c{buildfiles}
out of the source subdirectory \c{foo/} and into the prefixes
(\c{include/} and \c{src/}) using the \c{buildfile-in-prefix}
sub-option. And since \c{src/} doesn't have a source subdirectory, we
have to invent one:

\
$ bdep new --type lib,split,subdir=foo,buildfile-in-prefix libfoo
$ tree libfoo
libfoo/
├── include/
│   ├── foo/ -> ../upstream/include/foo/
│   └── buildfile
└── src/
    ├── foo/ -> ../upstream/src/
    └── buildfile
\

\h1#howto|Packaging HOWTO|

@@ howto make smoke test (and fix ref). Actually, we now have a step
   for this.
\h#howto-debug-macro|How do I expose extra debug macros of a library|

Sometimes libraries provide extra debugging facilities that are
usually enabled or disabled with a macro. For example, \c{libfoo} may
provide the \c{LIBFOO_DEBUG} macro that enables additional sanity
checks, tracing, etc. Normally such facilities are disabled by
default.

While it may seem like a good idea to detect a debug build and enable
this automatically, it is not: such facilities usually impose
substantial overhead and the presence of debug information does not
mean that performance is not important (people routinely make
optimized builds with debug information).

As a result, the recommended approach is to expose this as a
configuration variable that the end-users of the library can use (see
\l{b#proj-config Project Configuration} for background). Continuing
with the \c{libfoo} example, we can add \c{config.libfoo.debug} to
its \c{build/root.build}:

\
# build/root.build

config [bool] config.libfoo.debug ?= false
\

And then define the \c{LIBFOO_DEBUG} macro based on that in the
\c{buildfile}:

\
# src/buildfile

if $config.libfoo.debug
  cxx.poptions += -DLIBFOO_DEBUG
\

If the macro is also used in the library's interface (for example, in
inline or template functions), then we will also need to export it:

\
# src/buildfile

if $config.libfoo.debug
{
  cxx.poptions += -DLIBFOO_DEBUG
  lib{foo}: cxx.export.poptions += -DLIBFOO_DEBUG
}
\

\N|If the debug facility in question should be enabled by default
even in the optimized builds (in which case the macro usually has the
\c{NO_DEBUG} semantics), the other option is to hook it up to the
standard \c{NDEBUG} macro, for example, in the library's
configuration header file.|

Such \c{.debug} configuration variables should primarily be meant for
the end-user to selectively enable extra debugging support in certain
libraries of their build.
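For example, assuming a \c{libfoo} with the \c{config.libfoo.debug}
variable as above, the end-user could enable the debugging support
when configuring the build (a sketch; the exact invocation depends on
how the project is being developed or consumed):

\
$ b configure config.libfoo.debug=true
\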
However, if your project depends on a number of libraries with such
extra debugging support and it generally makes sense to also enable
this support in the dependencies if it is enabled in your project,
then you may want to propagate your \c{.debug} configuration value to
the dependencies (see the \l{bpkg#manifest-package-depends
\c{depends} package \c{manifest} value} for details on dependency
configuration). You, however, should still allow the user to override
this decision on a per-dependency basis.

Continuing with the above example, let's say we have \c{libbar} with
\c{config.libbar.debug} that depends on \c{libfoo} and that by
default wishes to enable debugging in \c{libfoo} if it is enabled in
\c{libbar}. This is how we can correctly arrange for this in
\c{libbar}'s \c{manifest}:

\
depends:
\\
libfoo ^1.2.3
{
  # We prefer to enable debug in libfoo if enabled in libbar
  # but accept if it's disabled (for example, by the user).
  #
  prefer
  {
    if $config.libbar.debug
      config.libfoo.debug = true
  }

  accept (true)
}
\\
\

\h#howto-patch-upstream-source|How do I patch upstream source code|

@@ TODO

\h#howto-bad-inclusion-practice|How do I deal with bad header inclusion practice|

This section explains how to deal with libraries that include their
public, generically-named headers without the library name as a
directory prefix. Such libraries cannot coexist, neither in the same
build nor when installed. For background and details, see
\l{intro#proj-struct Canonical Project Structure}.

@@ TODO

\h#howto-extra-header-install-subdir|How do I handle extra header installation subdirectory|

This section explains how to handle an additional header
installation subdirectory.
@@ TODO

\h#howto-no-extension-header|How do I handle headers without extensions|

If all the headers in a project have no extension, then you can
simply specify the empty \c{extension} value for the \c{hxx{\}}
target type in \c{build/root.build}:

\
hxx{*}: extension =
cxx{*}: extension = cpp
\

Note, however, that using wildcard patterns for such headers in your
\c{buildfile} is a bad idea since such a wildcard will most likely
pick up other files that also have no extension (such as
\c{buildfile}, executables on UNIX-like systems, etc.). Instead, it's
best to spell the names of such headers explicitly. For example,
instead of:

\
lib{hello}: {hxx cxx}{*}
\

Write:

\
lib{hello}: cxx{*} hxx{hello}
\

If only some headers in a project have no extension, then it's best
to specify the non-empty extension for the \c{extension} variable in
\c{build/root.build} (so that you can still use wildcards for headers
with extensions) and spell out the headers with no extension
explicitly. Continuing with the above example, if we have both the
\c{hello.hpp} and \c{hello} headers, then we can handle them like
this:

\
hxx{*}: extension = hpp
cxx{*}: extension = cpp
\

\
lib{hello}: {hxx cxx}{*} hxx{hello.}
\

Notice the trailing dot in \c{hxx{hello.\}} \- this is the explicit
\"no extension\" specification. See \l{b#targets Targets and Target
Types} for details.

\h1#faq|Packaging FAQ|

\h#faq-alpha-stable|Why is my package in \c{alpha} rather than \c{stable}?|

If your package uses a semver version (or semver-like, that is, has
three version components) and the first component is zero (for
example, \c{0.1.0}), then, according to the semver specification,
this is an alpha version and \l{bdep-publish(1)} automatically
publishes such a version to the \c{alpha} section of the repository.

Sometimes, however, in a third-party package, while the version may
look like semver, upstream may not assign the zero first component
any special meaning.
In such cases you can override the \c{bdep-publish} behavior with the
\c{--section} option, for example:

\
$ bdep publish --section=stable
\

Note that you should only do this if you are satisfied that upstream
does not imply alpha quality with the zero first component.

\h#faq-publish-stage|Where to publish if package requires staged toolchain?|

If your package requires the
\l{https://build2.org/community.xhtml#stage staged toolchain}, for
example, because it needs a feature or bugfix that is not yet
available in the released toolchain, then you won't be able to
publish it to \c{cppget.org}. Specifically, if your package has an
accurate \c{build2} version constraint and you attempt to publish it,
you will get an error like this:

\
error: package archive is not valid
  info: unable to satisfy constraint (build2 >= 0.17.0-) for package foo
  info: available build2 version is 0.16.0
\

There are three alternative ways to proceed in this situation:

\ol|

\li|Wait until the next release and then publish the package to
\c{cppget.org}.|

\li|If the requirement for the staged toolchain is \"minor\", that
is, it doesn't affect the common functionality of the package or only
affects a small subset of platforms/compilers, then you can lower the
toolchain version requirement and publish the package to
\c{cppget.org}. For example, if you require the staged toolchain
because of a bugfix that only affects one platform, it doesn't make
sense to delay publishing the package since it is perfectly usable on
all the other platforms in the meantime.|

\li|Publish it to \l{https://queue.stage.build2.org
queue.stage.build2.org}, the staging package repository. This
repository contains new packages that require the staged toolchain to
work and which will be automatically moved to \c{cppget.org} once the
staged version is released.
The other advantage of publishing to this repository (besides not
having to remember to manually publish the package once the staged
version is released) is that your package becomes available from an
archive repository (which is substantially faster than a \c{git}
repository).

To publish to this repository, use the following \c{bdep-publish}
command line:

\
$ bdep publish --repository=https://stage.build2.org ...
\

||

"