path: root/doc/manual.cli
Diffstat (limited to 'doc/manual.cli')
-rw-r--r--  doc/manual.cli | 1417
1 file changed, 1248 insertions, 169 deletions
diff --git a/doc/manual.cli b/doc/manual.cli
index 9d79259..85a6613 100644
--- a/doc/manual.cli
+++ b/doc/manual.cli
@@ -311,6 +311,16 @@ it searches for a target for the \c{cxx{hello\}} prerequisite. During this
search, the \c{extension} variable is looked up and its value is used to end
up with the \c{hello.cxx} file.
+\N|To resolve a rule match ambiguity or to override a default match, \c{build2}
+uses \i{rule hints}. For example, if we wanted to link a C executable using
+the C++ link rule:
+
+\
+[rule_hint=cxx] exe{hello}: c{hello}
+\
+
+|
+
Here is our new dependency declaration again:
\
@@ -361,7 +371,7 @@ Nothing really new here: we've specified the default extension for the
prerequisites. If you have experience with other build systems, then
explicitly listing headers might seem strange to you. As will be discussed
later, in \c{build2} we have to explicitly list all the prerequisites of a
-target that should end up in a distribution of our project.
+target that should end up in a source distribution of our project.
\N|You don't have to list \i{all} headers that you include, only the ones
belonging to your project. Like all modern C/C++ build systems, \c{build2}
@@ -411,11 +421,11 @@ exe{hello}: {hxx cxx}{**}
development more pleasant and less error prone: you don't need to update your
\c{buildfile} every time you add, remove, or rename a source file and you
won't forget to explicitly list headers, a mistake that is often only detected
-when trying to build a distribution of a project. On the other hand, there is
-the possibility of including stray source files into your build without
-noticing. And, for more complex projects, name patterns can become fairly
-complex (see \l{#name-patterns Name Patterns} for details). Note also that on
-modern hardware the performance of wildcard searches hardly warrants a
+when trying to build a source distribution of a project. On the other hand,
+there is the possibility of including stray source files into your build
+without noticing. And, for more complex projects, name patterns can become
+fairly complex (see \l{#name-patterns Name Patterns} for details). Note also
+that on modern hardware the performance of wildcard searches hardly warrants a
consideration.
In our experience, when combined with modern version control systems like
@@ -448,7 +458,7 @@ invocation. In other words, expect an experience similar to a plain
\c{Makefile}.
One notable example where simple projects are handy is a \i{glue
-\c{buildfiles}} that \"pulls\" together several other projects, usually for
+\c{buildfile}} that \"pulls\" together several other projects, usually for
convenience of development. See \l{#intro-import Target Importation} for
details.|
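
\N|As an illustration (a sketch, with an assumed layout), such a glue
\c{buildfile} could be as simple as:

\
# Build all the projects residing in the subdirectories of this
# directory (excluding build/ if present).
#
./: {*/ -build/}
\

|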
@@ -587,7 +597,7 @@ configuration \i{persistent}. We will see an example of this shortly.
Next up are the \c{test}, \c{install}, and \c{dist} modules. As their names
suggest, they provide support for testing, installation and preparation of
-distributions. Specifically, the \c{test} module defines the \c{test}
+source distributions. Specifically, the \c{test} module defines the \c{test}
operation, the \c{install} module defines the \c{install} and \c{uninstall}
operations, and the \c{dist} module defines the \c{dist}
(meta-)operation. Again, we will try them out in a moment.
@@ -746,7 +756,7 @@ Let's take a look at a slightly more realistic root \c{buildfile}:
Here we have the customary \c{README.md} and \c{LICENSE} files as well as the
package \c{manifest}. Listing them as prerequisites achieves two things: they
will be installed if/when our project is installed and, as mentioned earlier,
-they will be included into the project distribution.
+they will be included into the project source distribution.
The \c{README.md} and \c{LICENSE} files use the \c{doc{\}} target type. We
could have used the generic \c{file{\}} but using the more precise \c{doc{\}}
@@ -1418,7 +1428,7 @@ if ($cc.class == 'gcc')
}
if ($c.target.class != 'windows')
- c.libs += -lpthread # only C
+ c.libs += -ldl # only C
\
Additionally, as we will see in \l{#intro-operations-config Configuring},
@@ -1634,6 +1644,15 @@ $ b update: hello/exe{hello} # Update specific target
$ b update: libhello/ tests/ # Update two targets.
\
+\N|If you are running \c{build2} from PowerShell, then you will need to use
+quoting when updating specific targets, for example:
+
+\
+$ b update: 'hello/exe{hello}'
+\
+
+|
+
Let's revisit \c{build/bootstrap.build} from our \c{hello} project:
\
@@ -1705,9 +1724,18 @@ $ b
...
\
-Let's take a look at \c{config.build}:
+To remove the persistent configuration, we use the \c{disfigure}
+meta-operation:
+
+\
+$ b disfigure
+\
+
+Let's again configure our project and take a look at \c{config.build}:
\
+$ b configure config.cxx=clang++ config.cxx.coptions=-g
+
$ cat build/config.build
config.cxx = clang++
@@ -1742,6 +1770,15 @@ Any variable value specified on the command line overrides those specified in
the \c{buildfiles}. As a result, \c{config.cxx} was updated while the value of
\c{config.cxx.coptions} was preserved.
+\N|To revert a configuration variable to its default value, list its name in
+the special \c{config.config.disfigure} variable. For example:
+
+\
+$ b configure config.config.disfigure=config.cxx
+\
+
+|
+
Command line variable overrides are also handy to adjust the configuration for
a single build system invocation. For example, let's say we want to quickly
check that our project builds with optimization but without permanently
@@ -2275,36 +2312,40 @@ If the value of the \c{install} variable is not \c{false}, then it is normally
a relative path with the first path component being one of these names:
\
-name default override
----- ------- --------
-root config.install.root
+name default override
+---- ------- --------
+root config.install.root
-data_root root/ config.install.data_root
-exec_root root/ config.install.exec_root
+data_root root/ config.install.data_root
+exec_root root/ config.install.exec_root
-bin exec_root/bin/ config.install.bin
-sbin exec_root/sbin/ config.install.sbin
-lib exec_root/lib/ config.install.lib
-libexec exec_root/libexec/<project>/ config.install.libexec
-pkgconfig lib/pkgconfig/ config.install.pkgconfig
+bin exec_root/bin/ config.install.bin
+sbin exec_root/sbin/ config.install.sbin
+lib exec_root/lib/ config.install.lib
+libexec exec_root/libexec/<project>/ config.install.libexec
+pkgconfig lib/pkgconfig/ config.install.pkgconfig
-etc data_root/etc/ config.install.etc
-include data_root/include/ config.install.include
-share data_root/share/ config.install.share
-data share/<project>/ config.install.data
+etc data_root/etc/ config.install.etc
+include data_root/include/ config.install.include
+include_arch include/ config.install.include_arch
+share data_root/share/ config.install.share
+data share/<project>/ config.install.data
+buildfile share/build2/export/<project>/ config.install.buildfile
-doc share/doc/<project>/ config.install.doc
-legal doc/ config.install.legal
-man share/man/ config.install.man
-man<N> man/man<N>/ config.install.man<N>
+doc share/doc/<project>/ config.install.doc
+legal doc/ config.install.legal
+man share/man/ config.install.man
+man<N> man/man<N>/ config.install.man<N>
\
Let's see what's going on here: The default install directory tree is derived
from the \c{config.install.root} value but the location of each node in this
tree can be overridden by the user that installs our project using the
-corresponding \c{config.install.*} variables. In our \c{buildfiles}, in turn,
-we use the node names instead of actual directories. As an example, here is a
-\c{buildfile} fragment from the source directory of our \c{libhello} project:
+corresponding \c{config.install.*} variables (see the \l{#module-install
+\c{install}} module documentation for details on their meaning). In our
+\c{buildfiles}, in turn, we use the node names instead of actual
+directories. As an example, here is a \c{buildfile} fragment from the source
+directory of our \c{libhello} project:
\
hxx{*}:
@@ -2335,15 +2376,36 @@ the \c{details/} subdirectory with the \c{utility.hxx} header, then this
header would have been installed as
\c{.../include/libhello/details/utility.hxx}.
+\N|By default the generated \c{pkg-config} files will contain
+\c{install.include} and \c{install.lib} directories as header (\c{-I}) and
+library (\c{-L}) search paths, respectively. However, these can be customized
+with the \c{{c,cxx\}.pkgconfig.{include,lib\}} variables. For example,
+sometimes we may need to install headers into a subdirectory of the include
+directory but include them without this subdirectory:
+
+\
+# Install headers into hello/libhello/ subdirectory of, say,
+# /usr/include/ but include them as <libhello/*>.
+#
+hxx{*}:
+{
+ install = include/hello/libhello/
+ install.subdirs = true
+}
+
+lib{hello}: cxx.pkgconfig.include = include/hello/
+\
+
+|
\h2#intro-operations-dist|Distributing|
The last module that we load in our \c{bootstrap.build} is \c{dist} which
-provides support for the preparation of distributions and defines the \c{dist}
-meta-operation. Similar to \c{configure}, \c{dist} is a meta-operation rather
-than an operation because, conceptually, we are preparing a distribution for
-performing operations (like \c{update}, \c{test}) on targets rather than
-targets themselves.
+provides support for the preparation of source distributions and defines the
+\c{dist} meta-operation. Similar to \c{configure}, \c{dist} is a
+meta-operation rather than an operation because, conceptually, we are
+preparing a distribution for performing operations (like \c{update}, \c{test})
+on targets rather than targets themselves.
The preparation of a correct distribution requires that all the necessary
project files (sources, documentation, etc) be listed as prerequisites in the
@@ -2428,9 +2490,6 @@ from out. Here is a fragment from the \c{libhello} source directory
\
hxx{version}: in{version} $src_root/manifest
-{
- dist = true
-}
\
Our library provides the \c{version.hxx} header that the users can include to
@@ -2441,13 +2500,24 @@ minor, patch, etc) and then preprocesses the \c{in{\}} file substituting these
values (see the \l{#module-version \c{version}} module documentation for
details). The end result is an automatically maintained version header.
-One problem with auto-generated headers is that if one does not yet exist,
-then the compiler may still find it somewhere else. For example, we may have
-an older version of a library installed somewhere where the compiler searches
-for headers by default (for example, \c{/usr/local/include/}). To overcome
-this problem it is a good idea to ship pre-generated headers in our
-distributions. But since they are output targets, we have to explicitly
-request this with \c{dist=true}.
+Usually there is no need to include this header into the distribution since
+it will be automatically generated if and when necessary. However, we can do
+so if we need to. For example, we could be porting an existing project whose
+users expect the version header to be shipped as part of the archive. Here is
+how we can achieve this:
+
+\
+hxx{version}: in{version} $src_root/manifest
+{
+ dist = true
+ clean = ($src_root != $out_root)
+}
+\
+
+Because this header is an output target, we have to explicitly request its
+distribution with \c{dist=true}. Notice that we have also disabled its
+cleaning for the in-source build so that the \c{clean} operation results in a
+state identical to the distributed one.
\h#intro-import|Target Importation|
@@ -2563,6 +2633,16 @@ Subprojects and Amalgamations} for details on this subject).
subproject in \c{libhello}. The test imports \c{libhello} which is
automatically found as an amalgamation containing this subproject.|
+\N|To skip searching in subprojects/amalgamations and proceed directly to the
+rule-specific search (described below), specify the \c{config.import.*}
+variable with an empty value. For example:
+
+\
+$ b configure: ... config.import.libhello=
+\
+
+|
+
If the project being imported cannot be located using any of these methods,
then \c{import} falls back to the rule-specific search. That is, a rule that
matches the target may provide support for importing certain target types
@@ -2763,15 +2843,7 @@ impl_libs = # Implementation dependencies.
lib{hello}: {hxx ixx txx cxx}{** -version} hxx{version} \
$impl_libs $intf_libs
-# Include the generated version header into the distribution (so that
-# we don't pick up an installed one) and don't remove it when cleaning
-# in src (so that clean results in a state identical to distributed).
-#
hxx{version}: in{version} $src_root/manifest
-{
- dist = true
- clean = ($src_root != $out_root)
-}
# Build options.
#
@@ -3380,10 +3452,10 @@ details/ # Scope.
hxx{*}: install = false
}
-hxx{version}: # Target-specific.
+lib{hello}: # Target-specific.
{
- dist = true
- clean = ($src_root != $out_root)
+ cxx.export.poptions = \"-I$src_root\"
+ cxx.export.libs = $intf_libs
}
exe{test}: file{test.roundtrip}: # Prerequisite-specific.
@@ -3400,7 +3472,7 @@ example:
h{config}: in{config}
{
in.symbol = '@'
- in.substitution = lax
+ in.mode = lax
SYSTEM_NAME = $c.target.system
SYSTEM_PROCESSOR = $c.target.cpu
@@ -4054,7 +4126,7 @@ source subdirectory \c{buildfile} of an executable created with this option:
# Unit tests.
#
-exe{*.test}
+exe{*.test}:
{
test = true
install = false
@@ -4147,8 +4219,14 @@ specified for our source files.
\N|If you need to specify a name that does not have an extension, then end it
with a single dot. For example, for a header \c{utility} you would write
-\c{hxx{utility.\}}. If you need to specify a name with an actual trailing
-dot, then escape it with a double dot, for example, \c{hxx{utility..\}}.|
+\c{hxx{utility.\}}. If you need to specify a name with an actual trailing dot,
+then escape it with a double dot, for example, \c{hxx{utility..\}}.
+
+More generally, anywhere in a name, a double dot can be used to specify a dot
+that should not be considered the extension separator, while a triple dot
+specifies one that should. For example, in \c{obja{foo.a.o\}} the extension is
+\c{.o} and if instead we wanted \c{.a.o} to be considered the extension, then
+we could rewrite it either as \c{obja{foo.a..o\}} or as \c{obja{foo...a.o\}}.|
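+
+\N|Restating these examples in a \c{buildfile} fragment (purely
+illustrative):
+
+\
+./: hxx{utility.}    # name utility, no extension
+./: hxx{utility..}   # name utility. (literal trailing dot)
+./: obja{foo.a..o}   # extension .a.o (dot before o is literal)
+./: obja{foo...a.o}  # extension .a.o (separator forced after foo)
+\
+
+|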
The next couple of lines set target type/pattern-specific variables to treat
all unit test executables as tests that should not be installed:
@@ -4255,10 +4333,10 @@ latter is used to update generated source code (such as headers) that is
required to complete the match.|
Debugging issues in each phase requires different techniques. Let's start with
-the load phase. As mentioned in \l{#intro-lang Build Language}, \c{buildfiles}
-are processed linearly with directives executed and variables expanded as they
-are encountered. As we have already seen, to print a variable value we can use
-the \c{info} directive. For example:
+the load phase. As mentioned in \l{#intro-lang Buildfile Language},
+\c{buildfiles} are processed linearly with directives executed and variables
+expanded as they are encountered. As we have already seen, to print a variable
+value we can use the \c{info} directive. For example:
\
x = X
@@ -4388,12 +4466,12 @@ Instead of printing the entire scope, we can also print individual targets by
specifying one or more target names in \c{dump}. To make things more
interesting, let's convert our \c{hello} project to use a utility library,
similar to the unit testing setup (\l{#intro-unit-test Implementing Unit
-Testing}). We will also link to the \c{pthread} library to see an example of a
+Testing}). We will also link to the \c{dl} library to see an example of a
target-specific variable being dumped:
\
exe{hello}: libue{hello}: bin.whole = false
-exe{hello}: cxx.libs += -lpthread
+exe{hello}: cxx.libs += -ldl
libue{hello}: {hxx cxx}{**}
dump exe{hello}
@@ -4405,7 +4483,7 @@ The output will look along these lines:
buildfile:5:1: dump:
/tmp/hello/hello/exe{hello.?}:
{
- [strings] cxx.libs = -lpthread
+ [strings] cxx.libs = -ldl
}
/tmp/hello/hello/exe{hello.?}: /tmp/hello/hello/:libue{hello.?}:
{
@@ -4416,7 +4494,8 @@ buildfile:5:1: dump:
The output of \c{dump} might look familiar: in \l{#intro-dirs-scopes Output
Directories and Scopes} we've used the \c{--dump} option to print the entire
build state, which looks pretty similar. In fact, the \c{dump} directive uses
-the same mechanism but allows us to print individual scopes and targets.
+the same mechanism but allows us to print individual scopes and targets from
+within a \c{buildfile}.
There is, however, an important difference to keep in mind: \c{dump} prints
the state of a target or scope at the point in the \c{buildfile} load phase
@@ -4430,6 +4509,9 @@ a result, while the \c{dump} directive should be sufficient in most cases,
sometimes you may need to use the \c{--dump} option to examine the build state
just before rule execution.
+\N|It is possible to limit the output of \c{--dump} to specific scopes and/or
+targets with the \c{--dump-scope} and \c{--dump-target} options.|
+
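+\N|For example, assuming these options combine with \c{--dump} as sketched
+here (hypothetical invocation):
+
+\
+$ b --dump match --dump-target 'hello/exe{hello}'
+\
+
+|
+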
Let's now move from state to behavior. As we already know, to see the
underlying commands executed by the build system we use the \c{-v} options
(which is equivalent to \c{--verbose\ 2}). Note, however, that these are
@@ -4506,6 +4588,25 @@ Higher verbosity levels result in more and more tracing statements being
printed. These include \c{buildfile} loading and parsing, prerequisite to
target resolution, as well as build system module and rule-specific logic.
+While the tracing statements can be helpful in understanding what is
+happening, they don't make it easy to see why things are happening a certain
+way. In particular, one question that often comes up during build
+troubleshooting is which dependency chain causes matching or execution of a
+particular target. Such questions can be answered with the help of the
+\c{--trace-match} and \c{--trace-execute} options. For example, if we want to
+understand what causes the update of \c{obje{hello\}} in the \c{hello}
+project above:
+
+\
+$ b -s --trace-execute 'obje{hello}'
+info: updating hello/obje{hello}
+ info: using rule cxx.compile
+ info: while updating hello/libue{hello}
+ info: while updating hello/exe{hello}
+ info: while updating dir{hello/}
+ info: while updating dir{./}
+\
+
Another useful diagnostics option is \c{--mtime-check}. When specified, the
build system performs a number of file modification time sanity checks that
can be helpful in diagnosing spurious rebuilds.
@@ -4550,15 +4651,20 @@ cross-compilation (specifically, inability to run tests).
As a result, we recommend using \i{expectation-based} configuration where your
project assumes a feature to be available if certain conditions are
-met. Examples of such conditions at the source code level include feature
-test macros, platform macros, runtime library macros, compiler macros, etc.,
-with the build system modules exposing some of the same information via
-variables to allow making similar decisions in \c{buildfiles}. Another
-alternative is to automatically adapt to missing features using more advanced
-techniques such as C++ SFINAE. And in situations where none of this is
-possible, we recommend delegating the decision to the user via a configuration
-value. Our experience with \c{build2} as well as those of other large
-cross-platform projects such as Boost show that this is a viable strategy.
+met. Examples of such conditions at the source code level include feature test
+macros, platform macros, runtime library macros, compiler macros, etc., with
+the build system modules exposing some of the same information via variables
+to allow making similar decisions in \c{buildfiles}. The standard
+pre-installed \l{https://github.com/build2/libbuild2-autoconf/ \c{autoconf}}
+build system module provides emulation of GNU \c{autoconf} using this
+approach.
+
+Another alternative is to automatically adapt to missing features using more
+advanced techniques such as C++ SFINAE. And in situations where none of this
+is possible, we recommend delegating the decision to the user via a
+configuration value. Our experience with \c{build2} as well as those of other
+large cross-platform projects such as Boost show that this is a viable
+strategy.
Having said that, \c{build2} does provide the ability to extract configuration
information from the environment (\c{$getenv()} function) or other tools
@@ -4766,13 +4872,31 @@ is user-defined, then the default value is not evaluated.
Note also that if the configuration value is not specified by the user and you
haven't provided the default, the variable will be undefined, not \c{null},
and, as a result, omitted from the persistent configuration
-(\c{build/config.build} file). However, \c{null} is a valid default value. It
-is traditionally used for \i{optional} configuration values. For example:
+(\c{build/config.build} file). In fact, unlike other variables, project
+configuration variables are by default not \i{nullable}. For example:
+
+\
+$ b configure config.libhello.fancy=[null]
+error: null value in non-nullable variable config.libhello.fancy
+\
+
+There are two ways to make \c{null} a valid value of a project configuration
+variable. Firstly, if the default value is \c{null}, then naturally the
+variable is assumed nullable. This is traditionally used for \i{optional}
+configuration values. For example:
\
config [string] config.libhello.fallback_name ?= [null]
\
+If we need a nullable configuration variable but with a non-\c{null} default
+value (or no default value at all), then we have to use the \c{null} variable
+attribute. For example:
+
+\
+config [string, null] config.libhello.fallback_name ?= \"World\"
+\
+
A common approach for representing a C/C++ enum-like value is to use
\c{string} as a type and pattern matching for validation. In fact, validation
and propagation can often be combined. For example, if our library needed to
@@ -4816,15 +4940,6 @@ if! $defined(config.libhello.database)
fail 'config.libhello.database must be specified'
\
-And if you want to also disallow \c{null} values, then the above check should
-be rewritten like this: \N{An undefined variable expands into a \c{null}
-value.}
-
-\
-if ($config.libhello.database == [null])
- fail 'config.libhello.database must be specified'
-\
-
If computing the default value is expensive or requires elaborate logic, then
the handling of a configuration variable can be broken down into two steps
along these lines:
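
As a hedged sketch (with hypothetical names), the two steps might look
along these lines:

\
# Declare the configuration variable without a default value, then
# compute the fallback only if the user hasn't specified anything
# ('sqlite' here stands in for an expensive computation).
#
config [string] config.libhello.database

database = ($defined(config.libhello.database) \
            ? $config.libhello.database        \
            : 'sqlite')
\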
@@ -4925,9 +5040,96 @@ $ b -v config.libhello.woptions=-Wno-extra
g++ ... -Wall -Wextra -Wno-extra -Werror ...
\
-While we have already seen some examples of how to propagate the configuration
-values to our source code, \l{#proj-config-propag Configuration Propagation}
-discusses this topic in more detail.
+If you do not plan to package your project, then the above rules are the only
+constraints you have. However, if your project is also a package, then other
+projects that use it as a dependency may have preferences and requirements
+regarding its configuration. And it becomes the job of the package manager
+(\c{bpkg}) to negotiate a suitable configuration between all the dependents of
+your project (see \l{bpkg#dep-config-negotiation Dependency Configuration
+Negotiation} for details). This can be a difficult problem to solve optimally
+in a reasonable time. To help the package manager come up with the best
+configuration quickly, you should follow the additional rules and
+recommendations below when configuring packages (they are also generally
+good ideas):
+
+\ol|
+
+\li|Prefer \c{bool} configuration variables. For example, if your project
+ supports a fixed number of backends, then provide a \c{bool} variable to
+ enable each rather than a single variable that lists all the backends to
+ be enabled.|
+
+\li|Avoid project configuration variable dependencies, that is, where the
+ default value of one variable depends on the value of another. But if you
+ do need such a dependency, make sure it is expressed using the original
+ \c{config.<project>.*} variables rather than any intermediate/computed
+ values. For example:
+
+ \
+ # Enable Y only if X is enabled.
+ #
+ config [bool] config.hello.x ?= false
+ config [bool] config.hello.y ?= $config.hello.x
+ \
+
+ |
+
+\li|Do not make project configuration variables conditional. In other words,
+ the set of configuration variables and their types should be a static
+ property of the project. If you do need to make a certain configuration
+ variable \"unavailable\" or \"disabled\" if certain conditions are met
+ (for example, on a certain platform or based on the value of another
+ configuration variable), then express this with a default value and/or a
+ check. For example:
+
+ \
+ windows = ($cxx.target.class == 'windows')
+
+ # Y should only be enabled if X is enabled and we are not on
+ # Windows.
+ #
+ config [bool] config.hello.x ?= false
+ config [bool] config.hello.y ?= ($config.hello.x && !$windows)
+
+ if $config.hello.y
+ {
+ assert $config.hello.x \"Y can only be enabled if X is enabled\"
+ assert (!$windows) \"Y cannot be enabled on Windows\"
+ }
+ \
+
+ |
+
+|
+
+Additionally, if you wish to factor some \c{config} directives into a separate
+file (for example, if you have a large number of them or you would like to
+share them with subprojects) and source it from your \c{build/root.build},
+then it is recommended that you place this file into the \c{build/config/}
+subdirectory, where the package manager expects to find such files (see
+\l{bpkg#package-skeleton Package Build System Skeleton} for background). For
+example:
+
+\
+# root.build
+#
+
+...
+
+source $src_root/build/config/common.build
+\
+
+\N|If you would prefer to keep such a file in a different location (for
+example, because it contains things other than \c{config} directives), then
+you will need to manually list it in your package's \c{manifest} file, see the
+\l{bpkg#manifest-package-build-file \c{build-file}} value for details.|
+
+Another effect of the \c{config} directive is to print the configuration
+variable in the project's configuration report. This functionality is
+discussed in the following section. While we have already seen some examples
+of how to propagate the configuration values to our source code,
+\l{#proj-config-propag Configuration Propagation} discusses this topic in more
+detail.
\h#proj-config-report|Configuration Report|
@@ -5311,10 +5513,21 @@ configuration header into two, one public and installed while the other
private.|
+
\h1#attributes|Attributes|
\N{This chapter is a work in progress and is incomplete.}
+The only currently recognized target attribute is \c{rule_hint} which
+specifies the rule hint. Rule hints can be used to resolve ambiguity when
+multiple rules match the same target as well as to override an unambiguous
+match. For example, the following rule hint makes sure our executable is
+linked with the C++ compiler even though it only has C sources:
+
+\
+[rule_hint=cxx] exe{hello}: c{hello}
+\
+
\h1#name-patterns|Name Patterns|
@@ -5365,7 +5578,8 @@ Note that some wildcard characters may have special meaning in certain
contexts. For instance, \c{[} at the beginning of a value will be interpreted
as the start of the attribute list while \c{?} and \c{[} in the eval context
are part of the ternary operator and value subscript, respectively. In such
-cases the wildcard character will need to be escaped, for example:
+cases the character will need to be escaped in order to be treated as a
+wildcard, for example:
\
x = \[1-9]-foo.txt
@@ -5454,7 +5668,7 @@ exe{hello}: cxx{+{f* b*} -{foo bar}}
This is particularly useful if you would like to list the names to include or
exclude in a variable. For example, this is how we can exclude certain files
from compilation but still include them as ordinary file prerequisites (so
-that they are still included into the distribution):
+that they are still included into the source distribution):
\
exc = foo.cxx bar.cxx
@@ -5479,17 +5693,25 @@ exe{hello}: cxx{+{$inc} -{$exc}}
One common situation that calls for exclusions is auto-generated source
code. Let's say we have an auto-generated command line parser in \c{options.hxx}
-and \c{options.cxx}. Because of the in-tree builds, our name pattern may or
-may not find these files. Note, however, that we cannot just include them as
-non-pattern prerequisites. We also have to exclude them from the pattern match
-since otherwise we may end up with duplicate prerequisites. As a result, this
-is how we have to handle this case provided we want to continue using patterns
-to find other, non-generated source files:
+and \c{options.cxx}. Because of the in/out of source builds, our name pattern
+may or may not find these files. Note, however, that we cannot just include
+them as non-pattern prerequisites. We also have to exclude them from the
+pattern match since otherwise we may end up with duplicate prerequisites. As a
+result, this is how we have to handle this case provided we want to continue
+using patterns to find other, non-generated source files:
\
exe{hello}: {hxx cxx}{* -options} {hxx cxx}{options}
\
+If all our auto-generated source files have a common prefix or suffix, then we
+can exclude them wholesale with a pattern. For example, if all our generated
+files end with the \c{-options} suffix:
+
+\
+exe{hello}: {hxx cxx}{** -**-options} {hxx cxx}{foo-options bar-options}
+\
+
If the name pattern includes an absolute directory, then the pattern match is
performed in that directory and the generated names include absolute
directories as well. Otherwise, the pattern match is performed in the
@@ -5683,12 +5905,12 @@ does not break or produce incorrect results if the environment changes.
Instead, changes to the environment are detected and affected targets are
automatically rebuilt.
-The two use-cases where hermetic configurations are really useful are when we
-need to save an environment which is not generally available (for example, an
-environment of a Visual Studio development command prompt) or when our build
-results need to exactly match the specific configuration (for example, because
-parts of the overall result have already been built and installed, as is the
-case with build system modules).|
+The two use-cases where hermetic configurations are especially useful are when
+we need to save an environment which is not generally available (for example,
+an environment of a Visual Studio development command prompt) or when our
+build results need to exactly match the specific configuration (for example,
+because parts of the overall result have already been built and installed, as
+is the case with build system modules).|
If we now examine \c{config.build}, we will see something along these lines:
@@ -5919,30 +6141,54 @@ of the Introduction, the \c{install} module defines the following standard
installation locations:
\
-name default config.* override
----- ------- -----------------
-root install.root
+name default config.install.*
+ (c.i.*) override
+---- ------- ----------------
+root c.i.root
-data_root root/ install.data_root
-exec_root root/ install.exec_root
+data_root root/ c.i.data_root
+exec_root root/ c.i.exec_root
-bin exec_root/bin/ install.bin
-sbin exec_root/sbin/ install.sbin
-lib exec_root/lib/<private>/ install.lib
-libexec exec_root/libexec/<private>/<project>/ install.libexec
-pkgconfig lib/pkgconfig/ install.pkgconfig
+bin exec_root/bin/ c.i.bin
+sbin exec_root/sbin/ c.i.sbin
+lib exec_root/lib/<private>/ c.i.lib
+libexec exec_root/libexec/<private>/<project>/ c.i.libexec
+pkgconfig lib/pkgconfig/ c.i.pkgconfig
-etc data_root/etc/ install.etc
-include data_root/include/<private>/ install.include
-share data_root/share/ install.share
-data share/<private>/<project>/ install.data
+etc data_root/etc/ c.i.etc
+include data_root/include/<private>/ c.i.include
+include_arch include/ c.i.include_arch
+share data_root/share/ c.i.share
+data share/<private>/<project>/ c.i.data
+buildfile share/build2/export/<project>/ c.i.buildfile
-doc share/doc/<private>/<project>/ install.doc
-legal doc/ install.legal
-man share/man/ install.man
-man<N> man/man<N>/ install.man<N>
+doc share/doc/<private>/<project>/ c.i.doc
+legal doc/ c.i.legal
+man share/man/ c.i.man
+man<N> man/man<N>/ c.i.man<N>
\
+The \c{include_arch} location is meant for architecture-specific files, such
+as configuration headers. By default it's the same as \c{include} but can be
+configured by the user to a different value (for example,
+\c{/usr/include/x86_64-linux-gnu/}) for platforms that support multiple
+architectures from the same installation location. This is how one would
+normally use it from a \c{buildfile}:
+
+\
+# The configuration header may contain target architecture-specific
+# information so install it into include_arch/ instead of include/.
+#
+h{*}: install = include/libhello/
+h{config}: install = include_arch/libhello/
+\
+
+The \c{buildfile} location is meant for exported buildfiles that can be
+imported by other projects. If a project contains any \c{**.build} buildfiles
+in its \c{build/export/} directory (or \c{**.build2} and \c{build2/export/} in
+the alternative naming scheme), then they are automatically installed into
+this location (recreating subdirectories).
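+
+For example, given a (hypothetical) \c{libhello} project containing an
+exported buildfile \c{build/export/hello.build}, it would by default be
+installed as \c{share/build2/export/libhello/hello.build}:
+
+\
+# build/export/hello.build
+#
+# Exported buildfile, automatically installed into the buildfile
+# location, that is, share/build2/export/libhello/ by default.
+\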
+
The \c{<project>}, \c{<version>}, and \c{<private>} substitutions in these
\c{config.install.*} values are replaced with the project name, version, and
private subdirectory, respectively. If either is empty, then the corresponding
@@ -5961,7 +6207,9 @@ The private installation subdirectory is specified with the
directory and may include multiple components. For example:
\
-$ b install config.install.root=/usr/local/ config.install.private=hello/
+$ b install \
+ config.install.root=/usr/local/ \
+ config.install.private=hello/
\
\N|If you are relying on your system's dynamic linker defaults to
@@ -5979,6 +6227,153 @@ $ b install \
|
+
+\h#install-reloc|Relocatable Installation|
+
+A relocatable installation can be moved to a directory other than its original
+installation location. Note that the installation should be moved as a whole
+preserving the directory structure under its root (\c{config.install.root}).
+To request a relocatable installation, set the \c{config.install.relocatable}
+variable to \c{true}. For example:
+
+\
+$ b install \
+ config.install.root=/tmp/install \
+ config.install.relocatable=true
+\
+
+A relocatable installation is achieved by using paths relative to one
+filesystem entry within the installation to locate another. Some examples
+include:
+
+\ul|
+
+\li|Paths specified in \c{config.bin.rpath} are made relative using the
+\c{$ORIGIN} (Linux, BSD) or \c{@loader_path} (Mac OS) mechanisms.|
+
+\li|Paths in the generated \c{pkg-config} files are made relative to the
+\c{${pcfiledir\}} built-in variable.|
+
+\li|Paths in the generated installation manifest (\c{config.install.manifest})
+are made relative to the location of the manifest file.||
+
+While these common aspects are handled automatically, if a project relies on
+knowing its installation location, then it will most likely need to add manual
+support for relocatable installations.
+
+As an example, consider an executable that supports loading plugins and
+requires the plugin installation directory to be embedded into the executable
+during the build. The common way to support relocatable installations for such
+cases is to embed a path relative to the executable and complete it at
+runtime, normally by resolving the executable's path and using its directory
+as a base.
+
+If you would like to always use the relative path, regardless of whether the
+installation is relocatable or not, then you can obtain the library
+installation directory relative to the executable installation directory like
+this:
+
+\
+plugin_dir = $install.resolve($install.lib, $install.bin)
+\
+
+Alternatively, if you would like to continue using absolute paths for
+non-relocatable installations, then you can use something like this:
+
+\
+plugin_dir = $install.resolve( \
+ $install.lib, \
+ ($install.relocatable ? $install.bin : [dir_path] ))
+\
+
+Finally, if you are unable to support relocatable installations, the correct
+way to handle this is to assert this fact in \c{root.build} of your project,
+for example:
+
+\
+assert (!$install.relocatable) 'relocatable installation not supported'
+\
+
+
+\h#install-filter|Installation Filtering|
+
+While project authors determine what gets installed at the \c{buildfile}
+level, the users of the project can further filter the installation using the
+\c{config.install.filter} variable.
+
+The value of this variable is a list of key-value pairs that specify the
+filesystem entries to include or exclude from the installation. For example,
+the following filters will omit installing headers and static libraries
+(notice the quoting of the wildcard):
+
+\
+$ b install config.install.filter='include/@false \"*.a\"@false'
+\
+
+The key in each pair is a file or directory path or a path wildcard pattern.
+If a key is relative and contains a directory component or is a directory,
+then it is treated relative to the corresponding \c{config.install.*}
+location. Otherwise (simple path, normally a pattern), it is matched against
+the leaf of any path. Note that if an absolute path is specified, it should be
+without the \c{config.install.chroot} prefix.
+
+The value in each pair is either \c{true} (include) or \c{false} (exclude).
+The filters are evaluated in the order specified and the first match that is
+found determines the outcome. If no match is found, the default is to
+include. For a directory, while \c{false} means exclude all the sub-paths
+inside this directory, \c{true} does not mean that all the sub-paths will be
+included wholesale. Rather, the matched component of the sub-path is treated
+as included with the rest of the components matched against the following
+sub-filters. For example:
+
+\
+$ b install config.install.filter='
+ include/x86_64-linux-gnu/@true
+ include/x86_64-linux-gnu/details/@false
+ include/@false'
+\
+
+The \c{true} or \c{false} value may be followed by a comma and the \c{symlink}
+modifier to only apply to symlink filesystem entries. For example:
+
+\
+$ b install config.install.filter='\"*.so\"@false,symlink'
+\
+
+A filter can be negated by specifying \c{!} as the first pair. For example:
+
+\
+$ b install config.install.filter='! include/@false \"*.a\"@false'
+\
+
+Note that the filtering mechanism only affects what gets physically copied to
+the installation directory without affecting what gets built for install or
+the view of what gets installed at the \c{buildfile} level. For example, given
+the \c{include/@false *.a@false} filters, static libraries will still be built
+(unless arranged not to with \c{config.bin.lib}) and the \c{pkg-config} files
+will still end up with \c{-I} options pointing to the header installation
+directory. Note also that this mechanism applies to both \c{install} and
+\c{uninstall} operations.
+
+\N|If you are familiar with the Debian or Fedora packaging, this mechanism is
+somewhat similar to (and can be used for a similar purpose as) the Debian's
+\c{.install} files and Fedora's \c{%files} spec file sections, which are used
+to split the installation into multiple binary packages.|
+
+As another example, the following filters will omit all the
+development-related files (headers, \c{pkg-config} files, static libraries,
+and shared library symlinks; assuming the platform uses the \c{.a}/\c{.so}
+extensions for the libraries):
+
+\
+$ b install config.install.filter='
+ include/@false
+ pkgconfig/@false
+ \"lib/*.a\"@false
+ \"lib/*.so\"@false,symlink'
+\
+
+
\h1#module-version|\c{version} Module|
A project can use any version format as long as it meets the package version
@@ -6249,7 +6644,7 @@ just not ordered correctly. As a result, we feel that the risks are justified
when the only alternative is manual version management (which is always an
option, nevertheless).
-When we prepare a distribution of a snapshot, the \c{version} module
+When we prepare a source distribution of a snapshot, the \c{version} module
automatically adjusts the package name to include the snapshot information as
well as patches the manifest file in the distribution with the snapshot number
and id (that is, replacing \c{.z} in the version value with the actual
@@ -6280,12 +6675,9 @@ for our \c{libhello} library. To accomplish this we add the \c{version.hxx.in}
template as well as something along these lines to our \c{buildfile}:
\
-lib{hello}: ... hxx{version}
+lib{hello}: {hxx cxx}{** -version} hxx{version}
hxx{version}: in{version} $src_root/file{manifest}
-{
- dist = true
-}
\
The header rule is a line-based preprocessor that substitutes fragments
@@ -6459,12 +6851,12 @@ config.c
config.cxx
cc.id
- c.target
- c.target.cpu
- c.target.vendor
- c.target.system
- c.target.version
- c.target.class
+ cc.target
+ cc.target.cpu
+ cc.target.vendor
+ cc.target.system
+ cc.target.version
+ cc.target.class
config.cc.poptions
cc.poptions
@@ -6651,7 +7043,7 @@ symbols for all the Windows targets/compilers using the following arrangement
\
lib{foo}: libul{foo}: {hxx cxx}{**} ...
-lib{foo}: def{foo}: include = ($cxx.target.system == 'win32-msvc')
+libs{foo}: def{foo}: include = ($cxx.target.system == 'win32-msvc')
def{foo}: libul{foo}
if ($cxx.target.system == 'mingw32')
@@ -6661,6 +7053,9 @@ if ($cxx.target.system == 'mingw32')
That is, we use the \c{.def} file approach for MSVC (including when building
with Clang) and the built-in support (\c{--export-all-symbols}) for MinGW.
+\N|You will likely also want to add the generated \c{.def} file (or the
+blanket \c{*.def}) to your \c{.gitignore} file.|
+
Note that it is also possible to use the \c{.def} file approach for MinGW. In
this case we need to explicitly load the \c{bin.def} module (which should be
done after loading \c{c} or \c{cxx}) and can use the following arrangement:
@@ -6677,7 +7072,7 @@ if ($cxx.target.class == 'windows')
\
lib{foo}: libul{foo}: {hxx cxx}{**} ...
-lib{foo}: def{foo}: include = ($cxx.target.class == 'windows')
+libs{foo}: def{foo}: include = ($cxx.target.class == 'windows')
def{foo}: libul{foo}
\
@@ -6917,6 +7312,119 @@ config.c.internal.scope
\
+\h#c-objc|Objective-C Compilation|
+
+The \c{c} module provides the \c{c.objc} submodule which can be loaded in
+order to register the \c{m{\}} target type and enable Objective-C compilation
+in the \c{C} compile rule. Note that \c{c.objc} must be loaded after the \c{c}
+module and while the \c{m{\}} target type is registered unconditionally,
+compilation is only enabled if the C compiler supports Objective-C for the
+target platform. Typical usage:
+
+\
+# root.build
+#
+using c
+using c.objc
+\
+
+\
+# buildfile
+#
+lib{hello}: {h c}{*}
+lib{hello}: m{*}: include = ($c.target.class == 'macos')
+\
+
+Note also that while there is support for linking Objective-C executables and
+libraries, this is done using the C compiler driver and no attempt is made to
+automatically link any necessary Objective-C runtime library (such as
+\c{-lobjc}).
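+
+One way to handle this is to link the runtime library explicitly from the
+\c{buildfile}, for example (assuming \c{-lobjc} is the appropriate runtime
+library name for the target platform):
+
+\
+# buildfile
+#
+if ($c.target.class == 'macos')
+  c.libs += -lobjc
+\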
+
+
+\h#c-as-cpp|Assembler with C Preprocessor Compilation|
+
+The \c{c} module provides the \c{c.as-cpp} submodule which can be loaded in
+order to register the \c{S{\}} target type and enable Assembler with C
+Preprocessor compilation in the \c{C} compile rule. Note that \c{c.as-cpp}
+must be loaded after the \c{c} module and while the \c{S{\}} target type is
+registered unconditionally, compilation is only enabled if the C compiler
+supports Assembler with C Preprocessor compilation.
+
+Typical usage:
+
+\
+# root.build
+#
+using c
+using c.as-cpp
+\
+
+\
+# buildfile
+#
+exe{hello}: {h c}{* -hello.c}
+
+# Use C implementation as a fallback if no assembler.
+#
+assembler = ($c.class == 'gcc' && $c.target.cpu == 'x86_64')
+
+exe{hello}: S{hello}: include = $assembler
+exe{hello}: c{hello}: include = (!$assembler)
+\
+
+\
+/* hello.S
+ */
+#ifndef HELLO_RESULT
+# define HELLO_RESULT 0
+#endif
+
+.text
+
+.global hello
+hello:
+ /* ... */
+ movq $HELLO_RESULT, %rax
+ ret
+
+#ifdef __ELF__
+.section .note.GNU-stack, \"\", @progbits
+#endif
+\
+
+The default file extension for the \c{S{\}} target type is \c{.S} (capital)
+but that can be customized using the standard mechanisms. For example:
+
+\
+# root.build
+#
+using c
+using c.as-cpp
+
+h{*}: extension = h
+c{*}: extension = c
+S{*}: extension = sx
+\
+
+Note that \c{*.coptions} are passed to the C compiler when compiling Assembler
+with C Preprocessor files because compile options may cause additional
+preprocessor macros to be defined. Plus, some of them (such as \c{-g}) are
+passed (potentially translated) to the underlying assembler. To pass
+additional options when compiling Assembler files use \c{c.poptions} and
+\c{c.coptions}. For example (continuing with the previous example):
+
+\
+if $assembler
+{
+ obj{hello}:
+ {
+ c.poptions += -DHELLO_RESULT=1
+ c.coptions += -Wa,--no-pad-sections
+ }
+}
+\
+
+
\h1#module-cxx|\c{cxx} Module|
\N{This chapter is a work in progress and is incomplete.}
@@ -7689,7 +8197,7 @@ header-like search mechanism (\c{-I} paths, etc.), an explicit list of
exported modules is provided for each library in its \c{.pc} (\c{pkg-config})
file.
-Specifically, the library's \c{.pc} file contains the \c{cxx_modules} variable
+Specifically, the library's \c{.pc} file contains the \c{cxx.modules} variable
that lists all the exported C++ modules in the \c{<name>=<path>} form with
\c{<name>} being the module's C++ name and \c{<path>} \- the module interface
file's absolute path. For example:
@@ -7700,15 +8208,15 @@ Version: 1.0.0
Cflags:
Libs: -L/usr/lib -lhello
-cxx_modules = hello.core=/usr/include/hello/core.mxx hello.extra=/usr/include/hello/extra.mxx
+cxx.modules = hello.core=/usr/include/hello/core.mxx hello.extra=/usr/include/hello/extra.mxx
\
Additional module properties are specified with variables in the
-\c{cxx_module_<property>.<name>} form, for example:
+\c{cxx.module_<property>.<name>} form, for example:
\
-cxx_module_symexport.hello.core = true
-cxx_module_preprocessed.hello.core = all
+cxx.module_symexport.hello.core = true
+cxx.module_preprocessed.hello.core = all
\
Currently, two properties are defined. The \c{symexport} property with the
@@ -8569,6 +9077,34 @@ macros may not be needed by all consumers. This way we can also keep the
header macro-only which means it can be included freely, in or out of module
purviews.
+\h#cxx-objcxx|Objective-C++ Compilation|
+
+The \c{cxx} module provides the \c{cxx.objcxx} submodule which can be loaded
+in order to register the \c{mm{\}} target type and enable Objective-C++
+compilation in the \c{C++} compile rule. Note that \c{cxx.objcxx} must be
+loaded after the \c{cxx} module and while the \c{mm{\}} target type is
+registered unconditionally, compilation is only enabled if the C++ compiler
+supports Objective-C++ for the target platform. Typical usage:
+
+\
+# root.build
+#
+using cxx
+using cxx.objcxx
+\
+
+\
+# buildfile
+#
+lib{hello}: {hxx cxx}{*}
+lib{hello}: mm{*}: include = ($cxx.target.class == 'macos')
+\
+
+Note also that while there is support for linking Objective-C++ executables
+and libraries, this is done using the C++ compiler driver and no attempt is
+made to automatically link any necessary Objective-C runtime library (such as
+\c{-lobjc}).
+
\h1#module-in|\c{in} Module|
@@ -8636,13 +9172,13 @@ symbol is expected to start a substitution with unresolved (to a variable
value) names treated as errors. The double substitution symbol (for example,
\c{$$}) serves as an escape sequence.
-The substitution mode can be relaxed using the \c{in.substitution} variable.
-Its valid values are \c{strict} (default) and \c{lax}. In the lax mode a pair
-of substitution symbols is only treated as a substitution if what's between
-them looks like a build system variable name (that is, it doesn't contain
-spaces, etc). Everything else, including unterminated substitution symbols, is
-copied as is. Note also that in this mode the double substitution symbol is
-not treated as an escape sequence.
+The substitution mode can be relaxed using the \c{in.mode} variable. Its
+valid values are \c{strict} (default) and \c{lax}. In the lax mode a pair of
+substitution symbols is only treated as a substitution if what's between them
+looks like a build system variable name (that is, it doesn't contain spaces,
+etc). Everything else, including unterminated substitution symbols, is copied
+as is. Note also that in this mode the double substitution symbol is not
+treated as an escape sequence.
The lax mode is mostly useful when trying to reuse existing \c{.in} files from
other build systems, such as \c{autoconf}. Note, however, that the lax mode is
@@ -8655,7 +9191,7 @@ substitutions as is. For example:
h{config}: in{config} # config.h.in
{
in.symbol = '@'
- in.substitution = lax
+ in.mode = lax
CMAKE_SYSTEM_NAME = $c.target.system
CMAKE_SYSTEM_PROCESSOR = $c.target.cpu
@@ -8669,6 +9205,42 @@ target-specific variables. Typed variable values are converted to string
using the corresponding \c{builtin.string()} function overload before
substitution.
+While specifying substitution values as \c{buildfile} variables is usually
+natural, sometimes this may not be possible or convenient. Specifically, we
+may have substitution names that cannot be specified as \c{buildfile}
+variables, for example, because they start with an underscore (and are thus
+reserved) or because they refer to one of the predefined variables. Also, we
+may need to have different groups of substitution values for different cases,
+for example, for different platforms, and it would be convenient to pass such
+groups around as a single value.
+
+To support these requirements the substitution values can alternatively be
+specified as key-value pairs in the \c{in.substitutions} variable. Note that
+entries in this substitution map take precedence over the \c{buildfile}
+variables. For example:
+
+\
+/* config.h.in */
+
+#define _GNU_SOURCE @_GNU_SOURCE@
+#define _POSIX_SOURCE @_POSIX_SOURCE@
+\
+
+\
+# buildfile
+
+h{config}: in{config}
+{
+ in.symbol = '@'
+ in.mode = lax
+
+ in.substitutions = _GNU_SOURCE@0 _POSIX_SOURCE@1
+}
+\
+
+\N|In the above example, the \c{@} characters in \c{in.symbol} and
+\c{in.substitutions} are unrelated.|
+
Using an undefined variable in a substitution is an error. Using a \c{null}
value in a substitution is also an error unless the fallback value is
specified with the \c{in.null} variable. For example:
@@ -8682,11 +9254,21 @@ h{config}: in{config}
}
\
-A number of other build system modules, for example, \l{#module-version
-\c{version}} and \l{#module-bash \c{bash}}, are based on the \c{in} module and
-provide extended functionality. The \c{in} preprocessing rule matches any
-\c{file{\}}-based target that has the corresponding \c{in{\}} prerequisite
-provided none of the extended rules match.
+\N|To specify a \c{null} value using the \c{in.substitutions} mechanism omit
+the value, for example:
+
+\
+in.substitutions = _GNU_SOURCE
+\
+
+|
+
+A number of other build system modules, for example,
+\l{https://github.com/build2/libbuild2-autoconf/ \c{autoconf}},
+\l{#module-version \c{version}}, and \l{#module-bash \c{bash}}, are based on
+the \c{in} module and provide extended functionality. The \c{in} preprocessing
+rule matches any \c{file{\}}-based target that has the corresponding \c{in{\}}
+prerequisite provided none of the extended rules match.
\h1#module-bash|\c{bash} Module|
@@ -8739,11 +9321,12 @@ buildfiles.
The \c{say-hello.bash} module is \i{imported} by the \c{hello} script with the
\c{@import\ hello/say-hello@} substitution. The \i{import path}
-(\c{hello/say-hello} in our case) is a relative path to the module file within
-the project. Its first component (\c{hello} in our case) must be the project
-base name and the \c{.bash} module extension can be omitted. \N{The constraint
-placed on the first component of the import path is required to implement
-importation of installed modules, as discussed below.}
+(\c{hello/say-hello} in our case) is a path to the module file within the
+project. Its first component (\c{hello} in our case) must be both the project
+name and the top-level subdirectory within the project. The \c{.bash} module
+extension can be omitted. \N{The constraint placed on the first component of
+the import path is required to implement importation of installed modules, as
+discussed below.}
During preprocessing, the import substitution will be replaced with a
\c{source} builtin call and the import path resolved to one of the \c{bash{\}}
@@ -8762,11 +9345,12 @@ OS. The script, however, can provide a suitable implementation as a function.
See the \c{bash} module tests for a sample implementation of such a function.|
By default, \c{bash} modules are installed into a subdirectory of the \c{bin/}
-installation directory named as the project base name. For instance, in the
-above example, the script will be installed as \c{bin/hello} and the module as
-\c{bin/hello/say-hello.bash} with the script sourcing the module relative to
-the \c{bin/} directory. Note that currently it is assumed the script and all
-its modules are installed into the same \c{bin/} directory.
+installation directory named as the project name plus the \c{.bash} extension.
+For instance, in the above example, the script will be installed as
+\c{bin/hello} and the module as \c{bin/hello.bash/say-hello.bash} with the
+script sourcing the module relative to the \c{bin/} directory. Note that
+currently it is assumed the script and all its modules are installed into the
+same \c{bin/} directory.
Naturally, modules can import other modules and modules can be packaged into
\i{module libraries} and imported using the standard build system import
@@ -8833,8 +9417,9 @@ for example, \c{libhello}. If there is also a native library (that is, one
written in C/C++) that provides the same functionality (or the \c{bash}
library is a language binding for said library), then it is customary to add
the \c{.bash} extension to the \c{bash} library name, for example,
-\c{libhello.bash}. Note that in this case the project base name is
-\c{libhello}.
+\c{libhello.bash}. Note that in this case the top-level subdirectory within
+the project is expected to be called without the \c{bash} extension,
+for example, \c{libhello}.
Modules can be \i{private} or \i{public}. Private modules are implementation
details of a specific project and are not expected to be imported from other
@@ -8881,4 +9466,498 @@ corresponding \c{in{\}} and one or more \c{bash{\}} prerequisites as well as
\c{bash{\}} targets that have the corresponding \c{in{\}} prerequisite (if you
need to preprocess a script that does not depend on any modules, you can use
the \c{in} module's rule).
+
+
+\h1#json-dump|Appendix A \- JSON Dump Format|
+
+This appendix describes the machine-readable, JSON-based build system state
+dump format that can be requested with the \c{--dump-format=json-v0.1} build
+system driver option (see \l{b(1)} for details).
+
+The format is specified in terms of the serialized representation of C++
+\c{struct} instances. See \l{b.xhtml#json-output JSON OUTPUT} for details on
+the overall properties of this format and the semantics of the \c{struct}
+serialization.
+
+\N|This format is currently unstable (thus the temporary \c{-v0.1} suffix)
+and may be changed in ways other than as described in \l{b.xhtml#json-output
+JSON OUTPUT}. In case of such changes the format version will be incremented
+to allow detecting incompatibilities but no support for older versions is
+guaranteed.|
+
+The build system state can be dumped after the load phase (\c{--dump=load}),
+once the build state has been loaded, and/or after the match phase
+(\c{--dump=match}), after rules have been matched to targets to execute the
+desired action. The JSON format differs depending on after which phase it is
+produced. After the load phase the format aims to describe the
+action-independent state, essentially as specified in the \c{buildfiles}.
+While after the match phase it aims to describe the state for executing the
+specified action, as determined by the rules that have been matched. The
+former state would be more appropriate, for example, for an IDE that tries to
+use \c{buildfiles} as project files. While the latter state could be used to
+determine the actual build graph for a certain action, for example, in order
+to infer which executable targets are considered tests by the \c{test}
+operation.
+
+While it's possible to dump the build state as a byproduct of executing an
+action (for example, performing an update), it's often desirable to only dump
+the build state and do it as quickly as possible. For such cases the
+recommended option combinations are as follows (see the \c{--load-only} and
+\c{--match-only} documentation for details):
+
+\
+$ b --load-only --dump=load --dump-format=json-v0.1 .../dir/
+
+$ b --match-only --dump=match --dump-format=json-v0.1 .../dir/
+$ b --match-only --dump=match --dump-format=json-v0.1 .../dir/type{name}
+\
+
+\N|Note that a match dump for a large project can produce a large amount of
+data, especially for the \c{update} operation (tens and even hundreds of
+megabytes is not uncommon). To reduce this size it is possible to limit the
+dump to specific scopes and/or targets with the \c{--dump-scope} and
+\c{--dump-target} options.|
+
+The complete dump (that is, not of a specific scope or target) is a tree of
+nested scope objects (see \l{#intro-dirs-scopes Output Directories and Scopes}
+for background). The scope object has the serialized representation of the
+following C++ \c{struct} \c{scope}. It is the same for both load and match
+dumps except for the type of the \c{targets} member:
+
+\
+struct scope
+{
+ string out_path;
+ optional<string> src_path;
+
+ vector<variable> variables; // Non-type/pattern scope variables.
+
+ vector<scope> scopes; // Immediate children.
+
+ vector<loaded_target|matched_target> targets;
+};
+\
+
+For example (parts of the output are omitted for brevity):
+
+\N|The actual output is produced unindented to reduce the size.|
+
+\
+$ cd /tmp
+$ bdep new hello
+$ cd hello
+$ bdep new -C @gcc cc
+$ b --load-only --dump=load --dump-format=json-v0.1
+{
+ \"out_path\": \"\",
+ \"variables\": [ ... ],
+ \"scopes\": [
+ {
+ \"out_path\": \"/tmp/hello-gcc\",
+ \"variables\": [ ... ],
+ \"scopes\": [
+ {
+ \"out_path\": \"hello\",
+ \"src_path\": \"/tmp/hello\",
+ \"variables\": [ ... ],
+ \"scopes\": [
+ {
+ \"out_path\": \"hello\",
+ \"src_path\": \"/tmp/hello/hello\",
+ \"variables\": [ ... ],
+ \"targets\": [ ... ]
+ }
+ ],
+ \"targets\": [ ... ]
+ }
+ ],
+ \"targets\": [ ... ]
+ }
+ ]
+}
+\
+
+The \c{out_path} member is relative to the parent scope. It is empty for the
+special global scope, which is the root of the tree. The \c{src_path} member
+is absent if it is the same as \c{out_path} (an in-source build or a scope
+outside of a project).
+
+\N|For the match dump, targets that have not been matched for the specified
+action are omitted.|
+
+In the load dump, the target object has the serialized representation of the
+following C++ \c{struct} \c{loaded_target}:
+
+\
+struct loaded_target
+{
+ string name; // Relative quoted/qualified name.
+ string display_name; // Relative display name.
+ string type; // Target type.
+ optional<string> group; // Absolute quoted/qualified group target.
+
+ vector<variable> variables; // Target variables.
+
+ vector<prerequisite> prerequisites;
+};
+\
+
+For example (continuing with the previous \c{hello} setup):
+
+\
+{
+ \"out_path\": \"\",
+ \"scopes\": [
+ {
+ \"out_path\": \"/tmp/hello-gcc\",
+ \"scopes\": [
+ {
+ \"out_path\": \"hello\",
+ \"src_path\": \"/tmp/hello\",
+ \"scopes\": [
+ {
+ \"out_path\": \"hello\",
+ \"src_path\": \"/tmp/hello/hello\",
+ \"targets\": [
+ {
+ \"name\": \"exe{hello}\",
+ \"display_name\": \"exe{hello}\",
+ \"type\": \"exe\",
+ \"prerequisites\": [
+ {
+ \"name\": \"cxx{hello}\",
+ \"type\": \"cxx\"
+ },
+ {
+ \"name\": \"testscript{testscript}\",
+ \"type\": \"testscript\"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+\
+
+The target \c{name} member is the target name that is qualified with the
+extension (if applicable and known) and, if required, is quoted so that it can
+be passed back to the build system driver on the command line. The
+\c{display_name} member is unqualified and unquoted. Note that both the target
+\c{name} and \c{display_name} members are normally relative to the containing
+scope (if any).
+
+The prerequisite object has the serialized representation of the following C++
+\c{struct} \c{prerequisite}:
+
+\
+struct prerequisite
+{
+ string name; // Quoted/qualified name.
+ string type;
+ vector<variable> variables; // Prerequisite variables.
+};
+\
+
+The prerequisite \c{name} member is normally relative to the containing scope.
+
+In the match dump, the target object has the serialized representation of the
+following C++ \c{struct} \c{matched_target}:
+
+\
+struct matched_target
+{
+ string name;
+ string display_name;
+ string type;
+ optional<string> group;
+
+  optional<path> path; // Absent if not a path target or not assigned.
+
+ vector<variable> variables;
+
+ optional<operation_state> outer_operation; // null if not matched.
+ operation_state inner_operation; // null if not matched.
+};
+\
+
+For example (outer scopes removed for brevity):
+
+\
+$ b --match-only --dump=match --dump-format=json-v0.1
+{
+ \"out_path\": \"hello\",
+ \"src_path\": \"/tmp/hello/hello\",
+ \"targets\": [
+ {
+ \"name\": \"/tmp/hello/hello/cxx{hello.cxx}@./\",
+ \"display_name\": \"/tmp/hello/hello/cxx{hello}@./\",
+ \"type\": \"cxx\",
+ \"path\": \"/tmp/hello/hello/hello.cxx\",
+ \"inner_operation\": {
+ \"rule\": \"build.file\",
+ \"state\": \"unchanged\"
+ }
+ },
+ {
+ \"name\": \"obje{hello.o}\",
+ \"display_name\": \"obje{hello}\",
+ \"type\": \"obje\",
+ \"group\": \"/tmp/hello-gcc/hello/hello/obj{hello}\",
+ \"path\": \"/tmp/hello-gcc/hello/hello/hello.o\",
+ \"inner_operation\": {
+ \"rule\": \"cxx.compile\",
+ \"prerequisite_targets\": [
+ {
+ \"name\": \"/tmp/hello/hello/cxx{hello.cxx}@./\",
+ \"type\": \"cxx\"
+ },
+ {
+ \"name\": \"/usr/include/c++/12/h{iostream.}\",
+ \"type\": \"h\"
+ },
+ ...
+ ]
+ }
+ },
+ {
+ \"name\": \"exe{hello.}\",
+ \"display_name\": \"exe{hello}\",
+ \"type\": \"exe\",
+ \"path\": \"/tmp/hello-gcc/hello/hello/hello\",
+ \"inner_operation\": {
+ \"rule\": \"cxx.link\",
+ \"prerequisite_targets\": [
+ {
+ \"name\": \"/tmp/hello-gcc/hello/hello/obje{hello.o}\",
+ \"type\": \"obje\"
+ }
+ ]
+ }
+ }
+ ]
+}
+\
+
+The first four members in \c{matched_target} have the same semantics as in
+\c{loaded_target}.
+
+The \c{outer_operation} member is only present if the action has an outer
+operation. For example, when performing \c{update-for-test}, \c{test} is the
+outer operation while \c{update} is the inner operation.
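+
+For instance, a target matched for \c{update-for-test} might contain
+operation state along these lines (a sketch; the actual rule names depend
+on the rules that were matched):
+
+\
+\"outer_operation\": {
+  \"rule\": \"test\"
+},
+\"inner_operation\": {
+  \"rule\": \"cxx.link\",
+  ...
+}
+\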
+
+The operation state object has the serialized representation of the following
+C++ \c{struct} \c{operation_state}:
+
+\
+struct operation_state
+{
+ string rule; // null if direct recipe match.
+
+ optional<string> state; // One of unchanged|changed|group.
+
+ vector<variable> variables; // Rule variables.
+
+ vector<prerequisite_target> prerequisite_targets;
+};
+\
+
+The \c{rule} member is the matched rule name. The \c{state} member is the
+target state, if known after match. The \c{prerequisite_targets} array is a
+subset of prerequisites resolved to targets that are in effect for this
+action. The matched rule may also add further targets, for example,
+dynamically extracted dependencies, like \c{/usr/include/c++/12/h{iostream.\}}
+in the above listing.
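+
+As an illustration, here is how a consumer of this dump could pick out the
+matched rule and the resolved prerequisite targets of a target. This is a
+minimal Python sketch, not part of \c{build2}; the JSON fragment abridges the
+\c{obje{hello.o\}} entry from the listing above, while in practice the dump
+would come from \c{b --dump=match --dump-format=json-v0.1}:
+
```python
import json

# A fragment of the match dump mirroring the obje{hello.o} target from the
# listing above (abridged to the members used below).
target = json.loads('''
{
  "name": "obje{hello.o}",
  "display_name": "obje{hello}",
  "type": "obje",
  "path": "/tmp/hello-gcc/hello/hello/hello.o",
  "inner_operation": {
    "rule": "cxx.compile",
    "prerequisite_targets": [
      {"name": "/tmp/hello/hello/cxx{hello.cxx}@./", "type": "cxx"},
      {"name": "/usr/include/c++/12/h{iostream.}", "type": "h"}
    ]
  }
}
''')

op = target["inner_operation"]
print(op["rule"])  # The matched rule name, cxx.compile here.

# List the prerequisite targets in effect for this action, including any
# dynamically extracted dependencies added by the rule (the iostream header).
for pt in op.get("prerequisite_targets", []):
    print(pt["type"], pt["name"])
```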
+
+The prerequisite target object has the serialized representation of the
+following C++ \c{struct} \c{prerequisite_target}:
+
+\
+struct prerequisite_target
+{
+ string name; // Absolute quoted/qualified target name.
+ string type;
+ bool adhoc;
+};
+\
+
+The \c{variables} array in the scope, target, prerequisite, and prerequisite
+target objects contains scope, target, prerequisite, and rule variables,
+respectively.
+
+The variable object has the serialized representation of the following C++
+\c{struct} \c{variable}:
+
+\
+struct variable
+{
+ string name;
+ optional<string> type;
+ json_value value; // null|boolean|number|string|object|array
+};
+\
+
+For example:
+
+\
+{
+ \"out_path\": \"\",
+ \"variables\": [
+ {
+ \"name\": \"build.show_progress\",
+ \"type\": \"bool\",
+ \"value\": true
+ },
+ {
+ \"name\": \"build.verbosity\",
+ \"type\": \"uint64\",
+ \"value\": 1
+ },
+ ...
+ ],
+ \"scopes\": [
+ {
+ \"out_path\": \"/tmp/hello-gcc\",
+ \"scopes\": [
+ {
+ \"out_path\": \"hello\",
+ \"src_path\": \"/tmp/hello\",
+ \"scopes\": [
+ {
+ \"out_path\": \"hello\",
+ \"src_path\": \"/tmp/hello/hello\",
+ \"variables\": [
+ {
+ \"name\": \"out_base\",
+ \"type\": \"dir_path\",
+ \"value\": \"/tmp/hello-gcc/hello/hello\"
+ },
+ {
+ \"name\": \"src_base\",
+ \"type\": \"dir_path\",
+ \"value\": \"/tmp/hello/hello\"
+ },
+ {
+ \"name\": \"cxx.poptions\",
+ \"type\": \"strings\",
+ \"value\": [
+ \"-I/tmp/hello-gcc/hello\",
+ \"-I/tmp/hello\"
+ ]
+ },
+ {
+ \"name\": \"libs\",
+ \"value\": \"/tmp/hello-gcc/libhello/libhello/lib{hello}\"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+\
+
+The \c{type} member is absent if the variable value is untyped.
+
+The \c{value} member contains the variable value in a suitable JSON
+representation. Specifically:
+
+\ul|
+
+\li|\c{null} values are represented as JSON \c{null}.|
+
+\li|\c{bool} values are represented as JSON \c{boolean}.|
+
+\li|\c{int64} and \c{uint64} values are represented as JSON \c{number}.|
+
+\li|\c{string}, \c{path}, \c{dir_path} values are represented as JSON
+ \c{string}.|
+
+\li|Untyped simple name values are represented as JSON \c{string}.|
+
+\li|Pairs of above values are represented as JSON objects with the \c{first}
+ and \c{second} members corresponding to the pair elements.|
+
+\li|Untyped complex name values are serialized as target names and represented
+ as JSON \c{string}.|
+
+\li|Containers of above values are represented as JSON arrays corresponding to
+ the container elements.|
+
+\li|An empty value is represented as an empty JSON object if it's a typed
+ pair, as an empty JSON array if it's a typed container or is untyped, and
+ as an empty string otherwise.||
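+
+These rules can be inverted by a dump consumer. The following Python sketch
+walks the scope tree from a fragment like the one above and prints each
+variable; the function names are ours, and rendering a pair as
+\c{first@second} is an assumption made for illustration only:
+
```python
import json

def walk_scopes(scope, path=()):
    """Recursively yield (scope path, variable) for every variable in the
    dump's scope tree (scopes nest via the "scopes" member)."""
    p = path + (scope.get("out_path", ""),)
    for var in scope.get("variables", []):
        yield p, var
    for s in scope.get("scopes", []):
        yield from walk_scopes(s, p)

def render(value):
    """Render a variable value per the representation rules above: pairs
    arrive as objects with "first"/"second" members, containers as arrays,
    and simple values as null/boolean/number/string."""
    if value is None:
        return "[null]"
    if isinstance(value, dict):
        if not value:                       # empty typed pair
            return ""
        # Rendering a pair as first@second is our assumption.
        return f"{render(value['first'])}@{render(value['second'])}"
    if isinstance(value, list):             # container
        return " ".join(render(v) for v in value)
    return str(value)

# A fragment abridged from the scope dump listing above.
dump = json.loads('''
{
  "out_path": "",
  "variables": [{"name": "build.show_progress", "type": "bool", "value": true}],
  "scopes": [{
    "out_path": "/tmp/hello-gcc",
    "scopes": [{
      "out_path": "hello",
      "src_path": "/tmp/hello",
      "variables": [{"name": "cxx.poptions", "type": "strings",
                     "value": ["-I/tmp/hello-gcc/hello", "-I/tmp/hello"]}]
    }]
  }]
}
''')

for path, var in walk_scopes(dump):
    print("/".join(filter(None, path)), var["name"], "=", render(var["value"]))
```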
+
+One expected use-case for the match dump is to determine the set of targets
+for which a given action is applicable. For example, we may want to
+determine all the executables in a project that can be tested with the
+\c{test} operation in order to present this list to the user in an IDE
+plugin or some such. To further illuminate the problem, consider the
+following \c{buildfile}, which declares a number of executable targets, some
+of which are tests and some of which are not:
+
+\
+exe{hello1}: ... testscript # Test because of testscript prerequisite.
+
+exe{hello2}: test = true # Test because of test=true.
+
+exe{hello3}: ... testscript # Not a test because of test=false.
+{
+ test = false
+}
+\
+
+As can be seen, trying to infer this information is not straightforward and
+doing so manually by examining prerequisites, variables, etc., while
+possible, would be complex and likely brittle. Instead, the recommended
+approach is to use the match dump and base the decision on the \c{state}
+target object member. Specifically, a rule that matches a target but
+determines that nothing needs to be done for it returns the special
+\c{noop} recipe. The \c{build2} core recognizes this situation and sets
+such a target's state to \c{unchanged} during match. Here is what the match
+dump looks like for the above three executables:
+
+\
+$ b --match-only --dump=match --dump-format=json-v0.1 test
+{
+ \"out_path\": \"hello\",
+ \"src_path\": \"/tmp/hello/hello\",
+ \"targets\": [
+ {
+ \"name\": \"exe{hello1.}\",
+ \"display_name\": \"exe{hello1}\",
+ \"type\": \"exe\",
+ \"path\": \"/tmp/hello-gcc/hello/hello/hello1\",
+ \"inner_operation\": {
+ \"rule\": \"test\"
+ }
+ },
+ {
+ \"name\": \"exe{hello2.}\",
+ \"display_name\": \"exe{hello2}\",
+ \"type\": \"exe\",
+ \"path\": \"/tmp/hello-gcc/hello/hello/hello2\",
+ \"inner_operation\": {
+ \"rule\": \"test\"
+ }
+ },
+ {
+ \"name\": \"exe{hello3}\",
+ \"display_name\": \"exe{hello3}\",
+ \"type\": \"exe\",
+ \"inner_operation\": {
+ \"rule\": \"test\",
+ \"state\": \"unchanged\"
+ }
+ }
+ ]
+}
+\
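+
+Based on this output, selecting the testable executables amounts to keeping
+the targets whose state did not come back as \c{unchanged}. A minimal Python
+sketch of this filter (the variable names are ours and the JSON fragment
+abridges the listing above to the members used):
+
```python
import json

# The match dump for the three executables above, abridged. In practice it
# would come from:
#   b --match-only --dump=match --dump-format=json-v0.1 test
dump = json.loads('''
{
  "out_path": "hello",
  "src_path": "/tmp/hello/hello",
  "targets": [
    {"display_name": "exe{hello1}", "type": "exe",
     "inner_operation": {"rule": "test"}},
    {"display_name": "exe{hello2}", "type": "exe",
     "inner_operation": {"rule": "test"}},
    {"display_name": "exe{hello3}", "type": "exe",
     "inner_operation": {"rule": "test", "state": "unchanged"}}
  ]
}
''')

# A target is a test if the test rule matched it and did not return the noop
# recipe, i.e., its state after match is not unchanged.
tests = [t["display_name"] for t in dump["targets"]
         if t["inner_operation"].get("state") != "unchanged"]

print(tests)  # ['exe{hello1}', 'exe{hello2}']
```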
+