// file : doc/testscript.cli // copyright : Copyright (c) 2014-2016 Code Synthesis Ltd // license : MIT; see accompanying LICENSE file "\name=build2-testscript-language" "\subject=Testscript language" "\title=Testscript Language" // NOTES // // - Maximum
line is 70 characters. // " \h0#preface|Preface| This document describes the \c{build2} Testscript language. It starts with a discussion of the motivations behind a separate domain-specific language for running tests and then introduces a number of Testscript concepts with examples. The remainder of the document provides a more formal specification of the language, including its integration into the build system, parsing and execution model, lexical structure, as well as grammar and semantics. The final chapter describes the recommended Testscript style as used in the \c{build2} project. In this document we use the term \i{Testscript} (capitalized) to refer to the Testscript language. Just \i{testscript} means some code written in this language. For example: \"We can pass additional information to testscripts using target-specific variables.\" Finally, \c{testscript} refers to the file name. We also use the equivalent distinction between \i{Buildfile} (language), \i{buildfile} (code), and \c{buildfile} (file). \h1#intro|Introduction| The \c{build2} \c{test} module provides the ability to run an executable target as a test, including passing options and arguments, providing \c{stdin} input, as well as comparing the \c{stdout} output to the expected result. For example: \ exe{hello}: test.options = --greeting 'Hi' exe{hello}: test.arguments = - # Read names from stdin. exe{hello}: test.input = names.txt exe{hello}: test.output = greetings.txt \ This works well for simple, single-run tests. If, however, your testing requires multiple runs with varying input and/or analyzing output, traditionally, you would resort to using a scripting language, for instance Bash or Python. This, however, has a number of drawbacks. Firstly, this approach is usually not portable (there is no Bash or Python on Windows \i{out of the box}). It is also hard to write concise tests in a general-purpose scripting language. The result is often a test suite that has grown incomprehensible with everyone dreading adding new tests. Secondly, it is hard to run such tests in parallel without a major effort, for example, by having a separate script for each test and implementing some kind of a test harness. Testscript is a domain-specific language for running tests. It vaguely resembles Bash and is optimized for concise test description and fast execution by focusing on the following functionality: \ul| \li|Supplying input via command line and \c{stdin}.| \li|Comparing to expected exit status.| \li|Comparing to expected output for \c{stdout}/\c{stderr}, including using regex.| \li|Setup/teardown commands and automatic file/directory cleanups.| \li|Simple (single-command) and compound (multi-command) tests.| \li|Test groups with common setup/teardown.| \li|Test isolation for parallel execution.| \li|Portable POSIX-like builtins subset.| \li|Test documentation.|| Note that Testscript is a \i{test runner}, not a testing framework for a particular programming language. It does not concern itself with how the test executables themselves are implemented. As a result, it is mostly geared towards functional testing but can also be used for unit testing if external input/output is required. Testscript is an extension of the \c{build2} build system and is implemented by its \c{test} module. As a quick introduction to Testscript's capabilities, let's test a \"Hello, World\" program.
For a simple implementation the corresponding \c{buildfile} might look like this: \ exe{hello}: cxx{hello} \ We also assume that the project's \c{bootstrap.build} loads the \c{test} module which implements the execution of testscripts. To start, we create an empty file called \c{testscript}. To indicate that a testscript file tests a specific target we simply list it as a target's prerequisite, for example: \ exe{hello}: cxx{hello} test{testscript} \ Let's assume our \c{hello} program expects us to pass the name to greet on the command line. And if we don't pass anything, it prints an error followed by usage and terminates with a non-zero exit status. We can test this failure case by adding the following line to the \c{testscript} file: \ $* 2>- != 0 \ While it sure is concise, it may look cryptic without some explanation. When the \c{test} module runs tests, it (by default) passes to each testscript the target path of which this testscript is a prerequisite. So in our case the testscript will receive the path to our \c{hello} executable. The buildfile can also pass along additional options and arguments. Inside the testscript, all of this (target path, options, and arguments) is bound to the \c{$*} variable. So in our case, if we expand the above line, it would be something like this: \ /tmp/hello/hello 2>- != 0 \ Or, if we are on Windows, something like this: \ C:\projects\hello\hello.exe 2>- != 0 \ The \c{2>-} redirect is the Testscript equivalent of \c{2>/dev/null} that is both portable and more concise (\c{2} here is the \c{stderr} file descriptor). If we don't specify it and our program prints anything to \c{stderr}, then the test will fail (unexpected output). The remainder of the command (\c{!= 0}) is the exit status check. If we don't specify it, then the test is expected to exit with zero status (which is equivalent to specifying \c{== 0}). If we run our test, it will pass provided our program behaves as expected. One thing our test doesn't verify, however, is the diagnostics that gets printed to \c{stderr} (remember, we ignored it with \c{2>-}). Let's fix that assuming this is the code that prints it: \ cerr << \"error: missing name\" << endl << \"usage: \" << argv[0] << endl; \ In testscripts you can compare output to the expected result for both \c{stdout} and \c{stderr}. We can supply the expected result as either \i{here-string} or \i{here-document}, both of which can be either literal or regex. The here-string approach works best for short, single-line output and we will use it for another test in a minute. For this test let's use here-document since the expected diagnostics has two lines: \ $* 2>>EOE != 0 error: missing name usage: hello EOE \ Let's decrypt this: the \c{2>>EOE} is a here-document redirect with \c{EOE} (which stands for End-Of-Error) being the string we chose to mark the end of the here-document. Next comes the here-document fragment followed by the end marker. Now, when executing this test, the \c{test} module will check two things: it will compare the \c{stderr} output to the expected result using the \c{diff} tool and it will make sure the test exits with a non-zero status.
Let's give it a go: \ $ b test testscript:1:1: error: stderr doesn't match expected output info: produced stderr: test-hello/1/stderr info: expected stderr: test-hello/1/stderr.orig info: stderr diff (test-hello/1/stderr.diff): --- test-hello/1/stderr.orig +++ test-hello/1/stderr @@ -1,2 +1,2 @@ error: missing name -usage: hello +usage: /tmp/hello/hello \ While not what we expected, at least the problem is clear: the program name varies at runtime so we cannot just hardcode \c{hello} in our expected output. How do we solve this? The best fix would be to use the actual path to the target; after all, we know it's the first element in \c{$*}: \ $* 2>>\"EOE\" != 0 error: missing name usage: $0 EOE \ You can probably guess what \c{$0} expands to. But did you notice another change? Yes, those double quotes in \c{2>>\"EOE\"}. Here is what's going on: similar to Bash, single-quoted strings (\c{'foo'}) are taken literally while double-quoted ones (\c{\"foo\"}) have variable expansion, escaping, etc. This semantics is extended to here-documents in a curious way: if the end marker is single-quoted then the here-document lines are taken literally and if it is double-quoted, then there can be variable expansions, etc. An unquoted end marker is treated as single-quoted (note that this is unlike Bash where here-documents always have variable expansions). This example illustrated a fairly common testing problem: output variability. In this case we could fix it perfectly since we could easily calculate the varying parts exactly. But often figuring out the varying part is difficult or outright impossible. A good example would be a system error message based on the \c{errno} code, such as file not being found. Different C runtimes can phrase the message slightly differently or it can be localized. Worse, it can be a slightly different error code, for example \c{ENOENT} vs \c{ENOTDIR}. To handle output variability, Testscript allows us to specify the expected output as regular expressions. For example, this is an alternative fix to our usage problem that simply ignores the program name: \ $* 2>>~/EOE/ != 0 error: missing name /usage: .+/ EOE \ Let's explain what's going on here: to use a regex here-string or here-document we add the \c{~} redirect modifier. In this case the here-document end marker must start and end with the regex introducer character of your choice (\c{/} in our case). Any line inside the here-document fragment that begins with this introducer is then treated as a regular expression rather than a literal (@@ ref to regex). While this was a fairly deep rabbit hole for a first example, it is a good illustration of how quickly things get complicated when testing real-world software. Now that we have tested the failure case, let's test the normal functionality. While we could have used here-document, in this case here-string will be more concise: \ $* 'World' >'Hello, World!' \ Nothing new here. It's also a good idea to document our tests. Testscript has a formalized test description that can capture the test \i{id}, \i{summary}, and \i{details}. All three components are optional and how thoroughly you document your tests is up to you. The description lines precede the test command. They start with a colon (\c{:}), and have the following layout: \ : <id> : <summary> : : <details> : ... \ The recommended format for \c{<id>} is \c{<keyword>-<keyword>...} with at least two keywords. The id is used in diagnostics, to name the test working directory, as well as to run individual tests.
The recommended style for \c{<summary>} is that of the \c{git(1)} commit summary. The detailed description is free-form. Here are some examples (\c{#} starts a comment): \ # Only id. # : missing-name $* 2>>\"EOE\" != 0 ... # Only summary. # : Test handling of missing name ... # Both id and summary. # : missing-name : Test handling of missing name ... # All three: id, summary, and a detailed description. # : missing-name : Test handling of missing name : : This test makes sure the program detects that the name to greet : was not specified on the command line and both prints usage and : exits with non-zero status. ... \ The recommended way to come up with an id is to distill the summary to its essential keywords by removing generic words like \"test\", \"handle\", and so on. If you do this, then both the id and summary will convey essentially the same information. As a result, to keep things concise, you may choose to drop the summary and only have the id. This is what we often do in \c{build2}. Note that if the id is not provided, then it will be automatically derived from the line number in the testscript. Either the id or summary (but not both) can alternatively be specified inline in the test command after a colon (\c{:}), for example: \ $* 'World' >'Hello, World!' : command-name \ Similar to handling output, Testscript provides a convenient way to supply input to the test's \c{stdin}. Let's say our \c{hello} program recognizes the special \c{-} name as an instruction to read the names from \c{stdin}. This is how we could test this functionality: \ $* - <<EOI >>EOO : stdin-names
Jane John EOI Hello, Jane! Hello, John! EOO \ As you might suspect, we can also use here-string to supply \c{stdin}, for example: \ $* - <'World' >'Hello, World!' : stdin-name \ Let's say our \c{hello} program has a configuration file that captures custom name-to-greeting mappings. A path to this file can be passed as a second command line argument. To test this functionality we first need to create a sample configuration file. We do these non-test actions with \i{setup} and \i{teardown} commands, for example: \ +cat <<EOI >>>hello.conf; John = Howdy Jane = Good day EOI $* 'Jane' hello.conf >'Good day, Jane!' : config-greet \ The setup commands start with the plus sign (\c{+}) while teardown \- with minus (\c{-}). Notice also the semicolon (\c{;}) at the end of the setup command: it indicates that the following command is part of the same test \- what we call a multi-command or \i{compound} test. Other than that it should all look familiar. You may be wondering why we don't have a teardown command that removes \c{hello.conf}. It is not necessary because this file will be automatically registered for cleanup that happens at the end of the test. We can also register our own files and directories for automatic cleanup. For example, if the \c{hello} program created the \c{hello.log} file on unsuccessful runs, then here is how we could have cleaned it up: \ $* ... &hello.log != 0 \ What if we wanted to run two tests for this configuration file functionality? For example, we may want to test the custom greeting as above but also make sure the default greeting is not affected. One way to do this would be to repeat the setup command in each test. But there is a better way: testscripts can define test groups. For example: \ : config { conf = $~/hello.conf +cat <<EOI >>>$conf John = Howdy Jane = Good day EOI $* 'John' $conf >'Howdy, John!' : custom-greet $* 'Jack' $conf >'Hello, Jack!' : default-greet } \ A test group is a scope that contains several test/setup/teardown commands. Variables set inside a scope (like our \c{conf}) are only in effect until the end of the scope. Plus, setup and teardown commands that are not part of any test (notice the lack of \c{;} after \c{+cat}) are associated with the scope; their automatic cleanup only happens at the end of the scope (so our \c{hello.conf} will only be removed after all the tests in the group have completed). Note also that a scope can have a description. In particular, assigning a test group an id allows us to run tests only from this specific group. The two other things we need to discuss in this example are \c{$~} and \c{cat}. The \c{$~} variable is easy: it stands for the test/group working directory. But what is \c{cat} exactly? While most POSIX systems will have a program with this name, there is no such thing in vanilla Windows. To help with this Testscript provides a subset (both in terms of the number and supported features) of POSIX utilities, such as \c{echo}, \c{touch}, \c{cat}, \c{mkdir}, \c{rm}, and so on (@@ ref builtins). Besides explicit group scopes each test is automatically placed in its own implicit test scope. However, we can make the test scope explicit, for example, for better visual separation of complex tests: \ : config-greet { conf = hello.conf +cat <'Jane = Good day' >>>$conf; $* 'Jane' $conf >'Good day, Jane!' } \ We can conditionally exclude sections of testscripts using \c{if-else} branching.
This can be done both at the scope level to exclude test or group scopes as well as at the command level to exclude individual commands or variable assignments. Let's start with a scope example by providing a Windows-specific implementation of a test: \ : config-empty : if ($cxx.target.class != windows) { $* 'Jane' /dev/null >'Hello, Jane!' } else { $* 'Jane' nul >'Hello, Jane!' } \ Note that the \c{if-else} chain is treated as variants of the same test, thus the single description at the beginning. Let's now see an example of command-level \c{if-else} by reimplementing the above as a single test with some branching and without using the \c{nul} device on Windows (notice the semicolon after \c{end}): \ : config-empty : if ($cxx.target.class != windows) conf = /dev/null else conf = empty touch $conf end; $* 'Jane' $conf >'Hello, Jane!' \ You may have noticed that in the above examples we referenced the \c{cxx.target.class} variable as if we were in a buildfile. We could do that because the testscript variable lookup continues in the buildfile starting from the testscript target and continuing with the standard buildfile variable lookup. In particular, this means we can pass arbitrary information to testscripts using target-specific variables. For example, this is how we can move the above platform test to \c{buildfile}: \ # buildfile exe{hello}: cxx{hello} test{testscript} test{*}: windows = ($cxx.target.class == windows) \ \ # testscript if! $windows conf = /dev/null else ... \ Note also that in cases when you simply need to conditionally pick a value for a variable, the \c{build2} evaluation context will often be more concise than \c{if-else}. For example: \ : config-empty : conf = ($windows ? nul : /dev/null); $* 'Jane' $conf >'Hello, Jane!' \ Similar to Bash, test commands can be chained with pipes (\c{|}) and combined with logical operators (\c{||} and \c{&&}). Let's say our \c{hello} program provides the \c{-o} option to write the result to a file instead of \c{stdout}. Here is how we could test it: \ $* -o hello.out - <<EOI &hello.out && cat hello.out >>EOO John Jane EOI Hello, John! Hello, Jane! EOO \ Similarly, if it has the \c{-r} option to reverse the greetings back to their names (as every \c{hello} program should), then we could write a test like this: \ $* - <<EOI | $* -r - >>EOO John Jane EOI John Jane EOO \ To conclude, let's put all our (sensible) tests together so that we can have a complete picture: \ $* 'World' >'Hello, World!' : command-name $* 'John' 'Jane' >>EOO : command-names Hello, John! Hello, Jane! EOO $* - <<EOI >>EOO : stdin-names Jane John EOI Hello, Jane! Hello, John! EOO : config { conf = $~/hello.conf +cat <<EOI >>>$conf John = Howdy Jane = Good day EOI $* 'John' $conf >'Howdy, John!' : custom-greet $* 'Jack' $conf >'Hello, Jack!' : default-greet } $* 2>>\"EOE\" != 0 : missing-name error: missing name usage: $0 EOE \ The execution of these tests happens in parallel. Testscript will start running all script-level tests as well as the \c{config} group immediately. Inside \c{config}, once the setup command (\c{cat}) is performed, the two inner tests are executed in parallel as well. @@ temp directory structure (why ../)? term: 'test working directory' \h1#integration|Build System Integration| The integration of testscripts into buildfiles is done using the standard \i{target-prerequisite} mechanism. In this sense, a testscript is a prerequisite that describes how to test the target similar to how, for example, the \c{INSTALL} file describes how to install it.
For example: \ exe{hello}: test{testscript} doc{INSTALL README} \ By convention the testscript file should either be called \c{testscript} if you only have one or have the \c{.test} extension, for example, \c{basics.test}. The \c{test} module registers the \c{test{\}} target type for testscript files. A testscript prerequisite can be specified for any target. For example, if our directory contains a bunch of shell scripts that we want to test together, then it makes sense to specify the testscript prerequisite for the directory target: \ ./: test{basics} \ During variable lookup if a variable is not found in a testscript, then its search continues in the buildfile starting with the target-specific variables of the target being tested (e.g., \c{exe{hello\}}; called \i{test target}), then target-specific variables of the testscript target (e.g., \c{test{basics\}}; called \i{script target}), and then continuing with the scopes starting from the one containing the testscript target. This means a testscript can \"see\" all the existing buildfile variables plus we can use target-specific variables to pass additional information to testscripts, for example: \ # basics.test if ($cxx.target.class == windows) test.arguments += $foo end if $windows test.arguments += $bar end \ \ # buildfile exe{hello}: test{basics} # All testscripts in this scope. # test{*}: windows = ($cxx.target.class == windows) # All testscripts for target exe{hello}. # exe{hello}: bar = BAR # Only basics.test. # test{basics}@./: foo = FOO \ Additionally, a number of \c{test.*} variables are used by convention to pass commonly required information to testscripts. Unless set manually as a test or script target-specific variable, the \c{test} variable is automatically set to the target path being tested. For example, given this \c{buildfile}: \ exe{hello}: test{testscript} \ The value of \c{test} inside the testscript will be the absolute path to the \c{hello} executable. If the \c{test} variable is set manually to the name of a target, then it is automatically converted to the target path. This can be useful when testing a program that is built in another subdirectory of a project. For example, our \c{hello} may reside in the \c{hello/} subdirectory while we may want to keep the tests in \c{tests/}: \ hello/ ├── hello/ │ └── hello* └── tests/ ├── buildfile └── testscript \ This is how we can implement \c{tests/buildfile} for this setup: \ hello = ../hello/exe{hello} ./: $hello test{testscript} ./: test = $hello include ../hello/ \ The other \c{test.} variables are \c{test.options}, \c{test.arguments}, \c{test.redirects}, and \c{test.cleanups}. You can use them to pass additional command line options, arguments, redirects, and cleanups to your testscripts and together with \c{test} they form the test target command line which, for conciseness, is bound to the following read-only variable aliases: \ $* - $test $test.options $test.arguments $test.redirects $test.cleanups $0 - $test $N - (N-1)-th element in the {$test.options $test.arguments} array \ Note that these aliases are read-only; if you need to modify any of these values from within testscripts, then you should use the original variable names, for example: \ test.options += --foo $* bar # Includes --foo. \ Note also that the \c{test.} variables only establish a convention. You could also put everything into, say, \c{test.arguments}, and it will still work as expected.
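To illustrate how these aliases expand, here is a sketch (the \c{--greeting} option and the values below are only assumed for this example, in the spirit of the introduction):

\
# buildfile
#
exe{hello}: test{testscript}
exe{hello}: test.options = --greeting 'Hi'
\

\
# testscript
#
$* 'World'  # $test --greeting Hi World
$0 'World'  # $test World
echo $1     # --greeting
\

Here \c{$1} is the first (0th) element of the array, that is, \c{--greeting}.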
A testscript would normally contain multiple tests and sometimes it is desirable to only run a specific test or a group of tests. For example, you may be debugging a failing test and would like to re-run it. Each test and group in a testscript has an id. As a result each test has an \i{id path} that uniquely identifies it. The id path starts with the testscript file name (which corresponds to the id of the implied outermost group, as described below), may include a number of intermediate group ids, and ends with the test id. The ids in a path are separated with a forward slash (\c{/}). Note that this also happens to be the filesystem path to the temporary directory where the test is executed (again, as discussed below). As an example, consider the following testscript file called \c{basics.test}: \ $* foo : foo : fox { $* fox bar : bar $* fox baz : baz } \ The id paths for the three tests will then be: \ basics/foo basics/fox/bar basics/fox/baz \ To only run individual tests, test groups, or testscript files we can specify their id paths in the \c{config.test} variable, for example: \ $ b test config.test=basics # Run all tests in basics.test. $ b test config.test=basics/fox # Run bar and baz. $ b test config.test=basics/foo # Run foo. $ b test \"config.test=basics/foo basics/fox/bar\" # Run foo and bar. \ @@ Maybe test.id instead of config.test? @@ What about running from root, with multiple basics.test? \h1#lexical|Lexical Structure| Testscript is a line-oriented language with a context-dependent lexical structure. It \"borrows\" several building blocks (variable expansion, function calls, and evaluation contexts; collectively called \i{expansions} from now on) from the Buildfile language. In a sense, testscripts are specialized (for testing) continuations of buildfiles. Except in here-document fragments, leading whitespaces and blank lines are ignored except for the line/column counts. A non-empty testscript must end with a newline. Except in single-quoted strings and single-quoted here-document fragments, the backslash (\c{\\}) character followed by a newline signals the line continuation. Both this character and the newline are removed (note: not replaced with a whitespace) and the following line is read as if it was part of the first line. Note that \c{'\\'} followed by EOF is invalid. For example: \ $* foo | \ $* bar \ Except in here-document fragments, an unquoted and unescaped \c{'#'} character starts a comment; everything from this character until the end of the line is ignored. For example: \ # Setup foo. $* foo $* bar # Setup bar. \ There is no line continuation support in comments; the trailing \c{'\\'} is ignored except in one case: if the comment is just \c{'#\\'} followed by the newline, then it starts a multi-line comment that spans until the closing \c{'#\\'} is encountered. For example: \ #\ $* foo $* bar #\ $* foo #\ $* bar $* baz #\ \ Similar to Buildfile, the Testscript language supports two types of quoting: single (\c{'}) and double (\c{\"}). Both can span multiple lines. The single-quoted strings and single-quoted here-document fragments do not recognize any escape sequences (not even for the single quote itself or line continuations) or expansions with all the characters taken literally until the closing single quote or here-document end marker is encountered. The double-quoted strings and double-quoted here-document fragments recognize escape sequences (including line continuations) and expansions.
For example: \ foo = FOO # 'FOO true' # bar = \"$foo ($foo == FOO)\" # 'FOO bool' # $* <<\"EOI\" $foo $type($foo == FOO) EOI \ Characters that have special syntactic meaning (for example \c{'$'}) can be escaped with a backslash (\c{\\}) to preserve their literal meaning (to specify a literal backslash you need to escape it as well). For example: \ foo = \$foo\\bar # '$foo\bar' \ Note that quoting could often be a more readable way to achieve the same result, for example: \ foo = '$foo\bar' \ Inside double-quoted strings only the \c{\"\\$(} character set needs to be escaped. Inside double-quoted here-document fragments \- only \c{\\$(}. The lexical structure of a line depends on its type. The line type may be dictated by the preceding construct, as is the case for here-document fragments. Otherwise the line type is determined by examining the leading character and, if that fails to determine the line type, leading tokens, as described next. A character is said to be \i{unquoted} and \i{unescaped} if it is not escaped and is not part of a quoted string. A token is said to be unquoted and unescaped if all its characters are unquoted and unescaped. The following characters determine the line type if they appear unquoted and unescaped at the beginning of the line: \ ':' - description line '.' - directive line '{' - block start '}' - block end '+' - setup command line '-' - teardown command line \ If the line doesn't start with any of these characters then the first token of the line is examined in the \c{first_token} mode (see below). If the first token is an unquoted word, then the second token of the line is examined in the \c{second_token} mode (see below). If it is a variable assignment (either \c{+=}, \c{=+}, or \c{=}), then the line type is a variable line. Otherwise, it is a test command line. Note that this means computed variable names are not supported. The Testscript language defines the following distinct lexing modes (or contexts): \dl| \li|\n\n\cb{command_line}\n Whitespaces are token separators. The following characters and character sequences (read vertically) are recognized as tokens: \ :;=!|&<>$(# == \ | \li|\n\n\cb{first_token}\n Like \c{command_line} but recognizes variable assignments as separators.| \li|\n\n\cb{second_token}\n Like \c{command_line} but recognizes variable assignments as tokens.| \li|\n\n\cb{command_expansion}\n Subset of \c{command_line} used for re-lexing expansions (see below). Only the \c{|&<>} characters are recognized as tokens. Note that whitespaces are not separators in this mode.| \li|\n\n\cb{variable_line}\n Similar to the Buildfile value mode. The \c{;$([]} characters are recognized as tokens.| \li|\n\n\cb{description_line}\n Like a single-quoted string.| \li|\n\n\cb{here_line_single}\n Like a single-quoted string except it treats newlines as a separator and quotes as literals.| \li|\n\n\cb{here_line_double}\n Like a double-quoted string except it treats newlines as a separator and quotes as literals. The \c{$(} characters are recognized as tokens.|| Besides having varying lexical structure, parsing some line types involves performing expansions (variable expansions, function calls, and evaluation contexts).
The following table summarizes the mapping of line types to lexing modes and indicates whether they are parsed with expansions: \ variable line variable_line directive line command_line expansions description line description_line test command line command_line expansions setup command line command_line expansions teardown command line command_line expansions here-document single-quoted here_line_single here-document double-quoted here_line_double expansions \ Finally, unquoted expansions in command lines (test, setup, and teardown) are re-lexed in the \c{command_expansion} mode in order to recognize command line syntax tokens (redirects, pipes, etc). To illustrate why this re-lexing is necessary, consider the following example of a \"canned\" command line: \ x = echo >- $x foo \ The command line token sequence will be \c{$}, \c{x}, \c{foo}. After the expansion we get \c{echo}, \c{>-}, \c{foo}, however, the second string is not (yet) recognized as a redirect. To achieve this we need to re-lex the result of the expansion. Note that besides the few command line syntax characters, re-lexing will also \"consume\" quotes and escapes, for example: \ args = \"'foo'\" # 'foo' echo $args # echo foo \ To preserve quotes in this context we need to escape them: \ args = \"\'foo\'\" # \'foo\' echo $args # echo 'foo' \ Alternatively, for a single value, we could quote the expansion: \ arg = \"'foo'\" # 'foo' echo \"$arg\" # echo 'foo' \ To minimize unhelpful consumption of escape sequences (e.g., in Windows paths), re-lexing performs only \"effective escaping\" for the \c{'\"\\} characters. All other escape sequences are passed through uninterpreted. Note that this means there is no way to escape command line syntax characters. The idea is to use quoting except for passing literal quotes, for example: \ args = \'&foo\' # '&foo' echo $args # echo &foo \ \h1#grammar|Grammar and Semantics| \h#grammar-notation|Notation| The formal grammar of the Testscript language is specified using an EBNF-like notation with the following elements: \ foo: ... - production rule foo - non-terminal <foo> - terminal 'foo' - literal foo* - zero or more multiplier foo+ - one or more multiplier foo? - zero or one multiplier foo bar - concatenation (foo then bar) foo | bar - alternation (foo or bar) (foo bar) - grouping {foo bar} - grouping in any order (foo then bar or bar then foo) foo\ bar - line continuation # foo - comment \ Rule right-hand-sides that start on a new line describe the line-level syntax and ones that start on the same line describe the syntax inside the line. If a rule contains multiple lines, then each line matches a separate line in the input. If a multiplier appears in front of the line then it specifies the number of repetitions for the whole line. For example, from the following three rules, the first describes a single line of multiple literals, the second \- multiple lines of a single literal, and the third \- multiple lines of multiple literals. \ # foofoofoo # text-line: 'foo'+ # foo # foo # foo # text-lines: +'foo' # foo # foofoo # foofoofoo # text-lines: +('foo'+) \ A newline in the grammar matches any standard newline separator sequence (CR/LF combinations). An unquoted space in the grammar matches zero or more non-newline whitespaces (spaces and tabs). A quoted space matches exactly one non-newline whitespace.
Note also that in some cases components within lines may not be whitespace-separated in which case they will be written without any spaces between them, for example: \ foo: 'foo' ';' # 'foo;' or 'foo ;' or 'foo  ;' bar: 'bar'';' # 'bar;' baz: 'baz'' '+';' # 'baz ;' or 'baz  ;' fox: bar''bar # 'bar;bar;' \ You may also notice that several production rules below end with \c{-line} while potentially spanning several physical lines. In this case they represent \i{logical lines}, for example, a command line and its here-document fragments. \h#grammar-all|Testscript Grammar| The complete grammar of the Testscript language is presented next with the following sections discussing the semantics of each production rule. \ script: scope-body scope-body: *setup *(scope|directive|test) *tdown scope: ?description scope-block|scope-if scope-block: '{' scope-body '}' scope-if: ('if'|'if!') command-line scope-block *scope-elif ?scope-else scope-elif: ('elif'|'elif!') command-line scope-block scope-else: 'else' scope-block directive: '.' include include: 'include' (' '+'--once')*(' '+<path>)* setup: variable-like|setup-line tdown: variable-like|tdown-line setup-line: '+' command-like tdown-line: '-' command-like test: ?description +(variable-line|command-like) variable-like: variable-line|variable-if variable-line: <variable-name> ('='|'+='|'=+') value-attributes? <value> value-attributes: '[' <key-value-pairs> ']' variable-if: ('if'|'if!') command-line variable-if-body *variable-elif ?variable-else 'end' variable-elif: ('elif'|'elif!') command-line variable-if-body variable-else: 'else' variable-if-body variable-if-body: *variable-like command-like: command-line|command-if command-line: command-expr (';'|(':' <text>))? *here-document command-expr: command-pipe (('||'|'&&') command-pipe)* command-pipe: command ('|' command)* command: <path>(' '+(<arg>|redirect|cleanup))* command-exit? command-exit: ('=='|'!=') <exit-status> command-if: ('if'|'if!') command-line command-if-body *command-elif ?command-else 'end' (';'|(':' <text>))? command-elif: ('elif'|'elif!') command-line command-if-body command-else: 'else' command-if-body command-if-body: *(variable-line|command-like) redirect: stdin|stdout|stderr stdin: '0'?(in-redirect) stdout: '1'?(out-redirect) stderr: '2'(out-redirect) in-redirect: '<-'|\ '<+'|\ '<'{':'?} <text>|\ '<<'{':'?} <here-end>|\ '<<<' <file> out-redirect: '>-'|\ '>+'|\ '>&' ('1'|'2')|\ '>'{':'?'~'?} <text>|\ '>>'{':'?'~'?} <here-end>|\ '>>>'{'&'?} <file> cleanup: ('&'|'&!'|'&?') (<file>|<dir>) here-document: *<here-line> <here-end> description: +(':' <text>) \ \h#grammar-script|Script| \ script: scope-body \ A testscript file is an implicit group scope with its id being the file name without the \c{.test} extension. \h#grammar-scope|Scope| \ scope-body: *setup *(scope|directive|test) *tdown scope: ?description scope-block|scope-if scope-block: '{' scope-body '}' \ A scope is either a test group scope or an explicit test scope. An explicit scope is a test scope if it contains a single test, has only variable assignments as setup commands, has no teardown commands, and only the scope itself has a description, if any. Otherwise, it is a group scope. If there is no explicit scope for a test, one is established implicitly. Group scopes are used to organize related tests with potentially shared variables as well as setup and teardown commands. Explicit test scopes are normally used for better visual separation of complex tests. A scope establishes a nested variable and cleanup context. A variable set within a scope will only have effect until the end of this scope. All scope-level cleanups are triggered at the end of the scope.
Entering a scope triggers the creation of a nested temporary directory with the group/test id as its name. This directory then becomes the group/test working directory (\c{CWD}). When leaving the scope, this temporary directory is automatically removed provided that it is empty. If it is not empty, the test fails (unexpected output). As an example, consider the following testscript file which we assume is called \c{basics.test}: \ test &out-test: test : group { foo = bar +setup1 +setup2 &out-setup2 test1 &out-test1: test1 : test2 { foo = baz test2 $foo } test3 $foo: test3 -teardown2 -teardown1 } \ Below is its annotated version that shows all the \i{as-if} transformations as well as various actions performed during its execution: \ # Set CWD=$out_root/ : basics # Implicit group scope for the script. { # Create basics/ subdirectory, set CWD=.../basics/ : test # Implicit test scope. { # Create test/ subdirectory, set CWD=.../basics/test/ test &out-test } # Remove out-test, remove test/, set CWD=.../basics/ : group { # Create group/ subdirectory, set CWD=.../basics/group/ # Execute setup commands foo = bar +setup1 +setup2 &out-setup2 : test1 # Implicit test scope. { # Create test1/ subdirectory, set CWD=.../group/test1/ test1 &out-test1: test1 } # Remove out-test1, remove test1/, set CWD=.../group/ : test2 { # Create test2/ subdirectory, set CWD=.../group/test2/ foo = baz test2 $foo # test2 baz } # Inner variable foo is no longer in effect # Remove test2/, set CWD=.../group/ : test3 # Implicit test scope. { # Create test3/ subdirectory, set CWD=.../group/test3/ test3 $foo # test3 bar } # Remove test3/, set CWD=.../group/ -teardown2 -teardown1 } # Execute teardown commands # Variable foo is no longer in effect # Remove out-setup2, group/, set CWD=.../basics/ } # Remove basics/, set CWD=$out_root/ \ Because of this nested directory structure, a test can use \c{../}-based relative paths to refer to, for example, a file created by a group's setup command. For example: \ { +setup >>>test.conf test1 ../test.conf test2 ../test.conf } \ Alternatively, one can use an absolute path: \ { conf = $~/test.conf +setup >>>$conf test1 $conf test2 $conf } \ \h#grammar-scope-if|Scope-If| \ scope-if: ('if'|'if!') command-line scope-block *scope-elif ?scope-else scope-elif: ('elif'|'elif!') command-line scope-block scope-else: 'else' scope-block \ A scope, either test or group, can be executed conditionally. The condition \c{command-line} is executed in the context of the outer scope. Note that all the scopes in an \c{if-else} chain are alternative implementations of the same group/test (thus the single description). If at least one of them is a group scope, then all the others are treated as groups as well. \h#grammar-directive|Directive| \ directive: '.' include \ A line that starts with \c{.} is a Testscript directive. Note that directives are evaluated during parsing, before any command is executed or testscript variable is assigned. You can, however, use variables assigned in the buildfile. For example: \ .include common-$(cxx.target.class).test \ \h2#grammar-directive-include|Include| \ include: 'include' (' '+'--once')*(' '+<path>)* \ While in the grammar the \c{include} directive is shown to only appear interleaving with scopes and tests, it can be used anywhere in the scope body. The included file can also contain several parts of a scope, for example, setup and test lines. The \c{--once} option signals that files that have already been included in this scope should not be included again.
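For example, assuming a \c{common.test} file (a name used here purely for illustration) that contains shared setup, the second inclusion below would be skipped:

\
.include --once common.test
.include --once common.test  # Skipped: already included in this scope.
\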
The implementation is not required to handle links when determining if two paths are to the same file. Relative paths are assumed to be relative to the including testscript file. \h#grammar-setup-teardown|Setup and Teardown| \ setup: variable-like|setup-line tdown: variable-like|tdown-line setup-line: '+' command-like tdown-line: '-' command-like \ Setup and teardown commands are executed sequentially in the order specified. Note that variable assignments (including \c{variable-if}) do not use the \c{'+'} and \c{'-'} prefixes. A standalone (not part of a test) variable assignment is automatically treated as setup if no tests have yet been encountered in this scope and as teardown otherwise. \h#grammar-test|Test| \ test: ?description +(variable-line|command-like) \ A test that contains multiple lines is called \i{compound}. In this case each (logical) line except the last must end with a semicolon to signal the test continuation. For example: \ conf = test.conf; cat <'verbose = true' >>>$conf; test $conf \ \h#grammar-variable|Variable| \ variable-like: variable-line|variable-if variable-line: <variable-name> ('='|'+='|'=+') value-attributes? <value> value-attributes: '[' <key-value-pairs> ']' \ The Testscript variable assignment semantics is equivalent to that of Buildfile except that no \c{{\}}-based name-generation is performed. For example: \ args = [strings] foo bar 'fox baz' echo $args # foo bar fox baz \ \h#grammar-variable-if|Variable-If| \ variable-if: ('if'|'if!') command-line variable-if-body *variable-elif ?variable-else 'end' variable-elif: ('elif'|'elif!') command-line variable-if-body variable-else: 'else' variable-if-body variable-if-body: *variable-like \ A group of variables can be set conditionally. The condition \c{command-line} semantics is the same as in \c{scope-if}. For example: \ if ($cxx.target.class == 'windows') slash = \\ case = false else slash = / case = true end \ When conditionally setting a single variable, using the evaluation context with a ternary operator is often more concise: \ slash = ($cxx.target.class == 'windows' ? \\ : /) \ Note also that the only purpose of having a separate (from \c{command-if}) variable-only if-block is to remove the error-prone requirement of having to specify \c{+} and \c{-} prefixes in group setup/teardown. \h#grammar-command|Command| \ command-like: command-line|command-if command-line: command-expr (';'|(':' <text>))? *here-document command-expr: command-pipe (('||'|'&&') command-pipe)* command-pipe: command ('|' command)* command: <path>(' '+(<arg>|redirect|cleanup))* command-exit? command-exit: ('=='|'!=') <exit-status> \ A \c{command-line} is a \c{command-expr}. If it appears directly (as opposed to inside \c{command-if}) in a test, then it can be followed by \c{;} to signal the test continuation or by \c{:} and the trailing description. A \c{command-expr} can combine several \c{command-pipe}'s with logical AND and OR operators. Note that the evaluation order is always from left to right (left-associative), both operators have the same precedence, and are short-circuiting. Note, however, that short-circuiting does not apply to expansions (variable, function calls, evaluation contexts). The logical result of a \c{command-expr} is the result of the last \c{command-pipe} executed. A \c{command-pipe} can combine several \c{command}'s with a pipe (\c{stdout} of the left-hand-side command is connected to \c{stdin} of the right-hand-side). The logical result of a \c{command-pipe} is the logical AND of all its \c{command}'s.
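As an illustration (reusing the \c{hello} program from the introduction as the test target), a pipe and a logical operator could be combined along these lines:

\
echo 'John' | $* - >'Hello, John!'                      : pipe
$* 'John' >'Hello, John!' && $* 'Jane' >'Hello, Jane!'  : both
\

Note that, per the grammar above, the trailing description belongs to the whole \c{command-line} rather than to the last \c{command}.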
A \c{command} begins with a command path followed by options/arguments, redirects, and cleanups, all optional and in any order. A \c{command} may specify an exit code check. If executing a \c{command} results in an abnormal process termination, then the whole outer construct (e.g., test, setup/teardown, etc.) summarily fails. Otherwise (that is, in case of a normal termination) the exit code is checked. If omitted, then the test is expected to succeed (0 exit code). The logical result of executing a \c{command} is therefore a boolean value which is used in the higher-level constructs (pipe and expression). \h#grammar-command-if|Command-If| \ command-if: ('if'|'if!') command-line command-if-body *command-elif ?command-else 'end' (';'|(':' <text>))? command-elif: ('elif'|'elif!') command-line command-if-body command-else: 'else' command-if-body command-if-body: *(variable-line|command-like) \ A group of commands can be executed conditionally. The condition \c{command-line} semantics is the same as in \c{scope-if}. Note that in a compound test commands inside \c{command-if} must not end with \c{;}. Rather, \c{;} may follow \c{end}. For example: \ if ($cxx.target.class == 'windows') foo = windows setup1 setup2 else foo = posix end; test $foo \ \h#grammar-redirect|Redirect| \ redirect: stdin|stdout|stderr stdin: '0'?(in-redirect) stdout: '1'?(out-redirect) stderr: '2'(out-redirect) \ The file descriptors must not be separated from the redirect operators with whitespaces. And if leading text is not separated from the redirect operators, then it is expected to be the file descriptor. As an example, the first command below has \c{2} as an argument and redirects \c{stdout}, not \c{stderr}, while the second is invalid since \c{a1} is not a valid file descriptor. \ $* 2 >- $* a1>- \ \h#grammar-in-redirect|Input Redirect| \ in-redirect: '<-'|\ '<+'|\ '<'{':'?} <text>|\ '<<'{':'?} <here-end>|\ '<<<' <file> \ The \c{stdin} data can come from a pipe, here-string (\c{<}), here-document (\c{<<}), a file (\c{<<<}), or \c{/dev/null}-equivalent (\c{<-}). Specifying both a pipe and a redirect is an error. If no pipe or \c{stdin} redirect is specified and the test tries to read from \c{stdin}, it is considered to have failed. However, whether this is detected and diagnosed is implementation-defined. To allow reading from the default \c{stdin} (for instance if the test is really an example), the \c{<+} redirect is used. The \c{:} here-string and here-document redirect modifier is used to suppress the otherwise automatically-added terminating newline. \h#grammar-in-output|Output Redirect| \ out-redirect: '>-'|\ '>+'|\ '>&' ('1'|'2')|\ '>'{':'?'~'?} <text>|\ '>>'{':'?'~'?} <here-end>|\ '>>>'{'&'?} <file> \ ======================================================================= The \c{stdout} and \c{stderr} stream data can go to a pipe (\c{stdout} only), file (append if \c{>>>&}), or \c{/dev/null} (\c{>-}). It can also be compared to a here-string or a here-document fragment. For \c{stdout}, specifying both a pipe and a redirect is an error. If no explicit \c{stderr} redirect is specified and the test is expected to fail (non-zero exit status), then an implicit \c{2>-} redirect is assumed. If no \c{stdout} or \c{stderr} redirect is specified and the test tries to write any data to either stream, it is considered to have failed. If you need to allow writing to the default \c{stdout} or \c{stderr}, specify \c{>+} and \c{2>+}, respectively. We can also merge \c{stderr} to \c{stdout} (\c{2>&1}) or vice versa (\c{1>&2}).
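For example (again borrowing the \c{hello} program from the introduction), output could be discarded or merged along these lines:

\
$* 'World' >-           : ignore-out
$* 2>&1 >>\"EOO\" != 0    : merge-err
error: missing name
usage: $0
EOO
\

The first test discards the greeting while the second merges the diagnostics printed to \c{stderr} into \c{stdout} and compares the combined output to the here-document fragment.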
If a command creates extra files or directories then we can register them for automatic cleanup at the end of the test. Files mentioned in redirects are registered automatically. Note that unlike in a shell, no whitespaces are allowed around the \c{<} and \c{>} redirects or after the \c{&} cleanups. A here-document redirect must be specified \i{literally} on the test command line. Specifically, it must not be the result of a variable expansion or context evaluation, which rarely makes sense anyway since the following here-document fragment itself cannot be the result of the expansion/evaluation either; in a sense they both are part of the syntax. This requirement is imposed in order to be able to skip test lines and their associated here-document fragments in the \c{if-else} directives without performing any expansions/evaluations (which may not be valid). The skipping procedure for a line that is either a variable assignment or a test command is as follows: The line is lexed until the newline or EOF while checking each token for either one of the variable assignment operators or a here-document redirect. If both kinds are present then this is an ambiguity error which can be resolved by quoting either of the tokens, depending on the desired semantics (variable assignment or test command). Otherwise, all the here-document redirects are noted and the corresponding number of here-document fragments is skipped (with \c{here-end} match/order validation). Note also that this procedure is applied even in case of \c{if-else} with \c{directive-block} since the block end (\c{.\}}) may appear literally in one of the here-document fragments. ======================================================================= In merge redirects the left-hand-side descriptor (implied or explicit) must not be the same as the right-hand-side. Having both merge redirects at the same time is illegal. A \c{here-line} is like a double-quoted string but recognizes newlines. It is an error to specify both leading and trailing descriptions. \h#grammar-line|Line| \ script-line: directive-line | \ variable-line | \ test-line | setup-line | teardown-line \ A testscript line is either a directive, a variable assignment, a setup/teardown command, or a test command. To distinguish between the variable assignment and test command line the parsing and expansion is performed in the \i{chunking} mode, that is, the parser parses a minimum amount of semantically complete input and stops. If parsing the first chunk of the input results in a single simple name and the following lexer token is one of \c{=}, \c{+=}, or \c{=+}, then this line is treated as a variable assignment. Otherwise, it is a test command line. Similar to the Buildfile language, this semantics supports indirect/computed variable names, for example: \ foo = bar $foo = baz \ \h#grammar-description|Description| \ description-line: ': ' <text> (': ' <text>)* \ Description lines start with a colon (\c{:}) and are used to document tests (either single-line or compound) as well as test groups. In a sense, they are formalized comments. By convention the description has the following format with all three components being optional. \ : <id> : <summary> : : <details>
\ If the first line in the description does not contain any whitespaces, then it is assumed to be the test or test group id. If the next line is followed by a blank line, then it is assumed to be the test or test group summary. After the blank line come optional details which are free-form. If an id is not specified then it is automatically derived from the test or test group location. If the test or test group is contained directly in the top-level testscript file, then just its start line number is used as an id. Otherwise, if the test or test group resides in an included file, then the start line number (inside the included file) is prefixed with the line number of the \c{.include} directive followed by the included file name (without the extension) in the form \c{<include-line>-<file>-<line>}. This process is repeated recursively for nested inclusion. The start line for a block (either test or group) is the line containing the opening brace (\c{{}) and for a simple test \- the test line itself. \h#grammar-directives|Directives| \ directive-line: include if-else \ All directive lines start with a leading dot (\c{.}). To specify a non-directive line that starts with a dot you can either escape or quote it, for example: \ \.include '.include' \ \h2#grammar-directives-include|\c{.include}| \ include: '.include' (' '+<path>)+ \ The \c{include} directive includes one or more testscript files into the current file. If the specified path is not absolute, then it is interpreted as being relative to the including file. The semantics of inclusion is \i{as if} the contents of the included file appeared directly in the including file except for deriving test/group ids and displaying locations in diagnostics. The remainder of the line after the \c{'.include'} word is expanded as a Buildfile variable value. \h2#grammar-directives-if-else|\c{.if} \c{.else}| \ if-else: ('.if' | '.if!') script elif* else? '.end' elif: ('.elif' | '.elif!') script else: '.else' script \ The \c{if-else} directives allow for conditional exclusion of testscript fragments. The body of the \c{if-else} directive can be either a single (logical) line, a single block, or multiple lines/blocks. For example: \ .if ($foo == FOO) bar = BAR .if ($cxx.target.class != windows) $* foo .if ($cxx.target.class != windows) { $* foo $* bar } .if ($foo == FOO) .{ $* foo bar = BAR baz = BAZ { $* $bar $* $baz } .} \ Note that \c{if-else} operates on logical lines/blocks, for example: \ .if ($foo == FOO) : foo-bar : Test foo bar combination $* foo bar >>EOO foo bar EOO .if ($foo == FOO) : foo-bar : Test foo bar combination : foo-bar { $* foo $* bar } \ The remainder of the line after the \c{'.if'} and \c{'.elif'} words is expanded as a Buildfile variable value and should evaluate to either the \c{'true'} or \c{'false'} text literal. \h#grammar-here-document|Here-Document| \ here-document: *<here-line> <here-end> \ The here-document fragments can be used to supply data to \c{stdin} or to compare output to the expected result for \c{stdout} and \c{stderr}. Note that the order of here-document fragments must match the order of redirects, for example: \ : select-no-table-error $* --interactive >>EOO <<EOI 2>>EOE enter query: EOO SELECT * FROM no_such_table EOI error: no such table 'no_such_table' EOE \ Here-strings can be single-quoted literals or double-quoted with expansion. This semantics is extended to here-documents as follows. If the end marker on the command line is single-quoted, then the here-document lines are parsed as if they were single-quoted except that the single quote itself is not treated as special.
In this mode there are no expansions, escape sequences, not even line continuations \- each line is taken literally. If the end marker on the command line is double-quoted, then the here-document lines are parsed as if they were double-quoted except that the double quote itself is not treated as special. In this mode we can use variable expansions, function calls, and evaluation contexts. However, we have to escape the \c{$(\\} character set. If the end marker is not quoted then it is treated as if it were single-quoted. Note also that quoted end markers must be quoted \i{wholly}, that is, from the beginning and until the end and without any interruptions. If the preceding command line starts with leading whitespaces, then the equivalent number is stripped (if present) from each here-document line (including the end marker). For example, the following two testscript fragments are equivalent: \
{
  $* <<EOI
  foo
  EOI
}

{
$* <<EOI
foo
EOI
}
\ To handle output variability, the expected result in both here-strings and here-documents can be specified as a regular expression rather than a literal by using the \c{~} redirect modifier, for example: \ $* >~'/fo+/' 2>>~/EOE/ /ba+r/ baz EOE \ The regular expression used for output matching has two levels. At the outer level the expression is over lines with each line treated as a single character. We will refer to this outer expression as \i{line-regex} and to its characters as \i{line-char}. A line-char can be a literal line (like \c{baz} in the example above) in which case it will only be equal to an identical line in the output. Or a line-char can be an inner level regex (like \c{ba+r} above) in which case it will be equal to any line in the output that matches this regex. Where not clear from context we will refer to this inner expression as \i{char-regex} and its characters as \i{char}. A line is treated as literal unless it starts with the \i{regex introducer character} (\c{/} in the above example). In contrast, the line-regex is always in effect (in a sense, the \c{~} modifier is its introducer). Note that the here-string regex naturally must always start with an introducer. A char-regex line that starts with an introducer must also end with one optionally followed by \i{match flags}. Currently the only supported flag is \c{i} for case-insensitive match. For example: \ $* >>~/EOO/ /ba+r/i /ba+z/i EOO \ Any character can act as a regex introducer. For here-strings it is the first character in the string. For here-documents the introducer is specified as part of the end marker. In this case the first character is the introducer, everything after that and until the second occurrence of the introducer is the actual end marker, and everything after that are global match flags. Global match flags apply to every char-regex (but not literal line) in this here-document. Note that there is no way to escape the introducer character inside the regex. As an example, here is a shorter version of the previous example that also uses a different introducer character. \ $* >>~%EOO%i %ba+r% %ba+z% EOO \ By default a line-char is treated as an ordinary, non-syntax character with regard to the line-regex. Lines that start with a regex introducer but do not end with one are used to specify syntax line-chars. Such syntax line-chars can also be specified after (or instead of) match flags. For example: \ $* >>~/EOO/ /( /fo+x/| /ba+r/| /ba+z/ /)+ EOO \ As an illustration, if we call the \c{/fo+x/} expression \c{A}, \c{/ba+r/} \- \c{B}, and \c{/ba+z/} \- \c{C}, then we can represent the above line-regex in the following more traditional form: \ (A|B|C)+ \ Only characters from the \c{.()|*+?{\}\\0123456789,=!} set are allowed as syntax line-chars with the presence of any other character being an error.
A blank line as well as the \c{//} sequence (assuming \c{/} is the introducer) are treated as an empty line-char. For the purpose of matching, newlines are viewed as separators rather than being part of a line. In particular, in this model, the customary trailing newline at the end of the output introduces a trailing empty line-char. As a result, unless the \c{:} (no newline) redirect modifier is used, an empty line-char is implicitly added to the line-regex. \h1#style|Style Guide| This section describes the Testscript style that is used in the \c{build2} project. The primary goal of testing in \c{build2} is not to exhaustively test every possible situation. Rather, it is to keep tests comprehensible and maintainable in the long run. To this effect, don't try to test every possible combination; this striving will quickly lead to everyone drowning in hundreds of tests that are only slight variations of each other. Sometimes combination tests are useful but generally keep things simple and test one thing at a time. The belief here is that real-world usage will uncover much more interesting interactions (which must become regression tests) that you would never have thought of yourself. To quote a famous physicist, \"\i{... the imagination of nature is far, far greater than the imagination of man.}\" To expand on combination tests, don't confuse them with corner case tests. As an example, say you have tests for features A and B. Now you wonder what happens if for some reason they don't work together. Note that you don't have a clear understanding of why they might not work together; you just want to add one more test, \i{for good measure}. We don't do that. To put it another way, for each test you should have a clear understanding of which logic in the code you are testing. One approach that we found works well is to look at the diff of changes you would like to commit and make sure you at least have a test that exercises each \i{happy} (non-error) \i{logic branch}. For critical code you may also want to do so for unhappy logic branches. It is also a good idea to keep testing in mind as you implement things. When tempted to add a small special case just to make the result \i{nicer}, remember that you will also have to test this special case. If the functionality is well exposed in the program, prefer functional to unit tests since the former test the end result rather than something intermediate and possibly mocked. If unit-testing a complex piece of functionality, consider designing a concise textual \i{mini-format} for input (either via command line or \c{stdin}) and output rather than constructing the test data and expected results programmatically. Documentation-wise, each test should at least include an explicit id that adequately summarizes what it tests. Add a summary or even details for more complex tests. Failure tests usually fall into this category. Use the leading description for multi-line tests, for example: \ : multi-name : $* 'John' 'Jane' >>EOO Hello, John! Hello, Jane! EOO \ Here is an example of a description that includes all three components: \ : multi-name : Test multiple name arguments : : This test makes sure we properly handle multiple names passed as : separate command line arguments. : $* 'John' 'Jane' >>EOO Hello, John! Hello, Jane! EOO \ Separate multi-line tests with blank lines. You may want to place larger tests into explicit test scopes for better visual separation (this is especially helpful if the test contains blank lines, for example, in here-document fragments).
In this case the description should come before the scope. Note that here-documents are indented as well. For example: \ : multi-name : { $* 'John' 'Jane' >>EOO Hello, John! Hello, Jane! EOO } \ One-line tests may use the trailing description (which must always be the test id). Within a test block (one-liners without a blank line between them), the ids should be aligned, for example: \ $* John >'Hi, John!' : custom-john $* World >'Hello, World!' : custom-world \ Note that you are free to put multiple spaces between the end of the command line and the trailing description. But don't try to align ids between blocks \- this is a maintenance pain. If multiple tests belong to the same group, consider placing them into an explicit group scope. A good indication that tests form a group is if their ids start with the same prefix, as in the above example. If placing tests into a group scope, use the prefix as the group's id and don't repeat it in the tests. It is also a good idea to give a summary for the group, for example: \ : custom : Test custom greetings : { $* John >'Hi, John!' : john $* World >'Hello, World!' : world } \ In the same vein, don't repeat the testscript id in group or test ids. For example, if the above tests were in the \c{greeting.test} testscript, then using \c{custom-greeting} as the group id would be unnecessarily repetitive since the id path would become, for example, \c{greeting/custom-greeting/john}. We quote values that are \i{strings} as opposed to options, file names, paths (unless they contain spaces), integers, or booleans. When quoting, use single quotes unless you use expansion (or single quotes) inside. Note that unlike Bash you do not need to quote variable expansions in order to preserve whitespaces. For example: \ arg = 'Hello Spaces' echo $arg # Hello Spaces \ For further reading on testing that we (mostly) agree with, see: \dl| \li|\n\n\l{https://blog.nelhage.com/2016/12/how-i-test/ How I Write Tests} by Nelson Elhage\n The only part we don't agree on is the (somewhat implied) suggestion to write as many tests as possible.||