Prerequisites
-------------
The test system itself requires:

 - bash(1) version 4.0 or newer

Without bash 4.0+ the tests just refuse to run.

Some tests require external dependencies to run. Without them, they
will be skipped, or (rarely) marked failed. Please install these, so
that you know if you break anything.
 - GNU tar(1)
- dtach(1)
- emacs(1)
- emacsclient(1)
- gpg(1)
- python(1)
If your system lacks these tools or has only older, non-upgradable
versions of them, please install them (compiling them yourself if
necessary) to some other path, for example /usr/local/bin or
/opt/gnu/bin. Then prepend the chosen directory to your PATH before
running the tests, e.g.

    env PATH=/opt/gnu/bin:$PATH make test

For FreeBSD you need to install the latest gdb from ports or packages
and provide its path in the TEST_GDB environment variable before
executing the tests; the native FreeBSD gdb does not work. If you
install coreutils, which provides GNU versions of basic utilities such
as 'date' and 'base64', the test suite will use these instead of the
native ones. This provides robustness against portability issues with
these system tools. Most often the tests are written, reviewed and
tested on Linux systems, so such portability issues arise from time to
time.
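For example, to point the suite at a ports-installed gdb (the path
below is only an assumption; adjust it to wherever your gdb actually
lives):

    env TEST_GDB=/usr/local/bin/gdb make test
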
Running Tests
-------------
The easiest way to run tests is to say "make test", (or simply run the
notmuch-test script). Either command will run all available tests.
Alternately, you can run a specific subset of tests by simply invoking
one of the executable scripts in this directory, (such as ./T*-search.sh,
./T*-reply.sh, etc). Note that you will probably want "make test-binaries"
before running individual tests.
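
For example (the script name below is only illustrative; any of the
T*.sh scripts in this directory can be run the same way):

    make test-binaries
    ./T050-new.sh
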
The following command-line options are available when running tests:
As the names depend on the tests' file names, it is safe to
run the tests with this option in parallel.
Certain tests require precomputed databases to complete. You can fetch these
databases with
    make download-test-databases

If you do not download the test databases, the relevant tests will be
skipped.
When invoking the test suite via "make test" any of the above options
can be specified as follows:
make test OPTIONS="--verbose"
You can choose an emacs binary (and corresponding emacsclient) to run
the tests in one of the following ways.

    TEST_EMACS=my-emacs TEST_EMACSCLIENT=my-emacsclient make test
    TEST_EMACS=my-emacs TEST_EMACSCLIENT=my-emacsclient ./T*-emacs.sh
    make test TEST_EMACS=my-emacs TEST_EMACSCLIENT=my-emacsclient

Some tests may require a C compiler. You can choose its name and flags
similarly to the emacs choice above, e.g.

    make test TEST_CC=gcc TEST_CFLAGS="-g -O2"

Quiet Execution
---------------

Normally, a message is printed on screen when a new test script starts
and when a test passes. This printing can be disabled by setting the
NOTMUCH_TEST_QUIET variable to a non-null value. Messages about test
failures and skips are still printed.
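
For example, to run the whole suite printing only failures and skips
(1 here is just one possible non-null value):

    NOTMUCH_TEST_QUIET=1 make test
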
Skipping Tests
--------------
items, so you cannot arbitrarily skip any test and expect the
remaining tests to be unaffected.
Currently we do not consider skipped tests as build failures. For
maximum robustness, when setting up automated build processes, you
should explicitly skip tests, rather than relying on notmuch's
detection of missing prerequisites. In the future we may treat tests
unable to run because of missing prerequisites, but not explicitly
skipped by the user, as failures.
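For example, an automated build that knows emacs is unavailable could
skip the emacs tests explicitly; one way to do this is the test
suite's NOTMUCH_SKIP_TESTS variable (the test name here is only
illustrative):

    NOTMUCH_SKIP_TESTS="emacs" make test
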
Writing Tests
-------------
The test script is written as a shell script. It should be named
Tddd-testname.sh, where 'ddd' is three digits and 'testname' is the
"bare" name of your test. Tests are run in the order determined by the
'ddd' part.

The test script should start with the standard "#!/usr/bin/env bash"
and an assignment to the variable 'test_description', like this:
#!/usr/bin/env bash
test_description='xxx test (option --frotz)
After assigning test_description, the test script should source
test-lib.sh like this:
    . ./test-lib.sh || exit 1
This test harness library does the following things:
There are a handful of helper functions defined in the test harness
library for your script to use.
  test_begin_subtest <message>

    Set the test description message for a subsequent test_expect_*
    invocation (see below).

  test_expect_success <script>

    This takes a string as a parameter, and evaluates the <script>.
    If it yields success, the test is considered successful.

  test_expect_code <code> <script>

    This takes two strings as parameters, and evaluates the <script>.
    If it yields <code> as its exit status, the test is considered
    successful.

test_subtest_known_broken
test_expect_equal_file <file1> <file2>
    Identical to test_expect_equal, except that <file1> and <file2>
are files instead of strings. This is a much more robust method to
compare formatted textual information, since it also notices
whitespace and closing newline differences.
  test_expect_equal_json <output> <expected>

    Identical to test_expect_equal, except that the two strings are
    treated as JSON and canonicalized before equality testing. This is
    useful to abstract away whitespace differences between the expected
    output and the output generated by running a notmuch command.

test_debug <script>
This takes a single argument, <script>, and evaluates it only
generated script that should be called instead of notmuch to do
the counting. The notmuch_counter_value() function prints the
current counter value.

There are also functions which remove various environment-dependent
values from notmuch output; these are useful to ensure that test
results remain consistent across different machines.

  notmuch_search_sanitize
  notmuch_show_sanitize
  notmuch_show_sanitize_all
  notmuch_json_show_sanitize

    All these functions should receive the text to be sanitized as the
    input of a pipe, e.g.

    output=`notmuch search "..." | notmuch_search_sanitize`
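
Putting these helpers together, a minimal subtest might look like the
following sketch (the query and the expected, sanitized output are
purely illustrative; a real test would match them to the corpus it
sets up):

    test_begin_subtest "search finds the test message"
    output=$(notmuch search tag:inbox | notmuch_search_sanitize)
    test_expect_equal "$output" "thread:XXX   2001-01-05 [1/1] Notmuch Test Suite; Test message (inbox unread)"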