Advanced multi-threaded unit testing framework with minimal to no boilerplate
Multi-threaded unit test framework for D. Based on similar work for C++11.
"But doesn't D have built-in `unittest` blocks?" Yes, and they're massively useful. Even short scripts can benefit from them with zero effort and setup; in fact, I use them to test this library. However, for larger projects they lack some functionality:
- If all tests pass, great. If one fails, it's hard to know why.
- The only tool is assert, and you have to write your own assert messages (no assertEqual, assertNull, etc.)
- No possibility to run just one particular test
- Only runs in one thread.
So I wrote this library in and for a language with built-in support for unit tests. Its goals are:
- To run in parallel (by default) for maximal speed and turnaround for TDD
- To make it easy to write tests (functions as test cases)
- No test registration. Tests are discovered with D's compile-time reflection
- Support for D's built-in `unittest` blocks
- To be able to run specific tests or groups of tests via the command-line
- Suppress the tested code's stdout and stderr output by default (important when running in multiple threads)
- Have a special mode, only available when using a single thread, in which tested code output is turned back on, as well as special `writelnUt` debug messages
- Ability to temporarily hide tests from being run by default whilst still being able to run them
The library is all in the `unit_threaded` package. There are two example programs in the `example` folder, one with passing unit tests and the other failing, to show what the output looks like in each case. Because of the way D packages work, they must be run from the top-level directory of the repository.
The built-in D unittest blocks are included automatically, as seen in the output of both example programs (`example.tests.pass_tests.unittest` and its homologue in `example_fail`). A name will be automatically generated for them. The user can specify a name by decorating them with a string UDA or the UDA the library includes for that purpose.
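As a sketch of the naming described above (assuming a version of the library in which a plain string UDA names a `unittest` block):

```d
import unit_threaded;

@("addition works")  // this string becomes the test's name in the output
unittest {
    assert(1 + 1 == 2);
}
```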
The easiest way to run tests is by doing what the example code does: calling `runTests()` with the modules containing the tests as compile-time arguments. This can be done as symbols or strings, and the two approaches are shown in the two example programs.
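A minimal runner along these lines might be sketched as follows; `my.tests` is a placeholder module name:

```d
// main.d -- a sketch, not the canonical runner
import unit_threaded.runner;

int main(string[] args) {
    // test modules are passed as compile-time arguments,
    // either as strings (shown here) or as symbols
    return args.runTests!("my.tests");
}
```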
There is no need to register tests. The registration is implicit: either derive from `TestCase` and override `test()`, or write a function whose name is in camel-case and begins with "test" (e.g. `testGadget()`). Specify which modules contain tests when calling `runTests()` and that's it. Private functions are skipped. `TestCase` also has support for per-test setup and shutdown; derived classes need only override the appropriate function(s).
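Both registration styles described above can be sketched like so (the names are illustrative, and `TestCase` is assumed to be exported by the top-level `unit_threaded` import):

```d
import unit_threaded;

// Registered because the name is camel-case and begins with "test"
void testGadget() {
    assert(2 + 2 == 4);
}

// Registered because it derives from TestCase and overrides test()
class GadgetTest : TestCase {
    override void test() {
        assert("gadget".length == 6);
    }
}
```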
Don't like the algorithm for registering tests? Not a problem: attributes can be used to opt in or out, with `@DontTest` opting a test out. These are used in the examples.
Tests can also be hidden with the `@HiddenTest` attribute. This means that particular test doesn't get run by default but can still be run by passing its name as a command-line argument. The attribute takes a compile-time string listing the reason why the test is hidden. This would usually be a bug id but can be anything the user wants.
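A sketch of a hidden test, with a placeholder bug id as the reason string:

```d
import unit_threaded;

// Skipped by default; still runnable by passing its name on the
// command line. "Bug #123" is a placeholder reason string.
@HiddenTest("Bug #123")
void testFlakyFeature() {
    assert(true);
}
```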
`@ShouldFail` is used to decorate a test that is expected to fail, and it also requires a compile-time string. `@ShouldFail` should be preferred to `@HiddenTest`: if the relevant bug is fixed or not-yet-implemented functionality is done, the test will then fail, which makes such tests harder to sweep under the carpet and forget about.
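A sketch of an expected-failure test; the bug id is a placeholder and the assertion is deliberately wrong for illustration:

```d
import unit_threaded;

// Expected to fail until the referenced bug is fixed; once it
// starts passing, the suite reports it instead of staying silent.
@ShouldFail("Bug #456")
void testKnownBug() {
    assert(1 + 1 == 3); // deliberately wrong for illustration
}
```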
It is possible to instantiate a function test case multiple times, once per value to be passed in. To do so, simply declare a test function that takes one parameter and add UDAs of that type to the test function. The `testValues` function in the example code shows how.
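Following that description, a parameterised test might be sketched as below (the values are illustrative):

```d
import unit_threaded;

// One test instance per UDA value of the parameter's type (int here)
@(2) @(4) @(6)
void testEven(int value) {
    assert(value % 2 == 0);
}
```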
Since D packages are just directories and the compiler can't read the filesystem at compile-time, there is no way to automatically add all tests in a package. To mitigate this and avoid having to manually write the names of all the modules containing tests, a utility called `dtest` can be used to generate a source file automatically. Simply pass in the desired directories to scan as command-line arguments. It automatically generates a file, executes it with `rdmd`, and prints the result.
Use the `-h` option to get help on the command. To try it out, run `dtest -usource -t tests/pass` for the passing tests, `dtest -usource -t tests/fail` for the failing tests, and `dtest` to run all of them. You can also run either example file with `rdmd -Isource example/<filename>`.
There is support for debug prints in the tests. This is only supported in single-threaded mode (`-s`); requesting debug output without `-s` will trigger a warning followed by the forceful use of `-s`. TestCases and test functions can print debug output with the `writelnUt` function.
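A sketch of a test using `writelnUt`, which is assumed to take `writeln`-style variadic arguments:

```d
import unit_threaded;

// The writelnUt output appears only in single-threaded mode
// with debug output enabled
void testWithDebugOutput() {
    immutable answer = 6 * 7;
    writelnUt("computed answer: ", answer);
    assert(answer == 42);
}
```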
Tests can be run in random order. To do so, use the relevant command-line option. A seed will be printed so that the same run can be repeated via the `--seed` option. This implies running in a single thread.
Since code under test might not be thread-safe, a serialisation attribute can be used on a test. This causes all tests in the same module that have this attribute to be executed sequentially so they don't interleave with one another.
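A sketch of this, assuming the attribute is named `@Serial` (an assumption; check the library's documentation for the exact name):

```d
import unit_threaded;

// Tests in this module bearing the attribute run sequentially,
// never interleaved with one another (attribute name assumed)
@Serial void testMutatesGlobalA() { assert(true); }
@Serial void testMutatesGlobalB() { assert(true); }
```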
- Registered by Atila Neves
- BSD 3-clause
- Copyright © 2013, Atila Neves
- Recent versions: 2.1.9 (2024-Jan-23), 2.1.8 (2023-Nov-02), 2.1.7 (2023-Jul-31), 2.1.6 (2023-Apr-25), 2.1.5 (2023-Mar-17)