xUnit Testing Framework for D
Copyright © 2016, Mario Kröplin
Authors: Juan Manuel Cabo, Mario Kröplin
Registered by: Mario Kröplin
To use this package, put the following dependency into your project's dependencies section:
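The snippet itself is not reproduced here; as a sketch, a dub.json fragment along these lines should work (the package name "d-unit" and the version placeholder are assumptions — check the registry page for the exact entry and current release):

```json
{
    "dependencies": {
        "d-unit": "<current-version>"
    }
}
```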
Testing Functions vs. Interactions
D's built-in support for unittests is best suited for testing functions,
when the test cases can be expressed as one-liners.
(Have a look at the documented unittests for the dunit.assertion functions.)
But you're on your own when you have to write a lot more code per test case, for example for testing the interactions of objects.
So, here is what the xUnit Testing Framework has to offer:
- tests are organized in classes
- tests are always named
- tests can reuse a shared fixture
- you see the progress as the tests are run
- you see all failed tests at once
- you get more information about failures
Failures vs. Errors
Specialized assertion functions provide more information about failures than a plain assert expression. A failed assertEquals, for example,
will not only report the faulty value but will also highlight the difference:
expected: <ba<r>> but was: <ba<z>>
The more general assertOp (borrowed from Issue 4653) will at least report the concrete values in case of a failure. For example,

assertOp!">="(a, b); // alias assertGreaterThanOrEqual

fails with:
condition (2 >= 3) not satisfied
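As a sketch (assuming dunit.assertion exports the functions as described above), a failing test using both styles of assertion might look like this:

```d
import dunit.assertion;

void demo()
{
    // Fails and highlights the difference between the strings:
    // expected: <ba<r>> but was: <ba<z>>
    assertEquals("baz", "bar");  // actual value first, then expected

    // Fails and reports the concrete values:
    // condition (2 >= 3) not satisfied
    assertOp!">="(2, 3);  // alias assertGreaterThanOrEqual
}
```

Each call throws on failure, so in a real test only the first failing assertion is reported.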
Together with the expressive name of the test (that's your responsibility) this should be enough information for failures. On the other hand, for violated contracts and other exceptions from deep down the unit under test you may wish for the stack trace.
That's why the xUnit Testing Framework distinguishes failures from errors:
dunit.assertion doesn't use the built-in assert,
but introduces its own AssertException. A failure (a thrown AssertException) is reported with just its message, while an error (any other exception) is reported with a stack trace.
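As a minimal sketch (assuming AssertException is the failure type exported by dunit.assertion), a test runner can tell the two apart like this:

```d
import dunit.assertion;

void runTestCase(void delegate() testCase)
{
    try
        testCase();
    catch (AssertException exception)
    {
        // A failure: the assertion message is all the diagnosis needed.
    }
    catch (Exception exception)
    {
        // An error: something unexpected went wrong deep down in the
        // unit under test, so the stack trace is worth reporting.
    }
}
```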
User Defined Attributes
Thanks to D's User Defined Attributes, test names no longer have to start with "test".
Use mixin UnitTest; in your test class and attach @Test
(borrowed from JUnit 5)
to the member functions to state their purpose.
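As a sketch of how such a test class can look (the attribute names @Before and @Test follow the JUnit convention mentioned above; the exact set of supported attributes is best taken from the framework's documentation):

```d
import dunit;
import std.algorithm : sum;

class ArithmeticTest
{
    mixin UnitTest;

    private int[] numbers;

    @Before
    public void setUp()
    {
        numbers = [1, 2, 3];  // shared fixture, rebuilt before each test
    }

    @Test
    public void sumAddsAllNumbers()
    {
        assertEquals(numbers.sum, 6);  // actual value first, then expected
    }
}
```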
Test results are reported while the tests are run. A "progress bar" is written with a
. for each passed test, an
F for each failure, an
E for each error, and an
S for each skipped test.
In addition, an XML test report is available that uses the JUnitReport format. The continuous integration tool Jenkins, for example, understands this JUnitReport format. Thus, Jenkins can be used to browse test reports, track failures and errors, and even provide trends over time.
Run the included example to see the xUnit Testing Framework in action.
(When you get four failures, one error, and one skip, everything works fine.)
Have a look at the debug output of the example in "verbose" style:
rdmd -debug -Isrc example.d --verbose
Or just focus on the issues:
./example.d --filter Test.assert --filter error
Alternatively, build and run the example using dub:
dub --build=plain --config=example -- --verbose
assertEquals(expected, actual) got changed into
assertEquals(actual, expected), which feels more natural.
Moreover, the reversed order of arguments is more convenient for
D's Uniform Function Call Syntax.
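With UFCS, the actual value can lead the call, so an assertion reads left to right; a sketch:

```d
import dunit.assertion;

void check(int answer)
{
    // Equivalent to assertEquals(answer, 42), but reads more naturally:
    answer.assertEquals(42);
}
```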
The only effect, however, is on the failure messages,
which will be confusing if the order is mixed up.
So, if you prefer TestNG's order of arguments, use assertEquals(actual, expected)
instead of the conventional assertEquals(expected, actual) — just don't mix the two.
The xUnit Testing Framework also supports the "fluent assertions" from unit-threaded.
For an example, have a look at fluent-assertions; build and run it using dub.
(When you get three failures, everything works fine.)