I’m thinking about enhancing library(plunit) in a few ways … what do you think of these possibilities? (And please add suggestions of your own)
The changes I’m proposing appear to be easy to add, but I might have read the code too optimistically.
At the level of a test suite:
- A random-order option: if true, the tests are run in a random order. (I want this because I’m seeing some memory-access errors in foreign functions that depend on the order in which blobs are garbage collected.) If any tests fail, this would also output the ordering of the tests (see the `order(List)` option below).
- `order(List)` - a list of test names, causing the tests to run in that specific order rather than in sequential order (the default) or random order (if the random-order option is set).
- `repeat(Number)` - runs the entire test suite this many times (default is 1), or until the first test failure. For individual tests, this can already be done with the option `forall(between(1,N,_))`. Repeated execution until a failure (for finding rare but flaky tests) can be done with `repeat, \+ run_tests.`
- `test_setup(Goal)` and `test_cleanup(Goal)` - add a `setup(Goal)` or `cleanup(Goal)`, respectively, to each test within a test suite. The test suite’s `test_setup` runs before the individual test’s `setup` (if any), and the test suite’s `test_cleanup` runs after the individual test’s `cleanup` (if any). (Currently, I use term-expansion to do this, which is a bit ugly.)
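The term-expansion workaround mentioned above can be sketched roughly as follows. This is a minimal sketch, not what plunit would do internally: `suite_cleanup/0` is a hypothetical suite-wide cleanup goal, and the rules only handle the plain `test/1` and `test/2` clause shapes.

```prolog
% Sketch: rewrite each test clause at load time so that a suite-wide
% cleanup goal (hypothetical suite_cleanup/0) is added as a cleanup/1
% option.  Real code would also need to cope with tests that already
% have their own cleanup/1 option.
term_expansion((test(Name) :- Body),
               (test(Name, [cleanup(suite_cleanup)]) :- Body)).
term_expansion((test(Name, Options0) :- Body),
               (test(Name, Options) :- Body)) :-
    (   is_list(Options0)
    ->  Options = [cleanup(suite_cleanup)|Options0]
    ;   Options = [cleanup(suite_cleanup), Options0]
    ).
```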
As an option to `set_test_options/1`:
- A verbosity option: if true, output a message for each test, not just the ones that fail. This is useful when adding print-style debugging to code. (Note that there is already a predicate related to this.)
For an individual test:
- Allow a list of checks in the `true` option. Currently, if I have two things I want to check, my choices are:
```prolog
test(a_test) :-
    do_something(A, B),
    assertion(A == expected_value_A),
    assertion(B == expected_value_B).
```
or:

```prolog
test(a_test, [A,B] == [expected_value_A, expected_value_B]) :-
    do_something(A, B).
```
and I’m proposing to allow this:
```prolog
test(a_test, [A == expected_value_A, B == expected_value_B]) :-
    do_something(A, B).
```
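Until something like this is supported, the effect can be approximated with a small helper that folds the list into assertions. A minimal sketch (the helper name `check_all/1` is made up, not part of plunit):

```prolog
:- use_module(library(debug)).  % assertion/1

% Hypothetical helper: succeed iff every check in the list holds,
% reporting each failing check via assertion/1.
check_all(Checks) :-
    forall(member(Check, Checks), assertion(Check)).

% Usage:
% test(a_test) :-
%     do_something(A, B),
%     check_all([A == expected_value_A, B == expected_value_B]).
```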
Edits:
- added `verbose` for test suite
- added `order` for test suite
- added `repeat` and clarified what happens when a test fails
- added a comment to `repeat` and removed it.