Enhancements to plunit

I’m thinking about enhancing library(plunit) in a few ways … what do you think of these possibilities? (And please add suggestions of your own)

The changes I’m proposing appear to be easy to add, but I might have read the code too optimistically.

At the level of a test suite:

  • random(Bool) - if true, the tests are run in a random order. (I want this because I’m seeing some memory-access errors in foreign functions that depend on the order in which blobs are garbage collected.) If any tests fail, this would also output the ordering of the tests (see the order option).
  • order(List) - a list of test names, causing the tests to run in that specific order rather than in sequential order (the default) or random order (if random(true) is specified).
  • repeat(Number) - runs the entire test suite this many times (default is 1) or until the first test failure. For individual tests, this can be done with the option forall(between(1,N,_)). Repeated execution until a failure (for finding rare but flaky tests) can be done by repeat, \+ run_tests.
  • test_setup(Goal) and test_cleanup(Goal) - adds a setup(Goal) and cleanup(Goal) to each test within a test suite - the test suite’s test_setup runs before the individual test’s setup (if any) and the test suite’s test_cleanup runs after the individual test’s cleanup (if any). (Currently, I use term-expansion to do this, which is a bit ugly.)
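
To make these concrete, here is a rough sketch of what a test unit using the proposed options might look like. None of these options exist in plunit today, and the predicates under test as well as the setup/cleanup goals (setup_blobs/0, cleanup_blobs/0, create_test_blob/2, is_test_blob/1) are made-up placeholders:

:- use_module(library(plunit)).

:- begin_tests(blob_suite,
               [ random(true),                % proposed: run the tests in random order
                 repeat(100),                 % proposed: run the whole suite 100 times
                 test_setup(setup_blobs),     % proposed: runs before each test's own setup
                 test_cleanup(cleanup_blobs)  % proposed: runs after each test's own cleanup
               ]).

% A placeholder test; create_test_blob/2 and is_test_blob/1 are invented for illustration.
test(create_blob) :-
    create_test_blob(foo, Blob),
    assertion(is_test_blob(Blob)).

:- end_tests(blob_suite).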

As an option to set_test_options/1:

  • verbose(Bool) - if true, output a message for each test, not just the ones that fail. This is useful when adding some print debugging to code. Note that set_test_options/1 already supports the silent(true) option.
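
For example (silent(true) already exists; verbose(true) is only the proposal):

:- set_test_options(silent(true)).   % existing: suppress informational messages
:- set_test_options(verbose(true)).  % proposed: report every test as it runs, not only failures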

For an individual test:

  • multiple true options (that is, more than one check in a single test). Currently, if I have two things I want to check, my choices are:
test(a_test) :-
    do_something(A, B),
    assertion(A == expected_value_A),
    assertion(B == expected_value_B).

or

test(a_test, [A,B] == [expected_value_A, expected_value_B]) :-
    do_something(A, B).

and I’m proposing to allow this:

test(a_test, [A == expected_value_A, B == expected_value_B]) :-
    do_something(A, B).

EDITS

  • option verbose for test suite
  • option order for test suite
  • changed multiple to repeat and clarified what happens when a test fails
  • added comment to repeat and removed it.

For those who use Discourse via email, I’ll be editing my original post as ideas occur to me (or are suggested by others).

Some other tests print the random seed in case of errors (they first explicitly seed the random generator using set_random/1). Some tests also produce random amounts of garbage (atoms) in situations like this. As explained offline, this is not going to solve the library(archive) problem, as the randomness is based on the I/O stream table.
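
That seed-printing pattern might look roughly like this. It is my own sketch, not the actual test code; the wrapper name is made up, but set_random/1, random_between/3 and run_tests/0 are existing library(random) and plunit predicates:

:- use_module(library(random)).
:- use_module(library(plunit)).

% Pick an explicit seed, remember it, and report it if the tests fail,
% so a failing (random-order) run can be reproduced with set_random(seed(Seed)).
run_tests_reporting_seed :-
    random_between(0, 0xffffffff, Seed),
    set_random(seed(Seed)),
    (   run_tests
    ->  true
    ;   print_message(error, format('Tests failed with random seed ~d', [Seed])),
        fail
    ).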

I typically solve that using

?- repeat, \+ run_tests(Spec).

That surely covers a common use case.

I think that should be part of set_test_options/1 instead of the test suite. It is something one may want in order to easily figure out what happens in each test without counting the dots :-)

Outputting the random state when running the tests in random order is a good idea – for rare bugs, this would help in reproducing the bug. (I almost always find that I’ve forgotten to output something that helps reproduce a rare bug.)

Nice trick! (It took me a bit of thinking to realize that it meant “re-run the test suite until a test fails”.)

True; but if I had run the tests in random order, I would have found the bug sooner.

I’ve edited my original post accordingly.