If a decision will immediately block without changing the global state, then
there is no point in making it: no state will become reachable from it which
isn't reachable through some other option we have available.
This has three parts:
- When the prefix runs out and the scheduler is making decisions, it filters
out decisions which will immediately block.
- When a subtree is being added, it records which decisions will immediately
block.
- When backtracking points are being added, it filters out ones in this block
list.
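The filtering step could be sketched like this (the `NextAction` type, `viableDecisions` name, and the thread representation are all illustrative assumptions, not the real scheduler state):

```haskell
-- Hypothetical thread identifier and next-action summary; the real
-- scheduler state is richer than this.
type ThreadId = Int

data NextAction = WillBlock | WillStep
  deriving (Eq, Show)

-- Drop decisions whose chosen thread would immediately block without
-- changing the global state: no new state is reachable through them.
viableDecisions :: (ThreadId -> NextAction) -> [ThreadId] -> [ThreadId]
viableDecisions nextAction = filter ((/= WillBlock) . nextAction)
```

The same predicate serves all three parts: the scheduler applies it when choosing, subtree construction records its complement, and backtracking consults that record.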
This optimisation is likely to only be useful when threads are communicating a
lot. For instance, a `parMap id` is totally unaffected by it, whereas the
communication-heavy test cases drop from an average of 64 runs to 42.
This further improves performance in all cases, although it's still far worse
on some of the included tests, and I have yet to figure out why.
See also: "Partial-Order Methods for the Verification of Concurrent Systems"
[Godefroid 1996]
This performs better with "real" code (the Par monad) but surprisingly does far
worse with the included tests! The next thing to do is implement the orthogonal
sleep sets algorithm to cut down on available choices even further and
hopefully correct this issue.
See also: "Bounded Partial-Order Reduction" [Coons, Musuvathi, McKinley 2013]
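The core of the sleep sets idea can be sketched as follows (the `Dependent` relation and all names here are hypothetical simplifications; see Godefroid's formulation for the real algorithm):

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

type ThreadId = Int

-- Hypothetical dependency relation between the next actions of two
-- threads (e.g. both operate on the same shared variable).
type Dependent = ThreadId -> ThreadId -> Bool

-- Decisions available at a point, minus the sleeping threads: running a
-- sleeping thread first would only commute with a branch already tried.
awake :: Set ThreadId -> [ThreadId] -> [ThreadId]
awake sleeping = filter (`Set.notMember` sleeping)

-- On executing thread `t`, the child state inherits the sleep set minus
-- any thread whose next action is dependent with `t`'s; after exploring
-- `t`, the caller adds `t` to the sleep set for the remaining siblings.
inheritSleep :: Dependent -> ThreadId -> Set ThreadId -> Set ThreadId
inheritSleep dep t = Set.filter (not . dep t)
```

Combined with the block-list filtering above, this should cut the available choices down further without losing any reachable states.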
This allows results to be naturally reported as lazy trees, rather
than as lists representing a tree traversal. This in turn means
that the actual bound can be moved outwards to the testing code, and
not used at all in the runner. Trees let us do nice things with
shrinking and short-circuiting, if we make the (fairly reasonable)
assumption that the children of a buggy result will exhibit the same
bug.
Storing results as trees does complicate the predicate helper
functions somewhat, but I think the clarity gained in the actual
SCT code is well worth it.
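As a sketch of the shape involved (the type and function names are assumptions for illustration), results form a lazy rose tree, and a short-circuiting check skips the subtree beneath a failing node:

```haskell
import Data.Maybe (listToMaybe, mapMaybe)

-- A lazy rose tree of results: each node is one complete execution, and
-- its children are the executions reached by backtracking from it.
data ResultTree a = Node a [ResultTree a]

-- Find the first result failing a predicate. If a node fails, we assume
-- (as above) its children exhibit the same bug and skip them; laziness
-- means subtrees pruned this way are never even computed.
firstFailure :: (a -> Bool) -> ResultTree a -> Maybe a
firstFailure ok (Node r children)
  | not (ok r) = Just r
  | otherwise  = listToMaybe (mapMaybe (firstFailure ok) children)
```

Because the tree is lazy, a bound is just a function that truncates the tree to a given depth, which is why it can live in the testing code rather than the runner.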
The literature states that many bugs are found with only 2 concurrent
threads, so encouraging even more parallelism is possibly a win in
only a small number of cases, but a potentially huge cost in many.