What I’ve found is that coverage is a good start, and may be enough for most teams that can’t run all of their tests quickly on every build. But adding other selection factors (or heuristics) and applying weights can take you a bit farther.
Some heuristics I’ve used in the past for test prioritization and selection include:
- Has the test found a bug before? Some tests are good at finding product bugs. I give these tests more weight.
- When was the last time the test failed? If a test has run every day for a year and never failed, I don’t give it much weight. We testers are always paranoid that the moment we choose not to run a test, a regression will appear. This weighted heuristic helps balance the low value of running a test that never fails against the fear of missing a regression.
- How flaky is the test? If you never have flaky tests, skip this one. For everyone else, it makes sense to run tests that return false positives less often (or at the end of the test pass).
- How long does the test take? I put more weight on tests that run faster {…}
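The heuristics above can be combined into a single priority score. Here is a minimal sketch in Python; the field names, weights, and scoring formula are illustrative assumptions on my part, not taken from the article:

```python
from dataclasses import dataclass

# Illustrative sketch of weighted test prioritization.
# All weights and field names below are assumptions, not the article's method.

@dataclass
class TestRecord:
    name: str
    bugs_found: int           # product bugs this test has caught before
    runs_since_failure: int   # consecutive passing runs (long-green = less weight)
    flake_rate: float         # fraction of runs that were false positives (0..1)
    duration_s: float         # typical runtime in seconds

def priority(t: TestRecord) -> float:
    """Higher score = run earlier. Each term mirrors one heuristic."""
    bug_weight = 2.0 * t.bugs_found                  # proven bug-finders get more weight
    staleness_penalty = t.runs_since_failure / 100   # tests that never fail matter less
    flake_penalty = 3.0 * t.flake_rate               # flaky tests go to the back of the pass
    speed_bonus = 1.0 / (1.0 + t.duration_s)         # faster tests score slightly higher
    return bug_weight - staleness_penalty - flake_penalty + speed_bonus

tests = [
    TestRecord("login_smoke", bugs_found=3, runs_since_failure=5,
               flake_rate=0.0, duration_s=2.0),
    TestRecord("legacy_report", bugs_found=0, runs_since_failure=365,
               flake_rate=0.1, duration_s=40.0),
]

# Sort so the highest-priority tests run first.
ranked = sorted(tests, key=priority, reverse=True)
print([t.name for t in ranked])  # → ['login_smoke', 'legacy_report']
```

In practice, you would tune the weights against your own history data (e.g., does the ranking actually surface regressions earlier?) rather than hard-coding them as done here.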
Continue reading the full article on Tooth of the Weasel, notes and rants about software and software quality.
About the q-leap expert recommending this article
Stefan Papusoi is a Test Specialist at q-leap. As a context-driven and exploratory tester, he continually deepens his experience in testing, script automation, and managing and improving testing processes.