| Techniques | Applicability |
| --- | --- |
| TCP | Industry Motivation, Practitioner Feedback |

| Experiment subject(s) | Industrial Partner | Programming Language |
| --- | --- | --- |
| N/A | Undisclosed industrial partner (Germany) | |

| Effectiveness Metrics | Efficiency Metrics | Other Metrics |
| --- | --- | --- |
| | | |

| Information Approach | Algorithm Approach | Open Challenges |
| --- | --- | --- |
| | | When prioritizing, should the focus be entirely on the safety-critical tests, or should some other tests be sprinkled in-between? |
Abstract
Automated production systems are becoming increasingly complex. This makes it
more and more difficult to keep track of performed changes and already
executed test cases, and it endangers the system's quality, because the risk
of missing important test cases when planning the test execution is high,
especially for testers with little experience. To address this challenge,
testers should be supported in selecting the right test cases for execution
by an automatic, metric-based test prioritisation. In industry, many
different prioritisation criteria and strategies are used for this purpose.
In an industrial interview, experts discussed and ranked the prioritisation
criteria that are currently used within their respective companies. As a
result, this paper presents the cactus prioritisation model, which
graphically represents the industrial ranking and weighting of the criteria.
Based on the prioritisation cactus and its criteria, a simple prioritisation
metric is introduced that determines the utility of each test case with
respect to the system under test; the test cases are then prioritised in
order of descending utility. Furthermore, approaches and metrics to realise
the individual prioritisation criteria are proposed.
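To make the prioritisation idea in the abstract concrete, the following is a minimal sketch of how a weighted, criteria-based utility score and a descending-order prioritisation could be computed. The criterion names, the weights, and the weighted-sum form are illustrative assumptions only; they are not the paper's actual cactus prioritisation metric, which the abstract does not specify.

```python
# Hypothetical sketch of criteria-weighted utility scoring and
# descending-order test prioritisation. Criterion names and weights are
# illustrative assumptions, not the cactus model's actual metric.
from dataclasses import dataclass, field
from typing import Dict, List

# Example criterion weights, e.g. derived from an expert ranking
# (higher weight = criterion ranked as more important).
CRITERION_WEIGHTS: Dict[str, float] = {
    "safety_criticality": 0.4,
    "change_coverage": 0.3,
    "failure_history": 0.2,
    "execution_cost": 0.1,  # assumed to be an inverted (lower-is-better) score
}

@dataclass
class TestCase:
    name: str
    # Per-criterion scores, assumed normalised to [0, 1].
    scores: Dict[str, float] = field(default_factory=dict)

def utility(tc: TestCase) -> float:
    """Weighted sum of the test case's normalised criterion scores."""
    return sum(weight * tc.scores.get(criterion, 0.0)
               for criterion, weight in CRITERION_WEIGHTS.items())

def prioritise(test_cases: List[TestCase]) -> List[TestCase]:
    """Order test cases by descending utility."""
    return sorted(test_cases, key=utility, reverse=True)

if __name__ == "__main__":
    suite = [
        TestCase("t_emergency_stop", {"safety_criticality": 1.0, "change_coverage": 0.2}),
        TestCase("t_conveyor_speed", {"safety_criticality": 0.3, "change_coverage": 0.9,
                                      "failure_history": 0.5}),
        TestCase("t_hmi_labels", {"change_coverage": 0.1, "execution_cost": 0.8}),
    ]
    for tc in prioritise(suite):
        print(f"{tc.name}: utility = {utility(tc):.2f}")
```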