| Field | Value |
| --- | --- |
| Techniques | — |
| Applicability | TCP |
| Industry Motivation | — |
| Experiment subject(s) | Industrial proprietary, large to very large scale: Calibre Auto-Waivers (2k TCs), Calibre PERC (11k TCs), Calibre nmDRC (3k TCs) |
| Industrial Partner | Siemens EDA (Egypt) |
| Programming Language | Unclear |
| Effectiveness Metrics | Average Percentage of Faults Detected (APFD); accuracy/precision/recall |
| Efficiency Metrics | — |
| Other Metrics | — |
| Information Approach | — |
| Algorithm Approach | Machine learning-based |
| Open Challenges | "For future work, more incremental learning approaches will be explored and tested." |
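The APFD metric listed under Effectiveness Metrics has a standard closed form: for n test cases and m faults, APFD = 1 - (ΣTF_i)/(n·m) + 1/(2n), where TF_i is the 1-based position in the prioritized order of the first test that exposes fault i. A minimal sketch:

```python
def apfd(first_fail_ranks, n_tests):
    """Average Percentage of Faults Detected for one prioritized run.

    first_fail_ranks: for each fault, the 1-based position (TF_i) in the
    prioritized order of the first test case that detects it.
    n_tests: total number of test cases (n).
    APFD = 1 - (sum of TF_i) / (n * m) + 1 / (2n), with m = number of faults.
    """
    m = len(first_fail_ranks)
    return 1 - sum(first_fail_ranks) / (n_tests * m) + 1 / (2 * n_tests)

# Example: 5 tests, 2 faults first exposed by the tests at positions 1 and 3.
print(apfd([1, 3], 5))  # ≈ 0.7
```

A higher APFD means faults are detected earlier in the prioritized order, which is exactly what a TCP technique is optimizing for.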
Abstract
Regression testing faces a bottleneck: the number of test cases keeps growing, and the wide adoption of continuous integration (CI) in software projects increases the frequency of software builds, making it impractical to run all regression test cases on every build. Machine learning (ML) techniques can save time and hardware resources without compromising quality. In this work, we introduce TCP-Net, a novel end-to-end, self-configurable, incremental-learning deep neural network (DNN) tool for test case prioritization. TCP-Net is fed source-code features, test case metadata, test case coverage information, and test case failure history, and learns a high-dimensional correlation between source files and test cases. We show experimentally that TCP-Net can be used effectively for test case prioritization by evaluating it on three different real-life industrial software packages.
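The abstract describes the general pattern: per-test feature vectors (failure history, coverage, metadata) are fed to a learned model that scores each test case, and tests are then run in descending score order. TCP-Net's actual architecture and features are not reproduced here; the sketch below illustrates only the scoring-and-ranking idea, with hypothetical test names, hand-picked weights, and a single sigmoid unit standing in for the trained DNN.

```python
import math

# Hypothetical per-test feature vectors:
# [recent failure rate, coverage overlap with the change, days since last run]
tests = {
    "t_autowaivers_001": [0.9, 0.8, 1.0],   # failed recently, covers changed code
    "t_perc_042":        [0.1, 0.2, 30.0],  # stable and stale
    "t_nmdrc_007":       [0.5, 0.9, 2.0],
}

def score(x, w, b):
    # One sigmoid unit as a stand-in for the DNN's learned scorer:
    # maps a feature vector to a failure likelihood in (0, 1).
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Illustrative hand-picked weights; a real model would learn these
# (incrementally, in TCP-Net's case) from failure history.
w, b = [2.0, 1.5, -0.05], -1.0

# Prioritize: run the highest-scoring (most failure-prone) tests first.
ranked = sorted(tests, key=lambda t: score(tests[t], w, b), reverse=True)
print(ranked)  # → ['t_autowaivers_001', 't_nmdrc_007', 't_perc_042']
```

Under a CI time budget, only a prefix of `ranked` would be executed; the quality of the cut-off is what APFD-style metrics evaluate.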