This work was developed in an industrial setting for UI regression testing, where we do not have access to source code, most test cases are executed manually, and only a subset of the regression test cases can be run due to limited resources. Test case prioritization (TCP) is well suited to such a scenario. However, many TCP techniques rely on source-code coverage information, whereas we have access only to test cases, change requests, and their features. Our goal is therefore to investigate which criterion is most relevant for prioritization. Following the literature, we build an optimization model based on historical data and embed it in a constraint solver designed for optimization. Our objective function is based on the APFD (Average Percentage of Faults Detected) metric, although other metrics can be used as well. We found that our industrial partner already uses an appropriate criterion for identifying failures, one that is statistically equivalent to the other criteria evaluated in experiments with our optimization model.
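For reference, the standard definition of APFD from the TCP literature, on which our objective function is based (our solver may encode an adapted form of it), is:

\[
\mathrm{APFD} = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n \times m} + \frac{1}{2n}
\]

where $n$ is the number of test cases in the suite, $m$ is the number of faults detected, and $TF_i$ is the position, in the prioritized order, of the first test case that reveals fault $i$. Higher APFD values indicate that faults are detected earlier in the execution of the prioritized suite.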