Almost every software component has automated tests. These tests run on a periodic basis on Google's Cloud Platform. Frequent testing, however, consumes a considerable amount of time and resources, and the cost of the cloud services used can exceed the budget. This is partly tackled by a test selection mechanism that aims to run only the tests relevant to a given code change. However, this selection is coarse: it is based solely on which software components changed. Recent work has explored test runtime forecasting, outcome correlation, and predictive test selection (for test prioritization) using machine learning (see the University of Groningen master's thesis by Max Valk on test selection, minimization, and prioritization). The goal of this assignment is to build upon this work and deploy it in Metrology Leveling's continuous integration pipelines.
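To make the contrast concrete, the sketch below shows coarse component-based selection (pick every test mapped to a changed component) next to a prioritization step that orders the selected tests by a predicted failure probability. All component names, test names, and probabilities are hypothetical; in the ML-based approach the scores would come from a trained model rather than a hard-coded dictionary.

```python
# Hypothetical mapping from software components to their test suites.
COMPONENT_TESTS = {
    "leveling_core": ["test_align", "test_calibrate"],
    "metrology_io": ["test_parse_logs", "test_export"],
    "ui": ["test_render"],
}

def select_tests(changed_components):
    """Coarse selection: take every test of every changed component,
    regardless of how likely each test is to fail."""
    selected = []
    for component in changed_components:
        selected.extend(COMPONENT_TESTS.get(component, []))
    return selected

def prioritize_tests(tests, failure_probability):
    """Order selected tests so that those most likely to fail run first.
    The probabilities here are made up; a real system would obtain them
    from a predictive model trained on historical test outcomes."""
    return sorted(tests, key=lambda t: failure_probability.get(t, 0.0),
                  reverse=True)

if __name__ == "__main__":
    changed = ["leveling_core", "metrology_io"]
    tests = select_tests(changed)
    scores = {"test_calibrate": 0.9, "test_parse_logs": 0.4, "test_align": 0.1}
    print(prioritize_tests(tests, scores))
```

Even this toy version shows the benefit of prioritization: tests with a higher predicted failure probability surface first, so a failing build can be detected (and the run aborted) earlier, reducing cloud spend.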