The model quality drift metric compares the estimated runtime accuracy to the training accuracy to measure the drop in accuracy.
Metric details
Model quality drift is a drift v2 evaluation metric that evaluates data distribution changes for machine learning models.
Scope
The model quality drift metric evaluates machine learning models only.
Types of AI assets: Machine learning models
Scores and values
The model quality drift metric score indicates the drop in accuracy from the training accuracy to the estimated runtime accuracy.
- Best possible score: 0.0
- Ratios:
- At 0: No drop in accuracy
- Over 0: Increasing drop in accuracy
Evaluation process
When you configure drift v2 evaluations, a drift detection model is built that processes your payload data to predict whether your model generates accurate predictions without ground truth data. The drift detection model uses the input features and class probabilities from your model to create its own input features.
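The exact construction of the drift detection model is internal to the drift v2 evaluation service. The following sketch only illustrates the general idea under assumed names: a scikit-learn style classifier (a hypothetical GradientBoostingClassifier) is trained on the original model's input features and class probabilities to predict whether each training transaction was classified correctly.

```python
# Illustrative sketch only: the real drift detection model is built by the
# drift v2 evaluation service. Assumes scikit-learn, a trained `model` with
# predict/predict_proba, and labeled training data X_train, y_train.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def build_drift_detection_model(model, X_train, y_train):
    # Input features for the drift detection model: the original input
    # features concatenated with the original model's class probabilities.
    probabilities = model.predict_proba(X_train)
    meta_features = np.hstack([X_train, probabilities])

    # Target: whether the original model predicted each transaction correctly.
    correct = (model.predict(X_train) == y_train).astype(int)

    drift_model = GradientBoostingClassifier()
    drift_model.fit(meta_features, correct)
    return drift_model
```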
Do the math
Model quality drift is calculated as follows:

The accuracy of your model is calculated as the base_accuracy by measuring the fraction of correctly predicted transactions in your training data. During evaluations, your transactions are scored against the drift detection model to estimate how many transactions your model is likely to predict correctly. That number is compared to the total number of transactions that are processed to calculate the predicted_accuracy. If the predicted_accuracy is less than the base_accuracy, a model quality drift score is generated.
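As a minimal worked sketch only, assuming the score is the simple drop from base_accuracy to predicted_accuracy clipped at zero (the service's exact formula may differ), the calculation can be illustrated like this; the helper name below is hypothetical:

```python
# Hypothetical sketch of the calculation described above; the exact scoring
# formula used by the service may differ.
def model_quality_drift(base_accuracy, likely_correct, total_transactions):
    # predicted_accuracy: fraction of processed transactions that the drift
    # detection model estimates the original model predicts correctly.
    predicted_accuracy = likely_correct / total_transactions

    # A drift score is generated only when predicted_accuracy falls below
    # base_accuracy; otherwise there is no drop in accuracy (score 0.0).
    return max(0.0, base_accuracy - predicted_accuracy)

# Example: base accuracy 0.90, 820 of 1000 runtime transactions estimated correct
print(model_quality_drift(0.90, 820, 1000))  # ~0.08
```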
Parent topic: Evaluation metrics