The false positive rate difference metric calculates the difference between the monitored and reference groups in the percentage of negative transactions that are incorrectly scored as positive by your model.
Metric details
False positive rate difference is a fairness evaluation metric that can help determine whether your asset produces biased outcomes.
Scope
The false positive rate difference metric evaluates generative AI assets and machine learning models.
- Types of AI assets:
- Prompt templates
- Machine learning models
- Generative AI tasks: Text classification
- Machine learning problem type: Binary classification
Scores and values
The false positive rate difference metric score indicates the difference between the false positive rate of the monitored group and the false positive rate of the reference group (see the example after this list).
- Range of values: -1.0 to 1.0
- Best possible score: 0.0
- Values:
- Under 0: Fewer false positives in the monitored group
- At 0: Both groups have the same false positive rate
- Over 0: Higher rate of false positives in the monitored group
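For example, with hypothetical false positive rates of 0.10 for the monitored group and 0.25 for the reference group, the score is negative, indicating fewer false positives in the monitored group:

$$0.10 - 0.25 = -0.15$$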
Evaluation process
To calculate the false positive rate difference, confusion matrices are generated for the monitored and reference groups to identify the number of false positives and true negatives in each group. The false positive and true negative counts are used to calculate the false positive rate for each group. The false positive rate of the reference group is then subtracted from the false positive rate of the monitored group to give the false positive rate difference.
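As a rough sketch of this process, the following Python example computes the metric from a pandas DataFrame; the column names label (1 = positive, 0 = negative), prediction, and group, and the group values monitored and reference, are hypothetical and not part of the product.

```python
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """FPR = false positives / (false positives + true negatives)."""
    negatives = df[df["label"] == 0]                        # actual negative transactions
    false_positives = (negatives["prediction"] == 1).sum()  # scored positive by mistake
    true_negatives = (negatives["prediction"] == 0).sum()   # scored negative correctly
    denominator = false_positives + true_negatives
    return false_positives / denominator if denominator else 0.0

def false_positive_rate_difference(df: pd.DataFrame) -> float:
    """FPR of the monitored group minus FPR of the reference group."""
    monitored = df[df["group"] == "monitored"]
    reference = df[df["group"] == "reference"]
    return false_positive_rate(monitored) - false_positive_rate(reference)

# Example with hypothetical data: all transactions are actual negatives
data = pd.DataFrame({
    "label":      [0, 0, 0, 0, 0, 0, 0, 0],
    "prediction": [1, 0, 0, 0, 1, 1, 0, 0],
    "group":      ["reference"] * 4 + ["monitored"] * 4,
})
print(false_positive_rate_difference(data))  # 0.5 - 0.25 = 0.25
```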
Do the math
The following formula is used for calculating the false positive rate (FPR), where FP is the number of false positives and TN is the number of true negatives:

$$\text{FPR} = \frac{FP}{FP + TN}$$
The following formula is used for calculating the false positive rate difference:

$$\text{false positive rate difference} = \text{FPR}_{\text{monitored}} - \text{FPR}_{\text{reference}}$$
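For illustration, assume hypothetical counts of 20 false positives and 80 true negatives in the monitored group, and 10 false positives and 90 true negatives in the reference group:

$$\text{FPR}_{\text{monitored}} = \frac{20}{20 + 80} = 0.2 \qquad \text{FPR}_{\text{reference}} = \frac{10}{10 + 90} = 0.1$$

$$\text{false positive rate difference} = 0.2 - 0.1 = 0.1$$

The positive result indicates a higher rate of false positives in the monitored group.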
Parent topic: Evaluation metrics