The false omission rate difference metric measures false negative transactions as a percentage of all transactions with a predicted negative outcome, and reports the difference in this rate between the monitored and reference groups.
Metric details
False omission rate difference is a fairness evaluation metric that can help determine whether your asset produces biased outcomes.
Scope
The false omission rate difference metric evaluates generative AI assets and machine learning models.
Scores and values
The false omission rate difference metric score indicates the difference in the pervasiveness of false negatives among all predicted-negative transactions between the monitored and reference groups.
- Range of values: -1.0 to 1.0
- Best possible score: 0.0
- Values:
  - Under 0: Fewer false negatives in the monitored group
  - At 0: Both groups have equal false omission rates
  - Over 0: Higher rate of false negatives in the monitored group
Evaluation process
To calculate the false omission rate difference, confusion matrices are generated for the monitored and reference groups to identify the number of false negatives and true negatives in each group. These values are used to calculate the false omission rate for each group. The false omission rate of the reference group is then subtracted from the false omission rate of the monitored group to give the false omission rate difference.
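The following Python sketch illustrates this calculation. The label convention (1 for a favorable outcome, 0 for an unfavorable one), the function names, and the sample data are assumptions for illustration only, not part of the product.

```python
import numpy as np

def false_omission_rate(y_true, y_pred):
    """False omission rate (FOR): false negatives / all predicted-negative transactions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    predicted_negative = y_pred == 0              # transactions scored as negative
    false_negatives = np.sum(predicted_negative & (y_true == 1))
    total_predicted_negative = np.sum(predicted_negative)
    if total_predicted_negative == 0:
        return 0.0                                # no predicted negatives: treat FOR as 0 here
    return false_negatives / total_predicted_negative

def false_omission_rate_difference(y_true, y_pred, monitored):
    """FOR of the monitored group minus FOR of the reference group.

    `monitored` is a boolean array marking monitored-group transactions;
    all other transactions form the reference group.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    monitored = np.asarray(monitored)
    for_monitored = false_omission_rate(y_true[monitored], y_pred[monitored])
    for_reference = false_omission_rate(y_true[~monitored], y_pred[~monitored])
    return for_monitored - for_reference

# Hypothetical example: 1 = favorable label, 0 = unfavorable label
y_true    = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred    = np.array([0, 0, 1, 0, 0, 1, 1, 0])
monitored = np.array([True, True, True, True, False, False, False, False])
print(false_omission_rate_difference(y_true, y_pred, monitored))  # 2/3 - 0/2 ≈ 0.667
```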
Do the math
The following formula is used for calculating the false omission rate (FOR):

FOR = False negatives / (False negatives + True negatives)

The following formula is used for calculating the false omission rate difference:

False omission rate difference = FOR of monitored group - FOR of reference group
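As a hypothetical worked example (the counts are invented for illustration), suppose the monitored group has 8 false negatives and 32 true negatives, and the reference group has 3 false negatives and 57 true negatives:

FOR (monitored) = 8 / (8 + 32) = 0.20
FOR (reference) = 3 / (3 + 57) = 0.05
False omission rate difference = 0.20 - 0.05 = 0.15

The positive result indicates a higher rate of false negatives in the monitored group.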
Parent topic: Evaluation metrics