error_parity.evaluation

A set of functions to evaluate predictions on common performance and fairness metrics, optionally at a specified FPR or FNR target.

Based on: https://github.com/AndreFCruz/hpt/blob/main/src/hpt/evaluation.py

Functions

eval_accuracy_and_equalized_odds(y_true, ...)

Evaluates the accuracy and equalized odds of the given predictions.
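
A minimal usage sketch. The third positional argument (a sensitive-attribute array) and the (accuracy, equalized-odds violation) return pair are assumptions inferred from the summary above, not a confirmed signature:

    import numpy as np
    from error_parity.evaluation import eval_accuracy_and_equalized_odds

    rng = np.random.default_rng(42)
    y_true = rng.integers(0, 2, size=1_000)   # binary ground-truth labels
    y_pred = rng.integers(0, 2, size=1_000)   # binarized predictions
    group = rng.integers(0, 2, size=1_000)    # sensitive attribute (assumed argument)

    # Assumed return: (accuracy, equalized-odds violation).
    acc, eq_odds_violation = eval_accuracy_and_equalized_odds(y_true, y_pred, group)
    print(f"accuracy={acc:.3f}, equalized-odds violation={eq_odds_violation:.3f}")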

evaluate_fairness(y_true, y_pred, ...[, ...])

Evaluates fairness as the ratios between group-wise performance metrics.
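
A minimal sketch, assuming the sensitive attribute is passed as a third positional argument and the result is a dict of fairness metrics; neither is a confirmed signature:

    import numpy as np
    from error_parity.evaluation import evaluate_fairness

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1_000)
    y_pred = rng.integers(0, 2, size=1_000)
    group = rng.integers(0, 3, size=1_000)    # three demographic groups (assumed argument)

    # Assumed return: a dict of ratios between group-wise performance metrics.
    fairness_metrics = evaluate_fairness(y_true, y_pred, group)
    for name, value in fairness_metrics.items():
        print(f"{name}: {value:.3f}")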

evaluate_performance(y_true, y_pred)

Evaluates the provided predictions on common performance metrics.
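
A minimal sketch; the two positional arguments match the signature shown above, while the dict return type is an assumption:

    import numpy as np
    from error_parity.evaluation import evaluate_performance

    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, size=1_000)
    y_pred = rng.integers(0, 2, size=1_000)

    # Assumed return: a dict of common performance metrics
    # (e.g., accuracy, TPR, FPR).
    performance = evaluate_performance(y_true, y_pred)
    print(performance)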

evaluate_predictions(y_true, y_pred_scores)

Evaluates the given predictions on both performance and fairness metrics.
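
A minimal sketch. Note that, per the signature above, this function takes real-valued scores rather than binarized predictions; the sensitive_attribute keyword name below is an assumption, as is any keyword for the FPR/FNR target mentioned in the module summary:

    import numpy as np
    from error_parity.evaluation import evaluate_predictions

    rng = np.random.default_rng(2)
    y_true = rng.integers(0, 2, size=1_000)
    y_scores = rng.random(size=1_000)         # real-valued scores in [0, 1]
    group = rng.integers(0, 2, size=1_000)

    # The sensitive_attribute keyword name is an assumption; per the summary,
    # the result covers both performance and fairness metrics.
    results = evaluate_predictions(y_true, y_scores, sensitive_attribute=group)
    print(results)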

evaluate_predictions_bootstrap(y_true, ...)

Computes bootstrap estimates of several metrics for the given predictions.
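
A minimal sketch; every argument past y_true (the score array and the sensitive_attribute keyword) is an assumption, as is the shape of the returned bootstrap estimates:

    import numpy as np
    from error_parity.evaluation import evaluate_predictions_bootstrap

    rng = np.random.default_rng(3)
    y_true = rng.integers(0, 2, size=1_000)
    y_scores = rng.random(size=1_000)
    group = rng.integers(0, 2, size=1_000)

    # Assumed return: per-metric bootstrap statistics
    # (e.g., mean and confidence-interval bounds).
    bootstrap_results = evaluate_predictions_bootstrap(
        y_true, y_scores, sensitive_attribute=group)
    print(bootstrap_results)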