AUC Precision-Recall
The Area Under the Curve (AUC) of the Precision-Recall curve is a metric used to evaluate the performance of a binary classification algorithm, especially in cases where the classes are imbalanced. It summarizes the trade-off between the true positive rate (Recall) and the positive predictive value (Precision) for a predictive model across different probability thresholds.
Unlike the ROC AUC (Receiver Operating Characteristic Area Under the Curve), which is calculated from the true positive rate and the false positive rate, the Precision-Recall AUC is particularly useful on highly skewed or imbalanced datasets, because it places more emphasis on the classifier's ability to correctly identify the minority (positive) class.
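As a minimal sketch of how this metric is typically computed, assuming scikit-learn is available (the synthetic dataset, class weights, and logistic regression model below are illustrative assumptions, not part of the metric's definition):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (auc, average_precision_score,
                             precision_recall_curve, roc_auc_score)
from sklearn.model_selection import train_test_split

# Illustrative imbalanced dataset: roughly 5% positives (the weights,
# model, and sample size here are assumptions for this sketch).
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # probability of the positive class

# Precision and Recall at every threshold, then the area under that curve.
precision, recall, _ = precision_recall_curve(y_test, scores)
pr_auc = auc(recall, precision)
# average_precision_score is a closely related step-wise summary of the curve.
ap = average_precision_score(y_test, scores)

print(f"PR AUC: {pr_auc:.3f}, average precision: {ap:.3f}, "
      f"ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```

On data this imbalanced, the ROC AUC will often look flattering while the Precision-Recall AUC exposes how much precision the model actually achieves on the minority class.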
The Precision-Recall curve is a plot of Precision (y-axis) against Recall (x-axis) at different thresholds, analogous to the ROC curve. The AUC measures the two-dimensional area underneath the entire Precision-Recall curve, across the full range of Recall from 0 to 1.
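Building on the sketch above, the curve can be plotted with matplotlib (an assumed dependency); the dashed baseline drawn here is the no-skill line discussed next, whose height is the positive-class prevalence:

```python
import matplotlib.pyplot as plt

# Reuses `precision`, `recall`, and `y_test` from the sketch above.
no_skill = y_test.mean()  # positive-class prevalence = height of the no-skill line
plt.plot(recall, precision, label="classifier")
plt.hlines(no_skill, 0, 1, linestyles="--", label=f"no skill ({no_skill:.2f})")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend()
plt.show()
```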
A model with perfect skill is depicted as a point at (1,1). A skillful model is represented by a curve that bows towards (1,1), above the flat line of no skill. The larger the AUC, the better the performance of the model. An area of 1.0 represents a model that made all predictions perfectly. Note that, unlike ROC AUC, the no-skill baseline is not 0.5: a random classifier's Precision at every Recall is roughly the proportion of positive examples in the dataset, so its Precision-Recall AUC is roughly that proportion.
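This baseline is easy to check directly. In the small sketch below (the 5% prevalence and sample size are arbitrary choices for illustration), random scores yield an average precision near 0.05 rather than 0.5:

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
y_true = (rng.random(100_000) < 0.05).astype(int)  # ~5% positives
random_scores = rng.random(y_true.size)            # scores carry no signal

# For a no-skill model, the summary converges to the positive prevalence
# (~0.05 here), not to 0.5 as it would for ROC AUC.
print(average_precision_score(y_true, random_scores))
```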