Recall Score
Recall, also known as Sensitivity, Hit Rate, or True Positive Rate, is a metric that measures the proportion of actual positives that are correctly identified as such.
In the context of a binary classification problem (where the outcomes are classified into one of two classes, labeled as positive (1) and negative (0)), recall is the proportion of true positive predictions (TP) out of all actual positive instances.
Mathematically, Recall is defined as:
Recall = True Positives / (True Positives + False Negatives)
Here,
- True Positives (TP) are the cases where the actual class of the observation is 1 (positive) and the prediction is also 1 (positive).
- False Negatives (FN) are the cases where the actual class of the observation is 1 (positive) but the prediction is 0 (negative).
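For example, if a model correctly flags 80 of 100 actual positive cases (TP = 80) and misses the other 20 (FN = 20), its recall is 80 / (80 + 20) = 0.8.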
So in simple terms, recall answers the question: "What proportion of actual positives was identified correctly?"
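To make the definition concrete, here is a minimal sketch of computing recall directly from its formula. The variable names (`y_true`, `y_pred`) and example labels are illustrative, not from any particular library:

```python
# Minimal sketch: computing recall directly from its definition.
# y_true holds the actual classes, y_pred the model's predictions
# (both encoded as 1 = positive, 0 = negative).
y_true = [1, 1, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]

# Count true positives (actual 1, predicted 1) and
# false negatives (actual 1, predicted 0).
tp = sum(1 for actual, pred in zip(y_true, y_pred) if actual == 1 and pred == 1)
fn = sum(1 for actual, pred in zip(y_true, y_pred) if actual == 1 and pred == 0)

recall = tp / (tp + fn)  # 4 / (4 + 1) = 0.8
print(f"TP={tp}, FN={fn}, Recall={recall:.2f}")
```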
Recall is a useful measure when the cost of False Negatives is high. For example, in medical testing, recall is often the metric of choice because failing to identify a positive case (e.g., failing to diagnose a disease) can have more serious consequences than a false positive.
However, recall alone does not provide a complete picture of a model's performance. It doesn't account for false positive errors. Therefore, it's often used in conjunction with Precision and F1 Score to provide a more comprehensive evaluation of a model's performance.
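In practice, these metrics are rarely computed by hand. Assuming scikit-learn is available, a sketch of evaluating recall alongside precision and F1 on the same example labels might look like this:

```python
from sklearn.metrics import recall_score, precision_score, f1_score

y_true = [1, 1, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]

# Recall: what proportion of actual positives was identified correctly?
print("Recall:   ", recall_score(y_true, y_pred))     # 0.8
# Precision: what proportion of positive predictions was actually positive?
print("Precision:", precision_score(y_true, y_pred))  # 0.8
# F1 Score: harmonic mean of precision and recall.
print("F1 Score: ", f1_score(y_true, y_pred))         # 0.8
```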