F1 Score
The F1 score is another important metric used in machine learning to evaluate binary classification models, and it's especially useful when dealing with imbalanced datasets.
The F1 score is the harmonic mean of precision and recall, two metrics that are also commonly used to evaluate classification models.
Before explaining F1 Score in detail, let's first understand precision and recall:
Precision is the fraction of relevant instances among the retrieved instances. In other words, it's the number of true positive results divided by the number of all positive results returned by the classifier.
Precision = True Positives / (True Positives + False Positives)
Recall (or sensitivity) is the fraction of relevant instances that were retrieved out of the total number of relevant instances. It's the number of true positive results divided by the number of all samples that should have been identified as positive.
Recall = True Positives / (True Positives + False Negatives)
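To make the two definitions concrete, here is a minimal Python sketch that computes precision and recall directly from confusion-matrix counts. The tp, fp, and fn values are hypothetical and chosen only to keep the arithmetic simple.

```python
# Minimal sketch: precision and recall from raw confusion-matrix counts.
# The counts below (tp, fp, fn) are hypothetical values for illustration.

tp = 40  # true positives: positive samples correctly predicted positive
fp = 10  # false positives: negative samples wrongly predicted positive
fn = 20  # false negatives: positive samples wrongly predicted negative

precision = tp / (tp + fp)  # 40 / 50 = 0.8
recall = tp / (tp + fn)     # 40 / 60 ≈ 0.667

print(f"Precision: {precision:.3f}")  # Precision: 0.800
print(f"Recall:    {recall:.3f}")     # Recall:    0.667
```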
Now, the F1 score is defined as:
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
By combining precision and recall into one metric, the F1 score accounts for both false positives and false negatives. It ranges from 0 to 1, with 1 indicating perfect precision and recall and 0 being the worst.
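Continuing the hypothetical counts from the snippet above, here is a short sketch that applies the harmonic-mean formula:

```python
# Minimal sketch: F1 as the harmonic mean of precision and recall,
# using the same hypothetical counts as the previous snippet.

precision = 40 / 50  # tp / (tp + fp) = 0.8
recall = 40 / 60     # tp / (tp + fn) ≈ 0.667

f1 = 2 * (precision * recall) / (precision + recall)
print(f"F1 score: {f1:.3f}")  # F1 score: 0.727
```

Note that the harmonic mean is pulled toward the lower of the two values, so a model cannot achieve a high F1 score by excelling at only one of precision or recall.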
The F1 score is particularly useful if you have an uneven class distribution. If the costs of false positives and false negatives are very different, it's better to look at both precision and recall, or at the F1 score.
In other words, if the cost of missing positive instances (low recall) or of wrongly predicting negatives as positives (low precision) is high, it's better to use the F1 score, as it takes both of these errors into account. This is why it is commonly used in information retrieval for measuring search, document classification, and query classification performance.
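To see why the F1 score is informative on an uneven class distribution, consider this sketch using scikit-learn's accuracy_score and f1_score (the toy labels are made up for illustration): a classifier that always predicts the majority class looks excellent by accuracy but scores 0 on F1, because it never identifies a single positive instance.

```python
# Illustrative sketch: accuracy can mislead on an imbalanced dataset,
# while the F1 score exposes the problem. Toy labels are hypothetical.
from sklearn.metrics import accuracy_score, f1_score

# 95 negatives and 5 positives: a heavily imbalanced dataset.
y_true = [0] * 95 + [1] * 5

# A trivial classifier that always predicts the majority (negative) class.
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))             # 0.95 -- looks great
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0  -- no positives ever found
```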