The F2 score is a measure of a test's accuracy. It is a specific case of the more general F-beta score, which combines precision and recall into a single number. With beta set to 2, the score weighs recall higher than precision, attributing more importance to false negatives.

The general formula for F-beta score is:

F-beta = (1 + beta^2) * (precision * recall) / ((beta^2 * precision) + recall)

When beta is 2, we get the F2 score formula:

F2 = 5 * (precision * recall) / ((4 * precision) + recall)
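To make the formula concrete, here is a minimal Python sketch; the function name `f2_score` is illustrative, not a library API:

```python
def f2_score(precision: float, recall: float) -> float:
    """F2 = 5 * (precision * recall) / (4 * precision + recall)."""
    if precision == 0 and recall == 0:
        return 0.0  # guard against division by zero
    return 5 * (precision * recall) / (4 * precision + recall)

# A classifier with perfect recall but mediocre precision still scores
# well, because F2 weights recall more heavily:
print(f2_score(precision=0.5, recall=1.0))  # ≈ 0.833
```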

The F2 score is used when you want to tune your model towards recall. That is, when you want to minimize false negatives.

If you have a classification problem where false negatives are more detrimental than false positives, the F2 score might be the best metric to use. It gives more importance to recall than precision.
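In practice, precision and recall come from the raw confusion counts, which can then be plugged into the F2 formula. A small sketch with made-up counts (the TP, FP, FN values are hypothetical):

```python
def f2_from_counts(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)  # precision = TP / (TP + FP)
    recall = tp / (tp + fn)     # recall    = TP / (TP + FN)
    return 5 * (precision * recall) / (4 * precision + recall)

# With 8 true positives, 4 false positives, and 2 false negatives:
# precision = 8/12 ≈ 0.667, recall = 8/10 = 0.8
print(round(f2_from_counts(8, 4, 2), 3))  # 0.769
```

Note that the score (0.769) lands closer to the recall (0.8) than to the precision (0.667), as expected from the recall-heavy weighting.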

On the other hand, if false positives are more important, you would give more importance to precision, and use an F0.5 score instead (beta=0.5). The traditional F1 score (beta=1) considers precision and recall equally important.

Note: The beta parameter determines the weight of recall in the combined score. beta < 1 lends more weight to precision, while beta > 1 favors recall. As beta -> 0, only precision counts; as beta -> inf, only recall counts.
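The effect of beta can be seen by evaluating the general formula at one fixed precision and recall. A sketch, with the helper name `fbeta` chosen for illustration:

```python
def fbeta(precision: float, recall: float, beta: float) -> float:
    """General F-beta: (1 + b^2) * p * r / (b^2 * p + r)."""
    b2 = beta ** 2
    return (1 + b2) * (precision * recall) / (b2 * precision + recall)

# With precision = 0.9 and recall = 0.6, a larger beta pulls the score
# toward the (lower) recall, a smaller beta toward the (higher) precision:
for beta in (0.5, 1, 2):
    print(beta, round(fbeta(0.9, 0.6, beta), 3))
# 0.5 -> 0.818   (F0.5, precision-leaning)
# 1   -> 0.72    (F1, balanced)
# 2   -> 0.643   (F2, recall-leaning)
```

Libraries such as scikit-learn expose the same quantity computed directly from labels (`sklearn.metrics.fbeta_score` with a `beta` argument), so in most workflows you would not hand-roll this.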