# Linear Cohen Kappa

Linear Cohen's Kappa is a linearly weighted form of the **Weighted Cohen's Kappa**, a metric used to measure agreement between two raters when the ratings are on an ordinal scale. It extends the standard Cohen's Kappa with a weight matrix that accounts for the fact that some disagreements are more serious than others.

While the standard Cohen's Kappa measures the agreement of two raters well for nominal (categorical) scales, ordinal scales often require distinguishing between disagreements of different severity.

The calculation for Linear Cohen's Kappa is similar to the regular Cohen's Kappa, but instead of only considering whether ratings agree or disagree, it also considers the degree of disagreement.
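Concretely, the weighted kappa replaces the simple agree/disagree tally with weighted sums over the confusion matrix (standard formulation):

$$
\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\, O_{ij}}{\sum_{i,j} w_{ij}\, E_{ij}}
$$

where \\(O_{ij}\\) is the observed count of items rated \\(i\\) by one rater and \\(j\\) by the other, \\(E_{ij}\\) is the count expected by chance (from the two raters' marginal totals), and \\(w_{ij}\\) is the disagreement penalty assigned to the category pair \\((i, j)\\).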

For example, in a five-point rating scale, a rating of 5 vs a rating of 4 may be seen as a smaller disagreement than a rating of 5 vs a rating of 1.

The weight given to each type of disagreement can be customized based on the problem at hand. In a linearly weighted kappa, the weight for each type of disagreement is proportional to the distance between the categories.
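As a sketch of how the linear weighting works, the function below computes the metric from scratch with NumPy; the linear weight for categories \\(i\\) and \\(j\\) on a \\(k\\)-point scale is \\(|i - j| / (k - 1)\\). The function name and the example ratings are illustrative, not from any particular library.

```python
import numpy as np

def linear_weighted_kappa(y_true, y_pred, n_classes):
    """Illustrative from-scratch linearly weighted Cohen's kappa."""
    # Observed confusion matrix O[i, j]: items rated i by rater 1 and j by rater 2.
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Expected counts E under chance agreement: outer product of the
    # row and column marginals, scaled back to the total count N.
    N = O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / N
    # Linear weights: penalty proportional to the distance between categories.
    idx = np.arange(n_classes)
    W = np.abs(idx[:, None] - idx[None, :]) / (n_classes - 1)
    return 1.0 - (W * O).sum() / (W * E).sum()

# Hypothetical 3-point ratings with a single one-step disagreement.
kappa = linear_weighted_kappa([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 2, 1], 3)
```

Because the weight matrix appears in both the numerator and denominator, rescaling it (for example, dropping the \\(1/(k-1)\\) normalization) leaves the result unchanged.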

This version of Cohen's Kappa is especially useful when dealing with ordinal data, where the order of categories is important. It can provide a more nuanced understanding of the model's performance, beyond simply whether the predicted and actual categories are exactly the same.

For instance, in a medical diagnosis problem, predicting the correct severity of the disease (mild, moderate, severe) might be crucial, and misclassifying a severe disease as mild would be much more problematic than misclassifying it as moderate. Linear Cohen's Kappa can account for this kind of nuance.
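In practice you would typically not implement this by hand; scikit-learn's `cohen_kappa_score` accepts `weights="linear"`. A minimal sketch with hypothetical severity ratings (0 = mild, 1 = moderate, 2 = severe), assuming scikit-learn is installed:

```python
from sklearn.metrics import cohen_kappa_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 1, 2, 1]  # one "near miss": severe rated as moderate

unweighted = cohen_kappa_score(y_true, y_pred)                # all errors weigh the same
linear = cohen_kappa_score(y_true, y_pred, weights="linear")  # penalty grows with distance

print(unweighted)  # 0.75
print(linear)      # approximately 0.8 -- higher, since the only error is a near miss
```

The linear score is higher than the unweighted one here precisely because the single disagreement is between adjacent categories; a mild-vs-severe confusion would pull it down much further.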
