Evaluation metrics are used to measure the quality of statistical or machine learning models. Depending on the type of problem (e.g., regression, classification, clustering, etc.), different types of evaluation metrics may be used. These metrics provide a way to quantify the performance of a model and compare it against other models or benchmark performance.
Here are some commonly used evaluation metrics along with their respective math formulas:
Accuracy: This is used for classification problems. It is the ratio of correctly predicted observations to the total observations.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Here, TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives, respectively.
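The accuracy formula can be sketched directly from the four counts. A minimal illustration (the function name and counts below are made up for the example):

```python
def accuracy(tp, tn, fp, fn):
    # Ratio of correct predictions (true positives + true negatives)
    # to all predictions.
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative confusion-matrix counts for 100 observations:
# 40 TP, 45 TN, 5 FP, 10 FN -> 85 correct out of 100.
print(accuracy(40, 45, 5, 10))  # 0.85
```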
Precision: Also used for classification problems, precision is the ratio of correctly predicted positive observations to the total predicted positive observations.
Precision = TP / (TP + FP)
Recall (Sensitivity): This is the ratio of correctly predicted positive observations to all actual positives.
Recall = TP / (TP + FN)
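Precision and recall follow the same pattern, differing only in the denominator. A minimal sketch with illustrative counts:

```python
def precision(tp, fp):
    # Of everything predicted positive, how much was actually positive?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of everything actually positive, how much did we catch?
    return tp / (tp + fn)

# Illustrative counts: 40 TP, 10 FP, 10 FN.
print(precision(40, 10))  # 0.8
print(recall(40, 10))     # 0.8
```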
F1 Score: This is the harmonic mean of Precision and Recall. It balances the two in a single number and is pulled toward whichever of the two is lower.
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
Mean Absolute Error (MAE): Used for regression problems, MAE is the average of the absolute difference between the predicted and actual values.
MAE = (1/n) Σ |Yi - Ŷi|
Mean Squared Error (MSE): Also used for regression problems, MSE is the average of the squared difference between the predicted and actual values.
MSE = (1/n) Σ (Yi - Ŷi)^2
Root Mean Squared Error (RMSE): This is the square root of the mean of the squared differences between the predicted and actual values. It is also used in regression problems.
RMSE = sqrt[(1/n) Σ (Yi - Ŷi)^2]
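The three regression formulas above share the same skeleton: average a per-point error, optionally squared, optionally rooted. A compact sketch using made-up values:

```python
import math

def mae(y, yhat):
    # Mean of absolute differences.
    return sum(abs(a - p) for a, p in zip(y, yhat)) / len(y)

def mse(y, yhat):
    # Mean of squared differences.
    return sum((a - p) ** 2 for a, p in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    # Square root of MSE, back in the units of y.
    return math.sqrt(mse(y, yhat))

y    = [3.0, 5.0, 2.0, 7.0]  # illustrative actual values
yhat = [2.5, 5.0, 3.0, 6.0]  # illustrative predictions
print(mae(y, yhat))   # 0.625
print(mse(y, yhat))   # 0.5625
print(rmse(y, yhat))  # 0.75
```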
Area Under the ROC Curve (AUC-ROC): This metric is used for binary classification problems. It measures the two-dimensional area underneath the entire Receiver Operating Characteristic (ROC) curve (a plot of the true positive rate against the false positive rate at varying decision thresholds).
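AUC-ROC also has a useful probabilistic reading: it equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A brute-force sketch of that interpretation (not an efficient or library implementation; labels and scores are illustrative):

```python
def auc_roc(labels, scores):
    # AUC as the fraction of (positive, negative) pairs ranked correctly;
    # ties count as half a correct ranking.
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative labels and classifier scores:
print(auc_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```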
Each of these evaluation metrics has strengths and weaknesses, and none is universally best for all types of problems. The choice of evaluation metric depends on the specific objectives and requirements of the problem at hand.
The Mean Absolute Percentage Error (MAPE) is a statistical measure used to assess the accuracy of a forecasting method in predictive analytics. It expresses the forecast error as a percentage, making it a scale-independent indicator.
The MAPE is particularly useful when you want to compare the forecasting errors across different time series of different scales.
The formula for the MAPE for a time series is defined as:
MAPE = (1/n) Σ (|Yt - Ft| / |Yt|) × 100%
n is the number of fitted points,
Yt is the actual value at time t,
Ft is the forecasted value at time t,
|Yt - Ft| is the absolute error,
Σ represents the sum from t=1 to t=n (i.e., over all data points).
Note: Because it involves taking the absolute value of the percentage errors, MAPE ignores the direction of the error. Thus, an underprediction of 10% and an overprediction of 10% both contribute the same amount (i.e., 10%) to the MAPE.
Another important thing to keep in mind is that MAPE is not defined for time series involving zero values as this would involve division by zero in the formula. Alternative error metrics should be used in such cases.
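The formula, including the guard against zero actual values, can be sketched as follows (function name and data are illustrative):

```python
def mape(actual, forecast):
    # Mean absolute percentage error, as a percentage.
    if any(a == 0 for a in actual):
        # Division by zero: MAPE is undefined for series containing zeros.
        raise ValueError("MAPE is undefined when an actual value is zero")
    n = len(actual)
    return (100.0 / n) * sum(abs(a - f) / abs(a)
                             for a, f in zip(actual, forecast))

# Errors of 10%, 10%, and 0% average to about 6.67%:
print(round(mape([100, 200, 400], [110, 180, 400]), 4))  # 6.6667
```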
The Mean Absolute Error (MAE) is a statistical measure used to quantify the accuracy of predictions in regression analysis and forecasting. Unlike the Mean Squared Error (MSE), which squares the differences between the predicted and actual values before averaging them, the MAE averages the absolute values of these differences. As a result, the MAE weights each error in direct proportion to its magnitude and ignores its direction, whereas the MSE penalizes large errors disproportionately.
The mathematical formula for the MAE is:
MAE = (1/n) Σ |Yi - Ŷi|
n is the total number of data points or observations.
Yi is the actual value of the data point i.
Ŷi is the predicted value of the data point i.
|Yi - Ŷi| is the absolute difference between the actual and predicted values.
Σ represents the sum over all data points from i=1 to i=n.
In the context of predictive modeling, a smaller MAE indicates a model that makes predictions closer to the actual values, and is therefore better at prediction. However, like any measure of prediction error, the MAE should be used in conjunction with other measures to fully assess a model's predictive accuracy.
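The difference in how MAE and MSE treat large errors can be seen with a single outlier. A small sketch with made-up data:

```python
def mae(y, yhat):
    # Mean of absolute differences: errors count linearly.
    return sum(abs(a - p) for a, p in zip(y, yhat)) / len(y)

def mse(y, yhat):
    # Mean of squared differences: large errors dominate.
    return sum((a - p) ** 2 for a, p in zip(y, yhat)) / len(y)

actual  = [10, 10, 10, 10]
clean   = [11, 9, 11, 9]   # every error has magnitude 1
outlier = [11, 9, 11, 19]  # one error has magnitude 9

print(mae(actual, clean), mse(actual, clean))      # 1.0 1.0
print(mae(actual, outlier), mse(actual, outlier))  # 3.0 21.0
```

One large error triples the MAE here but multiplies the MSE twenty-one-fold, which is why the two metrics can rank the same pair of models differently.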