Accuracy is a very straightforward and commonly used metric for evaluating classification models in machine learning.

Accuracy is defined as the proportion of true results (both true positives and true negatives) among the total number of cases examined. Put simply, it is the number of correct predictions made by the model divided by the total number of predictions.

Mathematically, it's calculated as:

Accuracy = (Number of Correct Predictions) / (Total Number of Predictions)

Or, in the context of binary classification problems (where each outcome is assigned to one of two classes, labeled positive (1) or negative (0)):

Accuracy = (True Positives + True Negatives) / (True Positives + True Negatives + False Positives + False Negatives)

Where:

True Positives (TP): cases where the actual class is 1 (positive) and the prediction is also 1 (positive)
True Negatives (TN): cases where the actual class is 0 (negative) and the prediction is also 0 (negative)
False Positives (FP): cases where the actual class is 0 (negative) but the prediction is 1 (positive)
False Negatives (FN): cases where the actual class is 1 (positive) but the prediction is 0 (negative)
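
To make the formula concrete, here is a minimal Python sketch (assuming scikit-learn is available; the label values are made up purely for illustration) that computes accuracy both directly and from the four counts above:

```python
# Minimal sketch: accuracy from labels and from confusion-matrix counts.
# The label values below are illustrative, not taken from the text.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual classes
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model predictions

# Accuracy computed directly from the labels
print(accuracy_score(y_true, y_pred))        # 0.75

# The same value reconstructed from TP, TN, FP, FN
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print((tp + tn) / (tp + tn + fp + fn))       # 0.75
```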

Accuracy is an appropriate measure when the target variable classes in the data are nearly balanced. However, it's not a good metric to use with imbalanced datasets. Here's why:

Consider a dataset with 950 negative instances and 50 positive instances. A naive model that predicts every instance to be negative would still be 95% accurate. Yet this model would be entirely useless at identifying the positive instances, which may well be the events of interest.
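
A quick plain-Python sketch of this scenario, using the same numbers as above:

```python
# The imbalanced-data pitfall: 950 negatives, 50 positives, and a "model"
# that always predicts the negative class.
y_true = [0] * 950 + [1] * 50
y_pred = [0] * 1000

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)   # 0.95 -- looks impressive, yet not a single positive was caught
```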

So, while accuracy can give a general idea of how well a model is doing, it's important to use it along with other metrics like precision, recall, F1 score, ROC-AUC, and log loss to get a comprehensive understanding of a model's performance.
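
Continuing the imbalanced example, here is a hedged sketch of how those complementary metrics expose the problem (scikit-learn is assumed to be installed, and the predicted probabilities are made up for illustration):

```python
# Complementary metrics for the always-negative classifier on the 950/50 dataset.
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score, log_loss)

y_true = [0] * 950 + [1] * 50
y_pred = [0] * 1000               # hard predictions: always "negative"
y_prob = [0.05] * 1000            # made-up predicted probability of the positive class

print(precision_score(y_true, y_pred, zero_division=0))  # 0.0 -- no positives predicted at all
print(recall_score(y_true, y_pred))                      # 0.0 -- none of the 50 positives found
print(f1_score(y_true, y_pred, zero_division=0))         # 0.0
print(roc_auc_score(y_true, y_prob))                     # 0.5 -- no better than random ranking
print(log_loss(y_true, y_prob))                          # ~0.20
```

Accuracy alone reports 95% here, while every other metric makes it clear the model has learned nothing about the positive class.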

Example:

Let's consider a binary classification model which classifies emails as either "spam" (positive class) or "not spam" (negative class).

Suppose we have 1000 pre-labeled emails and use this data to evaluate the model. The results are as follows:

True Positives (TP): The model correctly predicted "spam" for 200 emails.
True Negatives (TN): The model correctly predicted "not spam" for 600 emails.
False Positives (FP): The model incorrectly predicted "spam" for 50 emails. These were actually "not spam".
False Negatives (FN): The model incorrectly predicted "not spam" for 150 emails. These were actually "spam".

Now, let's calculate the accuracy:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Accuracy = (200 + 600) / (200 + 600 + 50 + 150)
Accuracy = 800 / 1000
Accuracy = 0.8

So, the accuracy of our model is 0.8, or 80%. This means our model correctly predicted the class for 80% of the emails in our evaluation dataset.
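
The same arithmetic can be checked in a couple of lines of Python, using the counts from the example:

```python
# Verifying the spam example: counts taken directly from the text above.
tp, tn, fp, fn = 200, 600, 50, 150

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)   # 0.8
```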

Keep in mind that, as mentioned earlier, accuracy is only a reliable measure when the classes are reasonably balanced. In our example, if most of the emails were "not spam" and only a few were "spam", and the main purpose of the model was to identify "spam" emails, then this 80% accuracy would be much less meaningful. Metrics like precision, recall, or the F1 score would be more informative in such a scenario.
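
For the same spam counts, here is a short sketch of what precision, recall, and F1 look like; note that recall comes out well below the 80% accuracy, which is exactly the kind of weakness accuracy alone can hide:

```python
# Precision, recall, and F1 for the spam example, from the same counts.
tp, fp, fn = 200, 50, 150

precision = tp / (tp + fp)                                  # 200 / 250 = 0.80
recall    = tp / (tp + fn)                                  # 200 / 350 ≈ 0.571
f1        = 2 * precision * recall / (precision + recall)   # ≈ 0.667

print(precision, recall, f1)
```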