Decision Trees
Decision trees recursively split the input space into regions, choosing each split to minimize the impurity of the target variable within the resulting regions. For classification problems, impurity is commonly measured with Gini impurity or entropy; for regression problems, it is typically measured with variance (equivalently, the mean squared error around the region's mean).
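To make these impurity measures concrete, here is a minimal pure-Python sketch of the three quantities mentioned above (the function names are our own, not from any particular library):

```python
import math

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def entropy(labels):
    """Shannon entropy (in bits) of the class distribution."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def variance(values):
    """Population variance, i.e. mean squared error around the mean."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / n
```

For a perfectly balanced binary node such as `[0, 0, 1, 1]`, `gini` returns 0.5 and `entropy` returns 1.0 bit; a pure node gives 0 for both, which is why splits that separate the classes are preferred.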
For example, in a classification problem, the splitting criterion at each node might be chosen to minimize the Gini impurity:
Gini(t) = 1 - Σ_i [p(i|t)]^2
where t is the subset of training examples reaching that node, i ranges over the target classes, and p(i|t) is the proportion of examples in t that belong to class i.
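The node-splitting step this describes can be sketched as an exhaustive search over (feature, threshold) pairs that minimizes the weighted Gini impurity of the two children. This is a simplified illustration in pure Python (the helper names are our own), not the implementation of any particular library:

```python
def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(X, y):
    """Return (weighted_gini, feature_index, threshold) for the split
    minimizing the size-weighted Gini impurity of the two children.

    X is a list of feature rows; a split sends rows with
    X[i][feature] <= threshold left and the rest right."""
    best = None
    n = len(y)
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            left = [y[i] for i in range(n) if X[i][j] <= t]
            right = [y[i] for i in range(n) if X[i][j] > t]
            if not left or not right:
                continue  # skip degenerate splits
            w = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
            if best is None or w < best[0]:
                best = (w, j, t)
    return best
```

On a toy one-feature dataset `X = [[1], [2], [3], [4]]`, `y = [0, 0, 1, 1]`, the search finds the threshold 2, which separates the classes exactly and drives the weighted impurity to 0. A full tree builder would apply this recursively to each child until a stopping criterion (depth, minimum node size, or zero impurity) is met.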