Tree-based algorithms are a family of supervised learning methods that split the input space into distinct regions according to criteria learned from the training data, typically splits that make the resulting regions as pure as possible with respect to the target. These algorithms use a decision tree as their model structure: each internal node of the tree corresponds to a decision (a "test") on one of the input variables, each branch represents an outcome of that test, and each leaf node represents a prediction for the target variable.
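The node/branch/leaf structure can be sketched as plain conditional logic. This is a hand-built illustrative tree, not a learned one; the feature names (`humidity`, `wind`) and labels are invented for the example.

```python
def predict(sample):
    """A tiny hand-built decision tree (for illustration only)."""
    # Internal node: test on the "humidity" input variable.
    if sample["humidity"] > 70:
        # Internal node: test on the "wind" input variable.
        if sample["wind"] == "strong":
            return "stay_in"   # leaf node: prediction
        return "play"          # leaf node: prediction
    return "play"              # leaf node: prediction

print(predict({"humidity": 80, "wind": "strong"}))  # stay_in
print(predict({"humidity": 40, "wind": "weak"}))    # play
```

Each `if` is an internal node, its two outcomes are the branches, and each `return` is a leaf. Learning algorithms such as CART choose these tests automatically from data rather than by hand.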

Two of the most common tree-based algorithms are Decision Trees and Random Forests. A Random Forest is an ensemble of decision trees, each trained on a bootstrap sample of the data; the trees' predictions are combined by majority vote (for classification) or averaging (for regression), which reduces the variance of any single tree.
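A minimal sketch of the random-forest idea, using only the standard library: each "tree" here is a depth-1 decision tree (a stump) fitted to a bootstrap resample of a toy 1-D dataset, and the forest predicts by majority vote. The dataset and all function names are invented for this sketch; real implementations also subsample features at each split and grow much deeper trees.

```python
import random
from collections import Counter

def fit_stump(xs, ys):
    # Depth-1 tree: one test "x <= t", two leaves (majority class per side).
    best_t, best_err, best_leaves = None, float("inf"), None
    for t in sorted(set(xs)):
        left  = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        lm = Counter(left).most_common(1)[0][0] if left else None
        rm = Counter(right).most_common(1)[0][0] if right else None
        err = sum(y != (lm if x <= t else rm) for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err, best_leaves = t, err, (lm, rm)
    return best_t, best_leaves

def stump_predict(stump, x):
    t, (lm, rm) = stump
    return lm if x <= t else rm

def fit_forest(xs, ys, n_trees=25, seed=0):
    # Each tree sees a bootstrap resample (sampling with replacement).
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]
        forest.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return forest

def forest_predict(forest, x):
    # Majority vote over the ensemble's predictions.
    votes = Counter(stump_predict(s, x) for s in forest)
    return votes.most_common(1)[0][0]

xs = [1, 2, 3, 10, 11, 12]
ys = ["a", "a", "a", "b", "b", "b"]
forest = fit_forest(xs, ys)
print(forest_predict(forest, 2))   # votes overwhelmingly "a"
print(forest_predict(forest, 11))  # votes overwhelmingly "b"
```

Because each stump sees a different resample, individual stumps can pick poor thresholds, but the vote across the ensemble is far more stable than any single tree.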

Tree-based methods have several advantages: they handle both numerical and categorical data, they are interpretable (a prediction can be traced as a sequence of tests), and they need little feature preprocessing such as scaling. However, they are prone to overfitting if not regularized, for example by limiting tree depth or pruning, and because their splits are axis-aligned they can struggle where the true decision boundary is very smooth or involves complex interactions in high-dimensional spaces.
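The overfitting risk can be caricatured in a few lines. Below, a "fully grown" 1-D tree (one leaf per training point, which behaves like nearest-neighbor lookup) memorizes noisy labels perfectly but generalizes worse than a single well-placed split. The data generator, the 20% noise rate, and the split at 50 are all invented for this sketch.

```python
import random
from collections import Counter

random.seed(1)

def make_data(n):
    # Toy 1-D data: true class is "a" for x < 50, "b" otherwise,
    # with 20% of labels flipped as noise.
    xs = [random.uniform(0, 100) for _ in range(n)]
    ys = ["a" if x < 50 else "b" for x in xs]
    ys = [("b" if y == "a" else "a") if random.random() < 0.2 else y
          for y in ys]
    return xs, ys

train_x, train_y = make_data(200)
test_x, test_y = make_data(200)

# Unregularized tree, caricatured: one leaf per training point, so it
# memorizes every noisy label (in 1-D this is nearest-neighbor lookup).
memorized = dict(zip(train_x, train_y))
def deep_tree(x):
    nearest = min(memorized, key=lambda t: abs(t - x))
    return memorized[nearest]

# Regularized tree, caricatured: a single split (depth 1).
def shallow_tree(x):
    return "a" if x < 50 else "b"

def accuracy(model, xs, ys):
    return sum(model(x) == y for x, y in zip(xs, ys)) / len(xs)

print(accuracy(deep_tree, train_x, train_y))    # 1.0: noise memorized
print(accuracy(deep_tree, test_x, test_y))      # drops on unseen data
print(accuracy(shallow_tree, test_x, test_y))   # simpler model holds up
```

The deep tree scores perfectly on the training set precisely because it has fit the noise; on fresh data the shallow, regularized model comes out ahead. Depth limits, minimum-samples-per-leaf constraints, and pruning all trade training accuracy for this kind of generalization.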