Decision trees are top-down, recursive, divide-and-conquer classifiers. [1] Building a decision tree generally involves two main tasks: tree induction and tree pruning. Tree induction takes a set of pre-classified instances as input, decides which attribute is best to split on, splits the dataset accordingly, and recurses on the resulting subsets until all training instances are classified. While building the tree, the goal is to split on the attributes that create the purest possible child nodes, which keeps to a minimum the number of splits needed to classify every instance in the dataset. Purity is measured by the concept of information, which relates to how much would need to be known about a previously unseen instance in order to classify it correctly.
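A minimal sketch of this purity measure, using Shannon entropy and information gain as the information metric (the toy dataset and attribute names below are hypothetical, chosen only for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits.
    0.0 means a perfectly pure set; higher values mean more mixing."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, attr_index):
    """Reduction in entropy achieved by splitting rows on one attribute."""
    parent = entropy(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr_index], []).append(label)
    weighted_child = sum(len(part) / len(labels) * entropy(part)
                         for part in partitions.values())
    return parent - weighted_child

# Hypothetical toy dataset: (outlook, windy) -> play?
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["yes", "yes", "no", "no"]

# Outlook separates the classes perfectly, so induction would split on it first.
print(information_gain(rows, labels, 0))  # 1.0 bit
print(information_gain(rows, labels, 1))  # 0.0 bits
```

Induction would then recurse on each partition, choosing the highest-gain attribute at every node until the subsets are pure.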

A completed decision tree model can be overly complex, contain unnecessary structure, and be difficult to interpret. Tree pruning is the process of removing this unnecessary structure from a decision tree in order to make it more efficient, more readable for humans, and often more accurate as well. The increased accuracy is due to pruning's ability to reduce overfitting.
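One common way to realize this is reduced-error pruning: working bottom-up, replace a subtree with a majority-class leaf whenever doing so does not hurt accuracy on a held-out validation set. A minimal sketch under assumed data structures (the `Node` class, the toy tree, and the validation data below are hypothetical):

```python
from collections import Counter

class Node:
    def __init__(self, attr=None, branches=None, label=None, train_labels=()):
        self.attr = attr                  # attribute index this node splits on
        self.branches = branches or {}    # attribute value -> child Node
        self.label = label                # class label if this is a leaf
        self.train_labels = train_labels  # training labels that reached this node

def classify(node, row):
    # Assumes every attribute value in `row` was seen during training.
    while node.label is None:
        node = node.branches[row[node.attr]]
    return node.label

def accuracy(root, rows, labels):
    return sum(classify(root, r) == y for r, y in zip(rows, labels)) / len(labels)

def prune(node, root, val_rows, val_labels):
    """Reduced-error pruning: bottom-up, collapse a subtree into a
    majority-class leaf whenever validation accuracy does not drop."""
    if node.label is not None:
        return
    for child in node.branches.values():
        prune(child, root, val_rows, val_labels)
    before = accuracy(root, val_rows, val_labels)
    saved_attr, saved_branches = node.attr, node.branches
    node.label = Counter(node.train_labels).most_common(1)[0][0]
    node.attr, node.branches = None, {}
    if accuracy(root, val_rows, val_labels) < before:
        # Pruning hurt validation accuracy: restore the subtree.
        node.label = None
        node.attr, node.branches = saved_attr, saved_branches

# Hypothetical overfit tree: the split on attribute 1 under "rain"
# memorizes a noisy training instance.
rain = Node(attr=1,
            branches={"no": Node(label="no"), "yes": Node(label="yes")},
            train_labels=("no", "no", "yes"))
root = Node(attr=0,
            branches={"sunny": Node(label="yes"), "rain": rain},
            train_labels=("yes", "yes", "no", "no", "yes"))

val_rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
val_labels = ["yes", "yes", "no", "no"]

prune(root, root, val_rows, val_labels)
# The noisy subtree under "rain" collapses to a "no" leaf,
# while the useful split at the root survives.
```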


  1. Mayo, Matthew. “Machine Learning Key Terms, Explained.” KDnuggets, 2016.

Decision Tree (last edited 2018-03-12 00:59:08 by notAndrey)
