A Summary of TAO (Tree Alternating Optimization), a Decision Tree Regressor

AI Maverick
Feb 15, 2022


If you work in the machine learning area, you have heard of decision tree regressors, and you may already have implemented one. There are different types of tree-building algorithms, such as CART or C5.0, that you can use in your own work.

TAO was introduced in 2018 by Miguel Á. Carreira-Perpiñán and Pooya Tavallali.

Recently, while working on a new algorithm, I needed to implement a decision tree regressor, and that is how I came across this new kind of tree.
The decision tree regressor I am going to talk about in this article is Tree Alternating Optimization (TAO). This tree brings two advantages: fewer nodes and lower error.

For instance, if you build a random forest (RF) from TAO trees, the resulting ensemble ends up smaller than an RF built from CART trees, since each TAO tree needs fewer nodes for the same accuracy.

In contrast to CART, TAO does not learn the regression or classification tree in a greedy top-down fashion. CART grows the tree from the root down to the leaves, choosing at each node the binary split that minimizes an impurity measure. TAO, instead, takes an already built tree and revisits it iteratively, sweeping over the nodes depth by depth from the leaves back up to the root and re-optimizing every node's parameters, both the internal splits and the leaf predictors, while the rest of the tree stays fixed.
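
To make that alternating pass concrete, here is a minimal sketch of one TAO iteration for regression. I assume axis-aligned splits, constant leaf values, and squared loss; all the names (`Node`, `tao_pass`, `reaches`, and so on) are mine, the brute-force split search stands in for the binary classifier the papers actually fit (they mostly use sparse oblique, i.e. linear, splits), and the "don't care" handling of tied points is omitted. Treat it as an illustration of the idea, not the authors' implementation.

```python
import numpy as np

class Node:
    """Binary tree node: axis-aligned split at internal nodes, constant value at leaves."""
    def __init__(self):
        self.left = None       # left child (None means this node is a leaf)
        self.right = None      # right child
        self.feature = 0       # split feature index (internal nodes)
        self.threshold = 0.0   # split threshold (internal nodes)
        self.value = 0.0       # constant prediction (leaves)

    def is_leaf(self):
        return self.left is None

def route(node, x):
    """Follow the splits from `node` down to a leaf and return its prediction."""
    while not node.is_leaf():
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.value

def reaches(root, target, x):
    """True if the point x passes through `target` on its way to a leaf."""
    node = root
    while True:
        if node is target:
            return True
        if node.is_leaf():
            return False
        node = node.left if x[node.feature] <= node.threshold else node.right

def levels(root):
    """Group nodes by depth, so each level can be optimized in one sweep."""
    out, frontier = [], [root]
    while frontier:
        out.append(frontier)
        frontier = [c for n in frontier for c in (n.left, n.right) if c is not None]
    return out

def tao_pass(root, X, y):
    """One TAO iteration: re-optimize every node, deepest level first."""
    for level in reversed(levels(root)):
        for node in level:
            # Reduced problem: only the points that currently reach this node matter.
            mask = np.array([reaches(root, node, x) for x in X])
            if not mask.any():
                continue  # a dead node: no training points reach it
            Xn, yn = X[mask], y[mask]
            if node.is_leaf():
                # Optimal constant leaf under squared loss is the mean target.
                node.value = yn.mean()
            else:
                # Pseudo-label each point with the child that would serve it better,
                # then refit the split to agree with those labels as much as possible.
                go_left = np.array(
                    [(route(node.left, x) - t) ** 2 <= (route(node.right, x) - t) ** 2
                     for x, t in zip(Xn, yn)])
                best = (node.feature, node.threshold, -1)
                for f in range(Xn.shape[1]):
                    for thr in np.unique(Xn[:, f]):
                        agree = int(((Xn[:, f] <= thr) == go_left).sum())
                        if agree > best[2]:
                            best = (f, thr, agree)
                node.feature, node.threshold, _ = best
```

In practice you would start from an initial tree (for example, one grown by CART) and call `tao_pass` repeatedly until the training error stops improving.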

One interesting fact I found about this kind of tree is that it can reshape itself during training: branches that end up receiving no training points become dead, and TAO simply deletes them, so the final tree can be smaller than the one it started from.
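
Here is a hedged sketch of that pruning step, reusing the `Node` class from the sketch above: any split where one side receives no training points is collapsed into the surviving child. The function name is mine, not from the papers.

```python
def prune_dead(node, X):
    """Collapse splits where one child receives none of the points in X."""
    if node.is_leaf():
        return node
    go_left = X[:, node.feature] <= node.threshold
    if not go_left.any():   # left branch is dead: promote the right child
        return prune_dead(node.right, X)
    if go_left.all():       # right branch is dead: promote the left child
        return prune_dead(node.left, X)
    node.left = prune_dead(node.left, X[go_left])
    node.right = prune_dead(node.right, X[~go_left])
    return node

# Usage: root = prune_dead(root, X_train)
```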


References

Miguel Á. Carreira-Perpiñán. 2020. The Tree Alternating Optimization (TAO) Algorithm: A New Way to Learn Decision Trees and Tree-Based Models.

Miguel Á. Carreira-Perpiñán and Pooya Tavallali. 2018. Alternating Optimization of Decision Trees, with Application to Learning Sparse Oblique Trees. In Advances in Neural Information Processing Systems (NeurIPS), S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), Vol. 31. MIT Press, Cambridge, MA, 1211–1221.

