Multi-criteria decision making — TOPSIS
In this article, I discuss the TOPSIS method, a well-known approach for ranking options against multiple criteria. I chose this method because in machine learning we sometimes need to rank our results and observations to gain better insight before making a final decision.
TOPSIS
The TOPSIS method is a Multi-Criteria Decision-Making (MCDM) method that ranks options [1]. It builds on the concepts of an ideal solution and similarity to that ideal. The ideal solution is the best conceivable solution, which generally does not exist in practice; the method tries to find the option closest to it. To measure how similar an option is to the ideal and negative-ideal solutions, the distance of that option from each of them is computed. The options are then evaluated and ranked by the ratio of the distance from the negative-ideal solution to the total distance from the positive-ideal and negative-ideal solutions. The name TOPSIS is an acronym for Technique for Order of Preference by Similarity to Ideal Solution.
TOPSIS treats the m options as a geometric system of m points in n-dimensional space. The method is based on the idea that the chosen alternative should have the shortest distance from the positive-ideal solution and the longest distance from the negative-ideal solution. TOPSIS defines an index that combines similarity to the positive-ideal solution with remoteness from the negative-ideal solution, and then selects the alternative with the maximum similarity to the positive-ideal solution.
The closer an option lies to the ideal solution, the higher its rank. Since the ideal solution, best in every aspect, does not exist in practice, we can only approximate it: the similarity of each option is measured by its distances from the ideal and negative-ideal solutions.
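The distance-and-ratio idea above can be written compactly. As a sketch of the standard formulation (the symbols below are introduced here for illustration): if v_ij is the weighted normalized rating of option i on criterion j, and v_j+ and v_j- are the positive-ideal and negative-ideal values of criterion j, then

```latex
d_i^{+} = \sqrt{\sum_{j=1}^{n} \bigl(v_{ij} - v_j^{+}\bigr)^2},
\qquad
d_i^{-} = \sqrt{\sum_{j=1}^{n} \bigl(v_{ij} - v_j^{-}\bigr)^2},
\qquad
C_i = \frac{d_i^{-}}{d_i^{+} + d_i^{-}}, \quad 0 \le C_i \le 1 .
```

The similarity index C_i equals 1 only for the positive-ideal solution itself and 0 only for the negative-ideal solution, so options are ranked by descending C_i.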
Assumptions
A- The desirability of each criterion must be monotonic, either increasing or decreasing. Whether a criterion is qualitative or quantitative, its desirability should consistently increase or decrease as its value changes. Criteria must be monotonic so that the best available value can be taken as the positive ideal and the worst value as the negative ideal.
B- Criteria should be independent of one another (ideally with zero correlation).
C- The distance of each option from the positive-ideal and negative-ideal solutions is computed, typically as the Euclidean distance.
You can find related useful links, including the Python library, at the end of this article.
Advantages
- Decision making is possible with both positive (benefit) and negative (cost) criteria.
- You can add as many criteria as you need to the decision matrix.
- Simple and fast approach for considering different options and criteria.
- Qualitative criteria can be converted to quantitative values easily.
- The output is quantitative, and the ranking of the options is expressed numerically.
- The method works with distances: TOPSIS selects the option with the farthest distance from the worst option and the shortest distance from the best option.
pip install topsis2
TOPSIS method
1- Decision matrix: The first step is building the decision matrix, which consists of a set of criteria and options. The criteria are placed in columns and the options in rows, and each matrix cell evaluates one option against one criterion. To complete the matrix, the real value of each option for each criterion must be entered.
2- Scaling the decision matrix (normalization) using the norm method.
3- Applying the weights by multiplying the criteria weights, obtained from other methods such as expert judgment or entropy, by the normalized matrix.
4- Finding the worst (negative-ideal) and best (positive-ideal) alternatives. The type of each criterion (positive or negative) must be specified.
5- Calculating the distance of each option from the best and worst alternatives.
6- Calculating the similarity index and ranking the options.
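The six steps above can be sketched in plain NumPy. This is a minimal illustrative implementation, not the `topsis2` library's API; the function name, parameters, and the example data below are my own assumptions.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank options with TOPSIS (illustrative sketch, not the topsis2 API).

    matrix  : (m options x n criteria) decision matrix   -> step 1
    weights : n criterion weights
    benefit : True for benefit criteria (higher is better),
              False for cost criteria (lower is better)
    Returns the similarity index C_i for each option (higher = better).
    """
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)

    # Step 2: vector (norm) normalization of each column
    R = X / np.linalg.norm(X, axis=0)

    # Step 3: weighted normalized matrix
    V = R * w

    # Step 4: positive-ideal and negative-ideal values per criterion
    v_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))
    v_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))

    # Step 5: Euclidean distance of each option from the two ideals
    d_pos = np.linalg.norm(V - v_pos, axis=1)
    d_neg = np.linalg.norm(V - v_neg, axis=1)

    # Step 6: similarity index; rank options by descending score
    return d_neg / (d_pos + d_neg)

# Hypothetical example: three models scored on accuracy (benefit)
# and latency in ms (cost), with weights 0.6 and 0.4.
scores = topsis(
    [[0.9, 100],
     [0.8,  50],
     [0.7,  30]],
    weights=[0.6, 0.4],
    benefit=[True, False],
)
print(scores)  # one similarity index per option, each between 0 and 1
```

Ranking the options by descending score gives the final TOPSIS ordering; in this toy example the cost criterion's weight makes the low-latency third option come out on top.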
References
- [1] Hwang, Ching-Lai, and Kwangsun Yoon. “Methods for multiple attribute decision making.” Multiple attribute decision making. Springer, Berlin, Heidelberg, 1981. 58–191.