https://pubsonline.informs.org/doi/epdf/10.1287/ited.2013.0124
Most real-life decisions involve multiple criteria, yet many business and management university courses, MBA included, do not deal explicitly with this topic within any quantitative module. It is a remarkable fact that a number of current operations research and management sciences textbooks do not cover the topic of multiple criteria decision analysis (e.g., Hillier and Lieberman 2010, Powell and Baker 2011).
(In the original, the two books mentioned are links to where they appear in the References section at the end.)
This was written in 2013. I wonder if he means that these books actually don’t talk about multi-criterion decision making at all, or that they just don’t give a (flawed) mathematical method for doing it like AHP.
This paper basically says that adding weighted factors is highly flawed but everyone is doing it anyway. It says we need non-linear weightings and multiplication instead. It says multiplying factors is not a new idea but has been ignored for too long.
The paper says people routinely weight factors twice, first in a normalization step and then in a weighting step. This leads to some problems. Different choices of normalization, with the same weights, can lead to dramatically different rankings for options.
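Here's a tiny Python sketch of that normalization problem. The numbers are made up by me (not from the paper); the point is just that two ordinary normalizations, with identical weights, rank the same options differently:

```python
# A hedged sketch with made-up numbers (mine, not the paper's): the same three
# options and the same weights, scored under two common normalizations,
# produce different winners.

options = {
    "A": [100, 10],
    "B": [90, 100],
    "C": [95, 20],
}
weights = [0.6, 0.4]  # identical weights in both cases

def total(normalized):
    return {name: sum(w * s for w, s in zip(weights, scores))
            for name, scores in normalized.items()}

# Normalization 1: divide each criterion by its column maximum.
col_max = [max(vals[j] for vals in options.values()) for j in range(2)]
by_max = {name: [v / col_max[j] for j, v in enumerate(vals)]
          for name, vals in options.items()}

# Normalization 2: min-max scale each criterion to [0, 1].
col_min = [min(vals[j] for vals in options.values()) for j in range(2)]
by_minmax = {name: [(v - col_min[j]) / (col_max[j] - col_min[j])
                    for j, v in enumerate(vals)]
             for name, vals in options.items()}

print(sorted(total(by_max).items(), key=lambda kv: -kv[1]))
# B first: roughly [('B', 0.94), ('C', 0.65), ('A', 0.64)]
print(sorted(total(by_minmax).items(), key=lambda kv: -kv[1]))
# A first: roughly [('A', 0.6), ('B', 0.4), ('C', 0.34)]
```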
Some systems people use for combining weighted factors change the ranks between options A and B based on whether option C is included in the analysis or not. Adding or removing other options can cause A to go from worse than B to superior to B. That is bad!
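Here's a sketch of that rank reversal effect. Again the numbers are mine, just to illustrate: with max-normalization and fixed weights, adding option C flips which of A and B scores higher:

```python
# A hedged sketch with made-up numbers (mine, not the paper's): under
# max-normalization with fixed weights, adding option C reverses the
# A-vs-B ordering.

weights = [0.5, 0.5]

def rank(options):
    col_max = [max(vals[j] for vals in options.values()) for j in range(2)]
    scores = {name: sum(w * v / m for w, v, m in zip(weights, vals, col_max))
              for name, vals in options.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank({"A": [9, 1], "B": [5, 5]}))
# B beats A: roughly [('B', 0.78), ('A', 0.6)]
print(rank({"A": [9, 1], "B": [5, 5], "C": [1, 50]}))
# now A beats B: roughly [('C', 0.56), ('A', 0.51), ('B', 0.33)]
```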
He also points out that simple additive weighted systems are unable to ever select (4.5, 4.5) as the best option when the alternatives are (1, 9) and (9, 1), even though a well-rounded option may be better despite having a slightly lower unweighted total. Weighting either the first or second factor more will never fix this; it just makes one of the polarized options win by more (there's a small sketch demonstrating this below the quote). This was called the "linearity trap" in a 1982 paper; it's not new. Broadly, the paper says weighted-sum systems have a tendency to select unbalanced options over balanced ones, contrary to how human preferences often are (the suggested solutions are extra decision rules or non-linear weighting functions). In a real-life example from Poland:
Organizers of the tenders soon discovered that they were forced to select the offer that is cheapest and worst in quality, or the best in quality but most expensive [the decision making system didn’t let them pick moderate options]
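Here's the linearity trap as a quick sketch, using the paper's example values (4.5, 4.5), (1, 9) and (9, 1); the code itself is mine:

```python
# A small sketch of the "linearity trap": no linear weight w in [0, 1] ever
# makes the balanced option (4.5, 4.5) the winner against (1, 9) and (9, 1).

balanced, lopsided_a, lopsided_b = (4.5, 4.5), (1, 9), (9, 1)

def weighted_sum(option, w):
    return w * option[0] + (1 - w) * option[1]

for i in range(101):
    w = i / 100
    best = max([balanced, lopsided_a, lopsided_b],
               key=lambda option: weighted_sum(option, w))
    assert best != balanced  # the balanced option never comes out on top

print("No weight in [0, 1] picks the balanced option.")
```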
Another example is diamonds. Their price depends primarily on four factors: carat (weight), cut, colour, and clarity. An additive weighted model works poorly because to be really valuable a diamond can’t be bad at any factor. A multi-factor weighted multiplicative model does better (according to research using 257 real diamond prices).
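To make the multiplicative idea concrete, here's a hedged sketch of what such a model can look like. The functional form, base price, exponents, and 1-10 grade scales below are placeholders I made up for illustration; they are not the paper's fitted model or its coefficients:

```python
# A hedged illustration of a multiplicative (exponent-weighted) value model.
# All numbers here are placeholders, not fitted to real diamond data.

def diamond_value(carat, cut, colour, clarity,
                  base=2000.0, exponents=(1.8, 0.6, 0.5, 0.7)):
    """Multiplicative model: value = base * carat^a * cut^b * colour^c * clarity^d.

    cut, colour and clarity are treated as numeric grades on a 1-10 scale
    (an assumption for illustration). Because the factors multiply, a bad
    score on any single factor drags the whole value down, matching the point
    that a really valuable diamond can't be bad at anything.
    """
    a, b, c, d = exponents
    return base * carat**a * cut**b * colour**c * clarity**d

print(round(diamond_value(1.0, 9, 9, 9)))  # well-rounded stone
print(round(diamond_value(1.0, 9, 9, 1)))  # same stone with poor clarity: far less valuable
```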
When multiplying criteria, simple weightings of factors can be done with exponents. And, contrary to addition, multiplication tends to favor balanced options (e.g. for multiplication without weightings, (4.5,4.5) wins significantly vs (1,9) or (9,1)). Note that this sort of favoring doesn’t come up with CF’s multiplication of only 1s and 0s.
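Here's how exponents act as weights in a multiplicative model, reusing the (4.5, 4.5) vs (1, 9)/(9, 1) example. The specific exponent values are mine, just for illustration:

```python
# A small sketch of exponents-as-weights in a multiplicative (weighted product)
# model. Unlike the weighted sum, it favors the balanced option.

def weighted_product(option, exponents):
    result = 1.0
    for value, exponent in zip(option, exponents):
        result *= value ** exponent
    return result

options = {"balanced": (4.5, 4.5), "lopsided_a": (1, 9), "lopsided_b": (9, 1)}

# Unweighted multiplication (equal exponents): the balanced option wins.
print({name: weighted_product(o, (1, 1)) for name, o in options.items()})
# balanced: 20.25 vs 9.0 for each lopsided option

# Mildly unequal exponents still favor the balanced option.
print({name: round(weighted_product(o, (1.2, 0.8)), 2) for name, o in options.items()})
# balanced: 20.25, lopsided_a: ~5.8, lopsided_b: ~13.96
```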
The paper uses the term “units of goodness”, in quotations, to refer to how multiple judges on a panel may score options in essentially different units even if using the same scale (like 1-10). This is like how some teachers will give 100% grades and others think nothing is perfect and won’t give 100% – those teachers are effectively grading with different units or on different scales, despite both appearing to be grading in the same way, from 0% to 100%. Grades from those two teachers are not directly comparable but many people would incorrectly directly compare them.
The paper independently overlaps with my position by favoring multiplication over addition for combining factors. It does not consider avoiding weighting or combining factors, as my approach does, nor does it consider issues like binary factors, focus, breakpoints, bottlenecks and excess capacity. Non-linear weightings (aka conversions between dimensions) don't address the fundamental problems I raised with factors being in different dimensions. It doesn't matter what complex conversion aka weighting function you use; apples aren't oranges. I do agree that non-linear weightings make more sense than linear weightings in many scenarios. (The paper points out that many things have diminishing returns as you get more of them, and that weightings can be curves. I think spectrums often have breakpoints/discontinuities rather than being smooth curves.)
Multiplying non-linearly weighted factors is problematic because the weighting part is essentially converting their units to a generic goodness dimension. If you assume that's fine, then whether it's better to add the resulting goodness factors or to multiply them depends on the situation. The paper does have some good points about the advantages of multiplication for many real-world scenarios (ignoring the qualitative difference, aka dimension conversion, problem). However, multiplication is also imperfect, and there is no single perfect answer here: how best to combine numeric factors, even when they can be combined, depends on your goals and context.