Multi-Factor Decision Making Math [CF Article]

What do people do about that? They multiply by weighting factors that get results they think are reasonable. But that isn’t actually a way of making decisions. They’re using their intuition, common sense or something else. That is how the decision is actually being made and that pre-existing conclusion is biasing their decision making process.

I’m not sure about “But that isn’t actually a way of making decisions.”

I think that the big quote is making these points:
(note: these are not in the same order as the quote; also, the list isn’t meant to be exhaustive)

  • there’s a common decision making process
  • which is based on ppl’s intuition / common sense
  • that method is implicit and is what they actually use (and it also biases the process)
  • a similar explicit method is to pick weighting factors that makes seemingly-sensible answers come out
  • the explicit method is basically what the common method is doing, but people don’t actually use the explicit method (they use the implicit one)

“But that isn’t actually a way of making decisions.” is literally saying that the subject (that: the explicit method) isn’t a decision making method. But I think its intention is to say: the explicit method isn’t what is actually done by ppl most of the time. I’m not sure tho – like I think that explicit-weighting-factors is a method (well, once you add a way to like rank or pick a winner). Isn’t the Analytic Hierarchy Process an example of that?

I think grammar and text analysis trees would locally help you here; however, I doubt this passage is a constraint. You didn’t say goals or relate these comments to goals, though, so it’s hard to know. But I’m guessing you should focus on trying to understand the major themes of the post.

Yeah, I don’t think it’s a constraint either.

The goal was to figure out if I was reading “But that isn’t actually a way of making decisions.” correctly[1]. I’ll do a tree and mb post to grammar questions.


  1. I’m not sure that i know how to pick precise-enough goals. ↩︎

What? Why?

The issue isn’t goal precision. It’s connecting local goals to the bigger picture.

Your post was written more like a correction than a question.

That wasn’t the goal I would have guessed from your post at all.

You started with:

I read that as you saying you thought Elliot was maybe wrong, not as you saying you weren’t sure if you were reading/understanding the article correctly.

This is also written like a correction. You aren’t asking for help or saying you are having trouble understanding something. You seem to be trying to correct Elliot and tell him that he didn’t make his point properly, he used the wrong words for what he meant.


Yeah, you’re right. There’s something going on there but I have some pre-existing commitments now. I will come back to this.

Notes/comments on “Averaging Factors” section:

Summary: Averaging doesn’t help because we can’t simplify and combine the terms, so it has the same issues as addition.

With median, you’d get the value for a single dimension. “And you’d need to be able to rank all the factors from different dimensions, in a single ranking, in order to find the median.”

(My thought) I’m not sure I quite follow this point. It’s unclear if Elliot is saying you just need to be able to say e.g. z > x > y or if he’s saying you need to be able to put the whole term (e.g. 5z) in a single ranking in order to find the median. I would think it is the latter, since even if you know 1z is greater than 1y, that doesn’t tell you if 5z is greater than 8y. You’d need more data about the relationship there to rank them for certain.
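One way to see the latter reading: taking a median requires a single total ordering, and with terms from different dimensions that ordering only exists once you have conversion rates. A quick Python sketch (the rates here are invented for illustration):

```python
import statistics

# Terms from three dimensions; coefficients alone don't order them
# (knowing 1z > 1y doesn't tell you whether 5z > 8y).
terms = [(5, "z"), (8, "y"), (2, "x")]

# Invented conversion rates into y-units: 1x = 1.5y, 1z = 2y.
rates = {"x": 1.5, "y": 1.0, "z": 2.0}

# Only after converting everything into one dimension is there a
# total ordering, and hence a median.
values_in_y = [coef * rates[dim] for coef, dim in terms]
print(statistics.median(values_in_y))  # → 8.0 (i.e. the 8y term)
```

Without the `rates` table there’s no principled way to sort the list – that’s the extra data about the relationships that seems to be needed.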

Elliot says (in a statement that covers both median and mode) “[b]oth of those would get a result in a single dimension rather than combining factors from different dimensions.” I guess you could just look at the coefficients to figure out the most frequent value. But then sometimes there might be no mode (if all numbers are different). And if you had say 2x, 2y, 3z, how does saying the mode is 2 help? And in that case it’s actually referring to two dimensions and you’re dropping the z…anyways I found this part a bit confusing.

You don’t look at just coefficients for mode. Mode means:

Statistics the value that occurs most frequently in a given set of data.

Consider this data: 2x, 2x, y, 2y, 3y, 4y, 5y, 5z, 5x, 5w.

The mode is 2x because it’s the one thing that there are 2 of.
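A quick sketch of that in Python, treating each whole term (coefficient plus dimension) as one opaque data point, rather than looking at coefficients alone:

```python
from collections import Counter

# Same data as above; each term is one data point.
data = ["2x", "2x", "y", "2y", "3y", "4y", "5y", "5z", "5x", "5w"]

mode, count = Counter(data).most_common(1)[0]
print(mode, count)  # → 2x 2 (the only term that appears twice)
```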

ah okay that makes sense, thanks.

I liked this point/example and thought it was especially clear (emphasis added):

Conversions work in both directions (if you can convert miles to feet, you can also convert feet to miles). Suppose you can convert both friendliness and cuteness to goodness. That means you can convert from cuteness to goodness and then convert that from goodness to friendliness. This implies a conversion from cuteness to friendliness.

In other words, suppose you come up with these two conversions (f is friendliness, c is cuteness, and g is goodness):

f = 4g
c = 2g

Mathematically, that implies (using substitution):

f = 2c

If you can’t accurately compare friendliness and cuteness directly, you also can’t convert them both to the same type of goodness.
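A tiny sketch of that substitution in Python (the function names are mine; the rates are the ones from the quote):

```python
# From the quoted example: f = 4g and c = 2g.
def goodness_to_friendliness(g):
    return 4 * g

def cuteness_to_goodness(c):
    return c / 2  # inverse of c = 2g

# Chaining the two conversions implies a direct cuteness -> friendliness rate:
def cuteness_to_friendliness(c):
    return goodness_to_friendliness(cuteness_to_goodness(c))

print(cuteness_to_friendliness(1))  # → 2.0, i.e. f = 2c
```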

In the section “Converting Dimensions to Goodness”, Elliot talks about how you can rank some stuff - like the cuteness of a pet. And then you can come up with approximate scores that reflect both the rankings and also the distance between the rankings. This is fine if you keep in mind that it’s an approximation. Quote:

We can’t measure cuteness of a pet either, but we can rank it. Which pet do we think is the cutest, second cutest, etc., in our opinion? And we can assign the pets numeric cuteness scores which are compatible with the rankings (highest ranked pet has the highest score, second highest ranked pet has the second highest score, etc.). We can also make cuteness scores approximate our opinion of how different the pets are. E.g. if the best two pets are similarly cute, and the third pet is much less cute, we might score them 85, 80 and 40. Those scores are fine as long as we know they aren’t real measurements like inches. The cuteness scores just mean two things: the rank ordering we assigned the pets and a rough approximation of how close or far apart the pets are on cuteness.

My comment: Yeah or like if you think of the tastiness of foods and you say “pizza>>>>cheeseburgers>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>boiled cabbage”, that’s fine as a ranking and also as an approximate statement of your opinion about boiled cabbage tastiness relative to the other two foods, but if you started to try to treat each “>” as like a precise thing corresponding to something in reality in the same way an inch does, you’d get in trouble.

Yeah. One workaround I use for copy/pasting mixed text/math is this (basically TextSniper with math support):

It has a limited number of free uses per month tho, and is like, an extra tool to worry about that you need to install instead of just being able to copy/paste normally.

I did some editing to try to make that part easier to understand:

What do people do about that? They multiply by weighting factors that get results they think are reasonable. But that isn’t actually a way of making decisions. How do they know what’s reasonable? They must be using their intuition, common sense or something else other than weighted factor summing. So the weighted factor summing method doesn’t work as a self-contained solution to decision making. It relies on pre-existing opinions reached some other way. People often make the math (or non-numerical estimate) come out to fit what they already think (without realizing they’re doing it). For example, college rankings often start with the pre-existing idea that Harvard is good and then give high weightings to whatever factors Harvard is good at so that Harvard-like schools come out on top, which seems like a reasonable conclusion to people who already believed that Harvard is one of the best schools.

Any method involving arbitrary choices (like what unit conversions or weights to make up) runs into a major problem: You have no good way to make an arbitrary choice unless you have pre-existing knowledge of what a good answer is.
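To make the weight-fitting concrete, here’s a toy sketch (the schools, factors, and numbers are all invented):

```python
# Made-up factor scores (0-100) for two schools.
schools = {
    "Harvard-like": {"prestige": 95, "affordability": 20},
    "State U":      {"prestige": 60, "affordability": 90},
}

def ranking(weights):
    def total(name):
        return sum(weights[f] * v for f, v in schools[name].items())
    return sorted(schools, key=total, reverse=True)

# With neutral weights, the "wrong" school wins...
print(ranking({"prestige": 1, "affordability": 1}))  # → ['State U', 'Harvard-like']

# ...so the weights get adjusted until the pre-existing conclusion comes out.
print(ranking({"prestige": 5, "affordability": 1}))  # → ['Harvard-like', 'State U']
```

The weights aren’t derived from anything; they’re chosen to reproduce an answer already believed on other grounds.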

This reminds me of the point expressed here (and I’m sure other places) about how induction doesn’t work as an explanation of how people think.

So we have this graph and we’re connecting the dots. Induction says: connect the dots and what you get is supported, it’s a good theory. How do I connect them? It doesn’t say. How do people do it? They will draw a straight line, or something close to that, or make it so you get a picture of a cow, or whatever else seems intuitive or obvious to them. They will use common sense or something – and never figure out the details of how that works and whether they are philosophically defensible and so on.

People will just draw using unstated theories about which types of lines to prefer. That’s not a method of thinking, it’s a method of not thinking.

They will rationalize it. They may say they drew the most “simple” line and that’s Occam’s razor. When confronted with the fact that other people have different intuitions about what lines look simple, they will evade or attack those people. But they’ve forgotten that we’re trying to explain how to think in the first place. If understanding Occam’s razor and simplicity and stuff is a part of induction and thinking, then it has to be done without induction. So all this understanding and stuff has to come prior to induction. So really the conclusion is we don’t think by induction, we have a whole method of thinking which works and is a prerequisite for induction. Induction wouldn’t solve epistemology, it’d presuppose epistemology.

So the connection I have in mind is this: weighting factors according to reasonableness doesn’t work as a self-contained, general purpose solution to making decisions, since it relies on pre-existing opinions reached by means outside the weighted-factors decision-making method. Similarly, induction doesn’t work as a complete/general purpose explanation of how knowledge is created, since it apparently has to rely on ideas about simplicity, Occam’s razor, and other ideas that need to be created without induction.


The arguments in this article seem to refute judicial balancing tests!

One balancing test from American administrative procedure law applies to the question of due process of law, a consideration arising from the Fifth and Fourteenth Amendments to the Constitution. Due process questions concern what type of procedures are appropriate when the government takes away property or a privilege from an individual; the individual would argue that the government should have, for example, given them a hearing before taking away their driver’s license or cutting off their Social Security benefits. This balancing test weighs these considerations:

  1. Private interest affected by an official action taken by a government agency, official or non-governmental entity (company) acting as a governmental agency. (i.e., how important is the property or privilege that is being withheld or confiscated?)
  2. The risk of some deprivation being erroneously inflicted on the respondent through the process used or if no process is used. (i.e., does giving the person a hearing or whatever else they asked for actually make it less likely that the government will make some sort of error by giving the individual an opportunity to point out the government’s mistake?)
  3. The government’s interest in a specific outcome (for example, the government may say that giving a hearing is too expensive).

There’s no way to convert Importance of Private Interest Affected and Risk of Deprivation and Importance of Government’s Interest into some common factor. So the result IRL is that judges are just gonna go according to their intuition of what’s fair and right - so the supposed “test” isn’t doing much except maybe serving as a reminder list of things to consider when using their intuition.

If for factor 1 you had “Affects an important private interest?” and important private interest were defined in some reasonably objective and clear way, and then you did the same things with 2 and 3 (basically convert them to binary factors), then i think you could maybe have something inspired by the above test that could perhaps be applied in some reasonable and consistent way. But not as it stands.
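A sketch of what a binary-factor version might look like (the criteria and thresholds here are invented, not from the actual legal test):

```python
# Each factor reduced to a yes/no question with a stated cutoff.
def extra_process_warranted(case):
    important_interest = case["interest"] in {"driver's license", "benefits"}
    hearing_reduces_error = case["error_risk"] > 0.5     # invented threshold
    government_burden_low = case["hearing_cost"] < 1000  # invented threshold
    # With binary factors there's no cross-dimension conversion problem:
    return important_interest and hearing_reduces_error and government_burden_low

print(extra_process_warranted(
    {"interest": "benefits", "error_risk": 0.7, "hearing_cost": 500}))  # → True
```

It still involves judgment calls (picking the definitions and cutoffs), but at least those are made once, explicitly, instead of being re-intuited per case.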

Yes. That is actually reasonable. There’s nothing wrong with “here are a few tips about stuff to keep in mind”. That does help people make better decisions and judgments. People sometimes forget key stuff or don’t know which factors merit major emphasis. The balancing test helps guide that.

The issue is when the tips are presented as an actual decision making method. It’s problematic when it’s like 30% complete – some guidance but the judge has to figure out the majority of the matter himself – but it’s seen as 90% complete.


I like the new version.

I’ve thought about this most days since then, and I get a bit stuck. I have some ideas, though. I’m intending to make a linked post soon. (This reply is brief b/c of that getting-stuck-ness. It’s been a barrier to posting, tho, so I’m saying this in part to get past that)

I had an idea today about a convergence between a common thinking technique and the OP.

One of the ways I make ideas more manageable is to use a deliberately limited context – like say that 2 factors are equal, or some factor has no effect. That way, it’s easier to reason about fundamentals (and if you find a problem at that point then it’s probs generalizable). Like someone might say all else being equal, if x goes up then y goes down. I think this is a pretty common method.

If you do this with MFDMM (multi-factor decision making math – I guess abbreviation-pending), there are two important changes that can happen:

  1. Two factors can be set equal, i.e., their conversion constant is 1.[1] This is important when factors are multiplied or when conversion would otherwise matter. (nb: this includes ‘canceling out’ via division)
  2. A factor could be set to zero (i.e. has no effect), which makes a difference when things are being summed

Both cases can make some previously unanalyzable situation analyzable.
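A toy sketch of those two moves (the weights and numbers are invented):

```python
# Score is a weighted sum of two factors; the weights are the unknown
# conversion constants between dimensions.
def score(x, y, w1, w2):
    return w1 * x + w2 * y

a = (5, 2)  # option A: (x, y) factor values
b = (3, 9)  # option B

# Change 2: assume factor y has no effect (w2 = 0); the comparison
# collapses to one dimension and becomes analyzable.
print(score(*a, 1, 0) > score(*b, 1, 0))  # → True: A wins on x alone

# Change 1: assume the factors convert 1:1 (w1 = w2 = 1); the sum is
# now effectively in a single dimension too.
print(score(*a, 1, 1) > score(*b, 1, 1))  # → False: B wins when x and y count equally
```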

Anyway, this seemed like a notable convergence between an existing traditional method (which, AFAIK, didn’t have a good rule of thumb about when to do each of these moves) and MFDMM (which does).


  1. or they could be set to a fixed ratio, which is thus equal to the conversion constant. ↩︎