What about ‘comparisonism’ or something? (Edit: I realize I said ‘comparitivism’ before, which is pretty close – this has a different take.)
Comparison can mean orderable (gt/eq/lt) or merely distinguishable (eq) – I only mean the first one here. The idea is: ppl try to rank things (which requires orderability) and they try to assign numerical values (which implies orderability). I think you’re against both of those – e.g. both the intuitive ‘more true’ and the explicit assignment of credence values.
But yes/no isn’t orderable – you can’t sort a list of booleans (unless you convert with context, like C treating them as ints, or adopt a convention like true > false).
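A minimal sketch of this point: Python, like C, happens to treat booleans as integers (False == 0, True == 1), so sorting them “works” only because the language has quietly imposed exactly that kind of convention. The key function below is a made-up example of supplying a convention explicitly.

```python
# Yes/no verdicts aren't inherently orderable. Python sorts them only
# because bool is a subclass of int (False == 0, True == 1) -- i.e.
# the language bakes in a "true > false" convention for you.
verdicts = [True, False, True, False]
print(sorted(verdicts))  # [False, False, True, True]

# To get a different order you must state a convention yourself,
# e.g. a key that says "true ranks above false":
print(sorted(verdicts, key=lambda v: 0 if v else 1))  # [True, True, False, False]
```

Either way, the order comes from the convention, not from the booleans themselves.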
Maybe something like ‘sortism’ or ‘orderabilityism’ could work (though I don’t like either of them specifically).
I don’t object to ranking things in one dimension like looking at cats and ranking what you find cutest. Even scoring in one dimension is fine if you don’t try to convert to another dimension and don’t believe the numbers are precise, rigorous or measured (it basically just means a ranking plus some rough comparisons – you can use score numbers to indicate if the difference between candidates is big, medium or small).
If I score cat cuteness as 82, 80 and 20, that gives you information that isn’t contained in the rankings 1st, 2nd, 3rd. But if I were trying to say every cuteness point is worth $5 of price, then I think those numbers would be problematic. They’re kinda made up and inaccurate. I think they accurately represent that cats 1 and 2 are close on cuteness and 3 is ugly, but I don’t think they’re very suitable or accurate for use as weighted factors. There are some issues where we can use numbers more precisely, but there are a lot that are more like cat cuteness numbers.
The generally hard problem for decision making or idea evaluation is taking more than one non-binary factor into account. (You can rank the cats on cuteness and also take several pass/fail factors into account at the same time: you just exclude cats with any fails and then pick from the remaining cats based on the one-dimensional ranking. But the moment you have two non-binary factors, in different dimensions, it’s not straightforward what to do. That’s the problem people try to use weighted factors to address.)
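The easy case – binary filters plus one non-binary ranking – can be sketched directly. The attribute names (`vaccinated`, `house_trained`) are invented pass/fail factors for illustration; the structure is what matters.

```python
from dataclasses import dataclass

@dataclass
class Cat:
    name: str
    cuteness: int        # the single non-binary dimension (rough score)
    vaccinated: bool     # pass/fail factor (hypothetical)
    house_trained: bool  # pass/fail factor (hypothetical)

def pick(cats):
    # Step 1: exclude any cat that fails a binary factor.
    passing = [c for c in cats if c.vaccinated and c.house_trained]
    # Step 2: choose from the remainder by the one-dimensional ranking.
    return max(passing, key=lambda c: c.cuteness, default=None)

cats = [
    Cat("cat_1", 82, True, True),
    Cat("cat_2", 80, True, False),  # fails house_trained -> excluded
    Cat("cat_3", 20, True, True),
]
print(pick(cats).name)  # cat_1
```

Note there is no step where two different dimensions get added or traded off – that’s precisely what this easy case avoids, and what a second non-binary factor would force you to confront.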
Trying to approach the issue from the beginning again: a thing I object to, and want a name for, is indecisive, partial arguments – or in other words, judging by amounts of goodness. Lots of ppl make indecisive arguments without any quantities or numbers, but I consider it basically equivalent: what can an argument do if it isn’t either reaching a decisive conclusion or increasing some quantity that can add up to a conclusion? Non-numerical type ppl will object, though.
And another thing is trying to add up factors from different dimensions. This uses factors as indecisive, partial arguments, and it’s trying to judge by amounts of goodness.
What ppl should do instead is focus on error correction. Errors aren’t partial: something is or is not an error. Or, if you define goals or standards vaguely enough to change that, and blur the line between correctness and error, then I think you’re in trouble.