Help Me Coin New Philosophy Term

Yes. People don’t understand what the full issue is. I think naming it will help more than leaving it unnamed. I should explain it and name it.

The thing I want a name for is basically anything that is a score system or equivalent. I consider “strong” arguments equivalent because they’re worth a larger amount than weak arguments (whether that’s an amount of points, goodness, or whatever else). More broadly, it’s anything where people try to combine multiple factors into one overall evaluation. I don’t know of any factor-combining methods that aren’t, in some way, the same basic idea of weighting and adding stuff; all ways of combining factors into one evaluation seem to follow the same pattern.
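To make the pattern concrete, here’s a minimal sketch of a generic score system (my own illustration, with made-up factors and point values, not any particular system’s method):

```python
# A generic score system: each factor is worth some number of points,
# and the overall evaluation is just the sum. "Strong" arguments are
# simply factors worth more points than weak ones.
factors = {
    "expert testimony": +30,      # strong argument: many points
    "anecdotal report": +5,       # weak argument: a few points
    "known counterexample": -20,  # criticism: subtracts points
}

overall_score = sum(factors.values())
print(overall_score)  # 15 -> one combined evaluation of the idea
```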

Cost effectiveness analysis tries to figure out two factors in two different dimensions (e.g. benefit in quality adjusted life years that a charity intervention adds, and cost in dollars), but then it divides them instead of adding or subtracting them.

You can’t add (or subtract) miles plus hours. Dimensions do not combine by adding. But you can multiply (or divide) dimensions to get multi-dimensional units, like miles per hour. Most multi-dimensional units are not useful, but some are, including cost/benefit, length/time, length * length (area), length * length * length (volume), speed * mass (momentum), and Goldratt’s dollars * days for late orders (which fits the common pattern of multiplying an amount by a magnitude/weight/importance).
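A worked illustration of both points, with made-up numbers:

```python
# Dimensions combine by multiplying or dividing, not by adding.
miles, hours = 120.0, 2.0
# miles + hours  # meaningless: you can't add different dimensions
speed = miles / hours  # 60.0 miles per hour: a useful ratio

# Cost effectiveness analysis divides two dimensions instead of adding them:
qalys_added = 50.0       # benefit of a charity intervention, in QALYs
cost_dollars = 100_000.0
dollars_per_qaly = cost_dollars / qalys_added  # 2000.0 dollars per QALY

# Goldratt's dollar-days: amount * magnitude (order value * days late)
order_value, days_late = 5_000.0, 3.0
dollar_days = order_value * days_late  # 15000.0 dollar-days
```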

Philosophers often use the word credence to describe the weight they attach to ideas. Maybe credenceism would work.


Yeah, “credence” could maybe work. They also use “confidence” and “probability”, but I don’t think a single-word term based on either of those would work well.

New Oxford dict says credence is:

**1** belief in or acceptance of something as true: psychoanalysis finds little credence among laymen.
**•** the likelihood of something being true; plausibility: being called upon by the media as an expert lends credence to one’s opinions.

That’s actually pretty good for what I want. “plausibility” is another synonym people use for how good ideas are.

I wondered if “credence” and “credential” come from the same root word. They do:

from Latin credere “to believe, trust”

Which is apparently related to “credo” and having a “creed” and believing/trusting (having faith in) religion. So these concepts are connected with e.g. accepting intellectual authority. And the page on “credential” says:

Earlier in English as an adjective, “confirming, corroborating” (late 15c.).


Hey rationality friends, I just made this FAQ for the credence calibration game. So if you have people you’d like to introduce to it—for example, to get them used to thinking of belief strengths as probabilities—now is a good time :)

This Less Wrong mindset (credence calibration and thinking of belief strengths as probabilities) is one of the major things I want to criticize. I think similar ideas are widespread. They don’t all claim it’s actual mathematical probability, but I have criticisms either way. (I have extra criticism if they say it’s literally probability, because it’s a category error, or a metaphor, to apply probability to ideas instead of physical events or states. Even at LW they aren’t consistent about claiming it’s literally probability.)

http://acritch.com/credence-game/#credence

What is credence?

Credence is a measure of belief strength in percentage.

And they want to think of it in terms of what you’d bet, at what odds, on being right about an idea (often, I think, they’re assuming you care only about the expected value of the bet and ignore everything else, like bankroll and risk management).
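For example, the betting math they have in mind reduces to an expected value calculation like this (a sketch with made-up numbers):

```python
# Expected value of a bet at given odds, ignoring bankroll and risk
# management -- which is roughly what the betting framing assumes.
credence = 0.7   # claimed belief strength that the idea is right
stake = 100.0    # amount risked if wrong
payout = 150.0   # amount won if right (1.5:1 odds)

expected_value = credence * payout - (1 - credence) * stake
print(expected_value)  # 75.0 -> positive EV, so "take the bet"
```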

https://www.lesswrong.com/posts/iuLEjELoB2JcvgG5r/off-the-cuff-brangus-stuff?commentId=D7L9wo8D4yYmMZ6Sv

When your partner presents a new argument or piece of evidence, be honest about whether you have heard it before. If you have not, it should change your credence some. How much do you think? Write down your new credence. I don’t think you should worry too much about being a consistent Bayesian here or anything like that. Just move your credence a bit for each argument or piece of evidence you have not heard or considered, and move it more for better arguments or stronger evidence. You don’t have to commit to the last credence you write down, but you should think at least that the relative sizes of all of the changes were about right.

This is a nice example of not thinking in terms of criticism. It lets you essentially ignore any argument, without a rebuttal, by simply subtracting some points for it, then telling anyone who complains that you did factor in their argument but it didn’t change the conclusion because some other arguments were so strong. So instead of getting direct responses to arguments, you get indirect responses about how great some other stuff is. And sometimes these things are directly related: they think their argument is +500 points and your criticism of it takes away 300, because it’s actually a great criticism and you might be right. Note, btw, that you basically can’t win, because they certainly won’t award your criticism 1000 points. The best you could hope for is 500, but probably less, so they’ll still favor their thing some despite the refutation. Even 300 points for a criticism of something worth 500 is high – it means they think there’s a 60% chance you’re right and they’re wrong – yet they’re still at +200 in favor of their thing…

It’s also so awful when they’re like “I heard that before and already updated on it, so I won’t make any update when you bring it up now” and essentially just stonewall and ignore arguments. (Often your argument is significantly different from what they already knew, but even if it’s identical, it still makes sense to want a rebuttal instead of being told that the point scoring came out against you despite that argument being powerful.)

What he means by updating your credence each time you get new evidence/argument is to take the new evidence/argument into account as another factor in a multi-factor weighted score system. He’s saying: decide how much weight this factor is worth, then add it to (or subtract it from) the current total score to get a new, updated score.
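In code, the procedure he describes amounts to a running total, something like this sketch (my paraphrase with made-up deltas, not his actual algorithm):

```python
# "Move your credence a bit for each argument": each new argument or
# piece of evidence becomes a weighted delta added to a running score.
credence = 0.6  # current credence in the idea

new_arguments = [
    +0.10,  # strong evidence you hadn't heard: move credence up more
    +0.03,  # weak argument: move it up a little
    -0.05,  # a criticism: subtract some points
]

for delta in new_arguments:
    credence = min(1.0, max(0.0, credence + delta))  # keep within [0, 1]

print(round(credence, 2))  # 0.68 -> the new, updated score
```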

That the factors are all (or at least mostly) in different dimensions doesn’t stop people from trying to additively combine the factors like this. How do they combine factors from different dimensions? The basic method here is to 1) convert to generic goodness points (called credence or probability or whatever) and 2) normalize and weight the factors rather than using raw numbers. (Note: failing to normalize or weight factors doesn’t help. That step is there for good reason given the other steps and premises.)
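Here’s a minimal sketch of that two-step method, with hypothetical factors, scales, and weights:

```python
# Step 1: convert raw factors from different dimensions into generic
# 0-to-1 "goodness points". Step 2: weight the normalized values and sum.
raw = {"cost_dollars": 20.0, "safety_years_of_use": 200.0, "rating_stars": 4.5}
scales = {"cost_dollars": 100.0, "safety_years_of_use": 500.0, "rating_stars": 5.0}
weights = {"cost_dollars": 0.5, "safety_years_of_use": 0.3, "rating_stars": 0.2}

normalized = {
    # lower cost is better, so invert it
    "cost_dollars": 1 - raw["cost_dollars"] / scales["cost_dollars"],
    "safety_years_of_use": raw["safety_years_of_use"] / scales["safety_years_of_use"],
    "rating_stars": raw["rating_stars"] / scales["rating_stars"],
}

overall = sum(weights[k] * normalized[k] for k in weights)
print(round(overall, 2))  # 0.7 -> one score built from incommensurable dimensions
```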

Philosophers say stuff like

a measure of goodness that is additive over individual credences

They use a lot of similar terminology to what I use when trying to criticize them. They will talk about amounts of goodness using the awkward-but-generic term “goodness” that I’ve used a bunch.

https://philarchive.org/archive/JACTRB-5

[Some] epistemologists theorize about credence, a fine-grained attitude that represents one’s subjective probability or confidence level toward a proposition.

https://philarchive.org/archive/JACTRB-5

I believe both 1 + 1 = 2 and that it will be sunny tomorrow, but my attitude toward these propositions is not exactly the same—the former is more probable. To capture this, epistemologists appeal to another propositional attitude, called credence (also sometimes called partial belief or degree of belief, but see Moon, 2017).

Partial belief! I was just writing about “partial” arguments or partial strength arguments. And “degree of belief”. Exactly. I write a lot about “degree arguments” and trying to evaluate ideas using amounts or degrees.

I think I should quote some stuff like this.

https://philarchive.org/archive/JACTRB-5

Depending on how broad one’s notion of credence is, then, virtually no one defends credence-eliminativism. After all, we are more confident in some of our beliefs than in others, and it is not obvious how to capture this with a belief-only ontology. Further, if we understand credence as closely connected to degree of confidence, credal-eliminativism also requires revision of folk psychology (Eriksson & Hájek, 2007, p. 209). On a third view, neither belief nor credence exists, but, except for those skeptical of all intentional mental states (Churchland, 1981; Rosenberg, 1999, 2018), most will find this implausible.

This says basically that “virtually no one” (except skeptics, who barely exist and aren’t respected) rejects credences, because it’s “not obvious” how to capture some ideas being better or worse than others with any alternative. And there’s an ~obvious common sense idea that, e.g., I’m more confident that 2+2=4 than that tomorrow will be sunny. Or I’m more confident that some medicines are safe than others.

This confirms that my position is ~unique as a non-skeptical alternative, and that there’s really only the one rival view, which everyone believes, that I disagree with. Maybe credenceism is a good name.

Would it be credenceism or credencism?

Isms where the base word ends with ‘e’ that remove ‘e’:

  • racism
  • activism
  • absolutism
  • imagism
  • fallibilism

Isms that keep ‘e’:

  • ageism

Also, ‘c’ is soft when followed by ‘e’, ‘i’, or ‘y’. So in words ending in “ce”, you can change the ending to “cism” without screwing up the pronunciation of the ‘c’. However, the same rule applies to ‘g’, yet ‘ageism’ keeps the ‘e’ anyway.

Merriam Webster claims ‘agism’ is a less common but correct spelling of ‘ageism’.

There are a bunch of words ending in “eism” at Word List: Isms but they are ~all related to God (atheism, theism, deism, etc.)

So I’m thinking credencism and credencist. Can call it “weighing factors” often but make credencism the actual name. That seems to fit well with how academic philosophers talk about their own views.

*Accuracy and the Laws of Credence* is a book by Richard Pettigrew (a Bayesian academic who received a PhD in “Mathematical Logic” in 2008). Page 1, footnote 1:

I will talk of degrees of belief and credences interchangeably.

He also published a two-part series of articles called “An Objective Justification of Bayesianism”, plus “On the accuracy of group credences” and “What is justified credence?”

Credence seems to be a standard word they use as an alternative to belief. So instead of having a “justified true belief” they may want a “justified strong credence”. So “credence” replaces “belief” and “strong” replaces “true”. (“Strong” could be other words, like “high”. I’m not sure how standardized that part is, but I think “strong” and “weak” are common.)

The good things about credencism:

  1. can tell the difference between different levels of confidence, instead of just saying of two medicines – one used for hundreds of years and studied extensively, one newly invented – “I believe they are both safe”
  2. probability/betting math and statistics are nice sometimes
  3. not skeptical
  4. not infallibilist

The first one is the main issue.

CF has all 4 of these good points too. How you can get the first one while rejecting credences is the hard part that other people have no solution to.

How CF uses math/stats/etc is relatively simple: it’s just another type of argument. The conclusion of a math calculation is an idea that can be used in thinking like other ideas. So it’s allowed. It just isn’t believed to be fundamental to how epistemology works.

How does CF differentiate different confidences or solve a similar problem? How can CF fulfill the same need? It uses pass/fail grades for ideas. So your first thought may be “two ideas that both pass are equal, and two that both fail are equal, and there’s no way to differentiate two medicines that are both good enough to take (so pass not fail) but one is better”.

That’s the basic issue for why credencists think they’re right.

CF’s answer is to stop giving an idea a single evaluation.

If you only have one single evaluation for an idea, then you need degrees/probabilities/credences or you run into the problems credencists are concerned about.

The way out is to evaluate an idea multiple times. It gets more than one pass/fail grade. That lets two ideas, e.g. two medicines, be differentiated.

What do you evaluate ideas for? Normal issues are like “Is it true?” or “Is it probable?” which lead to giving one evaluation to an idea.

CF evaluates an idea+goal pair. Does this idea work at this goal? (To be more precise, you can add in the context and evaluate an idea+goal+context triple.)

CF rejects talking about “the goal” (very common phrasing) in favor of looking at multiple goals. (I think the usual split of attention for goal and solutions is like 10/90 or even more skewed, but I suspect it should be more like 50/50.)

Two good ideas can be differentiated because, although they both succeed at many goals, there are some goals which one idea passes at and the other fails at.

Instead of saying one idea scores higher than the other, it’s better to identify specifically what goals one idea passes at which the other fails at.
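Here’s a minimal sketch of this kind of multi-goal pass/fail evaluation (the medicines and goals are hypothetical):

```python
# CF-style evaluation: a pass/fail grade per (idea, goal) pair,
# instead of one overall score per idea. All examples are made up.
evaluations = {
    ("medicine A", "safe enough to take"): True,
    ("medicine B", "safe enough to take"): True,
    ("medicine A", "extensively studied safety record"): True,
    ("medicine B", "extensively studied safety record"): False,
}

ideas = ["medicine A", "medicine B"]
goals = ["safe enough to take", "extensively studied safety record"]

# Both medicines pass the first goal, so on a single-evaluation view
# they'd look equal. Differentiate them by naming goals where they differ.
for goal in goals:
    grades = {idea: evaluations[(idea, goal)] for idea in ideas}
    if grades["medicine A"] != grades["medicine B"]:
        print(f"differs at: {goal} -> {grades}")
```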

This viewpoint leads to some new challenges and difficulties. For example, there are infinitely many logically possible goals, and every idea passes and fails at infinitely many of them. So you can’t get a meaningful overall score (like number of passes divided by total passes plus fails) to compare ideas. You have to decide what to do by choosing some goals to act on, then acting on any idea which passes at all of your goals (such ideas don’t need to be differentiated; if you’re unsatisfied, pick more ambitious goals). That leads to the question of how to choose goals (or the related question of how to choose values).

Basically, goals are different dimensions. Instead of combining how well an idea does at many goals into a single credence (which is summing weighted factors from different dimensions), you need to leave the dimensions/goals separate and decide which ones you care about. Then use a solution which gets a pass grade at all your goals. You may view those goals as subgoals and “and” them together to get a final combination goal that includes all the stuff you want; then you act on an idea that succeeds at that final goal, and it’s differentiated from any idea which doesn’t succeed at that final goal. In other words, act on ideas which succeed, rather than fail, at all the dimensions you want. That’s different than trying to combine dimensions into one score with degrees. It does combine dimensions in the sense that you can logical-AND the dimensions together: if every dimension is a pass, the overall result is a pass; otherwise it’s a fail. That’s actually multiplying. ANDing is multiplication. It literally works directly with 1 and 0: if you multiply 1s and 0s, you get 1 if and only if everything being multiplied is 1; otherwise you get 0. So what’s going on here is: to get an overall combined evaluation, we evaluate everything as 1 or 0 and multiply, and the final result is in multi-dimensional units that include every dimension we care about. Because there are infinitely many possible multi-dimensional units we could use, and by picking arbitrary ones we could reach basically any conclusion, it’s crucial to consider what our goals should be – which dimensions we want to pass/succeed/1 in.
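A tiny sketch of the AND-as-multiplication point (hypothetical dimensions):

```python
import math

# Pass/fail per dimension (goal), written as 1s and 0s.
grades = {"effective": 1, "safe": 1, "affordable": 0}

# Logical AND of all dimensions is the same as multiplying the 1s and 0s:
# the product is 1 iff every factor is 1, otherwise it's 0.
combined = math.prod(grades.values())
print(combined)  # 0 -> fails the combined all-dimensions goal

# Or pick which dimensions you care about and require a pass at each:
chosen_goals = ["effective", "safe"]
act_on_it = all(grades[g] == 1 for g in chosen_goals)
print(act_on_it)  # True -> passes every chosen goal
```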

This feels like the best name so far.

‘credenceism’ looks wrong to me (spelling-wise), but it’s closer to how I’d pronounce it. ‘credencism’ seems a bit awkward to say aloud the first few times.

There’s an ongoing debate between people who believe in credences, beliefs, or both (with variants: one reduces to the other, so one is more fundamental; they’re equivalent; or they’re separate and we have both).

If I say I’m against credencism people may take me as being on the belief side, which is misleading. I think all sides of the debate are confused.

One issue is people may not realize I disagree with having beliefs about epistemic probability. Basically, some people sneak credences into beliefs by having beliefs about credences, which goes by various names including “epistemic probability”. So on the one hand you can have a belief about (physical, actual, real) probability, like “if I flip this coin, there is a 50% chance it’ll land on heads”, and on the other hand you can have a (meta) belief about epistemic probability, like “I believe there’s a 70% chance that capitalism is a good idea”, which is basically the same thing as credences. Some academics try to draw a major distinction, but I just want to criticize both together.

The whole academic debate is confused in various ways, and they seem bad at resolving stuff. Like, there are people who equate credences and probability beliefs. But those are different: it’s just “epistemic” probabilities that seem to be credences.

I don’t think I’ll get a name with no downsides, though, so idk. Still undecided.

Found another good summary re credences:

https://philpapers.org/archive/HOLIAA-2.pdf (emphasis omitted):

Let us be clear on what the credence picture involves. It doesn’t simply involve the idea that we can entertain the thought that a certain outcome has a certain chance: that there is a 0.5 chance that a fair coin will come down heads. For that thought can be the content of an all-out belief. If that were all that was involved, we could simply add a few all-out beliefs with a probabilistic content to our stock of beliefs, and there can be no doubt that that would be a very useful thing to do. The credence picture is very different. It is that our beliefs are essentially probabilistic. This is because probability is not in the content of a few beliefs but in the attitude of belief itself. It is not that we believe that something is, say, 0.5, or 0.7 probable. It is that we have a 0.5 or 0.7 credence in it happening. Every belief is thus a probabilistic belief (even if the credence happens to be 1 or 0).

I do disagree with that kind of credence. And also other stuff. If I say I’m against credences people will think I mean just this and not something broader :(

https://philarchive.org/archive/BYRPAP-2v1

One very popular framework in contemporary epistemology is Bayesian. The central epistemic state is subjective confidence, or credence.

If I say I’m against credences, what people will think I mean is that I’m against Bayesians. Which I am, but not just them. So I think the term “credencism” won’t work.