Essentialism / what does a universal proposition mean?

Topic Summary: Although I am not an infallibilist, I disagree with Popper about how common it is for true theories to later be proven false. This is because I disagree with him about what a theory is, or what a universal proposition means.

Goal: Become a better philosopher. More specifically, I think that much of Popper’s philosophy—and my disagreements with it—hinges on this topic (and some things closely related to it).

CF relevance: I assume that CF agrees with Popper on this point.

Do you want unbounded criticism? (A criticism is a reason that an idea decisively fails at a goal. Criticism can be about anything relevant to goal success, including methods, meta, context or tangents. If you think a line of discussion isn’t worth focusing attention on, that is a disagreement with the person who posted it, which can be discussed.) Yes.


(All Popper quotes are from Conjectural Knowledge: My Solution to the Problem of Induction).

Popper thinks that Newton’s theory might be false, because he thinks it contradicts the theory of Einstein.

I first turned against [the idea that not all the general propositions of science are mere hypotheses] because of Einstein’s theory of gravity: there never was a theory as well ‘established’ as Newton’s, and it is unlikely that there ever will be one; but whatever one may think about the status of Einstein’s theory, it certainly taught us to look at Newton’s as a ‘mere’ hypothesis or conjecture.

I don’t agree that Newton’s theory contradicts Einstein’s, but talking about Newton’s vs Einstein’s theory is a complicated discussion.

Thankfully, Popper gives some much simpler examples of established laws that he thinks have been refuted in the sense that they were originally meant. One example he gives is

(c) that bread nourishes,

in whatever sense Hume meant (c) when he wrote it. (c) is false, says Popper, because it

was refuted when people eating their daily bread died of ergotism, as happened in a catastrophic case in a French village not very long ago. Of course (c) originally meant that bread properly baked from flour properly prepared from wheat or corn, sown and harvested according to old-established practice, would nourish people rather than poison them. But they were poisoned.

It is bizarre to me that Popper thinks Hume’s common sense theory that “bread nourishes” is false. I think Popper is imagining that “that bread nourishes” is shorthand for some very formal statement, in the vein of analytic philosophy, which says something like “for any bread which exists in the set of all logically possible bread, if it satisfies conditions C1, C2, C3, C4, and C5, then it will always nourish in all circumstances.” I don’t even think that such statements are meaningful (I don’t believe in “logically possible”). But regardless, I think that this isn’t what people actually mean when they say “bread nourishes.”

What do I think it means that “bread nourishes”?

It means that bread, in essence, does nourish people. When it doesn’t nourish, that state of affairs is caused by some existent which is not essential to the nature of bread itself (the Claviceps purpurea fungus in the case of the ergotism), or some existent which is not essential to the nature of man himself. For example, bread will not nourish if it is laced with arsenic, and people who have celiac disease will not be nourished by bread.

What is an essence? Here I take Ayn Rand’s definition (ITOE)

Objectivism holds that the essence of a concept is that fundamental characteristic(s) of its units on which the greatest number of other characteristics depend, and which distinguishes these units from all other existents within the field of man’s knowledge.

In the case of bread, one of its essential characteristics is that it is a foodstuff for man. Many or most of its other characteristics depend on that. Indeed, that’s why bread has the size and shape that it has, that’s why it’s easy to chew, that’s why it’s made of flour, that’s why you can buy it at the grocery store, etc.

The issue isn’t “how common it is for true theories to later be proven false”. If it’s “true”, it can’t later be proven false. I take the issue to be more like: how often are 90% of people wrong? Popper and I say often. And I think that’s a current issue, not just a pre-modern issue. The issue can be looked at in other ways too. How often are people really confident but mistaken? Is high confidence so reliably accurate that, when you’re confident, being dismissive of critics is low risk?

I think (that Popper thinks) that Hume would have been surprised about the bread. Hume would have thought he knew enough to say those people would be nourished by that bread (because it was properly and customarily prepared, not visibly spoiled, etc.), but Hume would have been wrong. I have not read Hume’s original passage to check what he meant, and while it could potentially reveal a scholarship error by Popper, it doesn’t really matter to my philosophical beliefs.

It’s interesting to use this example because, in the last few decades, a lot of people have reported large improvements in their health after they stop eating bread (and usually some other foods along with it). Whether bread is good for you has become more controversial than it used to be. If there is a significant problem with bread, it’s unclear how much this is because the bread we eat is significantly different than the bread our ancestors ate.

Perhaps you’ll tell me bread is essentially good for me even if all the bread at the grocery store today is bad for me. I don’t care to debate the proper definition of “bread” and I don’t think it matters. I think the point is that if the bread at the grocery store is bad for me, then many people are mistaken. So it serves as a (potential) example of fallibility. I’ll further say that many people are very over-confident on this issue, and even if modern bread turns out to be healthy, they didn’t actually know enough to reach that conclusion (they couldn’t address a lot of arguments, they just got lucky).

I also think the common conception of bread as innately nourishing is wrong regardless of the outcome of debates like the one about gluten. I think a better view is that most of what we eat is evolved to be bad to eat. It has knowledge and adaptations to prevent it from nourishing us. Rather than innately nourishing, it’s only nourishing because our body has a bunch of defense mechanisms. Our bodies have to put in work and do stuff to make our food actually benefit us. If the food were merely a neutral resource, it wouldn’t be innately nourishing; it’d be our body’s evolved knowledge of how to use the resource that makes it nourishing. You may see some connections with BoI’s discussions of how suitable both deep space and the Earth’s surface are for human life.

Our food isn’t even neutral; it has a bunch of anti-food knowledge/features, which our bodies have to overcome. Digestion and the immune system and all that is really complicated, and taking it for granted so much that you view the food itself as innately nourishing is an inaccurate perspective (I have no idea if @lmf has that perspective, but I think many people do, so it’s relevant to the correctness of standard beliefs about bread). Food innately contains stuff like energy and minerals, but those are common in non-foods too (coal is not thought of as innately nourishing even though it’s certainly possible, with the right technology, to get some nourishment out of it, since both energy and useful minerals are present). I think most people are simply unaware that bread contains stuff that’s bad for them rather than just being nourishing. (I’m ignoring some side issues like that quite a few people think bread has a downside, which is that it has a lot of calories, and they don’t want to get fat, so they view bread partially negatively for that reason.)

I’m not sure if the frequency of errors in some context is the actual point of disagreement (if phrased in terms of opportunities for progress, maybe you’d agree those are common). Another potential issue is what to call it when we improve our ideas. Popper calls it correcting errors or solving problems, but some Objectivists prefer to talk about progress without saying the older stuff was wrong or was a problem. I view this as partly a terminology issue, but I also sympathize more with Popper: I think it’s an error for people to be uncomfortable with error and the idea that progress/evolution involves error correction. Part of Objectivism’s stance is to (sometimes) say the old idea was – always and forever – the right, best answer to its context. But sometimes it wasn’t, and I don’t think Objectivism has any good answer for how to know when it was. Regardless, we aren’t in that context anymore, so it’s an error from the point of view of our context, so it’s natural to call it an “error” even if we consider it reasonable, understandable or even wise for the people in the past to have believed it. Some objectivists also like to say we’re adding to and building on older ideas, but we do subtract from them sometimes too; progress isn’t just additive.


Note: If this is too off-topic it can be moved. I wrote this out to practise writing down the thoughts I have when I read.

So, the nourishment of anything depends on the knowledge of how to make it nourishing in the same way that the habitability of anywhere depends on the knowledge of how to make it habitable. So speaking about how habitable a place is or how nourishing something is is wrong - it’s not a property of the thing or place. So in the same way that the earth’s biosphere alone is incapable of supporting human life, bread alone is incapable of nourishing humans? I suppose that if you were starving to death and there was a loaf of bread right next to you, you would need to know what bread was, and know that what you were seeing was bread, and know how to eat food, and know that you could eat it, before you would eat it. Lacking this knowledge means you’d starve. Like in this passage from BOI (p. 207):

Before our ancestors learned how to make fire artificially (and many times since then too), people must have died of exposure literally on top of the means of making the fires that would have saved their lives, because they did not know how. In a parochial sense, the weather killed them; but the deeper explanation is lack of knowledge.

The knowledge we bring to bear on wood to warm ourselves is like how we bring knowledge to bear on bread to nourish ourselves.

Yes, of course. I meant something more like the part of your sentence which I bolded. (I had actually considered rewriting that sentence to make it more literal, but I think I decided that it would be obvious from context or something. I regret it.)

I don’t disagree that there could be many facts about bread which Hume didn’t know, or false “facts” about bread which Hume erroneously thought he knew. And I don’t doubt there will be many such surprises in the future, even for modern food scientists who specialize in bread.

However, none of that detracts from the fact that Hume did know some facts about bread. For example, he knew that bread nourishes.

There isn’t a controversy about whether or not bread nourishes. There is controversy about how good bread is for you.

I wouldn’t say that. Judging that something is good is much more complicated than judging that it is nourishing. For example, whether or not something is good depends on the context of what other options are available.

For all I know, all the bread at the grocery store could be bad for you. I seriously doubt it, but I haven’t really looked into it. But regardless of how optimal it turns out that bread is qua part of a healthy diet, I am confident that all the bread at the grocery store, even the Wonder Bread, essentially nourishes.

I do not have that perspective.

I think that there is an important difference between the concepts of ‘innately’ and ‘essentially’. It is always very difficult to define concepts like these, but I know that they are distinct because I use them in different ways. The former is much stronger than the latter. For example, many people are essentially happy, but no one is innately happy. Many foods are essentially good for you, but no food is innately good for you.

It’s actually quite rare that I use the word “innate” in a positive way (i.e. saying that something is innate). It seems like I use it in situations where there’s something almost tautological going on. Like “pleasure is innately pleasurable.” Or something less obvious like “innate in the idea of the right to property is the idea of the right to free speech.” I also would accept your statement that “food innately contains stuff like energy and minerals.”

I wouldn’t put it exactly like you are putting it, but I agree that something like this is part of Objectivism’s stance. And I agree with Objectivism on this point.

Objectivism does not even purport to have a general answer to the question of how to know when it was.

Indeed, I claim that the question of “when is an idea the best answer to its context?” is a version of the problem of induction. It amounts to asking when / how you can achieve contextual certainty about a given universal proposition.

I think that this is a good question and a real problem. (Contrasted with the questions of “when / how do repeated observations justify a given universal proposition?” or “when / how do we reach universal propositions psychologically by observing repeated instances?”, which I still agree are pseudo-problems.)

I will probably make a thread about this later, but my thoughts on the matter are still inchoate at present, and it’s a distinct topic from the essences stuff. Feel free to respond or ask for clarification if you want ofc, but you might not get a reply for a long time.

Agreed.

When I wrote my previous post, one of my goals was to find out more about what we disagree about and what this forum topic is about. But I’m still not really sure what the key issues to debate are (if any).

If you’re just talking about ideas that’s totally fine. But if there is a specific issue that you think involves an important disagreement and want a reply about, I’m not clear on what it is.

Does this help at all?

The disagreement / why it’s important

In concrete terms, the disagreement is that Popper thinks Newton’s theory is false, Hume’s theory “that bread nourishes” is false, etc., whereas I don’t think that those theories are false.

This leads Popper to an abstract conclusion which I didn’t formulate very well. Because of his analysis of concrete examples like these, Popper thinks that rational and honest people, despite their earnest efforts to be objective, are frequently proved wrong about even the most basic and well-established things. (I’m not sure how to make the “frequently” more precise.)

I disagree. Although I hold that any conceptual identification—including basic or well-established ones—could potentially be in error, I don’t think that such things are as common as Popper thinks that they are. (I’m not sure how to make the italicized part more precise.)

I think that this disagreement is important. For example, a big downstream consequence of this seems to be that I think certainty is a valid cognitive category, whereas Popper does not seem to think that we can be certain of anything at all (except maybe analytic statements).

The origin of the disagreement / the title of the thread

I think that the origin of my disagreement with Popper here is that he has a different view than I have about concepts (I think(?) he would call me an essentialist), and about what a universal proposition means. Hence the title of this thread.

A “universal proposition” is a proposition where the subject is a concept like “bread” or “justice,” as opposed to a concrete like “this” or “that” or “Alice.” I think this is standard usage, e.g. it mostly agrees with the definition given here.

In my first post, I described what I believe a universal proposition like “bread nourishes” means, and approximately what (I think?) Popper and analytic philosophy believe the same proposition means. Hume’s theory “that bread nourishes” was wrong if it meant something like what I’m claiming Popper thinks it meant, but it’s right if it meant what I claim it meant.

Whether they’re false depends on details like how you define/state those theories, including clarifying key words like “nourish”. You didn’t go into detail on that, and it sounds to me potentially like parochial details, so maybe a toy example would be better.

I think I agree with Popper about this.

I think over half – and I suspect over 90% – of published, peer-reviewed academic papers have some kind of important way they’re false.

Based on experience, I’ve repeatedly revised my views to expect less basic competence from people regarding reading sentences correctly, math and logic. And those experiences are largely with self-selected people who try to write on intellectual forums.

Another example is the news story “Charmin changes toilet paper, swapping straight perforations for wavy tears”. Are scalloped edges actually better for toilet paper? I don’t know, but let’s assume they are. Then why did no one figure that out decades earlier? I figure something was really, really broken there, which involved various errors.

People seem really really sure that high cholesterol causes heart attacks (so then most papers build on that – like they say “X causes heart attacks” but they just measured whether it raised cholesterol without measuring any heart attacks). And they seem really sure eating red meat is bad for you. But I have been unable to find anything resembling decent evidence or arguments. Or they seem so sure that MSG is totally safe for everyone, and claim it’s been researched comprehensively, but from what I can tell it was only researched some, poorly.

I’m not confident that tilling fields, or western farming practices in general, are actually a good idea. I think maintaining green grass lawns around people’s homes is often a bad idea despite being so popular in the US.

Eliezer Yudkowsky (ignorantly) dislikes Popper and has a different epistemology, but his discussion of how we don’t live in an adequate society is also relevant.

And I think there are far more errors that I don’t know about than ones I do.

I don’t know if you disagree with some of my points here, or you think they’re compatible with your conclusion, or both (you think some points are false but would not contradict your conclusion if true), or what.

You’d have to define “certainty” for me to evaluate this and know whether we disagree.

I have no issue with knowing enough to reach a conclusion, and having confidence in it because it’s non-refuted, all known rivals are refuted, it’s been exposed to critical consideration and discussion, etc. And then acting on it, not hesitating, not feeling uncertain, not worrying, not getting stuck due to self-doubt, etc. (People take confident actions inappropriately sometimes, but doing it appropriately is realistically achievable today.) Part of why error is so common is that people frequently do other things instead of the rational process I mentioned, e.g. they avoid critical discussion and then reach conclusions in line with their incentives/biases.

I’m also fine with “decisive” arguments (both the terminology and concept) – which let you actually reach conclusions instead of saying “I now think I’m 63% likely to be right about this, up from 62%”. I developed some of that stuff, and I intended it to make Popperian epistemology better, but I don’t know if he’d agree with me or not. If some of your disagreements are meant to be only with Popper, not me, based on speculation that Popper would disagree with CF, then I say: maybe, that could be. I don’t consider it especially important whether Popper would actually agree with CF.

I make no claim about what Hume meant. You’d have to share and analyze Hume quotes for me to start to form an opinion about what Hume meant. But a toy example sounds better.

Whether they’re false depends on details like how you define/state those theories, including clarifying key words like “nourish”. You didn’t go into detail on that, and it sounds to me potentially like parochial details, so maybe a toy example would be better.

I make no claim about what Hume meant. You’d have to share and analyze Hume quotes for me to start to form an opinion about what Hume meant. But a toy example sounds better.

I originally intended for the bread example to be a toy example, but I can see some ways in which it’s not so great. I will make up another example below, which has a similar logic. It is slightly simpler (it uses only first-level concepts), and more importantly, it circumvents the need to speculate about what Hume and Popper actually thought.

DISCLAIMER: If this story I’m about to tell were about Popper and Hume rather than Hopper and Fume, I would not talk nearly as confidently about what they thought.


Sometime in the 18th century, a philosopher named Tepid Fume said

freshwater quenches thirst.

When he wrote this sentence, Tepid Fume meant it in a common sense way. He just meant it in the sense that a little kid would mean it.

Now, in 2023, a philosopher named Barrel Hopper reads Fume’s theory. Hopper thinks that Fume’s theory means some predicate logic formula like

\forall x,y\left[\left[W(x)\wedge P(y)\wedge \neg A(x)\wedge D(x)\wedge I(x,y)\right]\implies T(y)\right]

where

W(x) := “x is freshwater.”
P(x) := “x is a person.”
A(x) := “x has adulterants.”
D(x) := “x was drawn from a water source according to old-established practice.”
I(x,y) := “y ingested a large quantity of x.”
T(x) := “the thirst of x was quenched.”

There’s nothing specific about Fume that made Hopper think that this is his theory. Hopper thinks that this is what all theories look like. Hopper knows that the above is a simplification of whatever was really going on in Tepid Fume’s mind—e.g. many more predicates would need to be added, and the existing predicates would need to be made more precise—but he thinks he is basically on the right track.

Now, it turns out that MDMA can inhibit the hormonal mechanism responsible for making people stop feeling thirsty. That fact wasn’t known to Tepid Fume, because MDMA wasn’t synthesized until the 20th century. Hopper knows this fact though, so it follows that Hopper thinks Fume’s theory is false. Indeed, as Hopper explains:

This theory—a favorite of Fume’s—was refuted when an insatiably thirsty woman died of water poisoning, as happened in a catastrophic case at a rave party in Sweden not long ago. Of course, Fume’s theory originally meant that freshwater, free of adulterants, ingested in large quantities, and drawn from a water source according to old-established practice, would quench a woman’s thirst. But it didn’t.
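
To make Hopper’s reading fully concrete, here is a rough sketch of it written as code. This is only an illustration of the material-conditional reading; the names are my own stand-ins for the predicates defined above, not anything Popper or Hume actually wrote.

```python
# Illustrative sketch of Hopper's material-conditional reading of Fume's theory.
# The dictionary keys are stand-ins for the predicates W, P, A, D, I, T above.

def hopper_reading_holds(case):
    """Return False only for a counterexample: a case where every
    antecedent condition holds but the thirst was not quenched."""
    antecedent = (
        case["is_freshwater"]                # W(x)
        and case["is_person"]                # P(y)
        and not case["has_adulterants"]      # not A(x)
        and case["properly_drawn"]           # D(x)
        and case["ingested_large_quantity"]  # I(x, y)
    )
    return (not antecedent) or case["thirst_quenched"]  # antecedent implies T(y)

# The rave-party case: every antecedent condition holds, but MDMA blocked the
# thirst-quenching mechanism, so on this reading Fume's theory stands refuted.
rave_case = {
    "is_freshwater": True,
    "is_person": True,
    "has_adulterants": False,
    "properly_drawn": True,
    "ingested_large_quantity": True,
    "thirst_quenched": False,
}
print(hopper_reading_holds(rave_case))  # prints False: a counterexample
```

On this reading, a single case with all the antecedent conditions true and the consequent false refutes the theory, which is exactly the move Hopper makes.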


I think that Hopper is wrong about Fume’s theory. This is not because Hopper is wrong about Fume specifically; it’s not a scholarship error. Rather, it is due to the fact that Hopper is wrong about common sense — that he’s wrong about what a theory is.

I will proceed to explain what Fume’s theory, the common sense theory that “freshwater quenches thirst,” actually means. First I will talk about all the concepts involved. I’m not sure that this will be useful; I was more doing it as an exercise.

Freshwater is water with a low concentration of salt. This differentia (low concentration of salt) is measured perceptually by tasting how salty the water is.

Water is a substance consisting primarily of liquid H2O. Some perceptually available properties of water, measurements of which are omitted to form the concept, include:

  • it is relatively clear
  • it is relatively odorless
  • it is non-viscous
  • it moves around in a certain manner
  • it conducts heat in a certain manner
  • sticking your finger in it feels a certain way
  • drinking it feels a certain way

Identifying that the essential characteristic of water is that it consists primarily of liquid H2O was a heroic feat, knowledge of which only became widespread after Fume’s time. Fume’s definition of water therefore would have been different, though his concept was the same.

Thirst is a feeling associated with the need to drink. Since thirst refers to a feeling, it is difficult to describe the measurements involved. It sort of feels like a discomfort in the back of the throat.

To quench is to satisfy thirst; i.e. to make thirst become less intense or to make it disappear. I think that quenching is given to us perceptually; we simply perceive that something we swallowed quenched our thirst.

Now, my theory about what Fume’s common-sense theory means is quite simple, in part because I’ve already defined the terms involved, and in part because it is vague. I say that the common sense theory that “freshwater quenches thirst” means that: in essence, freshwater quenches thirst. In an earlier post, I described what I mean by in essence. In this post, I described what I mean by freshwater and quenches and thirst.

If, in some instance, freshwater doesn’t quench thirst, then an explanation is required — a cause must be identified. There is either something in the freshwater that isn’t essential to the nature of freshwater (maybe like a high concentration of carbonic acid; I have noticed that sparkling water doesn’t quench my thirst very efficiently), or there is something in the drinker’s body that isn’t essential to the nature of man (like MDMA messing with his antidiuretic hormone secretion).

Admittedly, one big problem with my theory is that it is not as detailed as Hopper’s. E.g. it is even farther away from being something that I could write up in code. But I think it is closer to the truth of how thought works.

P.S. One of the reasons why it took me so long to reply to you is that I tried to fill in some more of the details, but found that it is quite difficult.

I don’t think that’s a toy example to illustrate a philosophical concept of general interest. It’s somewhat messy/complex (rather than toy) because it’s just a modified version of the Popper/Hume issue. It looks parochial to me; I don’t see why to talk about it other than because you’re trying to criticize a particular thing Popper said.

If someone meant a statement as a general way things work – but allowing for exceptions, not as a strict universal rule – then criticism like Popper’s/Hopper’s would be wrong. But if they said or meant something more like “bread (given proper preparation and some other conditions) always nourishes humans (given current biology) in all contexts with no exceptions” then they’d be wrong. I imagine we agree on this.

I remain unclear on what, if anything, you disagree with me/CF about.

It is different in each case. I will explain what I think of some of them.

I don’t disagree at all. Sadly, I don’t consider this a refutation of what I said, because I think that “rational and honest people” who are trying their best to be objective won’t usually publish much or last long in academia.

My guess is that this also explains what you are talking about with the cholesterol / red meat / MSG, though I haven’t looked into it.

I don’t think you showed here that anyone was wrong about something basic and well-established.

We didn’t learn that straight perforations aren’t actually a way of dividing toilet paper into usable segments. We only learned that there might be a better way.

If someone in the past made a claim like “straight perforations are the optimal way to divide toilet paper into segments,” he would be wrong, as your example proves (let’s assume). But it would be irrational for him to have made a claim like that in the first place; he couldn’t have actually known that it was true. To establish a claim like that, one would need to know about:

  • all possible materials, synthetic or natural, discovered or undiscovered, which have similar enough measurements to wood pulp that a roll of such material would be classified as “toilet paper.”
  • all possible industrial processes by which those materials might be cut into segments with properties appropriate for toilet paper
  • all the material conditions of the future, which determine how economical each of those processes is
  • etc

which is absurd.

Someone in the past could plausibly have known that “straight perforations are the best way to divide toilet paper into segments that we know about,” but then your example wouldn’t show that he’s wrong, as he wasn’t trying to circumscribe all future toilet paper-related innovations.

It seems like you are just saying that these practices are inoptimal in some way. If so, then the same criticism that I made in the toilet paper case applies.

“Certainty” is a cognitive category into which one puts some propositions, which designates that they are known with some level of finality and security.

Some other cognitive categories in the same genus as “certain” include: “possible,” “plausible,” “likely,” “unlikely,” “very likely,” and “arbitrary.”

The cognitive category of certainty (along with all the ones listed above) exists because it is useful. For example, if an item of knowledge is “certain,” then

  • you can treat it as something that you don’t have to actively think about anymore; you can move on to thinking about one of the unlimited number of things about which you are not yet certain
  • you can feel confident about it
  • you can go full-steam ahead in basing further knowledge upon it

Certainty doesn’t mean you can’t later be proven wrong. Someone could be justifiably certain, but also wrong.

Like everything, certainty depends on context. A proposition which you are certain about in one context, you might not be certain about in another context. For example: in an everyday context, I am certain that “right after I wash my hands, my hands are clean,” but if I were about to perform surgery, then I would think twice about it. This is because the different contexts require different standards of cleanliness.

Deduction is a method for deriving conclusions that are certain from premises that are certain.

I don’t think your definition is the standard or dictionary definition of “certainty”, but I don’t really care to debate this terminological point if you disagree.

I don’t think I object to the concept you’re presenting. It sounds to me basically like reaching a decisive conclusion while using high quality standards. (In many contexts, we reach conclusions and move on to other issues using lower quality standards because of low stakes.)

I do object to the cognitive categories list (which I recognize as stuff Peikoff has talked about), with categories like “likely” and “very likely”, because it uses weighing evidence and arguments.

If you reject these cognitive categories, then why do you use them? If you don’t think you use them, then what do you mean when you use a word such as “likely”?

I think I reject weighing evidence and arguments, but I still think that these categories are valid. I say “I think” because I don’t know precisely what you mean. I have in mind something like Bayesian epistemology, which I thoroughly reject.

When I say “likely” I mean high probability. Probability is a concept that applies to physical events, like dice rolls, not to ideas.

I also use probability terms to speak about incomplete information. Something may be totally determined by information I don’t have, but from my perspective without the information there’s randomness (this may actually apply to dice rolls – if I had more information about the initial position of the dice, the force exerted on them, the exact details of the table, etc., then I might be able to determine how the dice will land). So I might say that given someone’s COVID exposure and symptoms, I estimate a 30% chance that they actually have COVID. One way to look at it is if 100 people were in a similar situation, I’d expect around 30 of them to have COVID. This is useful for e.g. considering risk. Another way to view it is that when you lack information about the precise branch of the multiverse you’re in, you get perceived randomness.

But I consider it an error to say things like “Capitalist economic theory is very likely to be true.” I take it you agree with rejecting stuff like “I have a high (0.9) credence in capitalist economic theory”. I don’t see much difference between the two (I think the use of approximate numbers, or not, is inessential). I think Peikoff has been explicit that his categories like “likely” are determined based on weighing evidence and arguments, like Bayesians also do, just without their particular math and details.

Also, you could probably find me saying an idea was “likely” somewhere, even somewhere recent. If so, I’d probably just say I was being sloppy or approximate. Sometimes when speaking about a different topic, e.g. politics, I may not try to be precise about epistemology, and may instead try to use words in standard ways that I think readers will understand. I might grant that I made a mistake in that instance, but that wouldn’t change my mind about the epistemological concepts.

Getting back to the cognitive categories, here are some quotes from Objectivism: The Philosophy of Ayn Rand by Leonard Peikoff (my bolds):

establish first that the issue is related to the realm of evidence and thus deserves consideration. Then study the evidence, weighing the possibilities in accordance with the principles of logic.

Since it has no relation to evidence, an arbitrary statement cannot be subsumed under concepts that identify different amounts of evidence; it cannot be described as “possible,” “probable,” or “certain.”

Or the data may be so evenly balanced, or so fragmentary and ambiguous—for instance, in regard to judging a certain person’s character—that one simply cannot decide what conclusion is warranted. In such cases, “I don’t know” is an honest and appropriate statement.

In these cases, the validation of an idea is gradual; one accumulates evidence step by step, moving from ignorance to knowledge through a continuum of transitional states. The main divisions of this continuum (including its terminus) are identified by three concepts: “possible,” “probable,” and “certain.”

The first range of the evidential continuum is covered by the concept “possible.” A conclusion is “possible” if there is some, but not much, evidence in favor of it, and nothing known that contradicts it.

(Note: I think Peikoff uses the term “evidence” to include arguments, not just observations.)

I see here an epistemology that makes major use of weighing indecisive arguments/evidence, as well as justifying (positive) arguments. This conflicts with CF’s decisive, critical (negative) approach.

Peikoff seems to have a pretty standard view that we start by checking some types of decisive arguments, like contradiction, and that decisive arguments do have priority over indecisive arguments. So that agrees with some CF claims. But then he, like most people, accepts that we run out of decisive arguments and have to (in my words, not his) tie-break using indecisive arguments. I think people gave up on finding more decisive arguments way too easily and the tie-breaking with indecisive arguments is unnecessary and doesn’t actually work.

If you want to contradict Peikoff and advocate something different than him, that’s fine, but please be explicit about it. Or if you have a defense of Peikoff’s position, please explain.

I think we might be addressing something important here, so I’m going to hold off on replying about “inoptimal” for now (which I was having trouble connecting to important concepts/disagreements). I don’t know if you saw my post in between your two long ones, but I was also having difficulty connecting the discussion to something important there.

I think I now understand why you think I am focusing on something parochial. You think that if Hume was being precise and universal, then Popper would be right (since you think Popper has the correct general idea about how to be precise and universal), and if Hume was not being precise and universal, then Popper would be wrong. Who cares one way or the other?

However, in the above quote, you have not described what I think the common-sense theory actually means; I think that there is a way to be precise and universal that is very different from what Popper has in mind. I will refer to this as common-sense semantics, because I think it reflects how people actually think. I don’t think that “bread nourishes” (under common-sense semantics) is imprecise, and I don’t think it has exceptions.

I think that thinking in terms of essentials gives us a way to be precise and universal, without having to add an unlimited number of extra qualifiers to our theories as we learn more.

The idea is that instead of universal statements meaning something like

Y happens under circumstances where A, B, C, and D hold true,

they mean something like

X causes Y.

The latter statement is universal in the following sense. To say that “X causes Y” never actually means that in every context, an X will always cause a Y with no exceptions. Instead, it means that when there are no confounding factors, an X will always cause a Y with no exceptions.

The latter statement is precise in the same way that concepts and definitions are precise. Which is to say: there is some wiggle room and grey area, but only a limited amount of it. The only term I have used in my explanation which looks like it might be a weasel word is “confounding factors.” But confounding factors are just another way of talking about things not essential to the nature of X or Y. Essences were defined above.
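
Purely to display the difference in shape, and not as a claim that the common-sense meaning really is a formula of Hopper’s sort, the two readings could be written in the notation of the Hopper example as something like

\forall x\left[\left[A(x)\wedge B(x)\wedge C(x)\wedge D(x)\right]\implies Y(x)\right]

versus

\forall x\left[\left[X(x)\wedge \neg F(x)\right]\implies Y(x)\right]

where F(x) := “some factor not essential to the nature of X or Y is present in the case of x.” The point of the second reading is that F is not a list of pre-specified conditions; it gets unpacked by thinking about essences.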

I think that this is obviously an important (i.e. not parochial) philosophical disagreement, because I am saying that I disagree with Popper (and also you) about what actual, real-life theories mean. I should have made that point clear at the outset, rather than talking about the downstream consequence of how often we err.

I agree that my example was complex, but I think that’s necessitated by the subject matter: I can’t put together a much simpler universal proposition.

I think I now know what you mean by “toy example,” though. I think you think I should try to put together an example for imaginary beings who live in a world much simpler than ours, and whose concepts can therefore be completely and explicitly defined. Something akin to what Hopper did. Something which sacrifices correctness in favor of completeness.

That actually sounds like it could be very interesting (albeit tricky). I think it would be the best way to make progress here, as I am finding it difficult to go into much more detail than what I have written. So I might try to do it soon.

I wasn’t claiming Popper has a better method of being precise or speaking universally than Hume. I just think that Popper is actually right (as best I know) about bread (in particular, proper preparation and the other conditions do not make it always safe). So if Hume did say something contradicting Popper’s better understanding of bread, then Hume was wrong.

I also think a lot of people actually had impactful misconceptions about bread in the past (and too many still today). I heard recently that that may have actually been a cause of the Salem Witch Trials and some other similar events. The “witches” may have had ergot poisoning from grain and people didn’t understand that. (Wikipedia says this idea was proposed in a 1976 article in Science and has been debated since. I have not researched the matter myself but it sounds reasonably plausible to me – like something that could have realistically happened.)

It’s not clear to me that I (or Popper) disagree in a major way with your take on what some statements mean. And it seems like a terminology issue.

To get at importance in a different way: do you think a significant point in one of my essays is wrong due to this issue?

When I mentioned the categories, I wasn’t purposefully referring to this section of OPAR (though I just re-read it right now).

I will say, somewhat tentatively, that I agree with Peikoff about this. One very important thing which I had to check was what he meant by “evidence”:

“Evidence,” according to the Oxford English Dictionary, is “testimony or facts tending to prove or disprove any conclusion.” To determine whether a fact is “evidence,” therefore, one must first define what proof of a given claim would consist of. Then one must demonstrate that the fact, although inconclusive, contributes to such proof, i.e., strengthens the claim logically and thus moves the matter closer to a cognitive resolution. If one has no idea what the proof of a conclusion would consist of—or if one holds that a proof of it is impossible—one has no means of deciding whether a given piece of information “tends to prove” it. If the terminus of a journey is undefined or unknowable, there is no way to judge whether one is moving toward it.

That’s because there are some definitions of evidence that I think are nonsensical. E.g. evidence being specific instances in a process of induction by repetition. I think I have seen Peikoff support wrong stuff like that somewhere, but I’m not sure.

When people say things like that, I think that they are probably making an error, because I interpret them as making a metaphysical claim rather than an epistemological claim. I don’t think that probability is metaphysically fundamental. Things either are the case or they aren’t; things are never the case with probability 0 < p < 1.

However, I would not necessarily consider it to be an error if someone made the similar statement “I think it very likely that capitalist economic theory is true.” (This might even be what you meant for all I know.) I believe that such a statement would be reasonable to say if the speaker has gone a long way towards understanding why capitalist economic theory is true, he has checked a lot of things he knows he needs to check (i.e. he has accumulated a lot of evidence), but he still knows that he needs to think about it more.

Yes, I reject it, but my main reason is that I think the numbers here are completely arbitrary. I don’t think that they measure anything real.

When explaining what their credences mean in reality—what sort of thing they think they are measuring—Bayesians invoke something like the Principal Principle. They assume implicitly that there are metaphysically fundamental probabilities, and adopt as an axiom that it is rational to conform one’s subjective credences to match them. I disagree with this, for a reason I’ve already stated.

I will explain why I believe in the cognitive categories but not in continuous credences. I have had the experience of thinking about a lot of different topics. When thinking, I have some sense of how many more things there are that I must check. When I am onto the truth, it feels a certain way; I can feel things clicking into place and dots connecting. When I have a faulty premise, it feels a certain way; I can feel something bothering me, something making me uncomfortable. Somehow, I subconsciously sum up all of that stuff into an overall judgement about how confident I am in an idea. Since my measurements here are so imprecise, it doesn’t make sense to have more than a handful of categories of confidence, let alone one category for each of the numbers between 0 and 1.

Could the evidentiary continuum ever be quantified precisely, like many of the other measurements that man makes have been? Peikoff has talked about something related to this and it seems like his answer would be no. I am skeptical of him here, but I haven’t thought much about this.

Regardless of whether or not it’s possible, I am confident that the Bayesians haven’t actually done it.

I agree with you that probability applies to dice rolls and not to ideas, but I disagree with you that “likely” means “high probability.” I think that “likely” means one of the aforementioned cognitive categories, which are also useful for reasoning about things like dice rolls.

Part of my reason is empirical. A child understands what “likely” means before he understands probability theory, and it is historical fact that “likely” was a concept long before Cardano came along. This proves that there must be a way of explaining what “likely” means that does not invoke probability theory: probability theory can at best give us a refinement of that explanation.

Part of my reason is theoretical. I think that these cognitive categories are logically prior to probability theory. I might explain what I mean by that another time. I started to write it down, but it’s complicated and I already have a bit too much to write as-is. The basic idea is that to take action on the basis of a probability, you subsume it under one of the cognitive categories, e.g. “likely,” and then you have an enormous body of knowledge at your disposal about how you should act when something is “likely.”

When thinking on your own, do you find that it is sometimes useful to be “sloppy or approximate” in this specific way? If so, if it is sometimes useful to judge ideas as being “likely,” then how do you square that with CF—a philosophy which aims to be practical?

If not, how is that possible? I think that statements such as the following, where “likely” is used to describe ideas rather than physical events, are indispensable in every-day life:

  • I will likely go to the grocery store tomorrow.
  • The resolution will likely pass the Senate.
  • Bob likely agrees with me about this.

Here’s an example of a place where I think that abandoning decisive arguments is both necessary and effective.

Suppose that I am playing a (nonstandard) game of roulette, where 30 of the 38 holes are red and 8 of the 38 holes are black. Unless I have some precision lab equipment, I cannot make a decisive argument about whether the ball will land in a red hole or a black hole. I can only use probability theory. Probability theory tells me that the ball will land in a red hole with 79% probability, but it doesn’t give me a decisive argument as to what will or won’t happen. Therefore, if I must nonetheless take some action on the basis of where I think the ball will land—for example, if I must place a bet—then reality forces me to do tie-breaking. Furthermore, there is a way to do the tie-breaking that works: if I have to bet, it is better for me to bet on red than it is for me to bet on black.
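
To spell out the arithmetic behind that tie-break (assuming an even-money payout on a color bet, which I am adding here for concreteness): the probability of red is 30/38, which is approximately 79%, and the expected values per unit staked are

E[\text{bet on red}] = \frac{30}{38}(+1) + \frac{8}{38}(-1) = \frac{22}{38} \approx +0.58

E[\text{bet on black}] = \frac{8}{38}(+1) + \frac{30}{38}(-1) = -\frac{22}{38} \approx -0.58

So betting on red has a positive expectation and betting on black a negative one, which is the sense in which the tie-breaking works.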

There’s an easy example which came to mind because I saw it earlier today. It’s not quite one of your essays, but it’s a passage from BoI, a book which you edited:

I have often thought that the nature of science would be better understood if we called theories “misconceptions” from the outset, instead of only after we have discovered their successors. Thus, we could say, Einstein’s misconception of Gravity was an improvement on Newton’s Misconception, which was an improvement on Kepler’s. The neo-Darwinian Misconception of evolution is an improvement on Darwin’s misconception, and his on Lamarck’s. If people thought of it like that, perhaps no one would need to be reminded that science claims neither infallibility nor finality.

Do you agree with this? If not, I’ll go search for something from your essays.

I don’t agree with it (and while I do agree with a fair amount of what Deutsch wrote, I also have many disagreements, just like with Popper). I think that terminology suggestion would be confusing. And it’s emphasizing fallibility and error too much while overly deemphasizing that we genuinely create knowledge.

Also, please don’t bring up Deutsch casually (only if it’s important and there aren’t good alternatives). He isn’t interested in criticism of his ideas or debate, and he’s actively trying to harm me and prevent people from discussing my philosophy ideas, so I don’t like to talk or think about him.


Using probability theory is the decisively correct way to deal with the incomplete information in that scenario.

Probability theory tells me that the ball will land in a red hole with 79% probability, but it doesn’t give me a decisive argument as to what will or won’t happen.

The decisive conclusion you should reach about what specific outcome will happen is that you don’t know. (Or if you prefer, that they’ll all happen in branches of the multiverse.)

Therefore, if I must nonetheless take some action on the basis of where I think the ball will land—for example, if I must place a bet—then reality forces me to do tie-breaking. Furthermore, there is a way to do the tie-breaking that works: if I have to bet, it is better for me to bet on red than it is for me to bet on black.

Yes you can (with the help of probability theory) decisively reach the conclusion that you should bet on red (if you bet, you’re trying to win, and the colors have even payouts).

None of those three examples are about probabilities of ideas being true. The first two are about events and the third is guessing the current physical state of part of the world given incomplete information (which btw is related to the event of finding out information about the state).

All three examples are related to ideas held by people (as well as to other events like things that might come up so you don’t get a chance to go grocery shopping). People’s ideas play a causal role in what events happen. But none of those examples are about the probability that a particular idea is true.

To use a different example: No, I don’t find it useful to think to myself “it is likely that the many-worlds interpretation (MWI) of quantum physics is true”. Instead I think things more like “To the best of my knowledge, MWI has won in debate so far, to the very limited extent that truth-seeking debates about these issues have actually taken place.” Or “I know some explanations about MWI that seem good to me as a non-expert, and I don’t know a refutation of them or a better viewpoint. I also know some criticisms of some of the popular alternatives and I don’t know refutations of those criticisms. So in my debate tree, currently MWI wins.” Or “MWI has been exposed to a lot of critical thinking, including some by me, and has survived. So I have more confidence in it than in a new idea that hasn’t had much critical attention yet.” These are all somewhat vague, general statements, and I think that’s fine. In a different context, where I actually had to make a decision based on whether MWI is true, I might look at it in another way, but that would depend on the situation and what decision I had to make.

In that text, I said what I meant. I agree that other people commonly use “likely” to speak about epistemology issues.

That sounds like you either disagree with CF or don’t know what it says. Based on other stuff from this post, I think you don’t know enough about what CF claims about epistemology and decisive arguments.