Lmf's quick questions about CF

To address this, I believe we need extra precision that’s usually omitted from discussion and analysis.

P is what we might call an “abstract” idea addressing an “abstract” problem. “Abstract” may not be the best word, but I’ve used it before; see Curiosity – Human Problems and Abstract Problems.

P is not a decision for a human to make.

An example of an abstract problem is “Are there aliens?”, which is the kind of problem P is responding to. In some sense, the right answer to that is “I don’t know”; it’s not a solved problem. It may not be solved for centuries.

The problems we deal with in our lives are a bit different. They are more like “Should I believe there are aliens?” or “What should I believe about the potential existence of aliens?”

To address problems like those, you use statements similar to but different from P. E.g. you might say “The best conclusion to reach today is that it’s plausible that aliens exist.” This statement contains P as a sub-statement. It could also be written as “The best conclusion to reach today is P”.

This gets into the realm of human action and human decision making. It’s still a fairly abstract type of decision (what to believe). Sometimes what we care about is more concrete: what physical or external action to take. We often are trying to decide on a belief in order to make some decision involving the external world. Like I might be deciding whether or not to build and launch a space probe to look for aliens. Or I might try to form a belief about whether a particular product is counterfeit specifically in order to inform my immediate decision of whether to buy it, not just as an intellectual curiosity. Or I might try to form a belief about whether there is a McDonalds two blocks to the right in order to decide which direction to walk.

Looking at this from another angle, we have to be clearer about the goal. You specified P but didn’t actually specify the goal. Since there is no goal, CF says it’s technically impossible to criticize P except by pointing out the lack of goal (or by inferring or assuming some unstated goal). Adding context about human lives, decisions and actions is a way to think about the goal. For example, the goal might be to reliably arrive at McDonalds, in which case an arbitrary assertion isn’t good enough even though it could be true. To reliably arrive at McDonalds, we’ll want to navigate using arguments, evidence, or more broadly a good methodology. Similarly, any issue related to what I should believe can be interpreted as having a goal about using rational methodology.

The word “plausible” makes P less abstract than it could be. “Plausible” brings up human mental states in a way that Q = “Aliens exist.” doesn’t. This suggests the goal of P is about figuring out the right mental state to have about Q. If that’s the goal, then methodology and rationality are involved, because the right mental state to have is the one you reach using rational methodology, not whichever one is true (you have no direct access to omniscient truth; if you believe X for no reason, and later everyone agrees X is true, and we assume for discussion that X is true, it still wasn’t right to believe X in the past for no reason; you just got lucky).

BTW, “plausible” isn’t how CF typically describes ideas, which I think makes this a bit more confusing.

So the short answer is you skipped the step of explicitly stating the goal and the longer answer involves goals related to rational action, belief and problems people face in their lives, not just abstract truth. That makes criticisms about reasoning, methodology and sources relevant to the goal. For issues that are purely about abstract truth, the right answer tends to be “I don’t know”, or perhaps not to answer since it’s not actually related to your life. CF, like all fallibilist epistemologies, does not definitively prove statements true or false; it merely helps guide humans in their lives, thinking and decision making. All CF refutations are “tentative” and “fallible” (open to potential revision in the future) just like CR refutations.

Also, to avoid potential confusion: I used singular “goal” because you can conjoin goals together, so pluralizing it or not doesn’t really matter.

Coming from the standpoint of a more traditional theory of knowledge (Objectivism), my criticism of the main point you made in this post would be this: to even know whether or not an idea actually satisfies a goal, you have to know something about the facts of reality. So facts of reality come first. Only after we have established something about the facts of reality can we have values, let alone know how to pursue them.

I will venture a guess as to your retort. I think you might say something like:

Philosophical ideas like “fact of reality” are useful in some contexts, but they are approximate. Strictly speaking, in the sense in which the traditional standpoint means it, there is no such thing as a fact of reality, nor is there such a thing as knowing whether or not an idea actually satisfies a goal. There are only IGC triples. And IGC triples aren’t really “about” anything; they don’t actually refer to anything (so in particular, you shouldn’t ever really have being true as a goal, since that’s predicated on the parochial theory that ideas refer to facts of reality). They are just some data structures, whose main function is refuting other IGC triples.

Do you agree / am I in the ballpark?

No.


Facts of reality are great but we don’t have direct or infallible knowledge about them. The way for us to make progress on understanding them better includes things like rational methods. So when we want to know about facts of reality, we have to use things like sources, evidence, arguments and explanations (which means considering criticisms on those topics relevant). We should (in general) not believe arbitrary, unsourced, unargued claims.

Considering the human context (we want to know the facts of reality; we have goals; we want to use effective methods) enables methodological refutations instead of purely truth-and-logic-based refutations. Good methodologies care a lot about truth and logic, and use logic, but we don’t know enough to guide our lives with only truth and logic; we need other tools too.

Induction is a tool that doesn’t follow (deductive) logic, but CR and CF reject it. They favor other tools like evolution and methodologies that enable error correction well.

The issue of how infants get started with learning (or how to build an AGI) is hard. I consider evolution our best clue and I think CF has a lot of relevant ideas but nothing like a full solution. I don’t think anyone understands the details. CF (like most epistemology ideas) is more concerned with the learning of older people who already have some knowledge and some goals. I think claims along the lines of “infants directly perceive some facts of reality to get started” are wrong.

Okay, good.

I think I may have been interpreting some of what CF says in a sense that is more philosophically fundamental than how you actually meant it. You use very simple language in your articles, and I have been reading things between the lines that maybe aren’t there.

Was I at least right that CF treats a refutation (a reason why an idea fails at a goal) as being an IGC triple itself? Or is a refutation something else? Or does CF not take a stand about that?

I encountered something I found confusing about CF. I think I figured it out, but I’ll write it out anyway. It’s related to the above question, because it pertains to CF’s nonstandard view of refutations.

Let

I_1 := “going to the jewelry store on 5th ave.”
I_2 := “not going to the jewelry store on 5th ave.”
G := “getting food.”
C := some typical context. A normal adult human in NYC who is hungry.

Clearly (I_1, G, C) is refuted, by the criticism that jewelry stores don’t sell food.

But also, (I_2, G, C) is refuted, because the idea of not doing some specific action isn’t sufficient to meet the positive goal of getting food. To satisfy G, you’d have to add something to I_2, like going to the grocery store on 7th ave instead.

Someone could summarize this situation (and CF often shortens things like this) by saying: going is refuted, and also not going is refuted. And if you’re not careful, that summary looks like it violates the law of excluded middle; it could be misconstrued as meaning that a proposition (“It is the case that I should go to the jewelry store on 5th ave”) and its denial (“It is not the case that I should go to the jewelry store on 5th ave”) are both refuted.

Great.

A refutation is a type of IGC triple, yeah. The goal is to criticize a different IGC. Treating criticism explicitly as IGC triples can sometimes be unnecessary/skipped. Having a few core concepts that are used broadly, instead of a bunch of different concepts, is relevant for AGI design (fewer data structures; more elegant code with fewer special cases; also IMO if you can’t figure out how to evolve ideas that function as both solutions and as criticisms of other ideas, using the same data type, then you probably aren’t getting anywhere on AGI. The same idea data type should also function as goals or contextual information. Or put another way, I think you should have just one idea replicator, not several different replicators for different purposes).
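As a toy illustration of the one-data-type point, here’s a minimal sketch in Python. This is my own sketch, not CF’s specification; the field names and methods are assumptions. It just shows one type serving as both solution and criticism, using the jewelry store example from earlier in the thread:

```python
from dataclasses import dataclass, field


@dataclass
class IGC:
    """An idea/goal/context triple. A criticism is itself an IGC whose
    goal is to refute another IGC, so one data type covers both roles."""
    idea: str
    goal: str
    context: str
    refuted_by: list["IGC"] = field(default_factory=list)

    def criticize(self, target: "IGC") -> None:
        # This IGC functions as a criticism of the target IGC.
        target.refuted_by.append(self)

    @property
    def refuted(self) -> bool:
        # Tentative status: an unanswered criticism counts as a refutation.
        return bool(self.refuted_by)


# The jewelry store example:
ctx = "normal adult human in NYC who is hungry"
i1 = IGC("go to the jewelry store on 5th ave", "get food", ctx)
crit = IGC("point out that jewelry stores don't sell food",
           "refute the idea of going to the jewelry store", ctx)
crit.criticize(i1)
assert i1.refuted
```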

CF denies that you can do criticism (in the CR error correction sense) independent of goal(s) and context. Errors cannot be evaluated without some sort of contextual information about what success and failure are.

Synonyms for goal, like “objective” or Popper’s “problem”, are fine instead. Considering the goal to be part of the context, or vice versa, can work too. It’s also possible to mix the goal and/or context into the idea. Like you could say proper ideas are IGCs. Not wrong, but not my general recommendation. Depending on the context, the “idea” part of IGC can be better called something more specific, like plan or solution; using the very broad word “idea” to mean something a bit closer to “solution” has some downsides, but overall I find it works pretty well. This stuff connects most with CR but also relates to Objectivism’s claim about knowledge being contextual and to the Theory of Constraints’ material on goals.

Yeah, I agree. I see how that could be confusing. CF basically says it’s valid to criticize things for not being good enough – you can (and often should) demand more from ideas. This comes up a lot in debates/discussions where people concede too early/easily, without being demanding enough to get full information about why the new ideas they’re (partially) losing the debate to are good enough to address all their relevant goals/problems. Lots of times, people try to change their mind because of some arguments showing a new idea is better (by degree), instead of actually insisting on understanding how to use it to solve all relevant problems before adopting it. (If their own idea doesn’t meet that standard due to criticism they were just told, they should take the position that they don’t yet know a full solution, can use parts of both their old and new ideas in the meantime, and can do more research.)

In this case, not going to the store isn’t adequate as a positive solution to getting food. That does not mean all options that qualify as “not going to the store” are wrong (which would lead to the logical problem); it just means that the positive knowledge/idea “don’t go to the store” is inadequate to solve the problem. It’s not categorically wrong but it doesn’t offer enough help to solve the problem.

I agree that criticism, like everything else, is dependent on context.

But why does CF believe that criticism cannot be done independently of goals?

Here are some examples of decisive criticisms, in the everyday, non-CF sense, that don’t depend on goals:

  1. “That object on the table is a wallet” can be decisively refuted by noting that: it can’t be, because I left my wallet in the car last night (context: the person thinking this only has one wallet, lives alone, etc.)
  2. “There are finitely many prime numbers” can be decisively refuted by giving a proof to the contrary (Euclid’s classic argument is sketched after this list).
  3. “The earth is flat” can be decisively refuted by observing a tall boat go over the horizon (context: we’re dealing with a very simple flat earth theory which doesn’t contain ad-hoc modifications designed to “explain” observations like that)
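For reference on item 2, here’s a sketch of Euclid’s classic proof, written out in LaTeX:

```latex
\textbf{Claim.} There are infinitely many primes.

\textbf{Proof sketch.} Suppose $p_1, p_2, \ldots, p_n$ were all the primes.
Let $N = p_1 p_2 \cdots p_n + 1$. Dividing $N$ by any $p_i$ leaves
remainder $1$, so no $p_i$ divides $N$. But $N > 1$, so $N$ has some prime
factor, which therefore isn't among $p_1, \ldots, p_n$, contradicting the
assumption that the list was complete. $\blacksquare$
```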

All human actions are goal-laden, and so it sounds plausible to me that I could re-cast these criticisms in terms of goals somehow; that I could replace them with non-“abstract” problems. But what’s the point? What is gained by doing this, in situations like the above where decisive criticisms are already possible?

In each of these cases, truth is an implicit, assumed goal.

The criterion of success is truth and the criterion of failure is falsity.

There are many ways to evaluate statements. We may evaluate statements for length, say against a goal length. Commonly, when we don’t specify a specific type of evaluation, the intended type of evaluation is truth-based. Truth is a common default goal.
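As a toy illustration (my own hypothetical sketch, not from CF), the same statement gets different verdicts depending on which goal you evaluate it against:

```python
statement = "That object on the table is a wallet"

def meets_length_goal(s: str, goal_length: int) -> bool:
    # Evaluation by length: a purely formal check against a goal length.
    return len(s) <= goal_length

print(meets_length_goal(statement, 80))  # True: it passes the length goal

# Evaluation by truth would need entirely different information (facts
# about what's actually on the table), which no string check can supply.
```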

Put another way, I read/hear/interpret “That object on the table is a wallet” as a factual statement or factual claim. The adjective factual indicates the goal is truth and tells me in what manner to evaluate the statement (by whether it’s factually correct or not).

One point of bringing up problems like “Should I believe X?” as an alternative to “Is X true?” is that it helps clarify points like:

  • we’re fallible
  • we can only reach tentative conclusions about X’s truth, not final, omniscient conclusions
  • morality (“should”) is relevant
  • our goals/purposes are relevant (what you should do depends on what you’re trying to accomplish)
  • what we should believe depends on rational methodology, not merely truth. if you reach a conclusion rationally that you later discover is false, it doesn’t mean you were incorrect about what you should have believed
  • since methodology is relevant, methodological criticisms are appropriate (this is how this issue came up – I was defending/explaining the use of methodological criticism in CF)

It’s often unnecessary to talk about details like this, especially in simple or easy cases, but sometimes they do come up, particularly in epistemology discussions. And there are many cases where we can reach a conclusion without using methodological criticism, and that’s fine.

To put it in more Objectivist terms, we might phrase it as “Is X contextual knowledge (for me, in my current context)?” instead of “Is X true?” That also makes rational methodology relevant. Contextual knowledge means, roughly, conclusions reached using rational methodology within a context with limited information.

BTW, I think you agree with me that methodological criticism is valid. So I wonder how you defend/explain it in cases like these. What’s an alternative to CF that you find satisfactory for this issue?

I’m still thinking about your main point, but I’ll answer your question in the meantime.

Yes. I strongly agree.

How do I explain/defend the validity of methodological criticisms of factual statements? Well, first of all I should say that I don’t consider my ideas about this to be very well-developed. I’ll just give an example of one type of methodological criticism (though many methodological criticisms seem to reduce to it), and my Objectivism-inspired defense of the criticism.

My example of a methodological refutation is Objectivism’s labeling of something as arbitrary. Something is arbitrary if you don’t know of any evidence for it.

I would defend the validity of this criticism as follows. Without any evidence, there’s no means by which one’s mind could evaluate whether or not the given claim is true. So, cognitively speaking, there’s very little that someone can even do with an arbitrary claim; his only options are to reject it, or to choose some other conclusion about it (to accept it on faith, to file it in his “I don’t know” folder, etc.). Anything other than rejecting the claim is wrong, because that strategy puts your mind at the mercy of whatever claims happen, by chance, to be presented to you by the people around you or by the whims of your subconscious.

I could elaborate more, but I’ll leave it at that for now.

You’re probably going to say that my explanation involved goals. Implicitly I’m appealing to goals, like the goal of knowing reality, the goal of not bogging down one’s mind with things like “I don’t know if leprechauns exist,” etc. I don’t disagree.

I read that paragraph as explaining why that’s a good methodology. The issue I thought you were raising, which I was trying to answer, wasn’t the pros and cons of any particular methodologies. It’s why rational methodology issues are valid to bring up at all when dealing with factual claims.

More specifically, you were asking why responses related to methodology, such as “why?” or “source?”, are valid responses to “It is plausible that aliens exist.” Even if someone gives no reasons or sources, that doesn’t imply that “aliens exist” is false, so how can it be a (relevant, valid) refutation?

Similarly, saying “Isn’t that arbitrary?” doesn’t prove something is false, so if all you care about is truth and falsity, I don’t think it’s satisfactory. But if you care about following reason to get the best knowledge you can, and that’s part of your goal, then I think “Isn’t that arbitrary?” is a reasonable argument. Rational methodology helps us make good decisions (including about what actions to take and what to believe) despite limitations like fallibility and incomplete information.

I also agree with that.

This now sounds reasonable to me.

Next, I want to try to understand how this idea of recasting factual statements as IGC triples extends to probabilistic statements (which I think is also relevant to understanding your ideas about degree arguments). You touched on it a bit in the other thread but I don’t have a clear idea yet.

Does CF take a stance on what probability means?

CF says:

Probability within physics is fine. Events can have probabilities. I’m not sure if you wanted an opinion within that topic, e.g. on Bayesianism vs. frequentism, or to get into issues about deterministic physics or many worlds. I don’t think CF depends on positions like “probability means the proportion of universes in a branch of the multiverse in which an event happens”. CF is compatible with a variety of laws of physics. CF works with many specific meanings of probability that are compatible with a typical understanding of how probability works with dice rolls.

Probability within epistemology involves various errors. This is the thing CF criticizes. Evaluating ideas doesn’t involve probabilities of truth. Concepts like credences, the weight of the evidence, and strong arguments are errors (with some partial truth mixed in).

In informal discussions/thinking that aren’t about epistemology, the use of probabilistic language to deal with epistemological issues may be a mostly harmless approximation. It’s most commonly harmless when the discussion/thinking is both easy and non-intellectual. (I have a similar view about justificationist language, meaning speaking in terms of positive, supporting arguments.)

There’s also the topic of using probability to deal with incomplete information. A lot of statistics is related to this. E.g. I might believe that, in my standard context, with everything else being equal, 90% of people displaying certain body language are lying. This is approximate, it’s often partly based on past data, and it’s more like dice rolling (assigning probabilities to events) than credences (assigning probabilities to the truth of ideas).

What probability “means” depends on which usage/context you’re dealing with. They aren’t all the same thing.

Consider a context where I am about to do a single roll of two 6-sided dice, and I am considering the possibility that they will land on “snake eyes” (both 1s) – a 1 in 36 chance.
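For reference, the arithmetic behind the 1 in 36: the two dice are independent, so

```latex
P(\text{snake eyes}) = P(\text{die 1 shows 1}) \cdot P(\text{die 2 shows 1})
= \frac{1}{6} \cdot \frac{1}{6} = \frac{1}{36}
```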

I think that the following two statements mean ~equivalent things:

a) I will probably not roll snake eyes.
b) The theory/idea that “I will roll snake eyes” is probably false.

But I think that you think a) is valid but b) is invalid. You think that a) is an idea about the state of the world which is (decisively) true, and b) is a non-decisive judgement about an idea.

Am I right that this is what you think?

Yes.

Do you think a statement like “the predictions of Capitalist economic theory will probably hold true” could be valid?

If not, how do you differentiate it from “I will probably not roll snake eyes”? Aren’t both of them about the state of the world?

If so, how do you differentiate it from “Capitalist economic theory is very likely to be true”? What is Capitalist economic theory beyond what it says about the world?

I think this is a tangent, but:

What predictions? They’re heavily qualified, like “Everything else being equal, in a free market, if demand increases, then price will tend to increase.” It’s unrealistic to evaluate capitalism based on predictions like these because e.g. no country has a fully free economy. Instead, capitalist reasoning should be evaluated primarily in terms of logic, explanations and critical thinking. Those discussions may bring up data or observations but that won’t generally be to say “the prediction was wrong; the theory is refuted” like with science experiments.

Economics has a lot of abstract ideas so a cleaner example is “That object is a pizza.” That is about the state of the world, but it isn’t about a probabilistic event. The object either is or isn’t a pizza; probability isn’t involved.

The differentiating issue is whether random events are involved, not whether something deals with the state of the world.

I talked about predictions because it is one way to emphasize that I’m making a statement about the way the world is, rather than a statement about a theory. I recognize that not every statement about the way the world works needs to involve predictions, and that the core of what Capitalist economic theory says is not best described as predictions. Any concrete predictions it makes are rather complicated and qualified. Its predictions are not how I know it is true.

I believe the lone gunman theory of the JFK assassination. It makes sense, and it fits with some broad principles that I believe, like that conspiracies are hard to pull off successfully because lying is hard.

I have seen some attempted refutations of the lone gunman theory. None of those refutations were clearly decisive. Some of those refutations, e.g. an attempted refutation involving bullet physics, were such that I’m not completely sure whether they are decisive, because they are a lot of work to look into. Like, the bullet trajectory supposed by the lone gunman theory is weird, but it doesn’t obviously violate any physical principle known to me. There’s a claim that if you look into the details, then you’ll see it is physically impossible. From what little I did look into the details, I wasn’t convinced, but I didn’t even get close to finishing the job.

I would summarize my take on the situation by saying “I think the lone gunman theory is likely true, but I haven’t looked into it in much depth.”

Formally speaking, the first half of that statement violates CF.

Does CF say that it’s practical to think things like that anyway?
or
Does CF say that one should refrain from having opinions like “I think the lone gunman theory is likely true”? I.e. does CF say that it’s not actually practical?

That could be reasonable and practical depending on the details. It’s not ideal, but it could be a good enough approximation to serve people reasonably well in fields other than epistemology. It doesn’t (or at least shouldn’t) involve probability, so other language would be better. This way of thinking is well known in our culture so it’s understandable for people to use a tool they have available. I think these misconceptions are causing massive harm on a society-wide level, but that there’s little harm in many individual cases. I mostly blame some intellectuals and thought leaders, not the downstream users of these ideas.

The core concept here is doing a partial investigation of an issue, not a fully thorough investigation, and then reaching a conclusion. I consider that common, proper and necessary.

So there’s a question of how you decide when you have investigated enough to reach a conclusion and take action. There’s a common decision people make between investigating more or moving on. (CF says investigating “enough” isn’t really the right concept. That’s a mainstream, quantitative formulation of the issue which I thought would be effective for my goal of communicating.)

A typical proposed solution is to end investigations when your conclusion has a high probability of being true. A similar option is to end when you have a high degree of belief (credence) in an idea. Additional factors may be taken into account like the importance of the issue, the difficulty of learning more about it, the downside risk if you’re wrong, the cost of retrying if you fail, etc. These factors can help guide us on whether 0.8 or 0.99 is a better target.

These proposals bring epistemic probability, or something similar, into the discussion and are one of the reasons for terminology like “likely true”. CF rejects these proposals and offers an alternative.
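To make the rejected proposal concrete, here’s a minimal sketch in Python. The names, weights and thresholds are hypothetical placeholders of mine, chosen only to illustrate the kind of rule described above; nothing here is CF’s alternative:

```python
# The mainstream credence-threshold stopping rule (which CF rejects):
# stop investigating once your credence reaches a target that's adjusted
# for stakes. All weights below are arbitrary placeholders.

def target_credence(importance: float, downside_risk: float,
                    retry_cost: float) -> float:
    """Pick a stopping threshold (e.g. 0.8 vs. 0.99) based on how much
    is at stake. Each factor is assumed to be in [0, 1]."""
    base = 0.8
    bump = 0.19 * max(importance, downside_risk, retry_cost)
    return min(base + bump, 0.99)

def should_stop_investigating(credence: float, importance: float,
                              downside_risk: float,
                              retry_cost: float) -> bool:
    # Stop once the credence in your conclusion reaches the target.
    return credence >= target_credence(importance, downside_risk, retry_cost)

# A low-stakes question can end at ~0.8 credence; a high-stakes one
# keeps going until ~0.99.
print(should_stop_investigating(0.85, 0.1, 0.1, 0.1))  # True
print(should_stop_investigating(0.85, 1.0, 0.9, 0.5))  # False
```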

I’m not completely sure how CF thinks one should update one’s beliefs after a partial investigation like this.

It’s obviously not right for me to say that I think the lone gunman theory is refuted.

But it’s also not right for me to say that I think the lone gunman theory is unrefuted. I know of criticisms of the theory whose refutations I do not know.

But it’s also not right for me to say that I don’t have any idea if the lone gunman theory is true. I have some idea; I’ve thought about it a bit.

I think that your answer to this puzzle has to do with your distinction between abstract and non-abstract ideas. I think CF thinks that I should say the non-abstract lone gunman theory, the one in my mind, is unrefuted, and that I have no idea whether the abstract lone gunman theory is refuted or not. Neither judgement involves a word such as “likely.”

Is that the idea?

I am elaborating on the previous post, but I’m writing this 10 minutes later, so I’ll make it a new post.

To know that the abstract lone gunman theory is unrefuted, you’d probably say I would have to systematically engage with all the attempted refutations of it, and refute each one of them. And maybe you’d think I’d also need a debate policy, so as to bring any new attempted refutations to my attention.