Lmf's quick questions about CF

To address this, I believe we need extra precision that’s usually omitted from discussion and analysis.

P is what we might call an “abstract” idea addressing an “abstract” problem. This may not be the best word but I’ve used it before: Curiosity – Human Problems and Abstract Problems

P is not a decision for a human to make.

An example of an abstract problem is “Are there aliens?”, which is the kind of problem that P is responding to. In some sense, the right answer to that is “I don’t know”; it’s not a solved problem. It may not be solved for centuries.

The problems we deal with in our lives are a bit different. They are more like “Should I believe there are aliens?” or “What should I believe about the potential existence of aliens?”

To address problems like those, you use statements similar to but different from P. E.g. you might say “The best conclusion to reach today is that it’s plausible that aliens exist.” This statement contains P as a sub-statement. It could also be written as “The best conclusion to reach today is P”.

This gets into the realm of human action and human decision making. It’s still a fairly abstract type of decision (what to believe). Sometimes what we care about is more concrete: what physical or external action to take. We are often trying to decide on a belief in order to make some decision involving the external world. Like I might be deciding whether or not to build and launch a space probe to look for aliens. Or I might try to form a belief about whether a particular product is counterfeit specifically in order to inform my immediate decision of whether to buy it, not just as an intellectual curiosity. Or I might try to form a belief about whether there is a McDonalds two blocks to the right in order to decide which direction to walk.

Looking at this from another angle, we have to be clearer about the goal. You specified P but didn’t actually specify the goal. Since there is no goal, CF says it’s technically impossible to criticize P except by pointing out the lack of goal (or by inferring or assuming some unstated goal). Adding context about human lives, decisions and actions is a way to think about the goal. For example, the goal might be to reliably arrive at McDonalds, in which case an arbitrary assertion isn’t good enough even though it could be true. To reliably arrive at McDonalds, we’ll want to navigate using arguments, evidence, or more broadly a good methodology. Similarly, any issue related to what I should believe can be interpreted as having a goal about using rational methodology.

The word “plausible” makes P less abstract than it could be. Plausible brings up human mental states in a way that Q = “Aliens exist.” doesn’t. This suggests the goal of P is about figuring out the right mental state to have about Q. If that’s the goal, then methodology and rationality are involved, because the right mental state to have is the one you reach using rational methodology, not whichever one is true. (You have no direct access to omniscient truth. If you believe X for no reason, and later everyone agrees X is true, and we assume for discussion that X is true, it still wasn’t the right thing to do to believe X in the past for no reason; you just got lucky.)

BTW, “plausible” isn’t how CF typically describes ideas, which I think makes this a bit more confusing.

So the short answer is you skipped the step of explicitly stating the goal and the longer answer involves goals related to rational action, belief and problems people face in their lives, not just abstract truth. That makes criticisms about reasoning, methodology and sources relevant to the goal. For issues that are purely about abstract truth, the right answer tends to be “I don’t know”, or perhaps not to answer since it’s not actually related to your life. CF, like all fallibilist epistemologies, does not definitively prove statements true or false; it merely helps guide humans in their lives, thinking and decision making. All CF refutations are “tentative” and “fallible” (open to potential revision in the future) just like CR refutations.

Also, to avoid potential confusion: I used singular “goal” because you can conjoin goals together, so pluralizing it or not doesn’t really matter.

Coming from the standpoint of a more traditional theory of knowledge (Objectivism), my criticism of the main point you made in this post would be this: to even know whether or not an idea actually satisfies a goal, you have to know something about the facts of reality. So facts of reality come first. Only after we have established something about the facts of reality can we have values, let alone know how to pursue them.

I will venture a guess as to your retort. I think you might say something like:

Philosophical ideas like “fact of reality” are useful in some contexts, but they are approximate. Strictly speaking, in the sense in which the traditional standpoint means it, there is no such thing as a fact of reality, nor is there such a thing as knowing whether or not an idea actually satisfies a goal. There are only IGC triples. And IGC triples aren’t really “about” anything; they don’t actually refer to anything (so in particular, you shouldn’t ever really have being true as a goal, since that’s predicated on the parochial theory that ideas refer to facts of reality). They are just some data structures, whose main function is refuting other IGC triples.

Do you agree / am I in the ballpark?


Facts of reality are great but we don’t have direct or infallible knowledge about them. The way for us to make progress on understanding them better includes things like rational methods. So when we want to know about facts of reality, we have to use things like sources, evidence, arguments and explanations (which means considering criticisms on those topics relevant). We should (in general) not believe arbitrary, unsourced, unargued claims.

Considering the human context (we want to know the facts of reality; we have goals; we want to use effective methods) enables methodological refutations instead of purely truth-and-logic-based refutations. Good methodologies care a lot about truth and logic, and use logic, but we don’t know enough to guide our lives with only truth and logic; we need other tools too.

Induction is a tool that doesn’t follow (deductive) logic, but CR and CF reject it. They favor other tools like evolution and methodologies that enable error correction well.

The issue of how infants get started with learning (or how to build an AGI) is hard. I consider evolution our best clue and I think CF has a lot of relevant ideas but nothing like a full solution. I don’t think anyone understands the details. CF (like most epistemology ideas) is more concerned with the learning of older people who already have some knowledge and some goals. I think claims along the lines of “infants directly perceive some facts of reality to get started” are wrong.

Okay, good.

I think I may have been interpreting some of what CF says in a sense that is more philosophically fundamental than how you actually meant it. You use very simple language in your articles, and I have been reading things between the lines that maybe aren’t there.

Was I at least right that CF treats a refutation (a reason why an idea fails at a goal) as being an IGC triple itself? Or is a refutation something else? Or does CF not take a stand about that?

I encountered something I found confusing about CF. I think I figured it out, but I’ll write it out anyway. It’s related to the above question, because it pertains to CF’s nonstandard view of refutations.


I_1 := “going to the jewelry store on 5th ave.”
I_2 := “not going to the jewelry store on 5th ave.”
G := “getting food.”
C := some typical context. A normal adult human in NYC who is hungry.

Clearly (I_1, G, C) is refuted, by the criticism that jewelry stores don’t sell food.

But also, (I_2, G, C) is refuted, because the idea of not doing some specific action isn’t sufficient to meet the positive goal of getting food. To satisfy G, you’d have to add something to I_2, like going to the grocery store on 7th ave instead.

Someone could summarize this situation (and CF often shortens things like this) by saying: going is refuted, and also not going is refuted. And if you’re not careful, that summary looks like it violates the law of excluded middle; it could be misconstrued as meaning that a proposition (“It is the case that I should go to the jewelry store on 5th ave”) and its denial (“It is not the case that I should go to the jewelry store on 5th ave”) are both refuted.
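The situation can be made concrete with a minimal sketch. This is my own illustrative model (the names and structure are mine, not CF’s official formulation): each refutation targets a specific (I, G, C) triple, so refuting both triples above doesn’t refute a proposition and its denial.

```python
from dataclasses import dataclass

# Hypothetical minimal model of an IGC triple (illustrative, not CF's
# official data structure).
@dataclass(frozen=True)
class IGC:
    idea: str
    goal: str
    context: str

GOAL = "getting food"
CONTEXT = "normal hungry adult in NYC"

i1 = IGC("go to the jewelry store on 5th ave", GOAL, CONTEXT)
i2 = IGC("don't go to the jewelry store on 5th ave", GOAL, CONTEXT)

# Each refutation is aimed at one specific triple, with its own reason.
refutations = {
    i1: "jewelry stores don't sell food",
    i2: "merely not doing something isn't a positive plan for getting food",
}

# Both triples are refuted...
assert i1 in refutations and i2 in refutations

# ...but that doesn't refute every option that counts as "not going":
i3 = IGC("go to the grocery store on 7th ave", GOAL, CONTEXT)
assert i3 not in refutations
```

The point of the sketch: what gets refuted is always a particular idea paired with a goal and context, so “going is refuted and not going is refuted” never collapses into asserting both a proposition and its negation.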



A refutation is a type of IGC triple, yeah. The goal is to criticize a different IGC. Treating criticism explicitly as IGC triples can sometimes be unnecessary/skipped. Having a few core concepts that are used broadly, instead of a bunch of different concepts, is relevant for AGI design (fewer data structures; more elegant code with fewer special cases; also IMO if you can’t figure out how to evolve ideas that function as both solutions and as criticism of other ideas, using the same data type, then you probably aren’t getting anywhere on AGI. The same idea data type should also function as goals or contextual information. Or put another way, I think you should just have one idea replicator not several different replicators for different purposes).
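The “one data type” design point can be sketched in a few lines. Again, this is my own hypothetical modeling (the class and field names are mine): a criticism is an instance of the same type as the idea it criticizes, with the target idea recorded as part of it.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch (my naming, not CF's): one data type whose
# instances can serve as solutions or as criticisms of other instances.
@dataclass(frozen=True)
class Idea:
    content: str
    goal: str
    context: str
    target: Optional["Idea"] = None  # set when the idea functions as a criticism

plan = Idea("go to the jewelry store", "get food", "hungry adult in NYC")
criticism = Idea(
    "jewelry stores don't sell food",
    goal="criticize the plan",
    context="hungry adult in NYC",
    target=plan,
)

# The criticism is the same data type as the plan it criticizes.
assert type(criticism) is type(plan)
assert criticism.target is plan
```

With a single type, the same instances could also be used where goals or contextual information are needed, rather than defining separate structures for each role.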

CF denies that you can do criticism (in the CR error correction sense) independent of goal(s) and context. Errors cannot be evaluated without some sort of contextual information about what success and failure are.

Synonyms for goal, like “objective” or Popper’s “problem”, are fine instead. Considering the goal to be part of the context, or vice versa, can work too. It’s also possible to mix the goal and/or context into the idea. Like you could say proper ideas are IGCs. Not wrong, but not my general recommendation. Depending on the context, the “idea” part of IGC can be better called something more specific, like plan or solution; using the very broad word “idea” to mean something a bit closer to “solution” has some downsides but overall I find it works pretty well. This stuff connects most with CR but also relates to Objectivism’s claim about knowledge being contextual and to Theory of Constraints’ material on goals.

Yeah I agree. I see how that could be confusing. CF basically says it’s valid to criticize things for not being good enough – you can (and often should) demand more from ideas. This comes up a lot in debates/discussions where people concede too early/easily without being demanding enough to get full information about why the new ideas they’re (partially) losing the debate to are good enough to address all their relevant goals/problems. Lots of times people try to change their mind because of some arguments showing a new idea is better (by degree), instead of actually insisting on understanding how to use it to solve all relevant problems before adopting it. (If their own idea doesn’t meet that standard due to criticism they were just told, they should take the position that they don’t yet know a full solution, can use parts of both their old and new ideas in the meantime, and can do more research.)

In this case, not going to the store isn’t adequate as a positive solution to getting food. That does not mean all options that qualify as “not going to the store” are wrong (which would lead to the logical problem); it just means that the positive knowledge/idea “don’t go to the store” is inadequate to solve the problem. It’s not categorically wrong but it doesn’t offer enough help to solve the problem.