CF Summary Draft (seeking feedback)

This is for the CF homepage so it needs to be short (this may already be too long?).


Critical Fallibilism (CF) is a rational philosophy which explains how to evaluate ideas using decisive, critical arguments and accept only ideas with zero refutations (no known errors). An error is a reason an idea fails at a goal (in a context). CF rejects judging how good ideas, how strong evidence is or how powerful arguments are, and rejects credences and degrees of belief. CF says we learn by an evolutionary process focused on error correction, not by induction or justification. CF advocates an approach to decision making focused on qualitative differences not quantitative factors.

CF advocates publishing written policies to enable error correction that include mechanisms to reduce bias like transparency. Intellectuals actually could address public questions and criticism without spending too much time and energy. Instead, they use quality filters to reduce what they consider, but most filters are indirectly based on social status, which is irrational. Ask “If this criticism (that you don’t want to engage with) is true, by what process will your error be corrected?” and they have no answer. Intellectuals routinely stay wrong when it’s avoidable because better ideas are already known. CF proposes solutions to engage with ideas in resource-efficient ways. Here’s my debate policy.

CF explains practicing ideas in order to achieve mastery. To learn philosophy effectively and use the ideas in your life, practice activities are necessary. Practicing trains your subconscious to handle some thinking automatically, which frees up your conscious mind to think about more advanced issues.

CF has original ideas developed by Elliot Temple. It also builds on previous ideas, particularly Critical Rationalism (Karl Popper), Objectivism (Ayn Rand) and Theory of Constraints (Eli Goldratt).


The most typical answer is that maybe someone else will listen to the idea. Once some high status intellectuals accept the idea or a large number of low status people accept it, then this intellectual will be willing to consider it.

If everyone uses answers like that, then the ideas are blocked. He’s just relying on someone else’s rationality. Either there are other people who are better than him (maybe he should try being more like them? if they can do it, that proves it’s viable) or else they’re all like him and his strategy won’t work.

I don’t think I have space to go into any detail on this though.

I cut this sentence to keep it shorter.

CF concludes that intellectuals who don’t commit themselves in writing to rational policies should not be trusted.

Part of the inspiration for this summary was dividing CF into 3 main parts, 3 big ideas that I want to share: yes/no epistemology, paths forward and automatization. The first 2 are (IMO) really important original ideas and the third is not so original but needed for people to actually use the first 2 ideas. There are many other ways to divide up CF but this one seems OK and simple. I was trying to think: if I could tell people a short list of key messages, what would I say? I came up with those 3 messages. They’re writable in one sentence each (they’re focused enough ideas, which is good) but I thought explaining a bit more on my homepage would be good.


I think you missed writing the word “are” before the first comma. It should read “CF rejects judging how good ideas are,…”


I put a shorter version up on the CF site https://criticalfallibilism.com

Critical Fallibilism (CF) is a rational philosophy which explains how to evaluate ideas using decisive, critical arguments and accept only ideas with zero refutations (no known errors). An error is a reason an idea fails at a goal (in a context). CF explains why it’s a mistake to judge how good ideas are, how weighty evidence is or how strong arguments are, or to use credences and degrees of belief. We learn by an evolutionary process focused on error correction, not by induction or justification. CF offers an approach to thinking and decision making focused on qualitative differences not quantitative factors.

CF is an original philosophy developed by Elliot Temple which takes inspiration from Critical Rationalism (Karl Popper), Objectivism (Ayn Rand) and Theory of Constraints (Eli Goldratt). CF advocates policies to enable error correction (like my debate policy) and practicing with ideas so your subconscious can automatically use them.

I think there are some code smells.

Critical Fallibilism (CF) is a rational philosophy which explains how to evaluate ideas using decisive, critical arguments and accept only ideas with zero refutations (no known errors).

CF explains how to accept only ideas with zero refutations? Like the process of accepting? That sounds a bit strange to me.

Also, why include both ‘zero refutations’ and ‘no known errors’? (Approximate) duplication can be good for pedagogy, but for a presumably-supposed-to-be-concise summary like this it sounds like you are just pulling a punch.

Here is an alternative first sentence:


Critical Fallibilism (CF) is a rational philosophy, which explains how to evaluate ideas – using decisive, critical arguments – and proposes only to accept ideas with no known errors.


(I think it’s pretty clear here that ‘proposes only’ couldn’t be substituted with ‘only proposes’, and that ‘only-to-accept’ is the right kind of grouping. But maybe Elliot disagrees since he knows a lot more about grammar errors people make.)

An error is a reason an idea fails at a goal (in a context).

Specifically continuing with talking about an ‘error’, following ‘no known errors’, is a weird change of scope because ‘no known errors’ appeared in brackets in the previous sentence. I don’t think that bracketed content should have side effects like that.

(in a context)

Do you think that this is a key caveat to make here? I think it bloats the summary.

Alternative accumulated first and second sentence:


Critical Fallibilism (CF) is a rational philosophy, which explains how to evaluate ideas – using decisive, critical arguments – and proposes only to accept ideas with no known errors. An error is a reason why an idea fails at a goal.


I will look at the rest tomorrow morning (a Saturday!) but right now I am creeping past bedtime.

Shorter Intro v2

Critical Fallibilism (CF) is a rational philosophy which explains how to evaluate ideas using decisive, critical arguments. Decisive criticisms point out errors – reasons ideas fail at goals. CF’s methods enable using only ideas with no known errors.

CF focuses thinking and decision making on qualitative differences and breakpoints. It’s a widespread error to judge how good ideas are, how weighty evidence is or how strong arguments are, or to use credences or degrees of belief. We learn by an evolutionary process of error correction, not by induction or justification.

CF is an original philosophy developed by Elliot Temple which takes inspiration from Critical Rationalism (Karl Popper), Objectivism (Ayn Rand) and Theory of Constraints (Eli Goldratt). CF also advocates policies to enable error correction (like my debate policy) and practicing to subconsciously automatize ideas.

Shorter Intro v3

Critical Fallibilism is a rational philosophy which explains how to evaluate ideas using decisive, critical arguments. Decisive criticisms point out errors – reasons ideas fail at goals. Learn how to use only ideas with no known errors. Read the introduction.

On V2:

This start is better because it links the criticisms to the errors. Maybe the first use of ‘decisive’ could embed a link to a yes/no article.

‘CF’s’ is a bit of a tongue twister if you are a new reader substituting in what CF stands for ;) . Also, I think that while ‘using’ ideas makes complete sense, it takes a bit of mental gymnastics to figure that out. So an alternative:


Original methods of CF support acting on ideas with no known errors.


The italicised ‘no’ creates a link to the previous sentence. I also changed ‘enable’ to ‘support’ because ‘support’ seems more direct. For ease of rhythm later, I will actually change this to:


; and original methods of CF support acting on ideas with no known errors.


I like this part. I notice that you changed ‘credences and degrees of belief’ to ‘credences or degrees of belief’ (my own emphasis). I second this: a lot of people will regard those as equivalent.

Minor changes:

  • Maybe ‘breakpoints’ could include an article link (a newbie might not be sure what you mean)
  • Similar for the stuff about why degree epistemologies are bad
  • You can delay the punchline for the second sentence to make it more impactful

Putting those changes together into an alternative:


CF focuses thinking and decision making on qualitative differences and breakpoints. To judge how ‘good’ ideas are, how weighty evidence is, how strong arguments are, or to use ‘credences’ or degrees of belief, is a widespread error.


Since this sentence continues the paragraph, a non-expert will speculate on how it relates to the previous (two) sentence(s), and whether it does at all, and may plausibly speculate wrongly.

An alternative would be to put the sentence into a new paragraph, like:


CF proposes that we learn through an evolutionary process of error correction, instead of by induction or justification.


(Also, again, I’m thinking that this provides some link-to-article opportunities.) I also use ‘instead of’ because it works better rhythmically later and doesn’t change the meaning. Then, you can hoist up the paths forward and practice stuff which right now appears as a last-minute aside, like:


CF proposes that we learn through an evolutionary process of error correction, instead of by induction or justification. It advocates policies to enable error correction (like my debate policy), and supports the use of practice to automate skills and make your learning subconscious.


Then the last bit would just be like:


CF is an original philosophy, developed by Elliot Temple, which takes inspiration from Critical Rationalism (Karl Popper), Objectivism (Ayn Rand) and Theory of Constraints (Eli Goldratt).


‘Original’ is duplicated (see before ‘and original methods of […]’), but that’s kind of good for emphasis. I added commas for ease of rhythm.

Altogether:


Critical Fallibilism (CF) is a rational philosophy which explains how to evaluate ideas using decisive, critical arguments. Decisive criticisms point out errors – reasons ideas fail at goals; and original methods of CF support acting on ideas with no known errors.

CF focuses thinking and decision making on qualitative differences and breakpoints. To judge how ‘good’ ideas are, how weighty evidence is, how strong arguments are, or to use ‘credences’ or degrees of belief, is a widespread error.

CF proposes that we learn through an evolutionary process of error correction, instead of by induction or justification. It advocates policies to enable error correction (like my debate policy), and supports the use of practice to automate skills and make your learning subconscious.

CF is an original philosophy, developed by Elliot Temple, which takes inspiration from Critical Rationalism (Karl Popper), Objectivism (Ayn Rand) and Theory of Constraints (Eli Goldratt).


On V3:

V3 makes it seem like CF is all about yes/no. It excludes the other two pillars about paths forward and practice. (These would be embedded in ‘Learn how […]’, but I wouldn’t know as a reader that these are important, headline ideas.) You could add another bit, like:


Critical Fallibilism (CF) is a rational philosophy which explains how to evaluate ideas using decisive, critical arguments. Decisive criticisms point out errors – reasons ideas fail at goals.

CF includes methods to accomplish this. It focuses on means of error correction, as well as mastery through practice, so that you can automate skills in your subconscious.

Learn how to use (only) ideas with no known errors. Read the introduction.


I put ‘only’ in brackets because the exclusivity is like dropping in a new idea; the brackets convey that this idea is open to be elaborated on later. Also, ‘includes methods’ is approximate duplication of ‘explains how’, but I think that it’s functional.

I don’t want to take mental states out of the picture. Does “believing” instead of “accepting” or “using” also seem confusing to you?

What false speculation do you have in mind?

Conceptually, PF and automatization are sub-ideas which help enable believing and acting on only non-refuted ideas.

Yeah, ‘acting on’ does sound a bit behaviourist-y.

Take the sentence from the shorter-version V1:

Critical Fallibilism (CF) is a rational philosophy which explains how to […] accept only ideas with zero refutations (no known errors).

Replacing ‘accept’ with ‘believe’ here would cause me the same kind of confusion. I tried to figure out what the confusion is but I failed to produce anything satisfactory. I thought: Is it a problem with my not understanding how to define accepting/believing? Is it a problem with accepting/believing not featuring in any of my mental models? Is it a problem with my thinking that accepting/believing states are discontinuous leaps from non-accepting/non-believing states, with no transitional steps such as would be described in a how-to? I seriously don’t know. It might be good to get a second opinion, because I don’t want to exaggerate an idiosyncrasy.

E.g. a relationship which I think you are alluding to is that degree epistemologies are a form of justificationism and/or induction. If I’m wrong about this, then QED, because I’m the false speculator. If I’m not wrong, then QED too: the relationship is non-trivial (it takes some background knowledge/skills to infer), so the plausible speculation that there’s no relationship is false.

Critical Fallibilism is a rational philosophy which explains how to evaluate ideas using decisive, critical arguments. Decisive criticisms point out errors – reasons ideas fail at goals. Learn how to reach clear conclusions with no known errors. Read the introduction.