Introduction to Critical Fallibilism

Critical Fallibilism (CF) is a philosophical system that focuses on dealing with ideas. It has concepts and methods related to thinking, learning, discussing, debating, making decisions and evaluating ideas. CF combines new concepts with prior knowledge. It was created by me, Elliot Temple.


This is a companion discussion topic for the original entry at https://criticalfallibilism.com/introduction-to-critical-fallibilism/

I’m doing a close reading of this article.

This outline has a few additional comments by me, so it isn’t only trying to represent the article.

CF inspiration outline:

  • CR (Critical Rationalism)
    • we’re fallible.
      • we make mistakes and cannot guarantee truth
    • learn by critical discussion
    • learning is an evolutionary process
      • improves by error correction, not positive justification (not adding points)
  • TOC (Theory of Constraints)
    • focus on constraints to achieve goals
      • bottlenecks and limiting factors
    • most factors have excess capacity
      • optimization is wasted on these factors
      • local optima vs global optima
    • buffers can help you deal with variance
    • finding silver bullets using inherent simplicity
  • Objectivism
    • proper learning involves integration and automatization
      • limited capacity of active memory/thinking requires that multiple simpler ideas are integrated into a single more complex conceptual unit.
        • I think you can also call it abstraction. can think about the difference between abstraction and integration
          • automatization also helps the limited capacity by taking load off and reducing errors
          • happens by practicing
  • CS (computer science)
    • digital systems are fundamentally better than analog systems at error correction

I haven’t started reading past the first heading.

This took 20 minutes.

I’m also adding comments in the outline without clear labels separating what tries to represent the article from my own additional thoughts.

Decisive Arguments

  • CF’s most important original idea: rejection of the concept of strong and weak arguments
    • ideas are evaluated in a digital/binary way
      • ideas are either refuted or non-refuted
        • non-refuted means there are no known errors for the idea
      • ideas shouldn’t be evaluated by amount of goodness
        • Q: do ideas that attempt to state a universal truth have an amount of goodness that is their similarity to the objective truth? Does general relativity have more goodness than Newton’s theory of gravitation? I’m not talking about how they should be evaluated, but whether the amount of goodness exists as a metaphysical property. This is about ideas, so perhaps epistemology is the only way to understand it. Perhaps it only makes sense to talk about them as ideas people have, and therefore how they are evaluated is what matters, not their metaphysical properties.
          • Complicated stuff I can figure out later. Probably has to do with the reality of abstractions
  • criticisms
    • all valid arguments are criticisms
      • valid arguments that don’t seem like criticisms are actually equivalent to criticisms; they can be rephrased to become criticisms
    • proper criticisms contradict other ideas and explain how they fail
      • you cannot accept both the criticism and the idea it criticizes. contradicting each other means they’re not compatible
    • criticisms should be decisive
      • turn the other idea from not refuted to refuted
        • do we need an alternative idea that doesn’t fail where the criticized idea fails in order to say that the criticized idea is refuted?
      • shouldn’t aim to lower the point total of the other idea
      • merely saying the other idea isn’t great does nothing
  • ideas should be held tentatively
    • since we’re fallible we should always be willing to discard our old ideas and adopt better ones as we gain new knowledge in the future
      • future knowledge is entirely unpredictable, so we don’t know which ideas we will have to change
    • we can reach conclusions, be confident and be decisive
      • but don’t expect that we have the perfect, final truth
        • if we, in fact, had reached the perfect, final truth, we couldn’t be certain that we had reached it
      • knowing further progress is possible doesn’t mean you have to wait for it. You can act now with the current knowledge you have
        • acting now with imperfect knowledge can be the right option
  • CF gives ideas multiple evaluations
    • different ideas can have the same score in some cases, but different scores in other cases
    • having many evaluations for an idea is complex and nuanced, but each evaluation is simple. so we catch a lot of complexity and nuance through simple methods
      • it’s like a technique that tackles a complex problem by breaking it down to simple steps
    • an idea gets a binary evaluation for each goal/purpose it is supposed to achieve, either the idea succeeds at the goal or it doesn’t
      • rival ideas are differentiated by one idea succeeding at at least one goal that the other idea fails at
        • only contradictory ideas are rivals, non-contradictory ideas are compatible and you don’t need to choose between them
        • the goal should be relevant to your life
          • goals are ideas that we can criticize. The goal can be criticized for not being relevant which changes how we evaluate the rival ideas
    • ideas should be evaluated in context
      • we evaluate IGCs (idea-goal-context triples)
        • the context may be implicit, so we can also evaluate idea-goal pairs
        • each IGC is either refuted or non-refuted
          • many IGCs with a common idea means we have multiple evaluations of the idea
      • we can be wrong about the context; this means we can criticize our idea of the context, or whether we should put ourselves in this context
  • only decisive errors matter because they refute IGCs
    • non-decisive errors don’t cause failure at the goal and are therefore compatible with success; they don’t make a difference
    • multiple non-decisive errors can be thought of as a group and therefore a new error. this new error can be decisive
      • the non-decisive errors don’t become a decisive error by detracting enough points from the idea to reach a threshold where the idea becomes refuted. rather, the non-decisive errors combine in a special way such that something makes a binary switch
        • if the non-decisive errors are completely independent then they don’t combine into anything that makes a difference.
        • detracting points from factors that don’t cause a failure doesn’t matter, no matter how many points in total have been detracted
          • two non-decisive errors in different factors is a unique situation: either one non-decisive error enhances the other non-decisive error so that it becomes decisive, or another factor gets an error that is decisive
        • I think the most common case of multiple non-decisive errors being decisive together is when each non-decisive error drains your attention, which causes you to not have enough attention for the problem
          • in general: multiple non-decisive errors drain a common resource such that it falls below the breakpoint needed for the problem (see the sketch after this outline)
            • the resource could be time, conscious/unconscious attention, money, space, etc.
  • there are infinite possibilities for goals, ideas and contexts. how do we manage it all?
    • start from the IGCs we already have that we think are good or have potential
      • we generate new ones when we find problems with the ones we have
      • goals and contexts are ideas that we can criticize
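
A minimal sketch of the resource-draining idea, in Python (the budget, breakpoint, and error costs are made-up numbers of mine, not from the article):

```python
# Toy model: two errors that are each non-decisive alone can be decisive
# together when they drain a shared resource (here, attention) below the
# breakpoint the problem needs.

ATTENTION_BUDGET = 100
BREAKPOINT = 80  # attention the problem needs; below this, the goal fails

def succeeds(error_costs):
    """Binary evaluation: fail only if remaining attention is below the breakpoint."""
    remaining = ATTENTION_BUDGET - sum(error_costs)
    return remaining >= BREAKPOINT

print(succeeds([15]))      # True:  this error alone is non-decisive
print(succeeds([12]))      # True:  this error alone is non-decisive too
print(succeeds([15, 12]))  # False: together they cross the breakpoint
```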

I’ve now read until the first sub-heading.

This took 2 hours. It was fun! I find this enjoyable.

Ideas, like genes, have different amounts of knowledge or adaptation.

no. suppose a goal is impossible. then all ideas are refuted for that goal. calling them non-refuted wouldn’t make sense because they won’t work.

whether an idea succeeds or fails at a goal is about that idea and that goal, not a comparison to alternatives.

a common case is when you have 2+ alternatives for how to achieve part of a plan.

like you want to eat at a restaurant. you can walk there or drive there. an error with walking isn’t decisive since you can still drive. an error with driving isn’t decisive since you can still walk. but both errors together are decisive because you need at least one transportation method to work.
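
here’s a tiny sketch of this example as code (my own illustration of the logic):

```python
# The plan needs at least one working transportation method, so each error
# alone is non-decisive, but both together are decisive.

def plan_succeeds(walking_ok, driving_ok):
    return walking_ok or driving_ok  # need at least one way to get there

print(plan_succeeds(False, True))   # True:  error with walking, can still drive
print(plan_succeeds(True, False))   # True:  error with driving, can still walk
print(plan_succeeds(False, False))  # False: both errors together are decisive
```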

:smile: :+1:

Does more knowledge mean closer to truth?

Makes sense. Some (abstract) problems don’t have solutions.

So how do we choose goals to consider? We have to make choices about what we want. It’s up to us to decide what we value.

Goals are ideas we can criticize. However, by what standard can we criticize goals? You can criticize a goal by saying it is hard or impossible to achieve. If we only consider goals that are plausible, then we ask: what goals would lead to good if they were achieved? This is perhaps more of a morality question. Rand’s idea was that there is an ultimate goal, life, to which all other goals are subordinate. Epistemology might be very relevant though, because the question seems similar to ultimate foundations and final truth.

“It’s up to us to decide what we value” seems to suggest relativist morality, but I think Elliot thinks there exist objectively good values. I think he means that we have to guess/conjecture at good values. Do objective values logically require an ultimate value? Even if there are epistemological issues with knowing the ultimate value, it might still exist, like the final truth exists.


I think I could write more, but I might not have time to write more for a couple of days, so I’m posting this for now.

Something that I’ve been thinking about recently is the asymmetric nature of truth and falsity. False ideas can contradict other false ideas and true ideas, but true ideas can only contradict false ideas. True ideas won’t contradict each other. I think this is a big part of why the critical method is truth-seeking (correcting errors = pursuing the truth).

So, I think we can also criticise goals if they conflict with other goals we have. They can’t both be objectively right goals if they conflict, so there is a problem there to solve.

Perfect non-contradictory truth is a good abstract concept but it’s not what people generally work with. We use error correction to fix some contradictions we find, making our ideas less contradictory.

@LMD’s suggestion of following non-contradiction may be enough to reach a bunch of good objective goals, maybe all of them. But I’m also interested in whether there are ultimate goals, and whether there exists a single ultimate goal.

Most goals are sub-goals; they are means to an end. Sub-goals can form a long chain: one sub-goal is meant for another sub-goal, which works toward another sub-goal. But can this go on for infinity? Doesn’t it have to end at some final goal? Wouldn’t the whole chain of sub-goals be meaningless if it didn’t have a final goal? I think there have to be ultimate goals.

Perhaps this is wrong because I have wrong ideas about infinity. Perhaps an infinite chain of sub-goals could have meaning and be objectively good. I wouldn’t know how that would work though.

My intuition tells me that if ultimate values are needed for objective values then there should be a single ultimate value. But I can’t see by logic why there couldn’t be multiple ultimate values.

I checked Rand’s writing to verify whether this was true, but otherwise what I have written is just things I thought of for myself, although I’m certain that I originally got these ideas from her. I skimmed through The Objectivist Ethics from The Virtue of Selfishness and found the most relevant part:

An ultimate value is that final goal or end to which all lesser goals are the means—and it sets the standard by which all lesser goals are evaluated. An organism’s life is its standard of value: that which furthers its life is the good, that which threatens it is the evil.

Without an ultimate goal or end, there can be no lesser goals or means: a series of means going off into an infinite progression toward a nonexistent end is a metaphysical and epistemological impossibility. It is only an ultimate goal, an end in itself, that makes the existence of values possible.


My current goal is to learn about epistemology, so I’ll prioritize continuing with the article over continuing on this tangent, even though I find this topic very interesting. I’ll answer responses to this tangent though.

I share your interest in this topic. Have you read Elliot’s Morality without Foundations article and the dialog[edit: link] it links to? It might interest you.

It talks about how not knowing what the true ultimate goals are doesn’t matter that much and that many different moral goals converge on the same intermediate goals. It’s interesting and for me it was counter-intuitive that the different ultimate goals don’t render a lot of the intermediate goals meaningless.

(I understand that you’re interested in whether or not there are true, objective ultimate goals, and that this is different.)

Yeah, I can see that it’s not necessary to use the idea of perfect truth to understand why we shouldn’t have contradictory goals. Having contradictory goals is bad for the simple reason that you can’t achieve both.

You can’t learn by just following non-contradiction. It doesn’t lead you to knowledge. You learn by evolution, which includes replication with variation (guessing, brainstorming). Non-contradiction is an important tool for the selection (criticism, error correction) part of evolution.
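
A very loose sketch of that process (my own toy model, not from Popper or CF; “ideas” are just numbers and “criticism” is distance from a target):

```python
import random

def evolve(target, guess=0.0, rounds=1000):
    """Replication with variation plus selection, in miniature."""
    for _ in range(rounds):
        variant = guess + random.uniform(-1, 1)  # variation on the current idea
        if abs(variant - target) < abs(guess - target):
            guess = variant  # selection: keep the variant that survives criticism
    return guess

print(evolve(42.0))  # approaches 42 through variation plus error correction alone
```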

I think Popper is broadly a better guide than Rand to these issues.

I didn’t realize I was talking about learning. I see that:

means a full process of knowledge creation. What I had in mind was that we might not need to know of an ultimate goal in order to find objective goals, the standard could be non-contradiction.

Non-contradiction is a common standard for all knowledge though (I’m not trying to say perfect non-contradiction is necessary in order to have knowledge, just that it’s something we try to move towards). Evaluating according to the standard of an ultimate goal would be unique to goals. So they aren’t really the same thing. So the question is more whether goals need this special standard, or whether they’re like all other knowledge.

I think requiring knowledge of the ultimate goal in order to know of any objective goals would be justificationism (I don’t understand justificationism very well though, so I’m uncertain). So epistemologically we don’t need ultimate goals, but I think they have to exist metaphysically (or maybe it’s better to say “by logic” instead of “metaphysically”; I think this has to do with the reality of abstractions, which I need to learn more about later).

To the issue of whether there must exist ultimate goals or not?

Cool! We’ll discuss it in the future then :smile: (or now.)

Yes, some time ago. I have thought about it quite a lot since then. I’ll read it again and I think I’ll have issues that I’ll need to discuss then.

I think that’s not the link you meant.

oops fixed thanks!

It’s like all other knowledge: fallible and evolutionary. That’s always the answer for everything. To think it might be otherwise involves approaching the issue from some other perspective besides Popper’s.

I worried about my sentence being interpreted like this (your interpretation is objectively correct; I was uncertain when I wrote it). I didn’t think that goals have a unique epistemological status. What I had in mind was something like computers being evaluated by the standard of how fast they can compute (among other things). That standard would be special in the sense that we don’t evaluate shoes by how fast they can compute. And I wanted to contrast that with how non-contradiction is a standard that applies to all knowledge. I think “whether they’re like all other knowledge” and maybe “special” were bad writing choices.

Since I had those doubts I don’t know why I chose to write it anyway. I may have had a conceptual confusion as well.

Binary Goals

  • success or failure is unambiguous for well-defined goals
    • there is no partial success
    • we can claim that some goals are too ambiguous, or that we don’t know enough, to evaluate the idea-goal pair
      • this isn’t partial success; there is no evaluation
  • main rival idea: degrees of success (in reality implies binary goal to maximize something)
    • it claims two ideas can succeed but one can succeed more than the other
      • but why not choose the one with more success?
        • all choices except the idea with the most success are failures
          • so it’s actually a binary goal where we attempt to maximize something (most success)
  • maximization is usually bad because we can’t maximize multiple factors
    • we shouldn’t maximize a single factor either because there are almost always multiple factors involved
  • good goals specify what is enough for the relevant factors
    • binary criteria for each factor: enough or not enough (see the sketch after this outline)
  • we should not focus on improving factors with excess capacity
    • improving them can lead to worse outcomes
      • can take up more space, attention, time, etc.
    • most differences in quantities don’t matter
    • save your resources to cross the breakpoint of bottlenecks instead
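
A sketch of a binary multi-factor goal in Python (the laptop factors and numbers are made-up illustrations of mine, not from the article):

```python
# Each factor gets a binary evaluation (enough or not enough) and overall
# success is all-or-nothing: no partial credit, no point totals.

def meets_goal(option):
    return (option["speed"] >= 50             # enough speed
            and option["battery_hours"] >= 8  # enough battery
            and option["cost"] <= 1200)       # cheap enough

laptop_a = {"speed": 70, "battery_hours": 9, "cost": 1100}
laptop_b = {"speed": 95, "battery_hours": 5, "cost": 1000}

print(meets_goal(laptop_a))  # True:  enough on every factor
print(meets_goal(laptop_b))  # False: extra speed can't offset too little battery
```

Note how laptop_b’s extra speed is excess capacity: it can’t compensate for the battery factor because there’s no point total for it to add to.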

Took 45-50 minutes.

Breakpoints

  • small digital or analog
    • large digital is approximately analog
  • qualitative difference on an analog spectrum
    • a quantitative difference that crosses a breakpoint makes a qualitative difference (see the sketch after this outline)
      • a 10-point difference anywhere else matters way less than 10 points that bring you over the breakpoint
        • or: a 100-point change that doesn’t cross the breakpoint doesn’t matter, whereas a 10-point change that crosses it does matter
      • most quantitative differences don’t matter since breakpoints are sparse on the spectrum
        • excess capacity is more than you need. it’s far from any breakpoint so it won’t make a qualitative difference
        • which means most optimizations don’t matter
            • like how most factors don’t matter; most changes in each factor don’t matter either. only changes that cross breakpoints on bottlenecks matter
            • means we can often be very effective with small effort if we focus well enough
    • this creates discrete categories on the analog spectrum based on qualitative differences
      • each breakpoint makes a binary distinction: has crossed the breakpoint or hasn’t
        • crossing the breakpoint or not is success or failure
  • goals for quantities should be about crossing a breakpoint, not maximizing the quantity
    • that way goals that seem to be about improving on an analog spectrum are turned into binary goals
      • everything above the breakpoint is good enough, which is all we need
    • we should probably aim for some excess capacity to account for variance because we want to be sure that we cross the breakpoint
      • the excess capacity doesn’t help in itself, but aiming higher helps us increase chance of success
      • base the amount of excess capacity on the variance
  • we only need breakpoints to deal with analog spectrums; factors that deal with qualitative differences are already broken into discrete categories that can be evaluated on binary grounds
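
A sketch of breakpoint crossing with toy numbers of mine (the score scale and breakpoint are made up):

```python
# On an analog spectrum, only changes that cross a breakpoint make a
# qualitative difference.

BREAKPOINT = 500  # e.g. the minimum score that counts as passing

def qualitative_change(before, after):
    # True only if the change crosses the breakpoint
    return (before >= BREAKPOINT) != (after >= BREAKPOINT)

print(qualitative_change(100, 400))  # False: +300 points, still below
print(qualitative_change(495, 505))  # True:  only +10 points, but it crosses
print(qualitative_change(600, 900))  # False: +300 points of excess capacity
```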

40 min.

I realized now that if there is a single end goal, then that implies we should maximize it. That’s if it’s a quantity; if it’s a small digital thing, then we either achieve it once and have nothing left to do, or it’s a thing we have to maintain forever.

What about multiple end goals? Then we should try to maximize all of them, except that doesn’t work. If we try to convert them into a single factor, then that actually implies there’s a single end goal: to maximize that factor. A combination of one quantitative goal with many other qualitative goals could work: you would try to maintain the qualitative goals while increasing the quantitative goal as much as possible. Actually, I don’t think the impossibility of maximizing multiple factors means there couldn’t be multiple quantitative end goals; it’s just that they would be harder for us to use as a guide than a single quantitative goal.

This pushes my intuition towards thinking there is only a single end goal. I don’t think reality would pose the impossible problem of maximizing multiple end goals.