I’ll share stuff related to my efforts to learn Yes or No Philosophy and Critical Fallibilism here. Please feel free to offer criticism (including about how I might better approach learning) as it would be very much appreciated.
Disclaimer: This is mostly me trying to articulate my current understanding of some CF concepts. (IOW, these aren’t authoritative definitions.) I assume there are heaps of errors in here. Please let me know if you notice any (or if you have any other feedback or ideas about how to better learn CF, etc.) as my goal is to learn.
Dimensions
A dimension is a distinct property/trait/factor/aspect of an idea/solution that is qualitatively different (incommensurable) from other traits of that same idea. A dimension/trait of an idea can be measured or evaluated.
For example, imagine you want a pet. Your ideas/solutions are: dog, cat, hamster, snake.
Some dimensions/traits:
Can you take the pet hiking? (Yes or no?)
Does it have fur? (Yes or no?)
Size. (Big or small?)
Goals are requirements for a specific dimension/trait. E.g., you want the pet to be big, have fur, and be able to go hiking. Not small, furless, or unable to go hiking.
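Here’s a minimal sketch of this in Python (the trait names and values are made up for illustration): each pet idea is evaluated on each dimension, and each goal is a binary requirement on one dimension.

```python
# Made-up pet data: each key is a dimension/trait of the idea.
pets = {
    "dog":     {"can_hike": True,  "has_fur": True,  "size": "big"},
    "cat":     {"can_hike": False, "has_fur": True,  "size": "small"},
    "hamster": {"can_hike": False, "has_fur": True,  "size": "small"},
    "snake":   {"can_hike": False, "has_fur": False, "size": "small"},
}

# Goals: binary requirements, one per dimension.
goals = {"can_hike": True, "has_fur": True, "size": "big"}

for name, traits in pets.items():
    failures = [dim for dim, req in goals.items() if traits[dim] != req]
    print(name, "passes" if not failures else f"fails on: {', '.join(failures)}")
```

Only the dog passes, and each failure names the specific dimension where the requirement wasn’t met.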
Qualitative vs quantitative, digital (including binary) vs analog, discrete vs degree
Some dimensions are qualitative. Some dimensions are quantitative.
Qualitative (Discrete/Digital): Dimensions that consist of different categories. E.g., color (red or blue?), shape (triangle or circle?), able to go hiking (yes or no?).
Quantitative (Degree/Analog): Dimensions that exist on a spectrum. E.g., height, weight, quantity.
I could try to understand this better. I could also seek more clarity about the differences between qualitative/quantitative vs discrete/degree vs digital/analog.
I saw this post, which made me think that maybe I should study qualitative vs quantitative more (and ditto for other CF concepts!):
Also, if I take The 10% Rule seriously, then I’ve got a whole lot more work to do.
Pessimism/optimism is an example of a degree spectrum:
Breakpoints
(Disclaimer: I made up these two terms.)
Categorization breakpoint: a point on a spectrum at which there is a relevant qualitative difference.
There can be a small number of these on a given spectrum. (E.g., shirt sizes, or BMI categories such as the point at which a person’s BMI crosses from merely overweight to clinically obese.)
Decision (or pass/fail) breakpoint: a breakpoint that separates success from failure.
Often there’s only one of these on a given spectrum. (E.g., the point at which a price exceeds your budget, or the number of soldiers needed to win a battle of attrition.)
But sometimes there can be two (e.g., not too hot but not too cold) or even more (e.g., a layover that works if it’s either very short or long enough to leave the airport and explore the city, but fails in the awkward middle zone where it’s long enough to be boring yet too brief to leave the airport).
Purpose of breakpoints: Breakpoints allow you to turn a spectrum into discrete categories. The most important breakpoint is between pass and fail. This is important because success and action are binary: an idea either succeeds or fails, and you either do it or you don’t.
There are other reasons that breakpoints are important: people can’t think in real numbers, and ditching analog prevents measurement errors from accumulating.
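A rough Python sketch of both kinds of breakpoint (the BMI cutoffs are the standard clinical ones; the budget and comfort-range numbers are invented):

```python
# Categorization breakpoints: a few qualitatively relevant points on a spectrum.
def bmi_category(bmi):
    # 18.5, 25, and 30 are the breakpoints between categories.
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25:
        return "normal"
    elif bmi < 30:
        return "overweight"
    return "obese"

# Decision (pass/fail) breakpoint: a single point separating success from failure.
def within_budget(price, budget=500_000):  # invented budget
    return price <= budget

# Two decision breakpoints: "not too hot but not too cold".
def comfortable(temp_c):
    return 18 <= temp_c <= 24  # invented comfort range

print(bmi_category(27.0))      # overweight
print(within_budget(480_000))  # True: passes the breakpoint
print(comfortable(30))         # False: past the upper breakpoint
```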
Rankings obscure the binary nature of action and criterion-satisfaction…
Even if you rank your ideas, you usually still end up only acting on one. E.g., even if you rank colleges from best to worst, you’ll usually still only attend one.
Even if you attempt to rank movies by, e.g., creating a top 100 movies of all time list, at any given moment you’re only going to watch one movie. Also, for each individual slot, you can ask: “Is this the number one best movie of all time? Yes or no? Is this the second best movie of all time? Yes or no?” Besides, if you gave the movies scores (e.g., 7.5 out of 10), that’s somewhat meaningless, since you either watch a movie or you don’t. You can’t act on a 7.5 out of 10. You either act or you don’t.
If you rank books into three tiers (e.g., ET’s FI book list), those tiers are discrete categories. E.g., putting a book in tier one means that the book satisfied the criterion of it being necessary to read in order to be “in a good position to have productive, intellectual discussions.”
…whereas CF exposes one’s reasons for rejecting stuff to scrutiny
Rankings also obscure decisive criticism. E.g., if I pick Movie A and reject others, then there’s a reason why I’m rejecting those others (they failed to pass some breakpoint that Movie A passed). There’s a reason why those other movies were not as good as Movie A for the current moment. Rankings allow people to avoid stating their reasons for rejecting stuff by just saying it’s lower on the list or just not as good in some vague way.
CF thinking, by contrast, makes one’s reasons for rejecting stuff explicit. This, in turn, exposes those reasons to scrutiny, because making them explicit enables them to be examined. This is a way in which CF thinking can root out hidden biases and make the world more lucid/rational. I find this prospect extremely exciting.
I wonder if CF thinking might also have psychological benefits. Knowing exactly why one rejected Option A and went with Option B might make one feel less conflicted. Whereas rankings might make one feel unhappily torn between two almost equal options.
Generic breakpoints: rounding and maximizing (including constrained maximizing)
Rounding
One generic way to create breakpoints is to use rounding to break up a spectrum into discrete categories (with each category having “a distinguishable, noticeable amount more than the previous category”). E.g., categorizing people as 5’9 vs 5’10 vs 5’11 (rather than worrying about fractions of an inch). Or rounding the price of homes that one is considering buying to the nearest $10k.
Why does this matter?
People can’t think in real numbers (cf. crow epistemology)
Worrying about small differences often doesn’t affect one’s goal
People can have the goal of not wasting mental energy on low-impact differences (e.g., a $6.95 and a $6.98 jar of peanut butter can both be rounded to $7—and then one can focus on more significant stuff such as which brand’s jar looks yummier or what else to buy).
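A tiny sketch of rounding as a breakpoint generator (the numbers come from the examples above):

```python
# Collapse small differences by rounding to a chosen step size.
def round_to(x, step):
    return round(x / step) * step

print(round_to(6.95, 1))          # 7 -- both peanut butter jars become "$7"
print(round_to(6.98, 1))          # 7
print(round_to(437_250, 10_000))  # 440000 -- home price to the nearest $10k
```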
Maximizing (and minimizing)
A breakpoint between the best option and everything else. (Or, in the case of minimizing: breakpoint between the minimum and everything else.)
Constrained maximizing
Maximize given some constraints. E.g., I want the fastest car that is under $100k and has a 5-star safety rating. From the presentation: “maximize X while succeeding at requirement Y”.
Another type of constrained maximizing is Maximizing Up to a Point. An example from the presentation: “maximize X up to 60”. From a slide in the presentation:
> You could make it more complicated. Maximize X up to 60, but also minimize X down to 90. 60-90 is the preferred range. If no solutions are in that range, then only the closest solution (and anything that ties it) would pass at this sub-goal.
>
> You could asymmetrically value closeness for too low or too high. E.g., to avoid going over project budget.
Maybe I should try to come up with examples of all of these types of maximizing.
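Here’s a first attempt, as a rough Python sketch (the cars and numbers are invented; the rules follow the presentation’s wording):

```python
cars = [
    {"name": "A", "top_speed": 250, "price": 90_000,  "safety_stars": 5},
    {"name": "B", "top_speed": 280, "price": 120_000, "safety_stars": 5},
    {"name": "C", "top_speed": 240, "price": 80_000,  "safety_stars": 4},
]

# Plain maximizing: the breakpoint is between the best option and everything else.
fastest = max(cars, key=lambda c: c["top_speed"])

# Constrained maximizing: "maximize X while succeeding at requirement Y".
eligible = [c for c in cars if c["price"] < 100_000 and c["safety_stars"] == 5]
best_under_constraints = max(eligible, key=lambda c: c["top_speed"])

# Maximizing up to a point: "maximize X up to 60" -- past 60, more X adds nothing.
def capped(x, cap=60):
    return min(x, cap)

print(fastest["name"])                 # B
print(best_under_constraints["name"])  # A
print(capped(75))                      # 60
```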
Degree criticism, praise, decisive criticism
Degree criticism
A degree criticism says why an idea deserves a low score, has low goodness, is weak or kinda bad or mediocre.
But degree criticism doesn’t necessarily cross a pass/fail breakpoint. E.g., a degree criticism might say that a laptop is worse because it has 1 hour less battery life. But if it already has enough battery life for one’s needs, then 1 hour extra battery life is irrelevant.
Praise
Praise says why an idea deserves a high score, has high goodness, is strong or has merit.
But once again, praise doesn’t necessarily cross a pass/fail breakpoint. E.g., you might praise a meal for looking very appetizing, but if the meal has poison in it, then the praise is irrelevant. Or—when interviewing candidates for a job—you might praise one of the candidates for being very friendly, but if they’re inept, then that praise doesn’t matter. Or someone might praise communism for allegedly having a cool anthem (The Internationale) (or they might praise fascism for allegedly having cool aesthetics), but if one values freedom, then that praise doesn’t matter.
An additional reason why praise isn’t very useful: success requires that every (critical) part of a system works, but failure requires only that one (critical) part fail. So praising one part doesn’t mean much.
Are informal logical fallacies a species of DCOP (degree criticism or praise)?
E.g., if I criticize the moral character of an idea’s originator (ad hominem) or praise an idea’s popularity (appeal to popularity), that fails to explain whether the idea is true or not. I’m criticizing or praising stuff that’s irrelevant. But perhaps such fallacies are different from DCOP because presumably DCOP would address a relevant dimension of the idea itself (but ignore breakpoints) rather than address totally irrelevant stuff. So I guess there’s no overlap between DCOP and informal logical fallacies.
Decisive criticism
A decisive criticism is a reason/explanation of why an idea fails at a specific goal (in a specific context). I think it’s synonymous with a refutation or identifying an error.
Converting DCOP into decisive criticism
DCOP can be converted into decisive criticism by reframing the DCOP as a dealbreaker or requirement.
Praise can be converted into decisive criticism by reframing the merit as a requirement and rejecting all rivals that lack it.
A degree criticism can be converted into a decisive criticism by reframing the weakness as a dealbreaker and rejecting all ideas that possess it.
E.g.: Imagine you’re considering two laptops. One has an 11 hour battery life, the other has a 12 hour battery life. Instead of praising the laptop with 12 hours of battery life or assigning a lower overall score to the laptop with 11 hours of battery life, you could say that one of your goals is to be able to use your laptop on a 12 hour flight. The laptop with only 11 hours of battery life fails to achieve your goal of being able to use it for the full duration of a 12 hour flight.
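A sketch of that conversion in Python (the battery numbers and the 12 hour flight goal come from the example above):

```python
# Degree framing: laptop B is "1 hour worse" -- no decision follows from that.
laptops = {"A": 12, "B": 11}  # battery life in hours

# Decisive framing: state the goal, then each laptop passes or fails.
FLIGHT_HOURS = 12  # goal: usable for the full duration of a 12 hour flight

for name, battery_hours in laptops.items():
    if battery_hours >= FLIGHT_HOURS:
        print(f"Laptop {name}: passes")
    else:
        print(f"Laptop {name}: fails (decisive criticism: can't last the flight)")
```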
Tables: The relationship of IGC (idea, goal, context) success to decisive criticism and DCOP
The relationship between decisive criticism and IGC success

|  | IGC Works | IGC Fails |
| --- | --- | --- |
| A Decisive Criticism Is True | FAIL: It is IMPOSSIBLE for the IGC to work because the IGC has a known error. | The IGC FAILS as expected because the decisive criticism is true. |
| All Proposed Decisive Criticisms Are False | The IGC WORKS as expected because it has no known errors. | The IGC FAILS because of a hitherto unknown error. |
The relationship between degree criticism and IGC success

|  | IGC Works | IGC Fails |
| --- | --- | --- |
| Degree Criticism Is True | The IGC WORKS because the deficit mentioned by the degree criticism didn’t cross the breakpoint. | The IGC FAILS because either the deficit mentioned by the degree criticism did cross the breakpoint or something else went wrong. |
| Degree Criticism Is False | The IGC WORKS. | The IGC FAILS because of some other error. |
The relationship between praise and IGC success

|  | IGC Works | IGC Fails |
| --- | --- | --- |
| Praise Is True | The IGC WORKS because the total system (including the non-praised parts) meets the breakpoint. | The IGC FAILS because praising one part (or all parts) of the system doesn’t mean that the total system crosses the breakpoint. |
| Praise Is False | The IGC WORKS because either the praise targeted an irrelevant property or the property that was falsely praised still crossed the breakpoint. | The IGC FAILS because either the false praise was breakpoint-relevant or something else went wrong. |
Meta problem solving technique
If you want to do X, but you’re stuck on Y, what should you do? You can ask how to proceed given that situation.
If two ideas disagree and you don’t know the answer, what should you do? You can ask how to proceed given that situation. (Rather than waiting for the conflicting ideas to be resolved (which might take centuries of scientific research), you can figure out how to proceed given your current knowledge or situation.)
Meta problem solving chain
If one uses the meta problem solving technique but then gets stuck again, one can use the technique again for the new situation one is stuck in. For example:
Given I want to do A and I’m stuck on B and I’m stuck on C, then what should I do?
Given I want to do A and I’m stuck on B and I’m stuck on C and I’m stuck on D, then what should I do?
Ad infinitum.
It should get easier each time because each question is easier and less ambitious. You should be able to come up with something to do in your life rather than being 100% stuck.
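As a toy sketch of the chain’s structure (purely mechanical; it just builds each successive question):

```python
# Each time you get stuck, append the new obstacle and re-ask the meta question.
def meta_question(goal, stuck_on):
    obstacles = " and ".join(f"I'm stuck on {x}" for x in stuck_on)
    return f"Given I want to do {goal} and {obstacles}, then what should I do?"

stuck = []
for obstacle in ["B", "C", "D"]:
    stuck.append(obstacle)
    print(meta_question("A", stuck))
```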
Is the meta problem solving technique (MPST) a little bit like an intrapersonal version of impasse chains?
I think the MPST is a bit different from impasse chains because with impasse chains (AFAIK; I’m not that familiar with impasse chains) one actually tries to solve the impasses. But with the MPST, one just treats being stuck on X as a given (for now).
Qualitative vs. quantitative differences is a mainstream concept. I’d suggest looking at some other people’s explanations of it too, and also labeling more examples.
Breakpoints are for analog to digital conversion. But lots of stuff is inherently digital.
Inherently binary includes:
yes or no questions
pass/fail
has trait X?
member of category Y?
Inherently non-binary digital commonly involves categorization.
If you’re categorizing words into ~7 parts of speech (noun, verb, etc.) that’s non-binary digital categorization. To get to binary pass/fail, you decide which categories are good/pass and which are bad/fail, and then ask the question “Is it in one of the good categories that is compatible with goal success?” A noun isn’t a different quantity of something that a verb is also a quantity of.
Democrat, Republican, Libertarian, Green is another example of categories. The categories aren’t different amounts of each other. They’re just different things. And there are more than two.
Another example is Asian food, Italian food, French food, Mexican food.
Apples, oranges, pears is another example.
Digital is most interesting when there are a small number of things. If there are a million things, they may have a clear rank ordering or be related to quantities or otherwise approximate analog. If they didn’t approximate analog and there were a lot, even 100, it’d be hard to think about them all as separate things that each require individual attention! A common way to get digital with too many things is just rounding from analog, in which case it clearly approximates analog.
You could also invent “the 177 types of novel plots” and brainstorm them 1 by 1. Then they won’t approximate analog but probably won’t be useful so people rarely do that. Instead of having so many categories, if just a few categories isn’t enough, then you usually want sub-categories. Come up with 2-5 categories, then for each of those come up with 2-5 sub-categories, and repeat for sub-sub-categories and beyond if desired. This makes a tree.
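A quick sketch of that kind of tree as nested data (the plot categories here are invented just for illustration):

```python
# 2-5 categories per level, instead of 177 flat categories.
plot_tree = {
    "conflict with others": ["rivalry", "war", "betrayal"],
    "conflict with self":   ["temptation", "identity crisis"],
    "conflict with nature": ["survival", "disaster"],
}

for category, subcategories in plot_tree.items():
    print(category)
    for sub in subcategories:
        print("  -", sub)
```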
Health
Qualitative health traits:
Being able to live independently (not in a nursing home) when one is 95
At age 95, age-related aches and pains don’t prevent one from doing the things that one likes to do
Being strong and flexible enough to play with one’s grandkids
Relationships
Quantitative relationship metrics:
Number of friends
Number of social gatherings per month
Hours per week spent socializing
Qualitative relationship traits:
Sharing important values and having similar interests
Are one’s friends quality people?
Are one’s friends IRL or online?
Hobbies
Quantitative hobby metrics:
Hours per week spent on hobbies
Dollars per year spent on hobbies
Qualitative:
Is one’s hobby more intellectual (e.g. studying philosophy) or more physical/sport-like (e.g. trail running)?
Does one feel rejuvenated after spending time on one’s hobby?
What type of hobby does one have? (E.g., model building, RC vehicles, mountain biking, fishing)
Work
Quantitative metrics:
Salary
Commute duration
Qualitative traits:
Intellectually stimulating
Important to society
Competence: is one good at one’s work?
The problem with rating qualitative traits
Some qualitative traits could be given a 1 to 5 rating. E.g.:
Health: Rate your age-related aches and pains on a 1 to 5 scale (1 = minimal, 5 = debilitating)
Relationships: Rate the quality of your friends on a 1 to 5 scale (1 = toxic & dishonest, 5 = wise, kind, & erudite)
Hobbies: Rate how rejuvenating your hobby is on a 1 to 5 scale
Work: Rate how intellectually stimulating your career is on a 1 to 5 scale
A problem with rating qualitative traits is that it obscures errors/refutations. E.g., rather than giving one’s job a 2-out-of-5 rating for intellectual stimulation, one should say that one hates one’s job because it never gives one the opportunity to learn new things or solve novel problems or meet cool people.
Review
I didn’t find this helpful. I don’t think it made me understand CF any better.
A problem with that sort of rating is: how do you know what the right number is? How do you come up with a number? What do you evaluate and how do you translate it to a number?
Also, for a 1-5 integer scale, you’re either rounding quantities (3.2 → 3) or you’re using unstated qualitative differences (4 breakpoints) to define 5 separate categories. Then you label the 5 categories with numbers but the number labels are problematic if category 3 isn’t actually 3x as good as category 1. This comes up with e.g. Uber driver ratings: 5 is good and 4 is bad, so 5 is not 25% better than 4 as basic arithmetic says it should be, so the numbers are misleading. (5 should be 25% better than 4 given that this is a linear scale, as I think it’s supposed to be, not an intermediate scale that still needs to be converted to linear goodness numbers. You need it linear before doing weighted averages for multiple factors. Also integers from 1-5 is a misleading way to label a non-linear scale! It’d make more sense to label it e.g. 100, 30, 20, 10, 0 to show the non-linearity if that’s what it really means.)
“How do you come up with a number?” I think CF says (and I agree) that you can’t. It’s just subjectively made up.
That’s a good point. A 4.1-star rating often means that the product/service is pretty bad. And anything below 4.1 or even ~4.2 is too bad to risk buying. So in actuality it is more like “100, 30, 20, 10, 0” rather than 5, 4, 3, 2, 1 as you pointed out.
So, for stuff like Uber and Amazon, one would have to, e.g., convert a 4.2-star rating to ~44/100.
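A small sketch of that conversion (using the illustrative 100/30/20/10/0 labels from above with linear interpolation between adjacent labels; the mapping itself is made up, not a real calibration):

```python
import math

# Illustrative non-linear scale: star label -> linear "goodness" number.
GOODNESS = {1: 0, 2: 10, 3: 20, 4: 30, 5: 100}

def to_linear(stars):
    lo, hi = math.floor(stars), math.ceil(stars)
    if lo == hi:
        return GOODNESS[lo]
    frac = stars - lo
    return round(GOODNESS[lo] + frac * (GOODNESS[hi] - GOODNESS[lo]), 1)

print(to_linear(4.2))  # 44.0 -- a 4.2-star rating is ~44/100
print(to_linear(5))    # 100
```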
Yes/No approach to balancing one’s constellation of values?
I’ve heard some contemporary Objectivists refer to one’s “constellation of values”. People usually aren’t seeking to maximize just one value in life (e.g., make as much money as possible). Instead, people usually have a bunch (a “constellation”) of different values/goals such as health, wealth, relationships, hobbies, happiness, etc.
In the past, I was unclear about how to think about optimizing each of one’s multiple values. If one has just one value that one is maximizing (e.g., making as much money as possible), then it’s a little more straightforward because one can benchmark every option against whether it seems like it’ll maximize that value or not.
But when one has several values, I wasn’t sure how one could rationally think about balancing(?) them.
So what’s the solution?
I think the solution is ET’s IGC chart.
In my understanding, CF would say that one can’t maximize several qualitatively different goals/values at once—or figure out how to balance/weigh the different values in such a maximization effort.
Instead, the rational way to think about it is binary: set binary goals for each part of one’s life. E.g. (these are just made up, not my actual goals):
Wealth: I want to make 80k/yr.
Health: I want to be healthy enough to retain my independence in old age.
Hobbies: I want to devote 10 hrs per week to a rejuvenating hobby.
Relationships: I want to spend 10 hrs per week with quality friends.
In other words, don’t try to simultaneously maximize one’s health (like Bryan Johnson), one’s wealth (like Rockefeller), one’s relationships (like a socialite with an endless Rolodex or a professional PUA or networker), and one’s hobbies (like an unemployed polymathic aristocrat), while vaguely intending to frenetically juggle or somehow balance being all of these things rolled into one person. Instead, set definite binary goals and then be content and at peace once one has satisfied those binary goals. Then, if one really wants to, one can consider making one (or more) of those binary goals a bit more ambitious. And so on.
I think this is a more rational/tractable approach than feeling vaguely that one wants to somehow maximize them all.
Breakpoints can also be helpful for this because they discourage one from stressing about optimizing small stuff (when it’s far from a breakpoint).
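A sketch of what checking those binary goals could look like (the thresholds are the made-up goals from above; the “current situation” values are also invented):

```python
# Each life area gets a binary goal; each passes or fails on its own.
goals = {
    "wealth":        lambda s: s["income_per_year"] >= 80_000,
    "health":        lambda s: s["on_track_for_independent_old_age"],
    "hobbies":       lambda s: s["hobby_hours_per_week"] >= 10,
    "relationships": lambda s: s["friend_hours_per_week"] >= 10,
}

# Invented current situation.
situation = {
    "income_per_year": 85_000,
    "on_track_for_independent_old_age": True,
    "hobby_hours_per_week": 12,
    "friend_hours_per_week": 7,
}

for area, check in goals.items():
    print(area, "PASS" if check(situation) else "FAIL")
# There's no weighted total to maximize; a FAIL points at exactly which
# goal to work on next.
```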
I wrote here about whether there is an ultimate goal/value. Rand thinks there is an ultimate value, which is life. Still, you would need to break it up into sub-goals to make progress. You would want to maximize this ultimate goal, which requires achieving a mix of sub-goals as binary goals, not maximizing any of them.
I’m not sure that one can maximize this ultimate goal. Because it involves several qualitatively different subgoals, I’m not sure that one can convert it into a single clearly maximizable value.
I suppose one could identify which subgoal is the bottleneck in one’s overall flourishing. E.g., if one has enough money but sucks at philosophy, then making one’s philosophy subgoal(s) more ambitious could make sense if one “[a]ccept[s] the irrevocable fact that your life depends upon your mind” and “[a]ccept[s], as your moral ideal, the task of becoming a [rational] man.” (Atlas Shrugged)
(But one’s overall flourishing doesn’t seem clearly quantifiable/maximizable given that it involves many qualitatively different subgoals as I said. Though I do think that one’s flourishing can be continually improved without limit. So maybe that is roughly equivalent to maximizing it.)
I agree that gaining and keeping one’s ultimate value requires achieving a mix of binary subgoals.
You look at how much of the ultimate value you get from achieving your alternative goals and choose the ones which, in the long run, give you the most value. Let’s suppose the ultimate value is happiness. We can then identify bottlenecks (like you suggest) to producing the most happiness possible. A binary sub-goal you identify as most important at one moment might be “get enough money to buy a home and feel secure”; at another point it could be “get a consistent sleep schedule”.
The sub-goals are not ends in themselves. So the point isn’t really to get the thing they’re about, but rather to get the ultimate value you eventually get using the outcome of the sub-goal. For example, getting the house isn’t the end in itself. We get the house so that we have a safe and comfortable living space, which gives us happiness and also lets us achieve more goals which give us happiness (assuming happiness is the ultimate goal). So what I want to get at here is that the sub-goals wouldn’t be mixing qualitatively different values; they would be measured by the amount of the same ultimate value they produce.
Logically, there could be multiple ultimate goals. That would mean there were multiple things which are ends in themselves. In that case, there’s no way to maximize all of them. At least, I don’t see why that would be a logical impossibility.
Elliot pointed out that the goal might not have to be maximized:
The thing about new and interesting ways could apply if the goal isn’t a quantity as well.
Yeah, I see. You choose your subgoals by reference to some ultimate value (like long-term flourishing or happiness).
Yeah, makes sense. I think in practice it’d be pretty hard to measure though.
How does one measure happiness points or Life Points? (Also there might be different flavors of happiness like peaceful/serene happiness versus ecstatic/manic happiness.)
I suppose one should just make the best decision given one’s current knowledge.
Yeah, I think that makes sense. If one crosses the last, best breakpoint then there mightn’t be much point in trying to squeeze out the last few drops.
I assume this “last, best breakpoint” refers to a specific context (like one’s current daily life/routine) rather than life/the world/universe in general. If the latter, I wonder if that might contradict the idea that we’re always at the beginning of infinity.
Yeah that did occur to me but I figured that one could still treat it in a binary way by simply saying Yes to the option with the most points.
But since happiness points or Life Points are not able to be measured exactly, I guess it has the same problem as weighing.
I asked Gemini about this and it gave a good answer (IMO):
You’re [saying] that ‘Happiness’ acts as a single currency that makes all goals commensurable.
However, I think treating happiness as a quantity to be maximized leads back to the problems with ‘Weighing’ that Elliot warns about. If we can’t objectively define a ‘unit of happiness,’ then saying one path gives ‘more’ happiness than another is just a way of hiding our unstated reasons behind a made-up number.
In a Critical Fallibilism framework, we don’t need ‘Happiness Math.’ We use Decisive Criticism. For example, I don’t choose a sleep schedule over a pay raise because of a ‘higher happiness score.’ I choose it because I have an explanation: ‘If I am chronically exhausted, I will fail at my goals of doing good work and enjoying my hobbies.’ That is a binary refutation of the Pay Raise option for the context of my current life.
I wonder how I can avoid making this same mistake (of falling back into the weighing paradigm) again.
I think the thing that tripped me up was thinking that one could have the binary goal of selecting the idea with the most goodness (or happiness/Life) points lol. But that’s still in the paradigm of attempting to measure the goodness of an idea. Hmm. I’d need to think about it some more. Any tips/pointers/etc are welcome.
So I’m thinking that, in reality, it’s possible that the ultimate value could be a quantity to maximize, and I’m thinking about how that would work. I wasn’t trying to introduce point systems everywhere, just for the ultimate value.