Promoting Critical Fallibilism

Topic for sharing or discussion about promoting Critical Fallibilism.

Keep things positive in this topic. For problems, post in Obstacles to Promoting Critical Fallibilism.


I shared the CF forum with someone for the first time last week. It was my professor. It was nice cuz I usually am too ashamed, in a way, to share and have others see my posts. Now I’m ok with others seeing my posts cuz I feel like I’m participating in important discussion. That, and I’m getting results from learning about the philosophy. Also I’m ok with being wrong and with the idea that I was wrong in the past.

Edit: I have told a friend about CF before, but I liked telling someone I don’t really know about it.

Nice work! :slight_smile:


Facebook groups & Reddit

In the Oxford Karl Popper Society Facebook group, I shared Responding to AI Summaries of Popper’s Critics. It got 1 like and 1 share.

In the Ayn Rand Facebook group I shared David Deutsch Smears Ayn Rand. It got 6 likes and 2 comments. I also posted it on the Ayn Rand subreddit. It got 4 upvotes, 2.3k views, and 12 comments.

I might try posting Atlas Shrugged Close Reading Chapter 1 to the Facebook group and the AR &/or Oism subreddit(s), too, but will wait a bit before doing that to avoid spamming.


Yaron Brook debate

Inspired by this post, I emailed Yaron Brook requesting that he host a debate with ET about Karl Popper.

To Yaron Brook

Subject: Karl Popper Debate on YBS?

Hi Yaron,

I saw an old clip of the Yaron Brook Show where you said you were looking for a Popper expert. A great Popper expert that I know of (who is also a huge fan of Objectivism) is Elliot Temple. Temple was an editor of ā€œThe Beginning of Infinityā€ by David Deutsch and was a colleague of Deutsch for 10 years. Another example of Temple’s work is his essay, ā€œIntroduction to Critical Rationalismā€ (Critical Rationalism being Karl Popper’s philosophy). (Temple also has close readings/analyses of some Atlas Shrugged chapters, which I liked a lot.)

I think it would be amazingly fascinating if you could host a debate between Elliot Temple and, e.g., Mike Mazza (who gave a speech at OCON critiquing Popper). That’d be an epic Yaron Brook Show episode!

Temple’s email is et@elliottemple.com

Best wishes,
Jarrod

He hasn’t responded. If he doesn’t respond in a week, I might email again. Also, maybe some other forum members can email Yaron the same request (probably best not to copy and paste my exact email though; use your own words).


Animal consciousness debate continued

I emailed another animal consciousness expert (Jonathan Birch) asking him about ET.

To Jonathan Birch [Disclaimer: I used AI to help write the ā€œP.P.S.ā€ section of the email]

Subject: Animal Sentience Question

Dear Professor Birch,

I’ve seen compelling arguments which disagree with you about whether non-human animals are sentient—and I haven’t found any satisfactory ways to answer them (not even in your book The Edge of Sentience). I’d be very grateful to know how you’d answer them.

Specifically, the arguments made in the article ā€œAnimal Welfare Overviewā€ by American philosopher Elliot Temple. (As context, in case you haven’t heard of him, Temple was an editor of the book ā€œThe Beginning of Infinityā€ and was a colleague of the physicist David Deutsch for 10 years.)

The article’s core thesis, rooted in the Popperian/Deutschian view of intelligence, is that animals lack subjective experience, including sentience or any kind of suffering, because all such experience requires a capacity for interpretation—an ability which Temple argues is unique to human-like general intelligence.

I’d love to know how you’d address the points he makes there. (Or if you could even just link me to someone who addresses those arguments against animal suffering, I’d be so grateful.)

Thank you so much,
Jarrod

P.S. Skip the ā€œHuman Sufferingā€ section of the ā€œAnimal Welfare Overviewā€ article, which isn’t relevant to whether animals can suffer.

P.P.S. Do you have a preferred process or site for receiving and responding to counterarguments on animal sentience? E.g., do you maintain an ā€œobjections and repliesā€ page (or a forum) and a written process for how you triage and respond to these sorts of counterarguments? Or a debate policy?

If Temple is correct, then much of the work on animal sentience would be overturned. This raises a meta-level question: how would you find out if a critic like Temple were right? Hence my question about whether you have a systematic process for engaging with challenges that, if correct, would require revising core assumptions in your field. (I think this is something that intellectuals in all fields should have, as Temple himself advocates.)

He hasn’t responded.

Peter Godfrey-Smith, whom I emailed and received a response from before, still hasn’t responded to my latest email. If he doesn’t respond in a week, I might politely follow up.

I also commented on this Substack post, ā€œBecoming Shrimp-Pilledā€ (about shrimp consciousness and donating to the Shrimp Welfare Project (SWP) to reduce shrimp suffering).

My initial comment

Have you addressed arguments against animal sentience? E.g., https://curi.us/2545-animal-welfare-overview

If they’re right, you’re wrong about ā€œthe unfathomable effectiveness of the SWPā€ and you’re wasting your donations. Will you write an essay responding to the article’s criticisms or participate in a debate with its author? (He has a debate policy here: https://www.elliottemple.com/debate-policy)

The author responded.

The author’s response to my comment

Yes https://benthams.substack.com/p/against-yudkowskys-implausible-position?utm_source=publication-search

Now note: the argument for shrimp welfare just depends on the idea that it’s not extremely implausible that shrimp are conscious. That’s all you need to think it has very high expected value. Now, if you want to know why I think it’s not that unlikely that shrimp are conscious see https://benthams.substack.com/p/betting-on-ubiquitous-pain

I replied to his response.

My reply to his response [Disclaimer: I used AI to help me write this response.]

I really appreciate your reply. Thank you for sharing those links, but I couldn’t find anything in them that answers the argument I linked to previously (which is rooted in the Popperian/Deutschian view of intelligence and claims that consciousness requires general intelligence; it also criticizes Yudkowsky’s view of intelligence/mind design space).

Also, I think your method of evaluating ideas (in this case, about shrimp consciousness) is mistaken.

Your argument rests on evaluating the *likelihood* or *plausibility* of shrimp consciousness. You use phrases like ā€œnot extremely implausible,ā€ ā€œnot that unlikely,ā€ and you argue that the expected value is high even with a low probability. In the second article you linked, you also mention how ā€œdata…provided more and more evidence for consciousness.ā€

This suggests a framework where evidence and arguments add weight, support, probability, or credence to an idea. This way of thinking is a mistake, as explained here by epistemologist Elliot Temple: https://criticalfallibilism.com/introduction-to-critical-fallibilism/#decisive-arguments & https://criticalfallibilism.com/yes-or-no-philosophy-summary/

Instead, ideas should be evaluated in a binary, pass/fail way: an idea is either *refuted* (we know of a decisive criticism against it) or it is *non-refuted* (we don’t).

This is why the expected value calculation doesn’t work. You’re multiplying an enormous number (trillions of shrimp) by a probability that’s assigned to a refuted idea.
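
To spell out the arithmetic structure of the expected-value argument I’m criticizing, here’s a minimal sketch in Python. The specific numbers are hypothetical placeholders I picked for illustration; they aren’t from the exchange.

```python
# Sketch of the expected-value (EV) reasoning criticized above.
# All numbers are hypothetical placeholders, not figures from the exchange.

p_conscious = 0.05                 # assumed low probability that shrimp are conscious
shrimp_helped = 2_000_000_000_000  # "trillions of shrimp"
value_per_shrimp = 1e-9            # assumed moral value of sparing one shrimp's suffering

# In the EV framework, a tiny probability times a huge number still looks large:
expected_value = p_conscious * shrimp_helped * value_per_shrimp
print(expected_value)  # 100.0 -- "high expected value" despite the low probability

# The criticism above: if "shrimp are conscious" is refuted (there's a decisive,
# unanswered argument against it), then assigning it any nonzero probability is
# the mistake; multiplying by trillions doesn't rescue a refuted premise.
```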


TCS

I replied to Aaron Stupple on X/Twitter asking him to address ET’s criticism of TCS and also ET’s new article criticizing ā€œThe Sovereign Childā€ specifically.


Whoops, I think I accidentally posted it twice since I tried to edit it so it wasn’t a response to Dface specifically.

Oh I had a similar issue recently. The UI is confusing. I deleted one. For future reference, while you can’t delete posts, you can edit one to say ā€œ[accidental duplicate]ā€ or similar.


For promoting ET/CF, I want a (relatively) quick way to introduce people to some of ET’s ideas (or at least give an indication of why I think they’re exciting and worth investigating further). I couldn’t really think of a good place for people to start, so I just wrote my own thing that I can link people to. I’ll post that below. (Disclaimer: I used AI to write the last bits.) I might edit it or post a new version or variants in the future if needed. This is just a very very *very*(!) rough draft/initial stab at solving the problem of how to quickly introduce ppl to ET (or at least share my own personal enthusiasm and hope it’s contagious lol).

Also, let me know if you guys have any ideas on the best way to introduce people to Elliot Temple’s work.

E.g., let me know if you have any favorite ET stuff you’d recommend to newbies as a good starting point/introduction (for someone who has never heard of ET before).

I know there’s the Intro to CF article, but personally I worry that’s a bit too dry/formal/abstract/theoretical to be a good starting point/introduction for everyone. It also doesn’t cover anywhere near all of ET’s ideas, so I’m not sure it’s really a complete intro/guide to ET’s ideas/work in general. I also don’t think it really explains why ET’s ideas are so exciting and world-changing. (Which is fine for that article, but I still feel like it could be good to have a more exciting intro for total newbies who’ve never heard of ET or CF before and who may not immediately grasp/appreciate the implications & enormity of what ET is doing.)

Jarrod’s Introduction to Elliot Temple

I was asked what I ā€œview as the best introduction to [Elliot Temple’s] philosophy?ā€

I think this is a legitimate difficulty with Elliot Temple’s work atm. Because it’s spread across heaps of stuff, there’s not one best starting point. It’s like with Rand: without Atlas Shrugged or OPAR by Peikoff, I’d find it hard to point to any single essay of hers as the best place to start. Temple is like that too. (I assume every wide-ranging philosopher is like that, really.)

That said, I can speak to what I personally find most exciting about his philosophy…

(Disclaimer: This is nowhere near a complete list of Temple’s ideas (it’s only a tiny fraction). It’s just some personal favorites that occur to me off the top of my head—and my attempt at explaining why I like them. It’s also just my current personal understanding of his ideas. I’m a new student of this philosophy so I assume I’ve made a bunch of errors in describing it. Also: the order I listed the ideas in doesn’t really matter.)

1. Paths Forward and rational debate methodologies

What is it? Many public intellectuals advance certain claims while not having a way to get corrected if they’re wrong. Paths Forward is Temple’s solution to this problem. It’s a set of ideas and methodologies for creating reliable ways (or ā€œpathsā€) for intellectual progress to happen—especially for correcting errors that other people have already figured out.

Why I like it: There are some issues where virtually everyone agrees (e.g., 2+2=4, the Earth orbiting the Sun, water consisting of hydrogen and oxygen). But there are so many other fields of knowledge where disagreements linger (e.g., Keynesianism vs Austrian economics, saturated fats vs seed oils, Popperianism vs induction, whether it’s better to invest in solving aging vs climate change, etc., etc.—or another example is politics, where it’s just accepted that half of Congress disagrees with the other half and that they’ll never convince each other). I think if Paths Forward & rational debate methodologies were a cultural norm, it would enable the resolution of these types of lingering disagreements & massively accelerate the intellectual progress of humanity. (That’s why I’m trying to promote it.)

Some of Temple’s content on the topic:

Here’s another world-changing idea of Temple’s that I love:

2. Methodology for dealing with intuitive disagreements

What is it? Often someone will seemingly ā€œloseā€ a debate—but despite ā€œlosingā€, they still don’t feel convinced by the other side. They haven’t changed their mind. Or they’ll feel like they disagree but struggle to articulate exactly why they don’t agree. In our current culture, people are often jerks about this kinda thing, treating it as a ā€œgotchaā€ moment or pressuring people to suppress their intuition/feelings in favor of forcing themselves to be purely ā€œunemotionalā€ and ā€œrationalā€. None of this actually addresses/refutes the subconscious idea(s) giving rise to the intuitive disagreement though. Besides, the subconscious idea/intuition might be right!

Temple has a method for dealing with intuitive disagreements like this in debates (and other contexts). It involves imagining hypothetical situations and seeing how your intuition feels about them (does your intuition agree or disagree with the hypotheticals?). Doing this allows you to narrow in on exactly what your intuition disagrees/agrees with. Once you pinpoint your intuitive disagreement, you can use it in debate and the other side can let you know if they know why it’s wrong. If they can’t, maybe your intuition is right! (You can also help your debate partner pinpoint any intuitive disagreements he might have too.)

Why I like it: I dislike our current culture’s nasty and combative approach to debates (e.g., dunking on people, ā€œliberal tearsā€, ā€œDESTROYINGā€ ppl with ā€œfactsā€ and ā€œlogicā€, gotcha questions, etc., etc.). Helping people deal with their intuitive disagreements seems really kind. It’s also productive: ostensibly ā€œbeatingā€ someone in debate but actually failing to change their mind often doesn’t really accomplish anything (except entertain a nasty & lowbrow audience, perhaps). Actually finding out why people truly disagree and aren’t convinced—and then having the opportunity to address their actual deep-seated disagreements—seems way more productive and like something that would actually help people to reach true agreement. A world/culture in which this practice was widespread would be way better.

Some of Temple’s content on the topic:

3. Decisive arguments vs weighty arguments

What is it? Most people think that being rational means carefully ā€œweighingā€ the evidence and arguments and seeing which side has the most ā€œsupportā€ for its conclusion. This is a fallacy. There is no rational way to decide how much a certain piece of evidence (or a certain argument) ā€œweighsā€ (which is a bad metaphor) or how much it ā€œsupportsā€ a certain conclusion. Also, a single piece of counterevidence (or a single counterargument) can debunk/refute a claim even if there was a mountain of evidence in its favor. (E.g., the classic example of a single black swan refuting the claim that ā€œall swans are whiteā€ā€”even if you’ve seen millions of white swans.)

Instead, Temple advocates categorizing ideas not in terms of stuff like weight/likelihood/plausibility/support/strength/justification (or anything like that) but into only two categories: refuted or non-refuted. That’s it. It’s binary (rather than a spectrum that goes from ā€œweakā€ to ā€œstrongā€ ideas or from ā€œpossibleā€ to ā€œprobableā€ to ā€œcertainā€).

Why I like it: I think it can help debates reach conclusions as it eliminates people’s ability to just say stuff like ā€œwell, we have lots of evidence/argument/studies/etc on our sideā€ while refusing to engage with decisive counterarguments. So a world/culture that accepted this epistemological point would be more rational and have more conclusive debates. Also, it just makes sense.

Some of Temple’s content on the topic:

4. Unbounded discussion and criticism

What is it? Current debates/discussions are usually just limited to one topic/ā€œpropositionā€. In those debates/discussions, people often want to stick to the initial topic and resent it if someone goes meta or brings up other issues. But often people are not just wrong about the initial topic—they’re also wrong about underlying assumptions, epistemology, debate methodology, something they’re unwittingly betting their career on (as I talk about below), or even the way they’re living their whole life(!), etc, etc. (In other words, their whole approach to ideas and their life in general is full of known errors.) And these other errors can prevent the discussion from reaching a conclusion about (or are more important & urgent than) the initial topic. So rather than viewing it as bad (or inconvenient, a hassle, a painful imposition) when a critic or discussion partner brings up meta/other issues, a rational person should embrace unbounded discussion so that no part of their life is left unimproved by the awesome power of reason & discussion & criticism. Their whole life and approach to ideas and everything they do should be open to rational discussion & criticism. Not just one narrow topic/ā€œpropositionā€.

Why I like it: I don’t think I’m yet psychologically able to handle unbounded criticism, but I find the idea darkly appealing because I like the idea of being perfectly rational in every aspect of my life & not having any part of my life that’s unimproved by the power of philosophy/rationality. If everybody was radically transparent and open to—and sought out (e.g., via Paths Forward)—unbounded criticism, I think everybody’s personal & intellectual growth would be off the charts. That’d be a great world!

Some of Temple’s content on the topic:

5. Betting your career

What is it? Many intellectual professionals bet their careers on premises they haven’t investigated. For example, approximately all activists and politicians bet their careers on the idea that their anti-capitalist policies are good and will improve the world while being unable to refute the arguments of capitalist thinkers like Mises. As a result, they end up wasting their life making the world worse—even if they actually had good intentions. Likewise, >50% of scientists waste their careers betting that Karl Popper’s epistemology is wrong without actually being able to explain why he’s wrong. Etc., etc.

Why I like this perspective: This idea shines a light on the colossal amount of wasted effort in the world. So many smart, well-intentioned people spend their entire lives working hard on projects that are ineffective, or even harmful, because the foundational ideas they took for granted were wrong. A culture where people were expected to identify and rationally defend their core premises would be radically more productive. Instead of wasting decades on a flawed path, people could correct their course early on. Combining this perspective with Paths Forward (including unbounded discussion) would save countless individuals from the tragedy of a wasted career and redirect all that human talent towards projects that can actually succeed and improve the world.

Some of Temple’s content on the topic:

6. Overreaching

What is it? Overreaching means trying to do things that are too hard for your current skill level—taking on projects where you’ll make errors faster than you can correct them. When your error rate exceeds your error correction rate, problems pile up. You become overwhelmed, start ignoring criticism (because you can’t handle more problems), and either fail or struggle inefficiently. The solution is to do easier things that you can succeed at, build skills through repeated successes (see ā€œPractice and masteryā€ below), and gradually work your way up to more complex things as your skill level increases. This allows you to gradually and sustainably increase your problem-solving/error-correction ā€œbudgetā€ over time, so that things which were once hard eventually become easy.

Some of Temple’s content on the topic:

7. Practice and mastery

What is it? It’s the idea that learning isn’t complete when you can do something once, or even do it successfully most of the time while concentrating hard. True mastery is achieved when a skill or idea becomes second-nature—automatic, intuitive, and easy. It’s when you can apply an idea correctly with very little conscious attention, freeing up your mind to focus on more complex problems. Many people stop practicing too early, leaving skills in an effortful, high-attention state where they are difficult to use and easy to forget.

Some of Temple’s content on the topic:

And so so much more!

As I said, this is just a teeny tiny sample of some of Temple’s ideas. It’d take many books to go through everything.

Also see: Critical Fallibilism Philosophy in 60 Seconds

I would say Philosophy: Who Needs It (the essay) for most people. Even for most people who are already philosophy fans, because most of them don’t think that philosophy is for practical life.

We could divide unbounded criticism into two main types:

  1. Interdisciplinary criticism. Criticism that deals with other fields or with underlying premises (e.g. bringing up chemistry or epistemology in a medicine debate).
  2. Personal criticism that deals with people’s psychology.

I want (1) to be allowed in general. I think it’s important to debates.

I think (2) is important sometimes but risky. It often works best in long, close friendships (when the friend judges it will go well) or with people who want it enough to pay for it.

It’s hard to just allow (1) but not (2) by default in a forum category (with (2) being available just by consent) for a few reasons. There are some issues that blur the lines. The underlying issue is basically topic changes/additions, and what ideas are part of what topic is a human construct, not something inherent in the ideas. People can and do disagree about what is on- or off-topic.

One sort of topic change that doesn’t fit cleanly into (1) or (2) is bias. If someone is being biased, that can be a fairly relevant, objective part of the discussion, but it can also be taken as personal. Like if someone says ā€œI hate Asian peopleā€, and you start arguing against the racism, is that a personal attack which is hard for them to take, or is it just objectively analyzing what they said? What if they say something more subtle? Does it depend on how much you focus on analyzing their words instead of guessing their personality? Their words and personality are related.

I want to be able to point out potential bias. I think that’s important. I wouldn’t want to exclude that for being too personal/psychological. I think that e.g. scientists need to be able to talk about bias and receive criticism about it.

Another topic change that doesn’t fit cleanly into (1) or (2) is reading comprehension errors (or arithmetic errors, logic errors, other ā€œbasicā€ errors). People can get really personally offended by the idea that they’re incompetent or bad at things that are taught in elementary school. In some sense, I think it’s just a topic change to an objective, impersonal field. But I do understand people taking it personally. Still, I think it’s hard to avoid talking about it if it comes up repeatedly. Sometimes people don’t mind if you point out a reading error or two with no further commentary. But if you point out too many, or point out the pattern itself, then they get offended. Yet if it’s a pattern and you don’t talk about the pattern, the errors may ruin the discussion (and if you do talk about them, irrational responses may ruin the discussion, but at least you’re trying to solve the problem).

Anyway, you might want to put some thought into what types of criticism or topic changes you find scary or not scary and why.

I think I’ve made progress since writing this. With a debate policy, it’s much easier for me to just go silent without explanation, and if someone actually wants to continue they can invoke my debate policy (or ask why I stopped, though they shouldn’t ask me that if they don’t want to hear an honest answer). I also tend to use milder and more polite language now. I think Deutsch was a bad influence in terms of making grand or extreme claims (Rand too), and I’m trying to think and speak more modestly, more like Popper or Socrates.

Also, I used to think it was important to treat people as rational until proven irrational (like innocent until proven guilty). Basically, be charitable and give them a chance until they do something wrong. Now, I think it’s better to wait for people to earn rational criticism: to do good work and show they want it and may be able to handle it well.

I think one of the things going on was I was trying to treat people as I wanted to be treated, and how Deutsch said rationality works and all rational people, including himself, would want to be treated. It has been confusing to me that many people don’t like things they say they want, including Deutsch but also many others. I wasn’t trying to be mean to people but it sometimes came off that way, partly because I followed Deutsch’s lead (I didn’t understand it at the time, but he actually is a mean person and a social climber).

Also, people may put themselves forward as public figures or authors. If you write a philosophy book or article, I can critique it on my blog, whether you like it or not, but with no expectation that you read my critique. That’s different than conversing with people directly on forums or social media.

I think that urgency was a mistake. I prefer a softer approach. People can go do whatever with their lives. I don’t care. They don’t have to be rational intellectual debaters. I was trying to help people who wanted help but some people felt pressured. I have issues with a lot of stuff I wrote in that article. I think it’s important to focus on people who come to me/philosophy on their own initiative, with their own positive motivations (or if they choose to go be public figures then wanting to debate them is fine). I think a lot of what Deutsch said about irrationality, and how he and Fitz-Claridge tried to recruit for TCS, ARR and CR, was toxic. I’m trying to stay away from that stuff now. It’s interesting to me that you like it enough to highlight it. I worry it may feel motivational and pump you up in a way that won’t last (but that’s just a general concern; it’s not based on personal knowledge about you).

Also in general I think I underestimated the complexity of the world, of people, of social problems. I was really impressed by Deutsch and his grand ideas and I thought I knew a lot. And it was a lot, and I know more now than before, but I now see myself as knowing less relative to the amount of knowledge needed to change the world or be really effective. I understand more about the difficulties and how much more there is that I don’t know. I’m less impressed with stuff like TCS – even if I hadn’t come up with some criticisms of TCS, I’d still just generally be more doubtful about how complete and useful it is, and be much more content for people who want to take some inspiration from it, and not so interested in anyone doing it in a complete or consistent way. I also e.g. think that politics is harder than Deutsch thinks it is, and I’ve lowered my opinion of how well I can evaluate candidates or know what will or should happen. I don’t think other people are good at it either, but I don’t care much. My goal isn’t to save the world or individuals. I don’t expect people to radically change from irrational to rational. If people would just ask intellectuals they follow to create debate policies, that’d be pretty good, and they would be contributing usefully to human progress without having to change their lives.

See also Curiosity – Toxic Attitudes about Greatness

I think maybe you like this because you see it as high stakes, important, urgent, a big deal, similar to some stuff above.

It is that in some sense. But my intention with it was partly the opposite: I wanted people to be more humble, modest, and curious, and less confident, arrogant, and dismissive. Maybe there’s a better way I could approach this. One potential change would be discussing it more explicitly in terms of systemic issues and societal results rather than individuals.

I think a lot of people get kinda stuck and don’t know how to master the basics enough. I’m not sure what to do about it, and I haven’t been bringing this up a lot because I’m not sure what would really help people. And it’s something people can feel bad about, so if you don’t have an effective solution to pair with it, that can be bad. I do think there are valid abstract issues to be discussed, and there are widespread problems where people struggle with all sorts of things due to having issues with other knowledge it builds on (e.g., when people struggle with high school math or algebra, they often didn’t learn fractions well enough, and revisiting some previous math can help).

People ought to work on small chunks and master stuff on a frequent basis. I think that should be possible in theory, and I think I’m intuitively good at doing it. But a lot of people struggle to break it down, organize it correctly, and get that to work, partly because it’s too complex to do just with spreadsheets or checklists; a lot of intuition and wisdom is needed too.


I don’t think AS or OPAR work as introductions to Rand. They’re very long. She wrote a more suitable book titled For the New Intellectual.

I consider Paths Forward suitable for all issues.

Idk how helpful it would be but maybe targeted articles/essays would be a good idea? Like if someone is struggling to make a decision maybe send something like multi-factor decision making math (I don’t think that’s necessarily a good idea, just an example).

Similarly, if you found an idea of Elliot’s particularly helpful in your own life, then you can probably do a good job of explaining that to other people, versus something of Elliot’s you think is intellectually stimulating but haven’t incorporated into your own life.

Maybe you had issues in the past related to getting good at something, and you read some stuff about automatization and practice. Now if you have a friend who is running into an issue related to mastering something, you can share Elliot’s articles around that topic and give first-hand understanding and experience.

That’s a good point. I think that could be a decent essay to start with.

It reminds me of Alcibiades I (also known as First Alcibiades) by Plato, which apparently used to be the classic entry point for students into Plato’s work. It’s a dialogue where Socrates makes an ambitious and overconfident 19- or 20-year-old (or so) Alcibiades realize that he knows almost nothing about what he thought he knew and is actually woefully unprepared to achieve great things or be a great man. And that if he wants to have any hope at all of living a great life, he desperately needs the guidance of philosophy. So, in other words, the classic intro to Plato was (like PWNI by Rand) about convincing readers of the crucial importance of philosophy to their lives. (Thereby motivating them to read more philosophy.)

Which makes me think: if in the future I work way more on promoting ET/CF, then perhaps I could take a cue from Plato. When introducing people to ET/CF and trying to convince them of its importance, I could emphasize how crucial philosophy is to living a great life and actually being in control of one’s life. So great idea. Thanks for that!

The closest thing of ET’s I can think of is this: Dialog: Non-Consumption of Philosophy

I’ll read that dialogue by Plato, sounds cool.

Also, convincing people about Rand would help with promoting Elliot as well, since Rand is the most controversial philosopher Elliot likes.

I might do that. Besides, it’d be a good opportunity to practice your method for investigating intuitions. :slight_smile:

So like acknowledging that one’s making a bet and that one’s bet could be wrong? IOW, viewing it as a bet (vs a sure thing) means being open to the possibility that the bet might not work out/pay off—even if one has done one’s very best to investigate reasons one might be wrong?

If so, that does strike me as a much more modest & intellectually humble perspective. And a perspective that would motivate one to be curious about reasons one’s bet might be wrong. I like it a lot! And hadn’t considered that way of thinking about it before. So thank you for sharing that.

I’m not sure what that means or what it would look like to discuss it in such terms.

I asked AI and it gave an example:

Individual frame: ā€œThat scientist is irrationally betting his career against Popper.ā€

Systemic frame: ā€œThe entire institution of modern academic science, with its ā€˜publish or perish’ incentive structure and peer-review system, creates an environment where challenging foundational premises like Popper’s is career-suicide. The societal result is a massive misallocation of research funding and slower scientific progress for all of humanity.ā€

Is something like that what you meant?

If so, I still think the individual frame is a helpful perspective (even if incomplete). Because even in the AI-generated example, the scientists are still in effect betting that working within academia rather than independently/privately (or working in science rather than philosophy like you) is better.

That said, I suppose acknowledging the role of systemic incentives and norms could give a more complete picture of what’s going on.

Is your goal to not pressure/blame/accuse individuals? (Or am I totally misunderstanding all this?)

I also don’t see how discussing it in terms of systemic issues would make people more humble, curious, etc. though. (Unless the idea is to make them feel less defensive by removing the blame? But that might just trigger something like the bystander effect rather than making them humble and curious.)

I’m pretty confused tbh.

That’s a good idea. If I see an opportunity to do that, I’ll do it.

Re systemic issues, and re pressure, it’s different to say:

  1. The system is bad, which affects >99% of people. You could help with reform.

vs.

  2. You are bad. You should change.

I promoted TOC on this video, ā€œAccess to information may not be the problemā€:

If you want to learn about effective thinking that you can apply quickly, you should learn Theory of Constraints by Goldratt. Goldratt is known for business management ideas but his goal was to teach the world to think better. He talks about project management (includes personal projects), bottlenecks, focus, local vs global optima, etc. I think of it as the vim of thinking and productivity itself, since you can use it to speed up all learning and projects.

ā€œvimā€ is a text editor which is a central topic on his channel.

I tried to link Introduction to Theory of Constraints as a reply, but I think YouTube hid it.

I also made this comment:

Thinking about meta thinking/rationality/philosophy is an investment. Pick some ratio of investment and consumption (that includes productive stuff like programming). What ratio? How much time should you spend thinking about that…? The starting point doesn’t matter much. Just pick a ratio and change it if you think there’s a problem, e.g., you’re not getting enough programming done.

He’s ambitious, thinks knowledge is powerful, and wants to change education. From his forum:

I’m in contact with some pretty powerful people (money wise) through networking, and I’m going to attempt to change the world. My biggest enemy is our education systems are crumbling, and our information is rotting and becoming meaningless.
