JustinCEO Topic

You’re not ambitious. You’re satisfied being a bit better than most people you know (in your own judgment, which isn’t reliable).

You’re intentionally bad at consciously understanding social interaction, so that you can keep engaging in it without consciously knowing how you’re being treated and how you’re treating others.

You rationalize ongoing social battles as “harmony”.

You don’t mind being a mean social climber as long as you don’t understand what’s going on, and you’re trying not to discover what’s going on.

You don’t mind alienating people you consider more skilled in order to prioritize people you consider worse.

I think it’s interesting that you’re both proud of your text analysis skill and uninterested in developing it further. There seems to be some tension there.

Since you participate here, you presumably 1) are aware that Elliot notices things doing text analysis that you wouldn’t, 2) value the fruits of such analysis (you may not enjoy watching his text analysis vids, but you like reading the thoughts/arguments that result from such analysis) and 3) are aware of the idea of there being a critical world shortage of quality thinkers. Given all that, thinking that your current level of skill is fine seems odd to me.

I think you’re blaming the practice of literal analysis for your own unresolved internal conflicts, which doing text analysis happens to bring to the forefront. You have some contradictory ideas about how to handle people when you notice they’re engaging in what you consider bullshit. It can be a difficult problem to address, and people can go to (IMHO mistaken) extremes like morally denouncing everyone they encounter. I think blinding yourself to the issue by means of intentionally not developing skill is an awful non-solution, though.

More later.

Analyzing Pinker Sentence 2: Rather, it can be linked to the physical realm of animals and machines via the concepts of information, computation, and control.
  • Justin's analysis:
    • Sentence 2 is pretty complicated in a way. First, it's introducing a contrast to what the previous sentence said ("Rather"), which is a bit more complex than just making a straightforward point or elaborating on a previous point. The "it" is back-referencing the "abstract realm", which itself groups together "knowledge", "reason", and "purpose". And then we're told that that rather complicated grouping can be linked to the physical realm via a bunch of other complicated ideas. So it's quite a lot to take in.
    • In Sentence 1, we were told the abstract realm "does not consist of" certain things. This gives rise to the expectation that Sentence 2 will tell us what the abstract realm does consist of. But no, instead we're told that it "can be linked" to some stuff. "Can be linked" is a much, much weaker statement than saying something consists of something.
  • Elliot's Analysis:
    • Minimal rewrite: abstract realm can be linked to physical realm via concepts
    • linking one realm to another realm is good parallelism
      • link involves information, computation, and control
    • This sentence sets up further discussion
    • "can be" means you don't have to link, it's just a possibility. But Pinker strongly said there are no souls and that we've proven it, so this is an issue.

What’s a good method for determining the right level of ambition to have? I have an idea about how to determine that but I’m curious what your idea about it is.

Continuing the discussion from Scholarship Practice (Thorstad Cite Checking) and checking the first cite Elliot provided:

The panel did not lament such a tendency, in word or tone.

One relevant definition for lament is:

an expression of regret or disappointment; a complaint: there were constant laments about the conditions of employment.

From the report being cited:

Panelists reviewed and assessed popular expectations and concerns. The focus group **noted** a tendency for the general public, science-fiction writers, and futurists to dwell on radical long-term outcomes of AI research, while overlooking the broad spectrum of opportunities and challenges with developing and fielding applications that leverage different aspects of machine intelligence.

(bold added)

Note the use of the word “noted”. That’s very neutral and not lamenting at all. Ok, but maybe they lament elsewhere? Nope. They are skeptical, but in a pretty measured and respectful way:

There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity”, and also about the large-scale loss of control of intelligent systems. Nevertheless, there was a shared sense that additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes. Some panelists recommended that more research needs to be done to better define “intelligence explosion”, and also to better formulate different classes of such accelerating intelligences.

The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes, sharing the rationale for the overall comfort of scientists in this realm, and for the need to educate people outside the AI research community about the promise of AI for enhancing the quality of human life in numerous ways, coupled with a re-focusing of attention on actionable, shorter-term challenges.

Basically: we’re skeptical about stuff, but additional research is warranted, along with outreach and education to share our perspective.

Note that Thorstad also said:

One hypothesis in particular drew the panel’s ire. The singularity hypothesis begins with the supposition that artificial agents will gain the ability to improve their own intelligence.

The above quotes do not reflect “ire” any more than they reflect lamenting. Thorstad is adding a combative framing to the material he is citing.

I think the cause of the blow-ups is not the analysis itself but your theories (explicit or inexplicit) about what to do with the information that results from the analysis. So it might be worth re-examining those. Suppose the issue caused by doing analysis were something like giving you some more information you’re not sure how to use. You react slower, and then the interaction is awkward and unsmooth. Not great, but I don’t think it’d blow up. I think the blowing up comes from stuff like:

In practice literal analysis sets up situations where I’m internally conflicted between for example, calling people out on their bullshit or contradictions vs. maintaining social harmony by pretending I didn’t notice.

It sounds like you feel like, if you see stuff that you think is wrong due to engaging in careful analysis, you’ve either got to fight with people (“calling people out on their bullshit or contradictions”) or fake reality by pretending you didn’t see what you saw. (Or later you also bring up bailing on them). I can understand how that would be something you want to avoid. I don’t think those are the only possibilities, though.

One possibility is, rather than calling people out, just ask them questions that you think are relevant based on your analysis and see what they say (out of genuine curiosity, and not necessarily trying to lead them to your preferred answer). Another is just to take in the information and refine your model of the person’s thinking, possibly raising a point later if you think they’ll be open to it. This is assuming we’re talking about somebody you care at least somewhat about. And OTOH, if somebody is nothing to you, I don’t see why them thinking dumb stuff should cause any sort of conflict (who cares if some random person is wrong about stuff? It matters in some sense, but it’s a bit like Someone Is Wrong On the Internet.)

BTW I thought you came off pretty harsh in your post in how you talked about people who believe in various things. I am pretty harsh and uncharitable towards some beliefs (I am trying to work on it :grimacing:), but I’m actually relatively charitable towards religious people, so that part stood out to me (your list of “nonsense and bullshit” included people who “literally believe in some version of an invisible man in the sky who grants favors if you ask nicely.”) FYI, I never at any point in life had any serious belief in God. But I think I have some understanding of the meaning people attach to religion and the role it serves in people’s lives, and so it is difficult for me to be harshly dismissive of their beliefs as you were (though I would of course argue against them if the point came up).

I bring this up because I think this harshness is related to your ideas about how to deal with information that would be generated if you did more literal analysis. Something like: if you do more analysis, you’ll see more ways in which all the dumbs are stupid. If you see more ways in which the dumbs are stupid, you’ll have to fight them more or feel like a coward. Facing that alternative is not helpful for your life. Therefore, it’s better not to do more analysis.

More Pinker

  • Analyzing Sentence 3: Knowledge can be explained as patterns in matter or energy that stand in systematic relations with states of the world, with mathematical and logical truths, and with one another.
    • Justin's Analysis (again, note I've seen the video before, but am writing my analysis before watching Elliot's again):
      • Rewrite: Knowledge is patterns of matter or energy connected to the world.
      • This sentence seems like a bunch of floating abstractions to me. It's very unclear.
        • He actually sets up 6 different subcategories of knowledge (treating "mathematical and logical truths" as one group). I'm not sure how intentional that was, and I'm definitely not sure what's supposed to count as an example of each. 6 categories:
          • patterns...
            • in matter ... that stand in systematic relations with
              • states of the world
              • mathematical and logical truths
              • one another
            • in ... energy that stand in systematic relations with
              • states of the world
              • mathematical and logical truths
              • one another
        • Patterns in matter or energy that stand in systematic relations
          • Isn't any naturally occurring physical structure arguably a pattern in matter that stands in relations with (other) states of the world, with the mathematical and logical truths (that gave rise to its unique characteristics), and with other such patterns that appear in existence? So the definition of knowledge here seems way way way too wide.
        • Systematic
          • is this word adding anything?
        • Matter / energy distinction
          • The significance of bringing up both matter and energy is unclear.
            • It could be trying to cover both human brains and AI "brains"
              • But both human brains and computer CPUs (like those that AIs would use) involve both matter and energy.
            • It could be trying to cover stored knowledge (in books) in both printed (matter) and digital form
              • Digital information is typically stored, at least temporarily, in some kind of matter
            • There's a basic issue about knowledge that Pinker's explanation leaves ambiguous: does knowledge need a knower or can it exist independently (e.g. in books)? This is a controversy.
              • Given that that issue is left ambiguous, the significance of the matter/energy distinction is ambiguous.
        • States of the world
          • doesn't a pattern of matter or energy reflect a partial state of the world? like if you have some knowledge in a book (matter), the existence of the book is part of the state of the world.
          • and doesn't basically any thing stand in relations to states of the world? Like we're not talking about platonic forms here, we're talking about things in reality. E.g. you could define a cupcake as "a small cake baked in a cup-shaped container and typically iced that stands in relation to states of the world", but I don't think the states of the world part is illuminating or essential to the definition.
          • I wonder if there was something specific that caused Pinker to use the plural for "states".
        • One another
          • If the patterns reflect part of a world state, and if we've already stated that the patterns stand in relations to states of the world (which would include other patterns), is specifying that they stand in relation to "one another" adding anything new?
    • Elliot's Analysis:
      • Rewrite: Knowledge is physical patterns.
      • Physical patterns = information (this is field-specific knowledge).
        • Pinker doesn't explain what information is adequately, even in other places where he has more words than this essay.
      • States of the world
        • states of the world are matter and energy. So pinker is saying that patterns in matter and energy are related to matter and energy.
        • These have to be in patterns too, because you can't have a systematic relationship with random chaos.
      • mathematical and logical truths
        • these are abstract things, but we were trying to relate the abstract realm to the physical realm, so this is going backwards.
      • one another
      • systematic relation
        • what is a systematic relation? something that's not just some random chaos; instead, there's a pattern.
      • So Pinker has said: Abstract knowledge is physical patterns with pattern relationships with physical patterns, with abstract knowledge, and with physical patterns.
        • Or with some of the original terms in parentheses: Abstract knowledge is physical patterns with pattern relationships (systematic relations) with physical patterns (states of the world), with abstract knowledge (mathematical and logical truths), and with physical patterns (one another).
  • Yes that’s possible. I think I have tried it and it takes the interaction off script & people generally don’t seem to like it.

    It’s possible I’m unskilled in asking genuinely non-leading questions. If so, that’d probably be better for me to work on than improving my skill at text analysis. More on that below.

    I don’t see the benefit in that, which was kind of the point of my harsh statements about what people literally think (more on that below). What sorts of refinements to that thinking, gleaned from literal text analysis, are likely to be helpful?

    Yes, when I focus on what people literally say & literally think I do come off as harsh.

    Did you ever seriously/maturely believe in anything ridiculous and then change your mind about it?

    I not only quite seriously believed in God but also lots of God-adjacent nonsense like ghosts, demons, angels, magic spells, astral projection, heaven and hell. I changed my mind about all of it at pretty much the same time.

    I also believed in ESP and psychic power type stuff for a while before I changed my mind about that.

    I could even add to my personal list of nonsense that, before CF’s predecessor forums, I believed in induction and the justified true belief model of knowledge. I changed my mind about those too.

    I was never into crystals or alien visits or lefty doomsday stuff though.

    Anyway, I think it took harsh thinking for me to get out of those belief systems - to see them for how silly they really were. That is reflected in how I talk about them when I’m in literal mode.

    Despite the harsh words in literal mode I think I share that understanding, both from my own lived experience and from watching others.

    I’d argue that the understanding you have doesn’t come from literal analysis of what religious people say. It comes from looking at the role religion plays in their actual lives. Stuff like what people actually do with their religious belief vs. what they literally say it is.

    In that mode of analysis my judgement of religion is far less harsh than when I’m analyzing it literally. There are people I know about whom I can clearly judge that if they stopped believing in their religion, their lives would take a severe turn for the worse, and people whose lives got way better when they started to seriously believe in religion.

    High level, my perspective is that what you wrote along with stuff Elliot wrote like:

    …are all fully compatible with the idea that various problems other than lack of text analysis skills are negatively impacting my life, including my ability to productively use text analysis skills. That seems like a reasonable high level explanation for my lack of interest in learning more about text analysis, even though I disagree about some of the details.

    I have found that at least some people are quite happy to talk about their beliefs if you ask them curious questions. Not all of course.

    People believe things for reasons. Some reasons are common across many people, but there is a lot of variance between people (in terms of the specific arguments they bring up, the level of understanding that different people have of specific arguments, and the arguments that are most important to how they think about some issue). Learning about those sorts of details and differences can help with persuading people. Or, if you learn someone’s specific ideas are particularly bad, that can be a reason to interact with them less.

    I think the harshness is not inherent to the literal analysis. It’s something you are bringing – a negative attitude that is not dictated by the content. There’s an Epictetus quote given in various forms but one is something like “It is not things that upset us, but our judgements about those things.” I think that is relevant.

    Yeah. I used to at least give serious credence to the idea of UFOs (as in :alien:). Reasoning was something like: 1) lots of people seem to be claiming to see stuff or have various direct experiences, that seems notable, 2) I don’t trust the govt, they’re a bunch of liars, 3) seems like there should be plenty of aliens out there in space and some should have gotten here by now, so maybe they’re here and just watching and poking us or something.

    That’s interesting. Some words I would use for the thinking needed to come to thoughtful conclusions about stuff: clear, honest, rational, objective, calm, careful, considered, unbiased. Not “harsh” though. This seems like a significant disagreement.

    I think it’s important to distinguish between having a negative evaluation of an idea versus a person. There are various beliefs that I reject and am strongly critical of, and I do judge the quality of a person’s thought based on whether or not they accept them. But I wouldn’t be “harsh” towards the beliefs in light of a full context evaluation of the role the beliefs serve for people.

    I have more to say but want to move onto another post so I’m just going to post this as is and maybe write more later.

    More Pinker:

  • Analyzing Sentence 4: Reasoning can be explained as transformations of that knowledge by physical operations that are designed to preserve those relations.
    • Justin's analysis:
      • Short version: Reasoning is preservatory transformations of knowledge.
      • Let me start just by "expanding" the backreferences in the text a bit.
        • Reasoning (which is part of the abstract realm) can be explained as transformations of (patterns in matter or energy that stand in systematic relations with states of the world, with mathematical and logical truths, and with one another) by physical operations that are designed to preserve those (systematic) relations.
        • So the patterns exist, and reasoning is transforming them in ways that are designed to preserve those relationships.
          • This seems like a weird definition to me. E.g. consider a paper book on botany. It's a pattern in matter that stands in systematic relations to the state of plants in the world, so it qualifies as knowledge. Google Books gets a copy of the botany book to digitize. This digitization is designed to preserve the knowledge of the book by physical operations. So Google Books is reasoning? Not the thought process that went into designing the process of digitizing stuff in Google Books, but Google Books itself? That does not seem accurate, but Google Books = reasoning seems to be an implication of Pinker's definition.
      • Pinker's definition of "reasoning" doesn't mention thinking, thoughts, logic, analysis, deductions, arguments, premises, or conclusions. Seems weird.
    • Elliot's Analysis
      • Minimal rewrite: Reasoning is knowledge-preserving physical transformations of knowledge.
      • This says reasoning requires design as a prerequisite ("are designed to preserve"). You have to already be able to design operations in order to do reasoning
        • But being able to design stuff involves reasoning.
      • Transformations of "knowledge" (information, a.k.a. physical patterns) = computation
        • You need to have pre-existing expert knowledge to know what he's talking about here.
      • Pinker is saying you transform patterns (knowledge) while keeping the patterns (relations). This is problematic. If you keep all the patterns, you can't transform anything, and so there's no room for change. You need to think about which patterns matter and should be preserved.
        • This comes up with induction. Inductivists say future will resemble the past. But future always resembles past in some ways and not others.
        • Reasoning doesn't preserve all relations. It preserves some and discards others. Deciding which patterns matter is a key part of reasoning
  • More Pinker

  • Analyzing Sentence 5: Purpose can be explained as the control of operations to effect changes in the world, guided by discrepancies between its current state and a goal state.
    • Justin's Analysis
      • Short rewrite: Purpose is control to change stuff in the world.
      • Fully expanded rewrite: Purpose can be explained as the control of operations to effect changes in the world, guided by discrepancies between the current state of the world and a goal state of the world.
      • This is another problematic definition. Defining purpose typically involves talking about the reason/point/motive/basis for some action. Pinker is talking instead about control used to do something. Control means the power to take some actions (e.g. if you have control of a corporation, you can direct the corporation's activities in certain ways) but doesn't get to the purpose behind the actions.
        • One way of framing the issue: suppose you want to do some action, whatever the action may be. Let's pick going to the store to buy some cheese. "Control over your legs and some money" answers the "how?" question, as in "how did you get to the store?" "Buy the cheese" answers the "what?" question, as in "What did you do when you got to the store?" But purpose is about the "why?" question, as in "why do you want the cheese?" (e.g. for a dish, to snack on, whatever). So purpose is supposed to be about the "why?" question but Pinker is framing his definition in terms of the "how?" and maybe the "what?" question.
      • If you want to change something, you want it to be different than what it is. If there is a discrepancy between two things in some respect, that means they are different from each other. So another rewrite might be:
        • Purpose can be explained as the control of operations to make things different in the world, guided by differences between the world's current state and a goal state.
          • If you're trying to change stuff, you're of course going to be guided by some difference between the way things are and the way you want them to be. Your efforts at change are not going to be guided by the ways in which things are similar, after all!
    • Elliot's Analysis
      • Minimal rewrite: Purpose is guided physical control.
        • We don't want just any physical control but guided control.
          • Guided physical control is physical control with a purpose. Circularity problem here.
      • Why does he bring up goal states? Because that's AI jargon
      • He's saying AIs have purpose because their programming accomplishes things humans call "goal states", which are our goals that we programmed into our software
      • That's our purpose, not theirs. They don't have their own purpose.
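
    A concrete way to picture "the control of operations to effect changes in the world, guided by discrepancies between its current state and a goal state" is an ordinary feedback loop. Here is a minimal sketch (my own example, a bare-bones thermostat, not something from Pinker or Elliot):

```python
# Minimal feedback-loop sketch: the operation ("heat"/"cool") is chosen based on
# the discrepancy between the current state and a goal state. The example and
# thresholds are mine, purely for illustration.
def thermostat_step(current_temp: float, goal_temp: float) -> str:
    discrepancy = goal_temp - current_temp  # goal state minus current state
    if discrepancy > 0.5:
        return "heat"  # operate on the world to shrink the discrepancy
    if discrepancy < -0.5:
        return "cool"
    return "idle"  # no meaningful discrepancy, so nothing to change

print(thermostat_step(18.0, 21.0))  # -> heat
print(thermostat_step(23.0, 21.0))  # -> cool
```

    On Pinker's definition this loop would seem to have "purpose", but the goal temperature is the programmer's goal, not the thermostat's, which fits Elliot's point that the "goal states" are our purposes programmed into the software.
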
  • More Pinker

  • Analyzing Sentence 6: Naturally evolved brains are just the most familiar systems that achieve intelligence through information, computation, and control.
    • Justin's Analysis:
      • Short rewrite: Brains are familiar, intelligence-achieving systems.
      • This is Pinker elaborating his claim that intelligence is a phenomenon that arises from computation and not e.g. special tissue.
      • It's awkward to a reader who doesn't have the benefit of a field-specific background.
        • As someone in that position, I would expect this sentence to reflect a cashing in on concepts developed in the previous sentences regarding information, computation, and control. But as Elliot points out, information and computation are not directly referenced. They are referenced indirectly. Physical patterns means information in Sentence 3 and transformations of knowledge means computation in Sentence 4. But unless you come in knowing that, this is confusing. (At least Sentence 5 actually mentions control).
      • Pinker didn't say a ton in his previous sentences (see especially the analysis of Sentence 3), so there isn't a lot to conceptually cash in on in the first place.
        • It's actually unclear to me which brains he thinks achieve intelligence. Is it just human brains, or also some (which?) animal brains, or every brain? Because he's introduced so many big, sweeping concepts with essentially no concretization, I'm honestly not sure.
          • E.g. in Sentence 2 he talked about how the abstract realm could be linked to the "physical realm of animals and machines". This is maybe a clue he's including some animals (and machines) as entities that participate in the abstract realm (are intelligent), but honestly I don't know.
    • Elliot's Analysis
      • Minimal rewrite: Brains are just familiar.
      • Pinker's trying to explain why his thesis seems counter-intuitive.
        • Per Pinker: it's counterintuitive because people are used to brains but AIs are weird.
          • Brains aren't special; information, computation, and control are special.
        • Analogy: If you only saw houses made of wood, you might think wood has special shelter-related properties, but plenty of other materials can be used.
      • Pinker is being unfair in assuming his opponents base their views on familiarity instead of rationality.
        • You could throw that back at him and say maybe he finds atheism familiar. (This would be unfair, but no more unfair than Pinker is being)
  • More Pinker

  • Analyzing Sentence 7: Humanly designed systems that achieve intelligence vindicate the notion that information processing is sufficient to explain it—the notion that the late Jerry Fodor dubbed the computational theory of mind.
    • Justin's Analysis
      • Simple rewrite: AIs show that information processing explains intelligence.
      • This is a complicated sentence.
        • The "it" is a reference back to "intelligence" (I think!), but there are two nouns ("information processing" and "notion") in between the pronoun and its reference noun.
        • The second "notion" is an appositive clause elaborating on the first "notion", and again there is some stuff in between them.
          • It doesn't add anything except a name drop and jargon.
    • Elliot's Analysis
      • Simple rewrite: AI vindicates computational theory of mind.
      • Pinker assumes there is only one type of intelligence without argument.
        • We built these AIs and they're intelligent; therefore, our intelligence doesn't involve souls.
        • Current AIs are way below humans so you can't assume they're the same as humans.
      • "Sufficient to explain" is different than saying it explains it.
      • Elliot notes that Pinker doesn't go into more clarifying detail later regarding various unclear points. Pinker just brings up more complicated and controversial claims.
  • Continuing the discussion from Checking Citations from David Thorstad [CF Article]:

    I analyzed cite 1 earlier in this thread and my analysis and Elliot’s agree.

    I checked out cite 2 before reading Elliot’s analysis but did not see anything wrong. I think the year of publication error and the double quotes error would have been the sort of thing I could have caught if I had been following a checklist. They are pretty basic errors.

    I also checked out cite 3 earlier and didn’t see anything wrong. I could have found the “lead to all sorts” issue easily enough.

    I agree with Elliot’s advice regarding taking additional steps to ensure correct citation.

    Note to @Elliot about this incorrect character that presumably got pasted over:

    Continuing the discussion from Elliot Temple and Corentin Biteau Discussion:

    @CorentinBiteau, would you agree with the following argument (which I’ve attempted to base off the quoted paragraph)?

    1. Humans share a lot of evolutionary history with animals.
    2. If two groups share a lot of evolutionary history, then it is a reasonable default expectation that the two groups will share the same abilities.
    3. Humans have the ability to experience emotions.
    4. Therefore, it is reasonable to expect that animals have the ability to experience emotions.

    People commonly make pretty basic mistakes about math (I’m including myself here). There are broad themes (some misunderstanding of a lower level concept that crops up) but I think there is a lot of variance in the exact error that exists in some particular person’s mind when they make an error. Identifying and correcting those errors involves enough intellectual labor to warrant being its own specialized job (a math tutor).

    I could be wrong, but I bet that:

    1. You respect the value/effort/intellectual labor involved in being a math tutor, and see the point in figuring out the details involved in the errors that exist in people’s minds regarding math ideas, and
    2. You would not think harshness was helpful/valuable/useful in addressing even “basic” or “dumb” math errors.

    Thoughts on this so far?

    Yes, that seems a pretty good summary.

    We might add that animals, like humans, present traits we associate with having internal states of mind: being excited, taking care of their young, playing, not wanting to be hurt, crying, etc.

    (however, they do not present traits like “doing long-term planning about the future”, so for such skills the analogy doesn’t work)

    I’m interpreting your reply as: you’re basically okay with the argument as I wrote it, but are bringing up an optional point that could be added. That’s fine, but for the moment, I’d like to focus on the argument that I wrote and you said was pretty good, because I think the point about presenting traits brings up additional details and complications and I want to try to “drill down” on a single line of argument somewhat. Hopefully that seems reasonable for now :slight_smile:

    I think the argument as I wrote it is deeply problematic. There are 4 major issues in the first two points that I can see. Let’s consider the first point:

    1. Humans share a lot of evolutionary history with animals.

    The first issue is that “a lot” is a very vague term. What’s the cutoff for what counts as a lot? That actually has a huge impact on the scope of the creatures in our analysis.

    The second issue is how we define shared evolutionary history. I am by no means an expert in this. As a complete layman, I can intuitively see a couple of ways of doing so. The first is to look at how recent a common ancestor was between the two groups under analysis, and then determine a cutoff point for recentness (i.e. how recent the ancestor has to be to count as “a lot”). A similar method might use amounts of shared genetic code, or some particular subset of it, as a proxy for shared evolutionary history, and then determine a cutoff point (more than 50%, more than 90%, whatever) for what counts as “a lot”.
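
    As a toy illustration of how much turns on that cutoff, here is a quick sketch (the species and percentages are invented purely for illustration; they are not real figures):

```python
# Toy sketch (invented numbers, for illustration only): how the chosen cutoff
# changes which species count as sharing "a lot" of evolutionary history with humans.
shared_code_with_humans = {
    "chimpanzee": 0.98,
    "mouse": 0.85,
    "chicken": 0.60,
    "fruit fly": 0.44,
}

for cutoff in (0.95, 0.80, 0.50):
    included = [species for species, pct in shared_code_with_humans.items() if pct >= cutoff]
    print(f"cutoff {cutoff:.0%}: {included}")

# A 95% cutoff includes only chimpanzees; a 50% cutoff sweeps in chickens too,
# so the argument's scope swings with an essentially arbitrary choice.
```

    The point isn’t the numbers; it’s that premise 1’s scope is undefined until you commit to some cutoff, and different defensible cutoffs give wildly different scopes.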

    The third issue is that “animals” is a very broad category. It does exclude some things, like plants and inanimate matter. But the sweep of the category “animals” is huge, and so how we define what counts as “a lot” and how we define shared evolutionary history becomes really important.

    The fourth issue comes up in the second point:

    2. If two groups share a lot of evolutionary history, then it is a reasonable default expectation that the two groups will share the same abilities.

    Again, I’m not an expert, but my understanding is that you can gain or lose abilities due to fairly minor evolutionary changes. Even between entities that we’d probably both agree share a lot of evolutionary history or are closely related, traits or abilities can become vestigial (like the growth and use of tails) or fully developed (like human rationality). So the second point doesn’t seem accurate.

    In light of the above points, I think the argument has a fundamental challenge. In essence, if what counts as a lot of shared evolutionary history is defined very broadly, then by similar reasoning you can generate arguments that are facially ridiculous. Suppose, for the sake of illustrating the issue I have in mind, that we specify that humans and rodents share a lot of evolutionary history (they meet whatever criteria we’re using to determine this). So then we might say:

    1. Humans share a lot of evolutionary history with rodents.
    2. If two groups share a lot of evolutionary history, then it is a reasonable default expectation that the two groups will share the same abilities.
    3. Rodents have the ability to squeeze through a hole of the size of a dime.
    4. Therefore, it is reasonable to expect that humans have the ability to squeeze through a hole of the size of a dime.

    Of course, anyone with some common sense and an idea of the rough proportions and limitations of the human body will find that ridiculous. In other words, they will find that it is not a default reasonable expectation to expect humans and rodents to have the same abilities in that respect. So a premise of the argument is mistaken (as shown by this counterexample) and the argument fails.
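
    To make the skeleton of that counterexample explicit, here is one way to lay it out (my own notation, introduced just for this sketch):

```latex
% Logical skeleton of the rodent counterexample (my notation, not from the discussion).
% E(x,y): "x and y share a lot of evolutionary history"
% R(x,c): "it is a reasonable default expectation that x has ability c"
\begin{align*}
\text{Premise 2 (general form):}\quad & \forall x\,\forall y\,\forall c\;\; \big(E(x,y) \land R(y,c)\big) \rightarrow R(x,c)\\
\text{Instance:}\quad & E(\text{humans},\text{rodents}) \land R(\text{rodents},\text{fit through a dime-sized hole})\\
\text{Premise 2 then gives:}\quad & R(\text{humans},\text{fit through a dime-sized hole})\\
\text{Common sense:}\quad & \lnot R(\text{humans},\text{fit through a dime-sized hole})\\
\text{So, by modus tollens:}\quad & \text{Premise 2 is false in its fully general form.}
\end{align*}
```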

    The more broadly what counts as a lot of shared evolutionary history is defined, the more of this sort of problem you can get into (e.g. expecting that humans should be able to fly or breathe underwater or whatever). I think any pretty broad definition of a lot of shared evolutionary history opens itself up to some of these problematic counterexamples.

    You could choose to define a lot of shared evolutionary history very narrowly (to where it only includes, say, some closely related apes, who are considered by many to be more intelligent than most other animals). I think the overall argument would still fail due to the points I raised about point 2 (namely, that the gain or loss of traits can occur between closely related species). But I think there’s a second problem, which is that the purpose of your argument is to make a broad statement applying to animals in general. So if you “retreated” to arguing about apes and maybe dolphins and the like, that’d be enough of a change in position to constitute a major concession/revision.

    Thoughts?

    Yes. Also: I don’t currently want to be a math tutor nor am I currently very interested in improving my math skills. I don’t see a conflict there. Do you?

    To be clear: I respect the value/effort/intellectual labor involved in Elliot’s text analysis video and your text analysis responses. I’m not saying he shouldn’t have done it or you shouldn’t have or whatever. It’s just not something I’m interested in much / find valuable to me personally right now.

    Right, I wouldn’t, but there is an important reason with math that doesn’t apply to text. People generally react differently to making math errors vs. saying something that’s literally nonsense or the reverse of what they intended.

    If somebody makes a math error and I point it out they’re unlikely to get mad or dig in. A straightforward correction will generally get good results. They typically either directly address the error (either yeah you’re right or no you’re not and here’s why…) or they shrug and say they’re not good at math so whatever.

    They’re not generally gonna do stuff like:

    • Say I should have known they meant 7 when they said 89
    • Imply that if I was their friend I’d have just accepted 89 is the answer
    • Say if I listened to my heart instead of being so anal about everything I’d know 89 is the answer
    • Decide it’s not worth their time talking to me if I’m gonna nitpick every little math error
    • Refuse to consider the answer could be anything other than 89 because they have faith in 89

    If they did that kinda stuff, which is like what they do for text errors, then I could imagine myself getting harsh in response. I’m neutral on whether the harshness would actually help or not. I haven’t thought about it much or had experience with it. My best guess is it’d help with the math itself but cause social problems which might or might not be worse than the math errors depending on context.

    I think you are reading too deeply into this. I didn’t try to make a general law of evolutionary biology - I just made an argument arising in the very specific context of a long debate about the question “do humans and animals have subjective feelings?”. I don’t think anybody should try to apply that uncritically beyond that scope.

    The topic of “having subjective feelings” is broad, which is why the argument I used is broad, and vague. It’s also why very broad terms like “evolutionary history” and “animals” are used, since so many species can be encompassed in the debate.

    I would have been more specific if we had debated a more specific trait.

    In other words, they will find that it is not a default reasonable expectation to expect humans and rodents to have the same abilities in that respect.

    It would be of course ridiculous to expect that. Which is why I added that the argument only makes sense when other animals present traits we associate with the skill in question. Humans of course do not present the trait “being small enough to squeeze through a hole of the size of a dime”.

    What I’m wondering about is: Why didn’t you take into account the point about “similar trait” that I added, when you knew it would solve this point? What was the value of doing that?