Thank you for creating this video. I really, really liked it and was beyond amazed by just how good it is. I liked how your video breaks down the process of analyzing and critiquing text into approachable, learnable steps.
I followed along and had a go at doing each step before checking what ET did. Here’s some of what I did, in case it’s of any interest to anyone.
Jarrod’s Sentence Summaries
- AI proves knowledge isn’t magic
- It [abstract realm] can be linked to physical stuff
- Knowledge can be explained as patterns
- Reason can be explained as transformations/computation
- Purpose can be explained as control
- Brains are just one such system (IOW, humans are not unique or special)
- AI vindicates that it’s just info processing (CTM)
Sentence Analysis
This was for the Sentence Analysis chapter of the YouTube video.
I did these by just quickly freewriting whatever came to mind. They’re super rough and unpolished. They’re just freewritten reactions, not seriously considered conclusions.
I didn’t do the first sentence because I wasn’t sure what ET’s process was.
Sentence 1
Artificial intelligence is an existence proof of one of the great ideas in human history: that the abstract realm of knowledge, reason, and purpose does not consist of an élan vital or immaterial soul or miraculous powers of neural tissue.
Comments on ET’s Analysis:
- I didn’t know “does not consist of” meant it does not only consist of.
- I like ET’s point about how souls not existing isn’t really “one of the great ideas in human history”. That didn’t occur to me. But ET is correct to point out that that’s an exaggeration by Pinker, and I’d guess it’s just Pinker trying to give extra social status to atheists or whatever.
Sentence 2
Rather, it [abstract realm] can be linked to the physical realm of animals and machines via the concepts of information, computation, and control.
Jarrod’s Freewritten Analysis:
So, what to say about this sentence?
it [abstract realm] can be linked to the physical realm
What does “linked to” mean? Isn’t it that the abstract realm is like embodied in physical matter? Like matter and abstraction are the same thing—abstraction is just a way of considering matter (and that way of considering matter is itself stored in matter). So it’s not so much linked to as embodied in (not sure if that’s the right word). Saying “linked to” feels kind of dualist or whatever.
of animals and machines
Some people think animals have souls. Some people might think AGIs have souls.
via
Are they actually linked via… wait… to simplify Pinker’s sentence: Abstract is linked to physical via concepts = Abstract is linked to physical via abstractions. It’s a circular argument.
the concepts of information, computation, and control.
Those three concepts feel a bit random tbh, as if Pinker just picked them somewhat arbitrarily. But idk enough to really know.
One lesson I’ve learned so far: in addition to grammar and paragraph trees, just going slowly over each sentence and questioning each word/phrase/connection/claim can help one to notice errors or sloppiness.
Comments on ET’s Analysis:
- I like ET’s point about how “can be” means it’s not a definitive proof. That’s such a good point!
- I’m finding doing the analysis/following along with this video super fun! I’m very energized by it. I agree with ActiveMind saying “It was the most fun text analysis I’ve done” and the YouTube comment on the video which says “This analysis is great! Please make more of different paragraph examples. Thanks much!”
Sentence 3
Knowledge can be explained as patterns in matter or energy that stand in systematic relations with states of the world, with mathematical and logical truths, and with one another.
Jarrod’s Freewritten Analysis:
So, what to say about this sentence? Isn’t it the case that “mathematical and logical truths” are “states of the world”? And so it’s redundant to mention “mathematical and logical truths”? Also, aren’t “truths” synonymous with “Knowledge” in this context? If so, then he’s saying that “Knowledge can be explained as patterns…that…” relate to knowledge. So it’s circular again.
What does it mean to say that they relate to one another? That the patterns stand in relations with one another? Does that mean that the patterns (e.g., the neural connections) are the same in everyone’s brains? I quickly asked Claude Sonnet and Claude said, using a formal logic example, “The premise-patterns in your brain have some relationship (causal? structural? computational?) to the conclusion-patterns”. Maybe I’m an ignoramus, or perhaps the fact that I can’t figure out what he means is an indictment of Pinker’s writing.
Patterns standing in relations with one another still isn’t clear to me. What is the nature of these relations? How are they related? Related by what?
Comments on ET’s Analysis:
- ET pointed out the matter and energy related to matter and energy issue. (Because “states of the world” are matter and energy. So Pinker is basically saying that “Knowledge [is] patterns in matter or energy that stand in systematic relations with [matter or energy]”.) I didn’t catch that one.
- ET pointed out that saying that patterns are related to themselves is relating physical to physical which fails to achieve Pinker’s goal of linking the abstract to the physical. Nice catch.
- Lol ET’s rewrite is gold: “So: Abstract knowledge is physical patterns with pattern relationships with physical patterns, with abstract knowledge, and with physical patterns”
- ET says: “There isn’t very good content here”
- ET says: “So when you break things down, sometimes they’re not as impressive as they might have seemed at first.” ET’s video demonstrates this to an astonishing degree. I went into this with not a very high opinion of Pinker but somehow—after ET broke it down and explained it like that—it turns out that Pinker’s writing is so enormously more vacuous than I ever would’ve imagined.
Sentence 4
Reasoning can be explained as transformations of that knowledge by physical operations that are designed to preserve those relations.
Jarrod’s Freewritten Analysis:
So, what about this sentence? So reasoning can be explained as transformations of those physical patterns with pattern relationships by physical operations… not sure what “operations” means. Webster has both stuff like “logical processes” and “a single step performed by a computer”, and Oxford has “a process…”. Are operations patterns? If so: Reasoning is transformations of patterns by patterns that preserve patterns. It seems a bit… repetitive. (I try one concrete reading of “operations” in the sketch below, after these notes.)
Anyway, so, continuing… “by physical operations…designed [designed by whom?] to preserve those relations.”
Doesn’t saying they’re designed invoke purpose and reason—which he’s trying to explain? So he’s trying to explain concepts by means of those same concepts. It’s circular yet again. Almost every sentence (so far) of Pinker’s is circular.
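To make “transformations of that knowledge by physical operations that are designed to preserve those relations” concrete for myself, here’s a minimal Python sketch of one possible reading (my own illustration, not anything Pinker spells out; the names `facts`, `rules`, and `derive` are mine): knowledge is stored as symbol patterns, and an “operation” is an inference rule that produces new patterns while preserving truth.

```python
# One toy reading of "transformations of knowledge by operations that preserve relations".
# Knowledge = symbol patterns (plain strings here). The operation (modus ponens) derives
# new patterns; if the inputs are true, the outputs are true, so the truth-relations are
# "preserved". This is my illustration, not Pinker's.

facts = {"it_is_raining"}                      # patterns currently held as knowledge
rules = {("it_is_raining", "ground_is_wet")}   # pairs meaning "if A then B"

def derive(facts, rules):
    """One 'operation': add B whenever A is known and (A -> B) is a rule (modus ponens)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            if a in known and b not in known:
                known.add(b)
                changed = True
    return known

print(derive(facts, rules))  # {'it_is_raining', 'ground_is_wet'}
```

Even on this reading, my circularity worry still applies: calling the rule “designed to preserve” those relations smuggles in a designer and a notion of truth, which is what was supposed to be getting explained.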
Comments on ET’s Analysis:
- I like ET’s point that Pinker fails to adequately explain himself and assumes you have preexisting knowledge about information, computation, etc.
- Oh, ET pointed out that Pinker’s talking about changing patterns while preserving patterns, which is a contradiction. If those patterns stay the same (are preserved) then… nothing happens.
Sentence 5
Purpose can be explained as the control of operations to effect changes in the world, guided by discrepancies between its current state and a goal state.
Jarrod’s Freewritten Analysis:
control of operations
Control how? By whom? Does control presuppose a purpose?
by discrepancies between its current state and a goal state.
Doesn’t caring about discrepancies between its current state and a goal state presuppose a purpose? Also, doesn’t having a goal state presuppose a purpose lol? So once again, it’s circular. A minimal rewrite of the sentence could be: Purpose is changes guided by purpose (“goal state”). (There’s a tiny sketch below, after these notes, of what a goal state and a discrepancy look like mechanically.)
the control of operations to effect changes in the world
Also, doesn’t “the control of operations” itself require “effect[ing] changes in the world”? So it’s sort of a chicken egg problem.
guided by
Also, doesn’t “guided” presuppose a purpose?
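To see what “operations… guided by discrepancies between its current state and a goal state” could mean mechanically, here’s a minimal thermostat-style feedback loop in Python. It’s my own sketch of the standard control-loop idea, not Pinker’s wording, and all the names (`goal_temp`, `current_temp`, etc.) are mine. The part relevant to my questions above: the loop only has a goal state because a designer typed the number in.

```python
# Minimal feedback-control sketch: "control of operations to effect changes in the world,
# guided by discrepancies between its current state and a goal state" (thermostat-style).
# My illustration only. Note that the "goal" is just a number a designer chose; the loop
# itself doesn't "care" about anything, which is roughly my circularity worry above.

goal_temp = 20.0      # goal state (chosen by whoever wrote/configured the program)
current_temp = 15.0   # current state of the "world"

for step in range(20):
    discrepancy = goal_temp - current_temp
    if abs(discrepancy) < 0.1:
        break                       # close enough to the goal state; stop acting
    action = 0.5 * discrepancy      # "control of operations"
    current_temp += action          # the change those operations effect in the world
    print(f"step {step}: temp={current_temp:.2f}, discrepancy was {discrepancy:+.2f}")
```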
Comments on ET’s Analysis:
- I like ET’s point (speaking loosely from memory, hopefully I’m not putting words in ET’s mouth) that Pinker fails to achieve his own goal of linking the abstract realm of purpose to physical AI, because AIs don’t have their own purposes, only human-designed/-assigned purposes (thus only humans have purpose, which could be because humans have souls, a possibility that contradicts Pinker’s argument).
Sentence 6
Naturally evolved brains are just the most familiar systems that achieve intelligence through information, computation, and control.
Jarrod’s Freewritten Analysis:
Idk what “achieve…through information, computation, and control” means. That sounds vague and like hand-waving. Also, saying that that’s how intelligence is achieved is a big claim. Most people don’t know how intelligence is achieved. In fact, I don’t think anyone does or else they’d be able to build AGI. So he’s sort of making it sound like he knows but I don’t think he does.
Sentence 7
Humanly designed systems that achieve intelligence vindicate the notion that information processing is sufficient to explain it—the notion that the late Jerry Fodor dubbed the computational theory of mind.
Jarrod’s Freewritten Analysis:
Minimal rewrite: AI vindicates info processing being enough to explain intelligence.
Humanly designed systems that achieve intelligence
But humanly designed systems (AI) haven’t achieved general intelligence. So how can they vindicate the notion that information processing is sufficient to explain general intelligence? AI doesn’t yet have purpose/volition/initiative/free will/consciousness.
Other Freewritten Comments
- I liked ET’s point that Pinker apparently doesn’t explain himself more later in the essay but instead just keeps on bringing up new complex stuff. (Such as Pinker’s apparent claim that defeating entropy (or something like that) is our life’s goal—which I think makes no sense.)
- I suppose if Pinker were a serious/rigorous intellectual who actually wanted to make some major/important claim, he’d focus his essay on that topic rather than having his essay be like a kind of rambling conversation where he just brings up new topics that occur to him without really going deep on any in particular.
- I liked this point by ET:
Pinker should read and engage with the literature instead of assuming machines don’t have souls
I like the idea of engaging with opposing arguments and explaining why they’re wrong rather than just cheerleading one side. That kind of attitude seems rational and is one of the things I like about CF’s approach.
- I liked this point by ET [copied from YouTube transcript with capitalization and punctuation by me]:
This is one of the problems with a lot of intellectuals and their discussions and behavior is they often do not engage with each other. Someone makes some argument, and then someone else makes some other arguments that kind of ignore the prior arguments, and then discussions and issues don’t reach conclusions, because people are not talking to each other in a way that can actually reach a conclusion and get engaged with each other properly.
- I like ET’s point about Pinker switching his claim from “proof” to “does not consist” to “sufficient to explain”. I suppose Pinker wasn’t very clear what the purpose of his essay was. He wasn’t like Francisco in Atlas Shrugged who could always answer “What for?” Instead, I’d guess that Pinker just had a vague feeling of wanting to cheerlead for atheists or dunk on religionists or show off his erudition and wasn’t rigorously trying to make some particular claim.
- I like very much ET’s idea of local optima. It reminds me of people on social media who argue. For example, I’ve seen conservatives argue against cancel culture when it’s people that they’re sympathetic to being cancelled. But then when it’s about boycotting a brand that did some woke advertising, those exact same conservatives are suddenly in favor of cancel culture/boycotting. Even though they said they were against cancel culture?! I’ve thought that it’s just that they don’t think in principles or generalize or think to themselves something like, “hmm, what is the moral principle at play here? if this was the reason why something is bad, what would all the implications of that be? what else would also be bad by that logic?” and then try to refine the moral principle to only include/exclude stuff that makes sense. Instead, they just don’t think in terms of principles like that. But I like ET’s local optima framing of this phenomenon. It’s also kind of interesting that Steven Pinker mightn’t be much more rational/thoughtful than these unprincipled/short-sighted people who argue on social media.
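Side note for anyone who hasn’t run into the term outside this context: “local optimum” comes from optimization/search. Here’s a tiny hill-climbing sketch in Python (my own example, not ET’s; the function and all names are made up) where the search climbs a small nearby peak and then stops, because every single step from there looks worse, even though a taller peak exists elsewhere. That’s roughly the analogy: only ever making locally appealing moves instead of stepping back and rethinking the whole approach in terms of principles.

```python
# Tiny hill-climbing sketch of a "local optimum" (my illustration, not ET's example).
# f has a small peak near x=1 and a taller peak near x=4. Greedy hill climbing started
# at x=0 climbs the small peak and stops: every one-step move from there looks worse,
# so it never reaches the better peak.

def f(x):
    return -(x - 1) ** 2 + 1 if x < 2.5 else -(x - 4) ** 2 + 3

x = 0.0
step = 0.1
while True:
    # try a small move in each direction; keep the best one only if it improves f
    best = max([x + step, x - step], key=f)
    if f(best) <= f(x):
        break          # no neighboring move improves things: stuck at a local optimum
    x = best

print(round(x, 1), round(f(x), 2))  # ends near x=1.0 (height 1), not x=4.0 (height 3)
```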