Paragraph analysis
Pinker’s explanations of knowledge, reason and purpose didn’t actually say much. He was hedging while also giving the impression that he understood these topics and knew how they could be explained by natural, physical phenomena. Maybe he didn’t positively state that the great idea was how the mind/intelligence/abstract realm is explained by physical phenomena because he didn’t understand how.
I can’t think of anything to add that I or Elliot haven’t already said.
Comments while watching Elliot’s analysis
So Pinker was skeptical of general intelligence. It seems he considers ML algorithms and brute-force tricks to be intelligence. I think he should’ve clarified what he thinks intelligence is. He did explain what reasoning is, or at least attempted to. I think he might have an unconventional idea of what intelligence is, and the way he talks about intelligence is therefore misleading to most readers. I think he should especially say more about what he thinks human intelligence is. If he doesn’t have much of an idea of what human intelligence is, then he shouldn’t speak about AI disproving souls.
AGIs Are People
Quotes from Pinker’s essay:
a muzzy conception of intelligence that owes more to the Great Chain of Being and a Nietzschean will to power than to a Wienerian analysis of intelligence and purpose in terms of information, computation, and control.
But these scenarios are based on a confusion of intelligence with motivation—of beliefs with desires, inferences with goals, the computation elucidated by Turing and the control elucidated by Wiener.
Intelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to the intelligence: Being smart is not the same as wanting something.
So Pinker says that AI won’t necessarily have wants and goals. So you could excuse Pinker for not thinking about the freedom of AIs. However, what does this mean for humans? The following quote confirms what I suspected Pinker thinks about purpose and goals in humans, that they were given to us by evolution:
It just so happens that the intelligence in Homo sapiens is a product of Darwinian natural selection, an inherently competitive process. In the brains of that species, reasoning comes bundled with goals such as dominating rivals and amassing resources. But it’s a mistake to confuse a circuit in the limbic brain of a certain species of primate with the very nature of intelligence.
According to Pinker, we don’t choose our goals. Given that we can tweak the goals of AIs, and human goals aren’t fundamentally different, why give humans freedom? Is there anything “sacred” or morally significant about the goals humans have? Do we need to have any concern for human goals if we think they are wrong and dangerous? Should we rewire the circuits in the limbic brains of humans who don’t behave well, like criminals and the “mentally ill”? Maybe he would support that (psychiatry). He could say humans have souls and that such rewiring would therefore be wrong, but he wouldn’t like taking that position.
Also, what is “wanting something”? Do we need souls to explain that? Purpose is certainly similar to wanting things, but it seems Pinker says they’re not the same. He didn’t explain how the physical realm accounts for wanting things.
So I think Elliot’s criticism here was fair after all.
Question
Does “of knowledge, reason, and purpose” delimit an area of the abstract realm, or does it describe the abstract realm, i.e. that the three main categories of the abstract realm are knowledge, reason, and purpose and everything else is a child of those three? So is it saying that the entire abstract realm does not consist of soul, or that only a part of the abstract realm does not consist of soul? Because if it’s delimiting, and “wanting something” is in the part of the abstract realm outside that delimited area, then it could mean “wanting something” requires a soul. I don’t think that’s what Pinker thinks, but can it mean that grammatically? My current answer is that it’s ambiguous; the grammar doesn’t tell us, so we have to guess what the author means.
Feedback
I think this is the best text analysis video. I really liked it.
This was the best showcase for why grammar, grammar trees and paragraph trees are useful.
It was the most fun text analysis I’ve done because it was connected to philosophy and there was philosophy analysis mixed in. It was challenging to do the analysis myself first, but it was also fun.
An Untitled Letter
I like that part of the “letter.” When I was reading/thinking about existentialist philosophy, I assumed that the philosophers knew what they were talking about and it was just too deep for me. That was fair at the time since I was a total noob at philosophy. I think I shouldn’t totally dismiss existentialism yet, but I’ll have a different attitude when I consider it in the future. I won’t be intimidated, and I won’t assume it’s impossible for me to judge because it’s too deep.
Project notes
I watched the rest of the video. Watching it and writing this post took 1 hour and 50 minutes.
In the future I’ll do a project where I try to analyze a paragraph like Elliot did here.