Super Fast Super AIs [curi.us post]

This frames AD as smart and frames his conclusions and judgment as being important things that people should listen to. It also fits the standard pattern of saying you agree then adding a “but” instead of disagreeing more openly.

What happened next? Max explained the movie reference. Super Fast Super AIs [curi.us post] - #8 by Max

Then AD ignored it because if he engaged he’d look bad. Why didn’t he get it in the first place? His memory excuse was ruined because Max’s explanation only relied on a tiny bit of information about a main plot point, not details that AD forgot.

AD is framing this as him being so clever and knowledgeable that he can just assume stuff instead of needing to do analysis.

He doesn’t give reasons, which baits people into asking questions. That incorrectly makes it look like they want his help and expertise.

The thing he’s assuming looks like a big claim (and he followed with several statements further making it sound like a big claim). That implies he’s super clever to find it easy, make assumptions, and not find explaining necessary. Like he’s so far above people he forgets they don’t know stuff and need more help to understand.

When questioned, he didn’t explain or defend a big claim. Instead, he claimed (contrary to his own words, that he ignored) that he meant something pretty trivial and irrelevant. That’s a motte and bailey approach, which is a social strategy to avoid admitting error. It’s known for frustrating people and for preventing discussions from making truth-seeking progress.

This is framed as a correction of ET and telling him something he didn’t realize. It’s actually just not engaging with what ET was talking about.

What big claim does AD appear to be talking about? The computing resources for an AI with a trillion times the computer hardware power of a human brain. ET was comparing it to the entire human population and also brought up 100,000 warehouses full of computers.
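For a rough sense of the scale involved, here’s a quick arithmetic sketch (my own numbers: the trillion multiplier is from the thought experiment, and 7 billion is the population figure used later in this discussion):

```python
# Rough sense of scale for the thought experiment (my arithmetic; the trillion
# figure and the comparison to the human population are from the discussion).
ai_brain_equivalents = 1e12   # AI with a trillion times one human brain's hardware power
human_brains = 7e9            # ~7 billion people, the figure used elsewhere in this thread
print(ai_brain_equivalents / human_brains)  # ~143: over 100x all human brains combined
```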

AD appears to be replying to that and suggesting he’s so clever that he (unlike ET) knows how to get plenty of resources for that.

He immediately made further comments suggesting he was talking about a large amount of resources:

This suggests he’s talking about an amount of resources large enough that it could be hard to build initially. He’s hinting that he may have in mind super intelligent AI designing its own hardware to upgrade itself which makes it even smarter so it can make even better upgrades (which allegedly could lead to the singularity). So maybe that’s where the massive increase in computing power will come from (that is a well known idea, and is the kind of thing informed people would guess he might mean, but he doesn’t say it outright).

AD also frames this as if we should care what he thinks is “highly likely”, as if he’s some sort of expert or genius. He doesn’t give an actual reason though, which suggests he’s used to people respecting his opinion and taking his word for things. A typical cause of that is being so far ahead of others that explaining everything they’re missing would be too much work, so help gets doled out selectively: less than people would like to receive, but the best the smart person can offer given his limited time and energy and the large number of people who want his help.

This sure makes it sound like AD has in mind a way to get a lot of resources. Not only does he know how to provide a trillion times a human brain’s worth of compute power, which one might expect to involve a hundred thousand warehouses. He actually knows how to provide much more than that – whatever is useful.

AD is bragging like he knows some big, powerful stuff.

This framing suggests that AD is interdisciplinary and good at bringing in lessons learned from other fields. And that ET is not and has to be told about how steel and cars work. AD is implying that this is the kind of thing he knows and ET doesn’t, and that saying this is a good guess at what will change ET’s mind and correct him. This is condescending, belittling and undermining – and wholly inappropriate. AD has done nothing to establish or find out what the actual disagreement is about or come to any reasonable conclusion about whether he’s right or what arguments or explanations he should be sharing. But he’s faking and posturing, and implying he knows all that stuff and that this is the conclusion.

This frames AD as knowing a lot about how the world works and being able to extrapolate well about it.

But it has nothing to do with the scenario being discussed, which was a thought experiment about an AI that uses much, much more computing power. So why would you just look at the current situation and assume it’ll stay the same? The whole point is to consider a thought experiment with a major change and analyze it. But AD isn’t doing that. He didn’t actually think about the scenario and come up with thoughts about it.

After missing the point at the outset, AD then writes a bunch of irrelevant continuations that posture as if he’s clever, does thoughtful analysis, and has stuff to say. But he made no effort to find out what the issue actually is and to say relevant stuff. He just wants to rush into lecturing on topics of his choice (ones where he thinks he can sound smart) in order to imply his superiority by implying that him lecturing the other guy like this is the thing needed for the discussion to make progress.

Here, AD gets pushback on his nonsense and switches from the original big claim to a small claim. Is it plausible that he meant this all along? If he did, he was fully lost from the beginning and asked no questions which is pretty bad. But as we already saw, he made multiple statements suggesting he was talking about a large amount of resources. Plus he was replying to a post that was explicit about talking about a large amount of resources. Plus why would ET bring up potential resource constraints, despite talking about a scale of all of humanity, unless a large amount of resources were involved?

The small claim is that an AI would use computing resources similar to an app, so we’d have plenty of resources. In other words, AD is now suggesting an AI so small, that uses such low resources, that not only can I run it on my laptop or phone, but I can run many clones of it on my laptop or phone. Just like my laptop can run dozens of apps at once. And an AI is just a new app with resource usage similar to existing apps.

If this was the scenario, why would we be discussing resource usage at all? The resources needed for this are so tiny that it doesn’t even make sense to discuss it. And this has nothing to do with the original discussion.

If AD literally didn’t know how or why large resource usage would come into the discussion, you’d think he’d ask instead of bragging about knowing how to get plenty of resources and making multiple statements suggesting he was talking about a large amount of resources, not a new killer app. If you’re thinking it won’t need any warehouses, shouldn’t you ask what the warehouses are for instead of ignoring what you don’t understand and posturing like you have all the answers? Why would you reply to someone who is talking about warehouses full of computers, say plenty of resources will be available not only for what he had in mind but for many clones … and then reveal no plan at all to get more resources and just say you thought we were talking about an insignificant amount of resources?

The original topic was basically: universality arguments say AIs will have the same functionality as natural intelligences (the universal repertoire of intelligence functionality). So will AIs be super intelligent? That seems to suggest not. But what if we gave them a lot more hardware than the human brain so they could think 100x faster? Would throwing more hardware resources at intelligence be a big advantage or not? AD’s conception of AI as just a killer app has nothing to do with this. But none of his framing of any issues was about him being a confused beginner who failed to understand the main point under discussion. He wasn’t asking questions to learn. His messages were all him trying to sound clever.

The whole thing is nonsense because it was never about finding the truth or learning. It was all social posturing. It’s all ad hoc excuses and reframings. It’s all spur-of-the-moment, automatized actions that focus on local optima. Every message is a new opportunity to frame things in a favorable light and ignore anything inconvenient. There’s no attempt to have things make sense over time, just to save face or gain status in each individual comment.

This is implying that ET (unreasonably, implausibly) thinks AI would be different than other software. It’s trying to suggest that the problems in the discussion are because ET made weird, unreasonable and previously unstated assumptions.

The question is pure nonsense, since the whole topic was whether scaling up AI by giving it far more compute resources would be effective or not. Doing that would use more compute resources because the plan is literally to use more compute resources on purpose.

He’s not trying to be rational. It’s hard to interpret because it’s bad faith.

It’s not a conscious attempt at malice. It’s just automatized social climbing policies. He isn’t overriding those enough with a bunch of conscious attention and effort, and following clear explicit steps to avoid using intuition, so they’re the main thing determining what he posts.

After being caught being totally lost and writing nonsense, AD reframes the issue as maybe just two reasonable people having a misunderstanding due to the fairly common (and often no-fault) issue of different terminology.

He further reframes the issue as him having a good idea about what’s going on when ET didn’t. He’s trying to take a leadership role and implying that he has a worthwhile, reasonable, plausible candidate theory about what happened that would be good to propose. But his proposal is just an excuse to distract from his errors above and avoid blame or post mortem.

When ET said he didn’t know what was going on, he meant AD apparently not having read the post this thread is about, not remembering it, or having no idea what it said, and then discussing like he knew what it was about. AD doesn’t actually address that because he doesn’t know how to engage with it without looking bad.

AD doesn’t reply with any information about how he managed to misread it so badly or what happened.

Instead, AD proposes an ad hoc new massive misreading. He proposes that maybe ET is saying something that has nothing to do with what ET said. It’s just a half baked excuse. AD didn’t think this through or do textual analysis. He’s just following a script. Terminology difference is a standard script to get out of errors.

Here, AD was caught again being totally wrong, lost, confused, etc. When discussing his first major misunderstanding he came up with another big one that doesn’t make sense. Does he then try to analyze what happened and learn from his mistakes? No, he goes on offense more with a new, separate distraction.

He frames himself as having made no errors and having no guilt. Instead, he’s allegedly being charitable and trying to make sense of what ET has written. He frames himself as putting in effort to understand just like a rational person would. But ET’s writing is framed as being hard to make sense of.

This statement is also careless which is related to it being an ad hoc excuse. It’s ambiguous whether the “reason” he’s talking about is the reasons for making the statements in the first place or for quoting them.

Now AD is giving naive, simplistic arguments against the first thing ET said. If AD thought it was straightforwardly wrong, why didn’t he open with his counter-arguments? He’s just trying to attack it now to shift blame. If he can attack it successfully, that means ET made an error, that the source of AD’s confusions, misunderstandings, etc., was actually ET’s error. Even if he can’t attack it successfully he can change the topic and distract from his major errors. What is he risking here in the role of critic? He won’t look that bad if his criticisms have rebuttals. Giving rebuttals to them would actually imply that they were reasonable contributions to the discussion that were worth replying to.

This whole post by AD reframes the issue away from how to get a bunch of computing resources, how expensive that would be, whether it’s cheap enough to make lots of clones, etc. Suddenly he starts saying like who cares about computing power:

That last question is dishonest. He reframed the issue to something he’s confident he’s right about. Implying that that’s what the disagreement is about is a way to attack ET and imply that ET has a dumb position. He’s baiting a reply like “you’re right; you’re not missing anything”, which would sure be a great escape from his earlier errors and a convenient point to AFK a few weeks and then join a different discussion.

But the main issue here is how dishonest the framing is. He acts like he’s correcting ET for being too into computing resources. But not focusing on computing resources is a standard point ET has talked about and was one of ET’s conclusions in his post. ET is considering a thought experiment that relies on lots of computing resources, explaining how much it’d be (more than people realize), and talking about how well it’d work even if you had that much compute power (ET gives various reasons to question the value of throwing all that hardware at the problem). It’s absurd to respond to this by framing ET as on the side of thinking more computing resources is what matters rather than intelligence, rationality, creativity, etc.

ET was merely willing to take seriously a thought experiment from a rival point of view, and now AD is trying to attack ET as a member of that tribe because, by considering the scenario, he associated himself with that type of thinking. And there are superficial connections, like ET basically said “no actually i was talking about lots of computer hardware” and then AD is basically like “oh why do you want so much when that isn’t what’s important?” ET didn’t want it or advocate it; he was pointing out that the amount of computer hardware being so large is a problem for his intellectual opponents. AD is the one who then took the other side by joining the discussion to say that actually getting enough hardware is no problem, which is a claim that helps the “hardware matters” side. By debating that and claiming that actually getting that much hardware is hard, ET was staying on the same side as usual, but now AD is trying to make it sound like ET has the opposite of his actual view.

You might think after getting so lost AD would start trying to understand something instead of throwing out new claims and attacks, but nah he just holds frame on how he’s a clever, reasonable person and keeps doing automated social attacks.

Calling ET’s writing confusing is one of the least covert attacks AD made. He’s trying to blame ET for his own confusions.

The reason to care is because we’re analyzing a thought experiment about using lots of compute power (“simple mathematical operations”). That was the premise. A person ET disagrees with brought that up and ET was explaining some things about it.

AD frames this like ET cares about the wrong things, which is quite nasty.

This is another ad hoc excuse. AD is framing himself as familiar with and cleverer than this standard view. But if he actually knew about this stuff the whole time, why was he so lost (or pretending to be lost)?

Like did he really know what the normal view is, know that what ET said made sense given the normal view, disagree with the normal view, and then be like “I don’t get what ET is saying; doesn’t seem to make sense” on the unstated premise of his clever, non-standard view, and not mention a word about his unusual premise? No.

BTW, AD’s elaboration on why it’s wrong is partly standard, well known stuff that was already discussed by the people making the estimates. He’s hoping readers don’t know that. Other parts are just ad hoc nonsense, like the claim that it might take a trillion neuron firings to do a NAND (or single bit addition or 64 bit addition or something – AD wasn’t very specific, and it doesn’t really matter since, loose ballparks, 64 bit addition is like 100x harder than single bit addition and single bit addition is like 10x harder than NAND, so that could still leave AD claiming something like a billion or more neuron firings to do one NAND’s worth of math). What on earth does he think neurons do that it could take a trillion of them to do one of the simplest, most basic chunks of information processing there is? How do you divide something that basic into a trillion parts? And neurons aren’t that trivial.

These claims have not been thought through. They are presented like some considered, serious, clever position held in advance, and they have a bunch of expertise signaling (like about ANNs and micro-instructions), but actually AD isn’t dumb enough for this to be a real, considered opinion of his. I’m guessing he’ll just reframe again if he replies to this, e.g. he might claim he was talking about how hard it is to do math using conscious intelligence and was not actually discussing the hardware capabilities of brains, even though the hardware capabilities of brains were the topic and comparing conscious math to non-intelligent computer math would be an apples-to-oranges and irrelevant comparison.
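A minimal arithmetic sketch of that ballpark, using the loose ratios above and taking the reading most generous to AD (these ratios are my rough guesses, not figures AD gave):

```python
# Order-of-magnitude sketch of the ballpark above (loose ratios, not exact figures).
# Assumption: the reading most generous to AD, i.e. his "trillion neuron firings"
# figure was for a 64 bit addition rather than for a NAND.
firings_claimed = 1e12     # AD's figure, per 64 bit addition on this reading
add64_per_add1 = 100       # loose ballpark: 64 bit add ~ 100x a single bit add
add1_per_nand = 10         # loose ballpark: single bit add ~ 10x a NAND
firings_per_nand = firings_claimed / (add64_per_add1 * add1_per_nand)
print(f"{firings_per_nand:.0e}")  # ~1e9: still about a billion firings per NAND
```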

Also even with corrected numbers according to AD’s own methodology, I suspect AD might still be wrong re resource usage. It’s hard to tell though because he’s not giving any numbers. I doubt he’s done napkin math for this yet. If he does it, it’ll be ad hoc stuff made up at the last second.

And notice how AD’s position keeps changing. Now he’s suggesting that the amount of resources is much lower, and more viable to build, because a standard view about the compute power of a human brain is wrong. But that’s after previously saying he never meant to talk about building an AI with more compute power than a human brain, and he just thought an AI would be a new app. If it’s just a new app, why even debate stuff like how much computer power is in 7 billion brains? He’s just looking for anything to attack, with no concern for consistency between his different posts.

The whole conversation doesn’t make sense because it isn’t a good faith discussion. It’s not conscious bad faith either. It’s controlled by AD’s subconscious. He has automated policies that try to make him look good and make other people look bad. It’s about social climbing. And letting your subconscious and your social intuitions make discussion decisions, instead of taking conscious control, is the standard way of discussing in bad faith.

A problem with the analysis I did in this topic is that it gives AD too much control of the narrative. He chose what to write about. I’m just responding to stuff he said. So I didn’t analyze key issues that he did not write about.

Why did AD pick this particular topic to post in? How does AD pick topics? What are his goals, problems and plans in life and for this topic? AD shows a pattern of responding in topics where he thinks he can look good, not in topics where there’s something important to learn. He’s not seeking truth or progress. Some other posters are similar.

It’s hard to talk about things people hide information about. Which is the point. People don’t want discussion of their actual constraints and underlying problems. They just expose downstream symptoms, like their own bad behavior, while hiding the causes. Addressing the downstream consequences, not the causes, wouldn’t be effective even if they actually listened and changed their mind, which they mostly don’t.

I think this is correct. I don’t think I can respond in more detail without doing it again.

When I pick a topic to respond to, I’m usually not thinking about trying to learn. My conscious reasoning is usually that I think I already know something that might offer value to the discussion.

My thought on this is: neurons don’t directly do NAND operations, and transistors & logic gates don’t directly do firings. You can emulate one with software running on the other, but it’s pretty inefficient. So I guessed at how many neurons would have to fire for a human to read or recall “2+2” and think “4”.

Ya. Until now I hadn’t even clicked the link at the top of ET’s original post. I think my approach was something like: read ET’s post at a high level, think “ya, but AIs would still be cool cuz state saves”, then go look for material to quote & talk about that.

Now you’re responding more without addressing why you changed your mind or speaking to your goals, project plans, asks, offers, etc.

I didn’t change my mind about my automated policies. I think my automated policies will continue to exert control over my responses.

I’m responding more anyway because I think not responding more would be worse than responding without fixing the automated policies.