Super Fast Super AIs [curi.us post]

He’s not trying to be rational. It’s hard to interpret because it’s bad faith.

It’s not a conscious attempt at malice. It’s just automatized social climbing policies. He isn’t overriding those with enough conscious attention and effort, or following clear, explicit steps to avoid using intuition, so they’re the main thing determining what he posts.

After being caught being totally lost and writing nonsense, AD reframes the issue as maybe just two reasonable people having a misunderstanding due to the fairly common (and often no-fault) issue of different terminology.

He further reframes the issue as him having a good idea about what’s going on when ET didn’t. He’s trying to take a leadership role and implying that he has a worthwhile, reasonable, plausible candidate theory about what happened that would be good to propose. But his proposal is just an excuse to distract from his errors above and avoid blame or a post mortem.

When ET said he didn’t know what was going on, he meant with AD apparently not reading the post this thread is about, not remembering it, or having no idea what it said, and then discussing as if he knew what it was about. AD doesn’t actually address that because he doesn’t know how to engage with it without looking bad.

AD doesn’t reply with any information about how he managed to misread it so badly or what happened.

Instead, AD proposes an ad hoc new massive misreading. He proposes that maybe ET is saying something that has nothing to do with what ET actually said. It’s just a half-baked excuse. AD didn’t think this through or do textual analysis. He’s just following a script: claiming a terminology difference is a standard script for getting out of errors.

Here, AD was caught again being totally wrong, lost, confused, etc. While discussing his first major misunderstanding, he came up with another big one that doesn’t make sense. Does he then try to analyze what happened and learn from his mistakes? No, he goes on offense more with a new, separate distraction.

He frames himself as having made no errors and having no guilt. Instead, he’s allegedly being charitable and trying to make sense of what ET has written. He frames himself as putting in effort to understand just like a rational person would. But ET’s writing is framed as being hard to make sense of.

This statement is also careless, which is related to it being an ad hoc excuse. It’s ambiguous whether the “reason” he’s talking about is the reason for making the statements in the first place or the reason for quoting them.

Now AD is giving naive, simplistic arguments against the first thing ET said. If AD thought it was straightforwardly wrong, why didn’t he open with his counter-arguments? He’s just trying to attack it now to shift blame. If he can attack it successfully, that would mean ET made an error, and that the source of AD’s confusions, misunderstandings, etc., was actually ET’s error. Even if he can’t attack it successfully, he can change the topic and distract from his major errors. What is he risking here in the role of critic? He won’t look that bad if his criticisms have rebuttals. Giving rebuttals to them would actually imply that they were reasonable contributions to the discussion that were worth replying to.

This whole post by AD reframes the issue away from how to get a bunch of computing resources, how expensive that would be, whether it’s cheap enough to make lots of clones, etc. Suddenly he starts saying like who cares about computing power:

That last question is dishonest. He reframed the issue to something he’s confident he’s right about. Implying that that’s what the disagreement is about is a way to attack ET and imply that ET has a dumb position. He’s baiting a reply like “you’re right; you’re not missing anything”, which would sure be a great escape from his earlier errors and a convenient point to AFK for a few weeks and then join a different discussion.

But the main issue here is how dishonest the framing is. He acts like he’s correcting ET, who is too into computing resources. But not focusing on computing resources is a standard point ET has talked about, and it was one of ET’s conclusions in his post. ET is considering a thought experiment that relies on lots of computing resources, explaining how much that would be (more than people realize), and talking about how well it’d work even if you had that much compute power (ET gives various reasons to question the value of throwing all that hardware at the problem).

It’s absurd to respond to this by framing ET as on the side of thinking more computing resources is what matters, rather than intelligence, rationality, creativity, etc. ET was merely willing to take seriously a thought experiment from a rival point of view, and now AD is trying to attack ET as a member of that tribe because, by considering the scenario, he associated himself with that type of thinking. And there are superficial connections: ET basically said “no, actually I was talking about lots of computer hardware” and then AD is basically like “oh, why do you want so much when that isn’t what’s important?” ET didn’t want it or advocate it; he was pointing out that the amount of computer hardware being so large is a problem for his intellectual opponents.

AD is the one who then took the other side, by joining the discussion to say that actually getting enough hardware is no problem, which is a claim that helps the “hardware matters” side. By debating that and claiming that actually getting that much hardware is hard, ET was staying on the same side as usual, but now AD is trying to make it sound like ET holds the opposite of his actual view.

You might think after getting so lost AD would start trying to understand something instead of throwing out new claims and attacks, but nah he just holds frame on how he’s a clever, reasonable person and keeps doing automated social attacks.

Calling ET’s writing confusing is one of the least covert attacks AD made. He’s trying to blame ET for his own confusions.

The reason to care is that we’re analyzing a thought experiment about using lots of compute power (“simple mathematical operations”). That was the premise. A person ET disagrees with brought that up, and ET was explaining some things about it.

AD frames this like ET cares about the wrong things, which is quite nasty.

This is another ad hoc excuse. AD is framing himself as familiar with and cleverer than this standard view. But if he actually knew about this stuff the whole time, why was he so lost (or pretending to be lost)?

Like, did he really know what the normal view is, know that what ET said made sense given the normal view, disagree with the normal view, and then say “I don’t get what ET is saying; it doesn’t seem to make sense” on the unstated premise of his clever, non-standard view, without mentioning a word about his unusual premise? No.

BTW, AD’s elaboration on why it’s wrong is partly standard, well known stuff that was already discussed by the people making the estimates. He’s hoping readers don’t know that. Other parts are just ad hoc nonsense, like the claim that it might take a trillion neuron firings to do a NAND (or a single bit addition, or a 64 bit addition, or something – AD wasn’t very specific, and it doesn’t really matter since, as loose ballparks, 64 bit addition is like 100x harder than single bit addition and single bit addition is like 10x harder than NAND, so that could still leave AD claiming many billions of neuron firings do one NAND’s worth of math). What on earth does he think neurons do such that it could take a trillion of them to do one of the most simple, basic chunks of information processing that there is? How do you divide something that basic into a trillion parts? And neurons aren’t that trivial.

These claims have not been thought through. They are presented like some considered, serious, clever position held in advance, and they have a bunch of expertise signaling (like about ANNs and micro-instructions), but actually AD isn’t dumb enough for this to be a real, considered opinion of his. I’m guessing he’ll just reframe again if he replies to this, e.g. he might claim he was talking about how hard it is to do math using conscious intelligence and was not actually discussing the hardware capabilities of brains, even though the hardware capabilities of brains were the topic, and comparing conscious math to non-intelligent computer math would be an apples-to-oranges, irrelevant comparison.
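To make that ballpark concrete (using the loose ratios above, and reading AD’s trillion-firings claim as generously as possible, i.e. as applying to a 64 bit addition):

$$\frac{10^{12}\ \text{firings per 64-bit add}}{100 \times 10} \approx 10^{9}\ \text{firings per NAND}$$

Even on the most charitable reading, that still has AD claiming around a billion firings per NAND.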

Also, even with corrected numbers according to AD’s own methodology, I suspect AD might still be wrong about resource usage. It’s hard to tell, though, because he’s not giving any numbers. I doubt he’s done napkin math for this yet. If he does it, it’ll be ad hoc stuff made up at the last second.

And notice how AD’s position keeps changing. Now he’s suggesting that the amount of resources needed is much lower, and more viable to build, because a standard view about the compute power of a human brain is wrong. But that’s after previously saying he never meant to talk about building an AI with more compute power than a human brain, and that he just thought an AI would be a new app. If it’s just a new app, why even debate stuff like how much computing power is in 7 billion brains? He’s just looking for anything to attack, with no concern for consistency between his different posts.

The whole conversation doesn’t make sense because it isn’t a good faith discussion. It’s not conscious bad faith either. It’s controlled by AD’s subconscious. He has automated policies that try to make him look good and make other people look bad. It’s about social climbing. And letting your subconscious and your social intuitions make discussion decisions, instead of taking conscious control, is the standard way people end up discussing in bad faith.

A problem with the analysis I did in this topic is that it gives AD too much control of the narrative. He chose what to write about. I’m just responding to stuff he said. So I didn’t analyze key issues that he did not write about.

Why did AD pick this particular topic to post in? How does AD pick topics? What are his goals, problems and plans in life and for this topic? AD shows a pattern of responding in topics where he thinks he can look good, not in topics where there’s something important to learn. He’s not seeking truth or progress. Some other posters are similar.

It’s hard to talk about things people hide information about. Which is the point. People don’t want discussion of their actual constraints and underlying problems. They just expose downstream symptoms, like their own bad behavior, while hiding the causes. Addressing the downstream consequences, not the causes, wouldn’t be effective even if they actually listened and changed their mind, which they mostly don’t.

I think this is correct. I don’t think I can respond in more detail without doing it again.

When I pick a topic to respond to, I’m usually not thinking about trying to learn. My conscious reasoning is usually that I think I already know something that might offer value to the discussion.

My thought on this is: neurons don’t directly do NAND operations, and transistors & logic gates don’t directly do firings. You can emulate one with software running on the other, but it’s pretty inefficient. So I guessed at how many neurons would have to fire for a human to read or recall “2+2” and think “4”.
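For illustration, here’s a minimal sketch, assuming the textbook McCulloch-Pitts idealization of a neuron (a toy model, not a claim about real biology). Under that idealization, a single threshold unit computes NAND directly, which is roughly the one-firing-per-basic-operation mapping the standard estimates use; and emulating even this toy unit in software already costs several machine operations per “firing”, which gives some feel for the inefficiency of emulating one substrate on the other.

```python
# Minimal sketch, assuming the McCulloch-Pitts idealization of a neuron
# (a toy model, not real biology). A single threshold unit can compute
# NAND directly; note that emulating even this toy unit in software
# costs several machine operations (multiplies, adds, a compare) per
# "firing".

def threshold_unit(inputs, weights, bias):
    """Return 1 (fire) if the weighted input sum plus bias is positive."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def nand(a, b):
    # Both inputs inhibit (weight -2); the constant drive of +3 means
    # the unit fires unless both inputs are active, i.e. NAND.
    return threshold_unit([a, b], weights=[-2, -2], bias=3)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} NAND {b} = {nand(a, b)}")
# Prints: 0 NAND 0 = 1, 0 NAND 1 = 1, 1 NAND 0 = 1, 1 NAND 1 = 0
```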

Ya. Until now I hadn’t even clicked the link at the top of ET’s original post. I think my approach was something like: read ET’s post at a high level, think “ya, but AIs would still be cool cuz state saves”, then go look for material to quote & talk about that.

Now you’re responding more without addressing why you changed your mind or speaking to your goals, project plans, asks, offers, etc.

I didn’t change my mind about my automated policies. I think my automated policies will continue to exert control over my responses.

I’m responding more anyway because I think not responding more would be worse than responding without fixing the automated policies.

My ongoing situation is that I don’t have serious goals or plans related to CF, and I don’t have a good idea about asks or offers either.

Some of my unserious goals related to CF are:

- Have an intellectual hobby
- Learn things in an easy and unplanned/unstructured way
- Stay informed about ideas that could be really important

Bigger picture, I still have the goal of helping life extension research with more than just small donations. I had formulated a plan and written some business collateral involving SENS. I was preparing to go public[1] with it when the Aubrey harassment stuff happened. Now I don’t know what to do.

[1] I have a problem with not knowing how to pseudonymously interact with CF with regard to content that is or will be tied to me IRL. I hadn’t figured out if or how I was going to post links to the material on CF.

That sucks, though I suppose announcing your plan and then having the harassment stuff come out would have been even worse!

Typo? I don’t know what you mean.

2nd CF account?

Had you planned, done or written anything to address my pre-existing criticisms of AdG and SENS, from years ago?


Not a typo. What I meant specifically is that I had written some web pages and content for marketing letters.

In response to this question I went looking to see if I was misusing “collateral”. I found:

> informational materials (such as brochures and fact sheets) used in selling a product or service to a prospective customer or buyer

I don’t think I’m misusing “collateral”, but I don’t have any other guess about what went wrong.

That definition is not in some standard dictionaries at all, and is not the well known meaning of the word. I’d never heard of it.

Yeah, it was a new use of the word to me, and I like words.