Super Fast Super AIs [curi.us post]


I don’t think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally not physically) as 100 people.

There’s another value I would consider besides computing power. An AI would presumably be constructed on hardware which provides the AI and/or its creators a kind of lower abstraction level access to the mind that is currently impossible with human beings.

I think the main benefits of this low level access would be its availability for forensic analysis and state saves/copies.

If an AI lived not for 80 years but for millennia, would those problems be massively amplified?

And

Many individuals become very irrational at some point in their life, often during childhood. If our super AI has a similar chance to become super irrational, it’s very risky.

And

A super mind might be more vulnerable to some bad ideology – e.g. a religion – taking over the whole mind.

etc…

I can imagine making state saves periodically, and having the option to revert to a prior state if it is preferable. This of course raises the issue of who can do this: only the AI itself? Its creator(s)? Others? But regardless of that, it’d be a very valuable capability that humans utterly lack.

It would be possible for an AI to “unlearn” or “unsee” things in a way no human currently can. It could also A/B test different approaches to learning from a common starting point.
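As a rough sketch of how that might look mechanically – assuming a hypothetical `AIMind` whose whole mental state is a serializable object (every name here is invented for illustration, not a claim about how real AI would work):

```python
import copy

class AIMind:
    """Hypothetical AI whose entire mental state is an inspectable, copyable object."""

    def __init__(self):
        self.state = {"ideas": [], "experience": 0}

    def learn(self, lesson):
        # Stand-in for whatever learning actually is.
        self.state["ideas"].append(lesson)
        self.state["experience"] += 1

    def save_state(self):
        # Full snapshot: something humans have no analogue of.
        return copy.deepcopy(self.state)

    def restore_state(self, snapshot):
        # "Unlearn" everything since the snapshot by reverting to it.
        self.state = copy.deepcopy(snapshot)


mind = AIMind()
mind.learn("early education")
checkpoint = mind.save_state()  # periodic state save

# A/B test two approaches to learning from the same starting point.
branch_a, branch_b = AIMind(), AIMind()
branch_a.restore_state(checkpoint)
branch_b.restore_state(checkpoint)
branch_a.learn("approach A")
branch_b.learn("approach B")

# Or revert the original mind if something turns out badly.
mind.learn("bad ideology")
mind.restore_state(checkpoint)
```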

It’s hard to make progress by yourself way ahead of everyone else. You can do it some but the further you get away from what anyone else understands or helps with, the more of a struggle it becomes. This could be a huge problem for the super mind. Especially if it works pretty well, it might have no colleagues it respects.

This problem could be addressed by copies. An AI copies itself at time n - suppose that’s something equivalent to teenage years in humans. Then the copies each pursue different interests independently but at some later period, time n+m, they could be colleagues. It’s not guaranteed to work of course - one of the copies might get so far ahead of the other(s) as to still have this problem. But it’d be a good start.

Copies are independent beings. So that’s going more in the direction of having many AIs that aren’t that much more powerful than humans, rather than one super powerful AI. The more you go in that direction, the more you’re losing some upsides (and downsides) compared to just making more humans.

You either have to split computing resources between copies or else build way more computing resources (but no matter what you build, you can either give it all to one AI or split it up, and if single powerful AIs are great then splitting it up has a major downside, and if they aren’t great then one wonders what’s so inferior about billions of humans).

IDK if forensic analysis would be possible. Even if we had better access (e.g. DMA), that doesn’t mean it’s decodable. Save-states are maybe possible in that case (if you know the full state), but forensic analysis relies on deeply understanding the structure of, and relationships between, the data.

Have you seen The Prestige?

I kinda assume that computing resources will not be a significant constraint on either AI size or number once we know how to build AI software. Even if computing resources are a significant constraint on Day 1 of AI invention, I think it’s highly likely they’ll stop being a significant constraint some small number of years later. So the situation I expect is that AIs will be approximately as big as are found to be useful, and as plentiful as are found to be useful. Kinda like today, where the number of small cars and big buses isn’t constrained by the availability of steel, so we don’t choose whether to make more cars or buses based on how well they use the available steel supply.

You can’t raise a human to age 25 then copy him a billion times. You can do that with AI. That means perhaps we could put a billion times the normal upbringing resources into the pre-copy phase of the AI’s life.

I don’t actually think resources are a billion-level constraint. Maybe 1,000 or 1 million times what is normally spent raising a human would exhaust our knowledge of possible upbringing improvements. And maybe it’s better to make some or all of the copies at the equivalent of age 3, or 7, or 16 rather than 25. Whatever the resource level and ‘age’, the ability to copy AIs would let us leverage early-life resource spending in ways that aren’t possible with billions of humans.

I think that’s right, but I also think the ability to compare save states from before and after single exposures / actions / etc. while learning might help us develop that understanding.

Yes, but it was several years ago; I don’t remember it well and I don’t get the reason for the reference.

Why?

The Prestige has a twist at the end which reveals “the prestige” (the final part of a magic act) of the protagonist’s magic act. The secret is that he creates a copy of himself. The on-stage magician is copied but doomed (he falls into a properly-locked tank where he drowns), while the copy is created like 100m away and can appear behind the audience faster than anyone could move between those two points.

The magician knows this will happen, but does many nights of shows, each of which dooms a copy of himself.

The idea of an AI doing A/B testing on itself – which in the conventional sense would mean the murder of the less successful copy – reminds me of that.

I think it’s because the main constraints driving the cost of obtaining computing power are currently* knowledge and scale rather than physical resources or labor. And I don’t think that’s going to change significantly between now and AI invention + a few years.

By knowledge constraint I mean things like how to make smaller base components like transistors, how to make effective combinations of components in 3-D vs. 2-D, how to connect building blocks together efficiently. Improving that kind of knowledge lets us build progressively more computing resources without using more physical or labor resources. We have lots of people working on growing that knowledge, and so I expect it to continue growing, though not in all areas equally. For example, we may be approaching a minimum practical size for a transistor soon, so the knowledge of how to make smaller transistors may stop growing, but knowledge about other ways of increasing computing power would continue.

By scale constraint I mean the need to recover the research and setup cost across a production run of chips before new knowledge renders that production capacity obsolete. If a certain process has been developed and tooled up to produce chips, all else equal the price of each of those chips will be lower if we have practical application for a billion of them than if we can only productively use a million of them. So if AI substantially increases the practical applications for computing resources, over time I’d expect that to drive down the unit cost of such resources.
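As a toy illustration of that scale effect (the numbers are invented for the example, not real fab costs):

```python
# Unit cost = amortized research/tooling cost + marginal production cost per chip.
def unit_cost(fixed_cost, marginal_cost, volume):
    return fixed_cost / volume + marginal_cost

fixed = 5_000_000_000   # hypothetical research + setup cost for a process
marginal = 20           # hypothetical per-chip production cost

print(unit_cost(fixed, marginal, 1_000_000))      # $5020 per chip if we can only use a million
print(unit_cost(fixed, marginal, 1_000_000_000))  # $25 per chip if we can use a billion
```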

* I’m using “currently” in a longer-term sense than some people might expect - I mean something like the last 10-40 years. There are some constraints in the chip supply chain right now that have arisen in the last year or so and are negatively affecting the price and availability of computing power today. Absent further government intervention, I expect those constraints to correct long before AI is invented.

The thought experiment was a situation very unlike today – in particular with far far far more computing resources being built – but you’re just trying to extrapolate from today, which is not a way to understand that situation and what the constraints would be.

I don’t think I understand what you had in mind. From a compute resource perspective I was thinking of AI like a new killer app: the scenario where we figure out how to make the software for AI but otherwise not much changes.

What do you think would be different about AI from previous landmark software developments with regard to compute resources?

I don’t understand what’s going on. Here’s a quote from the first paragraph of my post with bold added. The basic topic of the post is an AI being powerful due to being faster (or actually having more compute power – I brought up e.g. parallel computing too).

I don’t think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally not physically) as 100 people. If we get an AI that is a billion times faster at thinking, that would raise the overall intelligent computing power of our civilization by around 1/7th since there are around 7 billion people. So that wouldn’t really change the world. If we could get an AI that’s worth a trillion human minds, that would be a big change – around a 143x improvement. Making computers that fast/powerful is problematic though. You run into problems with miniaturization and heat. If you fill up 100,000 warehouses for it, maybe you can get enough computing power, but then it’s taking quite a lot of resources. It still may be a great deal but it’s expensive. That sounds like probably not as big of an improvement to civilization as making non-intelligent computers and the internet, or the improvements related to electricity, gas motors, and machine-powered farming instead of manual labor farming.

The original topic, from the comment I linked in the first paragraph, discussed e.g.:

An AI might run on faster hardware than the brain.

When I talk about a 100x AI that means 100x the compute power of one human brain. The 143x improvement over 7 billion people is from an AI with the compute power of a trillion human brains.
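To spell out the arithmetic behind those figures (a back-of-the-envelope sketch using the post’s round numbers):

```python
humans = 7_000_000_000                  # ~7 billion people

# An AI with a billion times one brain's compute adds roughly 1/7th to the total.
billion_ai = 1_000_000_000
print(billion_ai / humans)              # ~0.14, i.e. about a 1/7th increase

# An AI worth a trillion human minds multiplies the total by roughly 143.
trillion_ai = 1_000_000_000_000
print((humans + trillion_ai) / humans)  # ~143.9x the current total
```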

The basic topic was would AI be highly valuable specifically due to giving it better/faster computing hardware. And my basic point is that I don’t see why that would be very valuable unless it has significantly more compute power than all living human beings, but that is a lot of compute power – so the idea of doing tons of clones makes that issue much worse.

Perhaps what’s going on is we’ve been using one set of terms (compute power, compute resources) when we actually need two distinct concepts. I’ll introduce new terms for the two concepts I’m thinking of to avoid confusion:

(1) Mathematical Capacity - Measured in something like teraflops. Refers to the number of some simple operations (adds, memory reads, etc.) a particular piece of hardware can perform.

(2) Thinking Capacity - Measured in something like human mind equivalents. Refers to the amount of intelligent activity (knowledge creation, creativity, etc.) a mind can perform.

If this distinction makes sense to you as it does to me, then I think what’s going on is your original post was talking about thinking capacity but my constraint post was talking about mathematical capacity.

Do you agree that’s at least part of what’s going on?

No, I’ve been talking about “(1) Mathematical Capacity”.

OK, then I don’t think I can make sense of the reason for the statements you quoted from your original post, like:

I guess it’s true that at least in terms of averages, 100 people can perform 100x the number of simple mathematical operations as 1 person can. But so what / why care? 1 electronic computer can perform a million-x, billion-x, or trillion-x the number of simple mathematical operations 1 person can.

The second sentence is even more confusing in the context of mathematical capacity:

Especially after the invention of electronic computers, people are not significantly valuable for their mathematical capacity. People are valuable for their thinking capacity. So would an AI. Right? Any idea what I’m missing here?

No.

Your Brain Is Still 30 Times More Powerful Than The Best Supercomputers. … Based on current market prices, that means you stand to earn between $4,700 and $170,000 if you rented out your brain’s computing power for an hour.

(There are other sources with different numbers for comparing human brains to silicon computers, and there are disagreements about how to generate the numbers, but there seems to be a decent amount of agreement about the general ballpark of these numbers. I’ve seen similar numbers from many sources.)

So an AI with a trillion times the hardware computing resources of a human being would be very expensive in terms of economic resources. Conservatively assume the low end estimate of $4700/hr and assume market prices don’t go up when we start using more compute for AIs. Then conservatively factor this in:

Brains are also about 100,000 times more energy-efficient than computers

by assuming computers fully catch up on efficiency and that 100% of the price is due to electricity usage. So we’ll divide the price by 100,000.

Then we’re talking about $47 billion per hour worth of computing ($412 trillion per year). We could get a lot better at building and running cheap computers in other ways besides energy efficiency without making this anywhere near cheap. So making one AI like this would be a huge problem in terms of resource usage, and cloning it a bunch would be an immense problem.
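Spelling that out as arithmetic (same conservative assumptions as above):

```python
price_per_brain_hour = 4_700   # low-end rental estimate from the article, in $/hour
brains = 1_000_000_000_000     # an AI with a trillion human brains' worth of compute
efficiency_gap = 100_000       # assume computers fully close the energy-efficiency gap

hourly = price_per_brain_hour * brains / efficiency_gap
yearly = hourly * 24 * 365

print(f"${hourly:,.0f} per hour")   # $47,000,000,000 per hour
print(f"${yearly:,.0f} per year")   # ~$411,720,000,000,000, i.e. about $412 trillion per year
```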

I’ve seen similar numbers but disagree with the methodology I believe is being used to generate them.

What I think they’re doing: Comparing the number of neurons in a human brain times the neuron firing rate with a computer’s ability to run simultaneous micro-instruction threads times its clock rate.

Why I think that’s what they’re doing (from the linked article):

Neurons, for example, are the brain’s building blocks and can only fire about 200 times per second, or 200 hertz. Computer processors are measured in gigahertz: billions of cycles per second.

Why I think it’s wrong:
A neuron firing is not directly comparable to a CPU micro-instruction or anything you can measure in CPU clock cycles. On the one hand, I think it takes many (hundreds? thousands?) CPU micro-instructions to accurately simulate a single neuron’s firing in something like an Artificial Neural Network. And on the other hand, I think it takes many (millions? billions? trillions?) human neurons firing to perform a simple math operation that can be performed with a handful of CPU micro-instructions.
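Here’s a rough sketch of why the units don’t line up. The neuron count and firing rate are the commonly cited ballpark figures; the per-neuron instruction counts are placeholders, just to show how much the conversion factors matter:

```python
# The naive methodology multiplies raw event rates on each side.
neurons = 86_000_000_000        # ~86 billion neurons (commonly cited ballpark)
firing_rate = 200               # ~200 Hz per neuron (from the quoted article)
brain_firings_per_sec = neurons * firing_rate      # ~1.7e13 firings/sec

cores = 16
clock_hz = 4_000_000_000        # 4 GHz
cpu_ops_per_sec = cores * clock_hz                 # ~6.4e10 micro-instructions/sec

print(brain_firings_per_sec / cpu_ops_per_sec)     # ~270: a nearly meaningless ratio

# One firing is not one micro-instruction. If simulating one firing takes
# ~1,000 instructions (a guess), the brain looks ~1,000x "bigger" than the
# naive ratio suggests; if a single addition takes billions of firings
# (another guess), the brain looks vastly smaller for useful math. The
# conversion factors dominate the answer, which is the point of the objection.
```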

I think if we want to compare computers & humans in a meaningful way we have to pick something useful and see how much of it they can do over a given amount of time or with a given amount of energy.

For mathematical capacity, that’d be some kind of basic math problem like an addition. In that context I think what I said is true: 1 electronic computer can perform a million-x, billion-x, or trillion-x the number of simple mathematical operations 1 person can.

For thinking capacity, electronic computers can currently do zero whereas a human brain can do 1, so the human is infinitely more powerful.

Once we figure out how to write AI software, we’ll be able to directly compare things like how much energy is used by an AI with thinking capacity equivalent to 1 human’s. Before that we can only speculate, and the linked article doesn’t give any indication it’s even trying to do that.

This frames AD as understanding what ET talked about and then adding something that ET didn’t talk about. It presents him as smart and knowledgeable, and possibly better than ET.

This was later revealed to be false – AD apparently had no idea what ET was talking about re computing power and should have been asking questions instead of moving on to more advanced points. (Although that later reveal might itself have been false.) But asking questions would imply worse things about AD’s status and knowledge, so he has automated avoiding it.

This dropped/ignored the context. The topic of the discussion involved using a large amount of compute power just for one AI.

While optimizing for saying clever things of his own, AD ignored what was being discussed. Or he never knew it and just pretended to.

Saying he doesn’t remember it well is an excuse to protect against the status loss from not being clever enough to understand the point.