I don't think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally, not physically) as 100 people.
There's another value I would consider besides computing power. An AI would presumably be constructed on hardware which gives the AI and/or its creators a kind of lower-abstraction-level access to the mind that is currently impossible with human beings.
I think the main benefits of this low-level access would be forensic analysis and state saves/copies.
If an AI lived not for 80 years but for millennia, would those problems be massively amplified?
And
Many individuals become very irrational at some point in their life, often during childhood. If our super AI has a similar chance to become super irrational, it's very risky.
And
A super mind might be more vulnerable to some bad ideology - e.g. a religion - taking over the whole mind.
etc.
I can imagine making state saves periodically, and having the option to revert to a prior state if it is preferable. This of course raises the issue of who can do this - only the AI itself? Its creator(s)? Others? But regardless of that, it'd be a very valuable capability that humans utterly lack.
It would be possible for an AI to "unlearn" or "unsee" things in a way no human currently can. It could also A/B test different approaches to learning from a common starting point.
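Here's a purely hypothetical sketch of what reverting and A/B testing from a common starting point might look like, assuming an AI's full state could be captured and copied. The `train_on` and `evaluate` names are made up for illustration; nothing like this exists yet.

```python
import copy

# Hypothetical sketch: assumes an AI's complete mental state can be captured
# with something like a deep copy. "train_on" and "evaluate" are made-up names.

def ab_test_learning(mind, curriculum_a, curriculum_b, evaluate):
    checkpoint = copy.deepcopy(mind)    # save state at the common starting point

    branch_a = copy.deepcopy(checkpoint)
    branch_a.train_on(curriculum_a)     # one copy tries approach A

    branch_b = copy.deepcopy(checkpoint)
    branch_b.train_on(curriculum_b)     # another copy tries approach B

    # Keep the branch that turned out better; reverting to `checkpoint` is
    # also always an option, which is the "unlearn/unsee" capability above.
    return branch_a if evaluate(branch_a) >= evaluate(branch_b) else branch_b
```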
It's hard to make progress by yourself when you're way ahead of everyone else. You can do it to some extent, but the further you get from what anyone else understands or can help with, the more of a struggle it becomes. This could be a huge problem for the super mind. Especially if it works pretty well, it might have no colleagues it respects.
This problem could be addressed by copies. An AI copies itself at time n - suppose that's something equivalent to teenage years in humans. Then the copies each pursue different interests independently, but at some later period, time n+m, they could be colleagues. It's not guaranteed to work of course - one of the copies might get so far ahead of the other(s) as to still have this problem. But it'd be a good start.
Copies are independent beings. So that's going more in the direction of having many AIs that aren't that much more powerful than humans, rather than one super powerful AI. The more you go in that direction, the more you're losing some upsides (and downsides) compared to just making more humans.
You either have to split computing resources between copies or else build way more computing resources (but no matter what you build, you can either give it all to one AI or split it up; if single powerful AIs are great then splitting it up has a major downside, and if they aren't great then one wonders what's so inferior about billions of humans).
IDK if forensic analysis would be possible. Even if we had better access (e.g. DMA), that doesn't mean it's decodable. Save-states are maybe possible in that case (if you know the full state), but forensic analysis relies on deeply understanding the structure of, and relationships between, the data.
I kinda assume that computing resources will not be a significant constraint on either AI size or number once we know how to build AI software. Even if computing resources are a significant constraint on Day 1 of AI invention, I think it's highly likely they'll stop being a significant constraint some small number of years later. So the situation I expect is that AIs will be approximately as big as are found to be useful, and as plentiful as are found to be useful. Kinda like today, where the number of small cars and big buses isn't constrained by the availability of steel, so we don't choose whether to make more cars or buses based on how well they make use of the available steel supply.
You can't raise a human to age 25 then copy him a billion times. You can do that with AI. That means perhaps we could put a billion times the normal upbringing resources into the pre-copy phase of the AI's life.
I don't actually think resources are a billion-level constraint. Maybe 1,000 or 1 million times what is normally spent raising a human would exhaust our knowledge of possible upbringing improvements. And maybe it's better to make some or all of the copies at the equivalent of age 3, or 7, or 16 rather than 25. Whatever the resource level and "age", the ability to copy AIs would allow us to leverage early-life resource spending in ways that aren't possible with billions of humans.
I think that's right, but I also think the ability to compare save states from before and after single exposures / actions / etc. while learning might help us develop that understanding.
Yes, but it was several years ago, I don't remember it well, and I don't get the reason for the reference.
The Prestige has a twist at the end which reveals "the prestige" (the final part of a magic act) of the protagonist's magic act. The secret is that he creates a copy of himself. The on-stage magician is copied but doomed (he falls into a properly-locked tank where he drowns), while the copy is created about 100m away and can appear behind the audience faster than anyone could move between those two points.
The magician knows this will happen, but does many nights of shows, each of which dooms a copy of himself.
The idea of an AI doing A/B testing on itself - which in the conventional sense would mean the murder of the less successful copy - reminds me of that.
I think it's because the main constraints driving the cost of obtaining computing power are currently* knowledge and scale rather than physical resources or labor. And I don't think that's going to change significantly between now and AI invention + a few years.
By knowledge constraint I mean things like how to make smaller base components like transistors, how to make effective combinations of components in 3-D vs. 2-D, and how to connect building blocks together efficiently. Improving that kind of knowledge lets us build progressively more computing resources without using more physical or labor resources. We have lots of people working on growing that knowledge, and so I expect it to continue growing, though not in all areas equally. For example, we may be approaching a minimum practical size for a transistor soon, so the knowledge of how to make smaller transistors may stop growing, but knowledge about other ways of increasing computing power would continue.
By scale constraint I mean the need to recover the research and setup cost across a production run of chips before new knowledge renders that production capacity obsolete. If a certain process has been developed and tooled up to produce chips, then all else equal, the price of each of those chips will be lower if we have practical applications for a billion of them than if we can only productively use a million of them. So if AI substantially increases the practical applications for computing resources, over time I'd expect that to drive down the unit cost of such resources.
I'm using "currently" in a longer-term sense than some people might expect - I mean something like the last 10-40 years. There are some constraints in the chip supply chain right now that have arisen in the last year or so and are negatively affecting the price and availability of computing power today. Absent further government intervention, I expect those constraints to correct long before AI is invented.
The thought experiment was a situation very unlike today - in particular, with far, far, far more computing resources being built - but you're just trying to extrapolate from today, which is not a way to understand that situation and what the constraints would be.
I don't think I understand what you had in mind. From a compute resource perspective I was thinking of AI like a new killer app: the scenario where we figure out how to make the software for AI but otherwise not much changes.
What do you think would be different about AI from previous landmark software developments with regard to compute resources?
I don't understand what's going on. Here's a quote from the first paragraph of my post, with bold added. The basic topic of the post is an AI being powerful due to being faster (or, really, having more compute power - I brought up e.g. parallel computing too).
I don't think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally, not physically) as 100 people. If we get an AI that is a billion times faster at thinking, that would raise the overall intelligent computing power of our civilization by around 1/7th, since there are around 7 billion people. So that wouldn't really change the world. If we could get an AI that's worth a trillion human minds, that would be a big change - around a 143x improvement. Making computers that fast/powerful is problematic though. You run into problems with miniaturization and heat. If you fill up 100,000 warehouses for it, maybe you can get enough computing power, but then it's taking quite a lot of resources. It still may be a great deal but it's expensive. That sounds like probably not as big of an improvement to civilization as making non-intelligent computers and the internet, or the improvements related to electricity, gas motors, and machine-powered farming instead of manual-labor farming.
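As a quick sanity check, here's the arithmetic behind the "1/7th" and "143x" figures in that quote, spelled out using the quote's assumption of roughly 7 billion people, each counted as 1 unit of intelligent computing power:

```python
# Rough arithmetic behind the "1/7th" and "143x" figures in the quote above.
people = 7e9

billion_x_ai = 1e9                     # AI a billion times faster than one human
print(billion_x_ai / people)           # ~0.14, i.e. roughly a 1/7th increase

trillion_minds_ai = 1e12               # AI worth a trillion human minds
print(trillion_minds_ai / people)      # ~143, i.e. roughly a 143x improvement
```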
The original topic, from the comment I linked in the first paragraph, discussed e.g.:
An AI might run on faster hardware than the brain.
When I talk about a 100x AI that means 100x the compute power of one human brain. The 143x improvement over 7 billion people is from an AI with the compute power of a trillion human brains.
The basic topic was whether AI would be highly valuable specifically due to giving it better/faster computing hardware. And my basic point is that I don't see why that would be very valuable unless it has significantly more compute power than all living human beings, but that is a lot of compute power - so the idea of doing tons of clones makes that issue much worse.
Perhaps what's going on is we've been using one set of terms (compute power, compute resources) when we actually need two distinct concepts. I'll introduce new terms for the two concepts I'm thinking of to avoid confusion:
(1) Mathematical Capacity - Measured in something like teraflops. Refers to the number of some simple operations (adds, memory reads, etc.) a particular piece of hardware can perform.
(2) Thinking Capacity - Measured in something like human mind equivalents. Refers to the amount of intelligent activity (knowledge creation, creativity, etc.) a mind can perform.
If this distinction makes sense to you as it does to me, then I think what's going on is that your original post was talking about thinking capacity, but my constraint post was talking about mathematical capacity.
Do you agree that's at least part of what's going on?
OK, then I don't think I can make sense of the reason for the statements you quoted from your original post, like:
I guess it's true that, at least in terms of averages, 100 people can perform 100x the number of simple mathematical operations that 1 person can. But so what / why care? 1 electronic computer can perform a million-x, billion-x, or trillion-x the number of simple mathematical operations 1 person can.
The second sentence is even more confusing in the context of mathematical capacity:
Especially after the invention of electronic computers, people are not significantly valuable for their mathematical capacity. People are valuable for their thinking capacity. So would an AI be. Right? Any idea what I'm missing here?
Your Brain Is Still 30 Times More Powerful Than The Best Supercomputers. ... Based on current market prices, that means you stand to earn between $4,700 and $170,000 if you rented out your brain's computing power for an hour.
(There are other sources with different numbers for comparing human brains to silicon computers, and there are disagreements about how to generate the numbers, but there seems to be a decent amount of agreement about the general ballpark of these numbers. I've seen similar numbers from many sources.)
So an AI with a trillion times the hardware computing resources of a human being would be very expensive in terms of economic resources. Conservatively assume the low-end estimate of $4,700/hr and assume market prices don't go up when we start using more compute for AIs. Then conservatively factor this in:
Brains are also about 100,000 times more energy-efficient than computers
by assuming computers fully catch up on efficiency and that 100% of the price is due to electricity usage. So we'll divide the price by 100,000.
Then we're talking about $47 billion per hour worth of computing ($412 trillion per year). We could get a lot better at building and running cheap computers in other ways besides energy efficiency without making this anywhere near cheap. So making one AI like this would be a huge problem in terms of resource usage, and cloning it a bunch would be an immense problem.
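For concreteness, here's that back-of-envelope calculation written out. The $4,700/hr rental figure and the 100,000x energy-efficiency factor are the ones quoted above; the rest follows the conservative assumptions stated in the text.

```python
# Back-of-envelope version of the cost estimate above.
price_per_brain_hour = 4_700        # low-end rental estimate, $/hour
brain_equivalents = 1e12            # AI with a trillion times one human's compute
efficiency_catch_up = 100_000       # assume computers fully close the energy gap

cost_per_hour = price_per_brain_hour * brain_equivalents / efficiency_catch_up
cost_per_year = cost_per_hour * 24 * 365

print(f"${cost_per_hour:,.0f}/hour")   # $47,000,000,000/hour (about $47 billion)
print(f"${cost_per_year:,.0f}/year")   # roughly $412 trillion per year
```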
I've seen similar numbers but disagree with the methodology I believe is being used to generate them.
What I think they're doing: comparing the number of neurons in a human brain times the neuron firing rate with a computer's ability to run simultaneous micro-instruction threads times its clock rate.
Why I think that's what they're doing (from the linked article):
Neurons, for example, are the brain's building blocks and can only fire about 200 times per second, or 200 hertz. Computer processors are measured in gigahertz: billions of cycles per second.
Why I think it's wrong:
A neuron firing is not directly comparable to a CPU micro-instruction or anything you can measure in CPU clock cycles. On the one hand, I think it takes many (hundreds? thousands?) CPU micro-instructions to accurately simulate a single neuron's firing in something like an artificial neural network. And on the other hand, I think it takes many (millions? billions? trillions?) human neurons firing to perform a simple math operation that can be performed with a handful of CPU micro-instructions.
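To make the critiqued methodology concrete, here's roughly what that comparison looks like with example numbers. Only the 200 Hz firing rate is from the article; the ~86 billion neuron count and the 16-thread, 3 GHz CPU are my illustrative assumptions.

```python
# The kind of comparison I think the article is making, spelled out with
# example numbers (neuron count and CPU specs are illustrative assumptions).
neurons = 86e9
firing_rate_hz = 200
brain_events_per_sec = neurons * firing_rate_hz     # ~1.7e13 neuron firings/sec

cpu_threads = 16
clock_hz = 3e9
cpu_ops_per_sec = cpu_threads * clock_hz            # ~4.8e10 micro-ops/sec

# Dividing these gives a big number in the brain's favor, but the units aren't
# comparable: one neuron firing is not one micro-instruction, which is the
# objection above.
print(brain_events_per_sec / cpu_ops_per_sec)       # ~358
```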
I think if we want to compare computers & humans in a meaningful way we have to pick something useful and see how much of it they can do over a given amount of time or with a given amount of energy.
For mathematical capacity, that'd be some kind of basic math problem like an addition. In that context I think what I said is true:
For thinking capacity, electronic computers can currently do zero whereas a human brain can do 1, so the human is infinitely more powerful.
Once we figure out how to write AI software, we'll be able to directly compare things like how much energy is used by an AI with the equivalent of 1 human's thinking capacity. Before that we can only speculate, and the linked article doesn't give any indication it's even trying to do that.
This frames AD as understanding what ET talked about and then adding something that ET didn't talk about. It presents him as smart and knowledgeable, and possibly better than ET.
This was later revealed to be false - AD apparently had no idea what ET was talking about re computing power and should have been asking questions instead of moving on to more advanced points. (Although that later reveal might itself have been false.) But asking questions would imply worse things about AD's status and knowledge, so he has automated avoiding it.