Would anyone double-check that this quote from BoI is fully accurate? I copied it in parts instead of all at once (I changed my mind a few times about what to include), and there are a bunch of ellipses. I double-checked it myself a few weeks ago when I put it together, but I thought it needed another check, and that fresh eyes/a fresh pov would be good. Thanks.
No paragraphs should be skipped; if one were, I'd have put an ellipsis as its own para. The italics should be right.
The jump to universality in digital computers has left analogue computation behind. That was inevitable, because there is no such thing as a universal analogue computer.
That is because of the need for error correction: during lengthy computations, the accumulation of errors due to things like imperfectly constructed components, thermal fluctuations, and random outside influences makes analogue computers wander off the intended computational path.…
For example, tallying is universal only if it is digital. Imagine that some ancient goatherds had tried to tally the total length of their flock instead of the number. As each goat left the enclosure, they could reel out some string of the same length as the goat.…
… in analogue computation, error correction runs into the basic logical problem that there is no way of distinguishing an erroneous value from a correct one at sight, because it is in the very nature of analogue computation that every value could be correct. Any length of string might be the right length.
And that is not so in a computation that confines itself to whole numbers. Using the same string, we might represent whole numbers as lengths of string in whole numbers of inches. After each step, we trim or lengthen the resulting strings to the nearest inch. Then errors would no longer accumulate. For example, suppose that the measurements could all be done to a tolerance of a tenth of an inch. Then all errors would be detected and eliminated after each step, which would eliminate the limit on the number of consecutive steps.
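Aside (mine, not part of the quote): here's a minimal Python sketch of that contrast. The goat length, step count, and noise level are invented; the point is just that the analogue tally accumulates a random walk of error, while the digital tally, trimmed to the nearest inch after each step, stays exact because each step's error (0.1 inch) is under half the grid spacing (0.5 inch).

```python
import random

random.seed(0)
STEPS = 1000
GOAT_LENGTH = 30.0   # hypothetical: every goat is exactly 30 inches long

analogue = 0.0
digital = 0
for _ in range(STEPS):
    measured = GOAT_LENGTH + random.uniform(-0.1, 0.1)  # 0.1-inch tolerance
    analogue += measured        # analogue: errors accumulate without limit
    digital += round(measured)  # digital: trim/lengthen to the nearest inch

print(abs(analogue - STEPS * GOAT_LENGTH))  # random-walk drift, a few inches
print(abs(digital - STEPS * GOAT_LENGTH))   # exactly 0.0: errors eliminated
```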
So all universal computers are digital; and all use error-correction with the same basic logic that I have just described, though with many different implementations. Thus Babbage’s computers assigned only ten different meanings to the whole continuum of angles at which a cogwheel might be oriented. Making the representation digital in that way allowed the cogs to carry out error-correction automatically: after each step, any slight drift in the orientation of the wheel away from its ten ideal positions would immediately be corrected back to the nearest one as it clicked into place.…
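Aside (mine, not part of the quote): the cogwheel version of the same logic, sketched with assumed numbers (36-degree detents for ten decimal digits and 1.5 degrees of drift per step; the book gives no figures):

```python
def snap(angle):
    """Click the wheel back to the nearest of its 10 ideal positions."""
    return round(angle / 36) % 10 * 36   # ideal positions: 0, 36, ..., 324 degrees

angle = 0
for _ in range(103):
    # Advance one digit (36 degrees) with 1.5 degrees of mechanical drift;
    # any drift under half a detent (18 degrees) is erased by the click.
    angle = snap(angle + 36 + 1.5)

print(angle)  # 108, i.e. exactly digit 3: no drift survived 103 steps
```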
Fortunately, the limitation that the information being processed must be digital does not detract from the universality of digital computers – or of the laws of physics. If measuring the goats in whole numbers of inches is insufficient for a particular application, use whole numbers of tenths of inches, or billionths… any physical object … can be simulated with any desired accuracy by a universal digital computer. It is just a matter of approximating continuously variable quantities by a sufficiently fine grid of discrete ones.
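Aside (mine, not part of the quote): the "sufficiently fine grid" point in one loop. The quantity and the grids are arbitrary; the worst-case error at each grid is half the grid spacing, so it shrinks without limit as the grid is refined:

```python
value = 3.14159265358979   # any continuously variable quantity

for decimals in range(7):             # whole inches, tenths, ..., millionths
    grid = 10 ** -decimals
    approx = round(value / grid) * grid   # snap to the nearest grid point
    print(f"grid {grid:g}: error {abs(approx - value):.1e}")
```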
Because of the necessity for error-correction, all jumps to universality occur in digital systems. It is why spoken languages build words out of a finite set of elementary sounds: speech would not be intelligible if it were analogue. It would not be possible to repeat, nor even to remember, what anyone had said.
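Aside (mine, not part of the quote): the speech claim falls out of the same sketch. Pass a "word" down a chain of retellings, each adding noise; a digital listener snaps every sound to the nearest member of a finite repertoire, while an analogue listener passes the raw signal on. The repertoire, word, and noise level are all invented for the illustration.

```python
import random

random.seed(1)
SOUNDS = [0.0, 1.0, 2.0, 3.0]   # a finite repertoire of elementary sounds
word = [2.0, 0.0, 3.0, 1.0]     # the original utterance

def nearest(heard):
    """Snap a heard sound to the closest elementary sound."""
    return min(SOUNDS, key=lambda s: abs(s - heard))

analogue, digital = word[:], word[:]
for _ in range(200):   # 200 retellings, each adding noise of up to 0.2
    analogue = [s + random.uniform(-0.2, 0.2) for s in analogue]
    digital = [nearest(s + random.uniform(-0.2, 0.2)) for s in digital]

print(analogue)  # has wandered: the original word is unrecoverable
print(digital)   # identical to word: every retelling corrected the noise
```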