If you want to practice research and fact-checking skills, I recently checked some cites from:
This is not covered in my video or blog post about that paper. I got suspicious of the author’s thinking, and therefore of his scholarship quality, and wanted to check whether I was right that his cites would be untrustworthy.
I checked the cites for the first three quotes (unless I missed one while skimming for quotes). I specifically decided to check quote cites, rather than other cites, because I claim misquoting is common, so I wanted to check for that while I was at it. I’ll indicate which the quotes are (you may need to read more context):
That panel lamented “a tendency . . . to dwell on radical long-term outcomes of AI research, while overlooking the broad spectrum of opportunities and challenges” raised by machine intelligence (Horvitz and Selman 2012, p. 302).
This is perhaps the best-known form of the singularity hypothesis, introduced to philosophers by I.J. Good:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind (Good 1966, p. 33).
But here is the entirety of Chalmers’ argument against the possibility of diminishing growth:
If anything, 10% increases in intelligence-related capacities are likely to lead to all sorts of intellectual breakthroughs, leading to next-generation increases in intelligence that are significantly greater than 10%. Even among humans, relatively small differences in design capacities (say, the difference between Turing and an average human) seem to lead to large differences in the systems that are designed (say, the difference between a computer and nothing of importance). (Chalmers 2010, p. 27).