Climbing the Tree and Reaching the Moon
Foundations of AI Dystopianism III: Self-Improvement (conclusion)
This is the conclusion of a three-part discussion on one of the cornerstones of both AI Dystopian and AI Utopian thinking: the idea that an Artificial General Intelligence system will inevitably improve itself into superintelligence and achieve God-like capabilities by doing so. A vast body of speculation has been built on this idea, and many extraordinary conclusions have been drawn.
In the first post on this topic, I brought up computer scientist Steve Omohundro’s influential 2008 paper The Basic AI Drives, which discussed the idea that an AGI will be driven towards self-improvement by its very nature, and examined some of the practical assumptions underlying this conjecture.
The second post discussed some of the conceptual assumptions built into this idea, including its reliance on an unlikely model for intelligence and a narrow definition of goals. That post also highlighted some logical inconsistencies in what would drive the system towards self-improvement, as well as questionable characterizations of intelligence itself.
Slow or Fast
The core belief underlying warnings of runaway self-improvement in AGI systems is that they are not only possible but inevitable. This is taken as axiomatic and not worth debating, and speculation quickly moves on to how fast such an intelligence explosion will happen and, at least for AI Dystopians, what sort of existential catastrophe will inevitably result.
The speed of self-improvement is usually described as either a soft/slow takeoff, taking years or decades, or a hard/fast takeoff, taking minutes or days, and the majority opinion among both AI Dystopians and AI Utopians leans towards the latter.
While Vernor Vinge discussed this in his 1993 paper on the technological singularity, Nick Bostrom dove more deeply into the topic in his 2014 book Superintelligence: Paths, Dangers, Strategies, characterizing a fast takeoff as so rapid that "Nobody need even notice anything unusual before the game is already lost."
Hitting the Wall of Reality
This certainly sounds dangerous. Yet, as discussed in this series of posts, all sorts of practical and conceptual considerations are usually glossed over in such speculation. Beyond the previously discussed constraints that might severely limit an AGI system's ability to modify its own hardware or software, there are other factors restraining the system that are worth considering as well.
For example, the sort of raw intelligence hypothesized here is simply cognitive potential rather than anything applicable to actions or achievements. Such intelligence is an empty vessel until it's filled with knowledge, experience, learned skills, and self-generated cognitive associations.
I've previously proposed that it would certainly be possible to construct the AGI system without access to the Internet and to manage it so that it couldn't simply talk an impressionable human into granting that access. But even if we assume it somehow did gain access to the Internet, this would only give it access to already existing knowledge.
While a superintelligent entity could likely make a lot of connections and inferences from existing data that so far have eluded humans, it would still have to interact with the physical world to create new knowledge. It would have to build things, experiment, explore, measure, analyze, etc., all of which are difficult and time-consuming.
These are things that involve external constraints, many of which are difficult to speed up at all, let alone exponentially. Even granting that the system may be able to run internal simulations in some areas to generate new data, these will be limited to narrow domains where sufficient data already exists to construct useful models on which the simulations can run.
This need to interact with the real world is not a small factor in any talk of fast takeoffs and intelligence explosions, yet the problem is continually given short shrift. One take is that the AGI system will trick or cajole a large number of humans into helping it, including humans with the competence and access to the resources needed to manufacture the upgraded components, as well as humans with the competence and access needed to actually implement the upgrades.
Another proposition is that the AGI system will trick a small number of humans into somehow creating nanotech molecular assemblers that will manufacture all the hardware, and that among this small group of humans there is also at least one who has the competence and access to implement the upgrades. Left out is any consideration of the real-world practicalities involved in the manufacture, assembly, transportation, and integration of these upgrades, as well as how these tasks are hidden from or defended against the masses of non-tricked people who would undoubtedly notice all this taking place.
Creation is Complex
In 1958 free market proponent Leonard Read wrote the well-known essay I, Pencil to illustrate not only his love of capitalist markets but also the incredible amount of knowledge and number of people needed to make even an object as simple as a pencil. In the essay he points out that no one person has all the knowledge to make a pencil or even a significant portion of it, and certainly no one person possesses the raw material gathering, manufacturing, and transportation resources to create a pencil.
The amount of time, resources, physical capabilities, and knowledge needed to create unimagined new technology, not to mention to build military forces sufficient to overcome the human species, is unfathomable. The two most popular ways to brush off this objection are simply to invoke the words superintelligence and/or nanotechnology.
The first brush-off is what I'll call the Ant Argument: a superintelligent entity would be to us as we are to ants. In other words, humans are incapable of comprehending how much more advanced the thinking of a superintelligent entity would be, and so we can't project our own constraints onto something beyond our ken.
However, this line of thinking falls apart in two major areas. First, it assumes the starting point is a superintelligent system, yet the whole concept of an intelligence explosion is that the system doesn’t start out as superintelligent. This is a Circular Argument fallacy, in that the end result of the intelligence explosion is necessary to ignite the intelligence explosion in the first place.
The second failing of the Ant Argument is that while ants and humans have very disparate capabilities, both are still constrained by the physical universe and the nature of reality. As discussed in previous posts, superintelligence should not be equated with superpowers that can only exist in fantasy realms where time, space, and the laws of thermodynamics are whatever you want them to be.
There are undoubtedly some work-arounds and aspects of nature that we are not yet aware of. However, the underlying physical realities are what they are. It’s not clear in any of these intelligence explosion scenarios how the system, particularly before it’s engaged in enough self-improvement to attain superintelligence, is able to buck what we know of physical reality and get to the point at which its knowledge and capabilities allow it to perform feats indistinguishable from magic.
Nanotechnology is frequently proposed as the talisman that the superintelligent system will use to interact with the physical world and get things done. The meaning of the term here is not the contemporary co-opted usage that actually refers to nanomaterials. Instead, it’s the original meaning that refers to molecular assemblers and other nano-scale machines that can interact with the world to achieve tasks.
But this type of nanotechnology falls into the same camp as warp drives, force fields, teleportation, and zero-point energy. There is some scientific basis to the concepts, but one would be generous to say that they’re even in the very early theoretical stages at this point. Speculating on the dangerous use of any of these technologies is like Leonardo da Vinci speculating on the intricacies of air traffic control.
And, of course, we still run into the Circular Argument fallacy: how can the system use its superintelligence to create nanotechnology to implement its self-improvement before it has self-improved to superintelligence?
The Origin of the Intelligence Explosion
In wrapping up this discussion of self-improvement in an AGI system, it's worth examining in more detail I. J. Good's 1965 essay Speculations Concerning the First Ultraintelligent Machine, in which he introduced the concept of an intelligence explosion. Early in the paper, Good wrote:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.
Let's start by considering the opening proposition of creating the first ultraintelligent machine, one that far surpasses a human in every intellectual activity. This is a pretty tall order and one invariably glossed over by those using Good's intelligence explosion concept over the years.
The first step of the intelligence explosion is remarkably similar to the first step of a Steve Martin routine on how to become a millionaire and not pay any taxes: First, get a million dollars. The initial precondition is brushed aside to get to the much more dramatic and interesting explosion part of the idea, yet it would seem to be a sizable obstacle to overcome.
There's another subtle fallacy embedded in this paragraph that is never addressed but is reflected in much of the subsequent speculation about intelligence explosions. The initial machine is described as surpassing the intellectual capability of any person. Then, since designing machines is an intellectual capability of a person, the machine would surpass that, too.
But there’s some sleight of hand going on here: the initial ultraintelligent machine couldn’t possibly have been designed by any one person. Similarly, while the “design of machines” is an intellectual activity of a person, designing an ultraintelligent machine is definitely not the activity of “a” person.
We are led back to the story of the pencil and the amount of knowledge needed to create it. No one person could have designed and built the initial AGI machine. Nor could that one person have designed and built all the machines and processes needed to bring together the materials and knowledge that are needed in the design and building of the initial AGI machine. Thus, the conclusion that the ultraintelligent machine could also design and implement an ultraintelligent machine is not based on an established foundation. This Unproven Basis fallacy is woven into nearly all the AI Dystopian warnings and thought experiments using the concept of an intelligence explosion.
It’s interesting to note that while the concept of the intelligence explosion introduced by Good has been spread widely over the years, its basis in science fiction and the somewhat simplistic idea of the machine's being docile enough to let us control it is largely left untouched. This is not meant in any way as a criticism of Good's paper, which is certainly noteworthy given its publication date, nor a criticism of science fiction, for which I have a deep affection myself. It's simply worth noting that this single term has been hoisted from the paper while the roughly 30-page context in which it’s developed has, for the most part, been discarded.
Reaching the Moon
Belief in the inevitability of an intelligence explosion in any AGI system is held as gospel in AI Dystopianism. It's been a key underlying component of the rampant warnings from AI Dystopians regarding AGI's inevitable catastrophic impact on humanity. These warnings have been happily promoted by media outlets, and yet there has been remarkably little examination of the baseline validity of the belief system underlying them.
Speculating on fantastical science of the future is certainly worthwhile, but basing concrete conclusions and contemporary actions on that speculation is not, particularly when that speculation already has many evident flaws.
It's perhaps worth noting that Good states in the conclusion of his paper that:
It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make, since it will lead to an “intelligence explosion.”
Throughout the history of AI development, many in the field have underestimated the complexity of creating anything even approaching AGI. While progress has been made on the path to machines that seem smart in certain narrow ways, progress towards a machine possessing anything close to human-like intelligence, let alone ultraintelligence, has not advanced much in the nearly sixty years since Good's paper.
The philosopher Hubert Dreyfus, taking a particularly skeptical view of the AGI field, stated in his 1986 book Mind over Machine that:
Current claims and hopes for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon.
And though we may be higher up the tree than when Dreyfus wrote that sentence, we still have a very long way to go to reach the moon. Dreyfus was certainly more pessimistic about the possibility of creating AGI than I am, but his point is well taken.
Speculation about space travel that’s based on tree climbing is not very likely to be productive. Good himself made no bones about the hypothetical nature of his paper or its debt to science fiction, and it's worth keeping this in mind when basing dire real-world conclusions on the paper's more sensational aspects.