The concept of a thinking machine is an old one, and speculation about how such machines might affect humanity is nearly as old. Before diving into some of the details underlying contemporary discussions of the dangers and benefits of AGI, it’s worth taking a brief look back at some of the milestones along the path that brought us to where we are today.
One of the earliest documented examples of modern speculation on intelligent machines is an 1863 newspaper essay entitled “Darwin Among the Machines,” the first of a series of similarly themed essays that the English author Samuel Butler wrote and later combined as the basis of his satirical 1872 novel Erewhon. Having recently read Darwin's Origin of Species, Butler pondered whether we might be responsible for the same sort of evolutionary development in machines.
He suggested that eventually machines would gain intelligence, develop consciousness, and be able to self-replicate. He also concluded that humans would ultimately be subservient to these machines, although he postulated that such subservience might still leave humans better off than they were in his own time.
While Butler took the concept of evolution quite seriously, his book satirized the theory as well as Victorian society in general, so it's hard to know exactly how much credence he gave his own speculation on the future of humans and machines. But whether or not he took these ideas seriously, they would eventually percolate up again during the early days of computer science in the 1940s and 1950s.
Scientific Speculation on AI Begins
John von Neumann is best known in computer science as the originator of the von Neumann architecture, the standard design used in nearly every modern computer, and he was also one of the earliest scientists to ponder the creation of a computer that could mimic the human brain. In 1948 he wrote a paper on what he termed artificial automata, man-made systems that could function similarly to biological systems. In particular, he compared the functioning and complexity of computers to that of the human brain, and it's clear that the topic remained on his mind for many years; his last written work, The Computer and the Brain, was published posthumously in 1958 and carried this speculation further.
Von Neumann also originated the earliest concept of the technological singularity and was the first person to use the word singularity in referring to it. In a conversation later recounted by his close friend, the mathematician Stanislaw Ulam, von Neumann hypothesized that the accelerating progress of technology and changes in society were approaching some essential singularity in the history of the human race beyond which society as we know it could not continue.
Alan Turing was another titan of computer science in those early days, and his contributions to the field form a significant chunk of the foundation underlying modern computation theory. Turing did his doctoral work at Princeton, where von Neumann offered him a post as his assistant (an offer Turing declined), and he is best known to the general public for his central role in breaking Nazi Germany's Enigma ciphers during World War II. Like von Neumann, with whom he remained in communication for many years, Turing was very interested in the possibility of using a computer to mimic the processes of the brain, and in his 1950 paper "Computing Machinery and Intelligence" he posed the question, "Can machines think?"
In trying to define his terms and make the question less ambiguous, Turing went on to suggest a revised version of what he called the Imitation Game as a way to judge whether a machine was actually thinking. In the original game, a man and a woman are concealed from a judge and communicate only through the written (or preferably typewritten) word. The judge asks questions of each and eventually surmises which is the man and which is the woman. The twist is that the man tries to fool the judge while the woman does her best to help the judge choose correctly.
Turing proposed replacing one of the contestants with a digital computer and having the judge attempt to discern which was the human and which was the machine. This test has become famous as the Turing Test for artificial general intelligence, and its usefulness as a true test of human-like intelligence may come up in a future post. But as discussed in a previous post, it’s probably easier than Turing suspected to fool people into attributing human-like intelligence to something that merely mimics it.
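The structure of the test itself is easy to make concrete. Below is a minimal sketch of the machine-vs-human version; the function names, the canned contestants, and the toy judge are all my own illustration, not anything from Turing's paper:

```python
import random

def imitation_game(judge_guess, human_reply, machine_reply, questions):
    """One round of Turing's machine-vs-human Imitation Game.

    The judge sees only two anonymous text transcripts, labeled "A"
    and "B", and must guess which one came from the machine.
    """
    # Randomly assign hidden labels so ordering reveals nothing.
    assignment = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        assignment = {"A": machine_reply, "B": human_reply}

    # Each contestant answers the same questions over a text-only channel.
    transcripts = {
        label: [(q, reply(q)) for q in questions]
        for label, reply in assignment.items()
    }

    guess = judge_guess(transcripts)  # the judge returns "A" or "B"
    machine_label = "A" if assignment["A"] is machine_reply else "B"
    return guess == machine_label     # True if the judge spotted the machine

# A trivial demonstration: this "machine" parrots one stock phrase, so a
# judge who simply looks for variety in the answers wins every time.
questions = ["What is your favorite memory?", "Describe the smell of rain."]
human = lambda q: f"Hmm, '{q}'... that takes me back."
machine = lambda q: "That is an interesting question."
judge = lambda ts: min(ts, key=lambda label: len({a for _, a in ts[label]}))
print(imitation_game(judge, human, machine, questions))  # True
```

A real judge would of course probe far more adversarially; the point here is only the shape of the protocol: anonymous text channels, identical questions, and a final guess.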
While the Turing Test is the most widely known element of the paper, perhaps more important is that the paper was among the first to closely examine the potential requirements of a thinking machine and the first to address the many arguments against such a machine being possible at all.
Irving John (I.J.) Good worked with Turing during World War II, and in 1965 he published a paper entitled "Speculations Concerning the First Ultraintelligent Machine" based on earlier lectures he'd given. It contained broad speculation on the nature, importance, and value of a machine possessing a high level of intelligence, and it also introduced one of the key ideas of today's AGI debate: the concept of an intelligence explosion.
Good predicted that if we designed a sufficiently intelligent machine, it could design an even more intelligent machine, which could in turn design a still more intelligent one, and so on, a process that would rapidly produce a vast intelligence far beyond that of humans. With admirable prescience, Good also suggested that any ultraintelligent machine would be a massively parallel, highly connected, self-modifying neural network trained through positive and negative reinforcement of artificial synapse strengths, much like the machine learning technology that has proven so useful today.
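The mechanism is a feedback loop: each generation's design ability determines how much smarter the next generation can be. As a purely illustrative toy model (my own construction with arbitrary numbers, not anything from Good's paper), the compounding looks like this:

```python
def intelligence_explosion(initial=1.0, design_gain=1.15, generations=20):
    """Toy model of Good's intelligence explosion.

    Assumes each machine designs a successor whose intelligence is a
    fixed multiple (design_gain) of its own. The specific numbers are
    arbitrary; the point is the shape of the curve, not the values.
    """
    history = [initial]
    for _ in range(generations):
        # The current machine designs a somewhat smarter successor.
        history.append(history[-1] * design_gain)
    return history

curve = intelligence_explosion()
print(f"After 20 generations: {curve[-1]:.1f}x the starting intelligence")
# A modest 15% gain per generation compounds to roughly 16x in 20 steps;
# if the gain itself grew with intelligence, the curve would steepen far
# faster, which is the runaway scenario the Dystopians worry about.
```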
The Technological Singularity
The inevitability of an intelligence explosion following the creation of an initial AGI system was also a key component of mathematician, computer scientist, and science fiction author Vernor Vinge's 1993 paper on the technological singularity. Although he spent little time examining the construction of the initial ultraintelligent system, he added to and popularized the overall discussion by speculating on our inability to contain such a system, the possibilities of nanotechnology for good and bad, technological unemployment caused by AI, biological intelligence amplification, merging of humans and machines, and the possibility of immortality through technology.
Vinge felt that if such a thing as the singularity were possible, it would also be inevitable. While his speculations helped provide some of the foundational ideas of both AI Dystopians and AI Utopians, Vinge himself didn’t particularly lean one way or the other in his speculation. By the end of the 1990s, however, the dichotomy between the two camps began to emerge. Although each side was inspired by the same speculations, they had already started down paths to radically different conclusions.
The Roots of Paradise
Among the AI Utopians, no one has been more influential than computer scientist and inventor Ray Kurzweil. In his books and public speaking, Kurzweil has enthusiastically promoted technologies like biotechnology, nanotechnology, and artificial intelligence, along with his belief that progress in these fields is about to accelerate dramatically. He has argued forcefully that the technological singularity is a positive step forward for humanity and that it will arrive within the next several decades.
Kurzweil bases this timetable on the exponential computational progress predicted by his Law of Accelerating Returns and on our increasing ability to scan the brain and understand the processes by which it functions. His surprisingly concrete timeline includes not only artificial human-level intelligence by 2029 but also the arrival of the singularity itself by 2045.
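To see why exponential assumptions yield such aggressive dates, it helps to run the arithmetic. The sketch below is mine, not Kurzweil's; the two-year doubling period is an assumption chosen purely for illustration, but under any fixed doubling time, a couple of decades covers enormous ground:

```python
# Illustrative arithmetic only: the doubling period here is an
# assumption chosen for this sketch, not a figure taken from Kurzweil.
doubling_period_years = 2
horizon_years = 20                        # e.g. roughly 2025 to 2045

doublings = horizon_years / doubling_period_years
growth = 2 ** doublings
print(f"{doublings:.0f} doublings over {horizon_years} years "
      f"-> {growth:,.0f}x improvement")   # 10 doublings -> 1,024x
```

Whether real-world progress actually tracks such a curve is, of course, precisely what the two camps dispute.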
This timeline has in fact become one of the main irritants to detractors of the AI Utopians, who claim that the AI Utopian conclusions are simply conclusions of convenience: it would be very convenient if all the technological benefits associated with the singularity arrived in time for those speculating about them to benefit personally. This is particularly true when it comes to talk of achieving immortality and eternal youth.
The Roots of Apocalypse
Although significantly more vocal today, the AI Dystopians have had a more gradual ramp-up into the public arena. Certainly a major milestone for their view was the 2000 essay “Why the Future Doesn’t Need Us,” published in Wired magazine, in which computer engineer, entrepreneur, and venture capitalist Bill Joy ominously warned about the same technologies that Kurzweil celebrated.
Joy had a distinguished reputation in the field of computer science, having co-founded Sun Microsystems, worked on the development of the Unix operating system at U.C. Berkeley, and helped guide the development of the Java programming language, a key component of the early World Wide Web. His descriptions of the dangers he saw looming before us and the palpable fear he expressed made a big impression on many within the tech community and beyond.
Another early voice of alarm was that of technology philosopher Eliezer Yudkowsky. He'd entered the discussion with vigor while still in his teens and initially promoted the singularity and its formative technologies with excitement. But as the new millennium dawned, his tone and his thoughts on the subject turned darker. His cautionary speculations about these technologies, especially artificial intelligence, have inspired much of today's discussion on the dangers of AGI and the possibility of uncontrollable superintelligent machines.
Other early leading voices of AI Dystopianism included computer scientist Steve Omohundro and philosopher Nick Bostrom. Both speculated on what we might expect from actual AGI systems, especially given what they considered the possibility, if not the outright certainty, that such systems would quickly and repeatedly reconfigure themselves to become ever more intelligent along the lines of Good's intelligence explosion.
Bostrom in particular built on Yudkowsky's speculations and explored in depth the implications of superintelligent machines and their impact on humanity. His 2014 book Superintelligence: Paths, Dangers, Strategies was a seminal work of AI Dystopianism and a significant influence on many people inclined to lean in that direction (Elon Musk perhaps being one of the most notable).
Other prominent voices taking up the warning call included computer scientist Stuart Russell and physicist Max Tegmark. Both scientists have close ties to the Future of Life Institute and its recent open letter calling for a pause on powerful AI research (as well as an earlier editorial about the dangers of AI). Both the open letter and the editorial have been discussed in previous posts (here and here, respectively). Over time these warnings have taken on a darker and darker tone as contemporary AI systems rack up one impressive achievement after another and AGI begins to seem not only possible but imminent.
Science and Belief
To many, talk of utopia or doomsday has seemed uncomfortably close to religion, and neither outcome has generally been considered a valid topic for scientific discussion. AI Utopian and AI Dystopian speculation has been dismissed as fantasy, a case of either wishful thinking or delusional angst. And based on the historical record, this certainly makes sense: utopia seems as far away as ever, and many a day predicted as our last has come and gone without a ripple.
It is easy to see the similarities between AI Utopianism or AI Dystopianism and faith-based belief systems. AI Utopian ideas have occasionally been characterized as “the rapture of the nerds,” a technology-based faith no different from any other religion. Among those beliefs: immortality for those alive today and possibly even for those who've already died, material abundance without toil, and an omniscient and omnipotent being to usher in all these goodies and guide us along a beneficent path.
These are not new ideas. The idea of immortality, for example, is well represented in Christianity but dates far back into history, at least as far back as the 2nd millennium BCE and the Babylonian Epic of Gilgamesh. The AI Dystopian warnings of apocalypse and destruction are also well represented in religion and mythology, from Pandora's Box to the Great Flood of the ancient Babylonians and Israelites to Ragnarök of Norse mythology to Christian Armageddon.
Yet, while both the conclusion-of-convenience objection to utopianism and the shoddy record of doomsday predictions are valid reasons for skepticism, neither constitutes actual evidence, or even a reasoned argument, against these ideas. As scientist and science popularizer Carl Sagan was fond of saying, extraordinary claims require extraordinary evidence. But the flip side also holds: extraordinary claims are not wrong simply because they're extraordinary.
It’s also worth pointing out that although people of faith and disciples of these technological belief systems share similar strains of hope and fear, there is a very significant difference between AI Utopianism or AI Dystopianism and religion or myth: the technological belief systems are, at least in part, physically possible as far as we know. They don’t involve the supernatural, just the very, very difficult. The question is how much they involve the plausible or the likely.
Unfortunately, much of the discourse in this area has consisted of the two sides talking past one another, while the general population remains unaware of why either side thinks the way it does. In examining the topic more deeply, it’s worth taking a look at the history, evidence, and logical framework that underlie much of the speculation, and examining how well that speculation holds up.
In upcoming posts I’ll focus attention on the foundations underlying the beliefs of the AI Dystopians and the AI Utopians rather than just the tenuous structures built upon them. These underlying concepts revolve around goals, rationality, self-improvement, self-preservation, resource accumulation, human control, and value alignment.
Although many of these concepts are at the root of both camps, I'll be examining them more in the context of the AI Dystopians, simply because their contributions to the discourse are greater in volume (in both senses of the word), more detailed, and more resolute in their conclusions. And besides, if this all happens to lead to Utopia…well, great. Enjoy! That's not really a problem in the way that rampaging AGI systems with nanotechnology death clouds would be.
Speculation on the future of AGI can seem far-fetched at times, but it's worth keeping in mind that many technological advances, whether powered flight or nuclear energy, seemed impossible until they became possible. So we should view the conjectures of AI Dystopians and AI Utopians with an open mind but a skeptical eye, examine the evidence that exists, and look back over the historical record for relevant clues. We should judge the internal logic of the conjectures and determine whether they make sense given the knowledge we already possess.
Importantly, in examining these concepts, we should consider not only the evidence and implications that are promoted but also those that are ignored. While any definitive truth in this discussion may prove elusive, we should nevertheless strive to distinguish between the probable, the possible, and the impossible.