The relationship between AI/AGI and society has been in the news a lot recently, but it's worth taking a brief moment to look back at one of the inciting incidents that set the tone for our current AI angst.
Over the last decade or so, mass media has painted an ever darker and more disturbing picture of AI/AGI to an increasingly alarmed general public. This picture is one of AI Dystopianism, the belief that superintelligent machines present an existential and potentially near-term threat to the human race. In future posts I'll examine some of the fundamental concepts underlying that belief, how well those concepts are supported by our current knowledge of reality, and whether they are logically consistent. In this post, I'd like to look behind current alarming headlines to some of the speculation that inspired them, speculation from the leading voices of AI Dystopianism.
The Rise of the AI Dystopians
To do this, let’s go back some years to what was perhaps the first contemporary warning aimed at the general public regarding the dangers of artificial intelligence, a Huffington Post op-ed published in 2014. With the headline "Transcending Complacency on Superintelligent Machines," authors Stephen Hawking, Max Tegmark, Stuart Russell, and Frank Wilczek timed their warning of the existential risk posed by superintelligent machines to coincide with the release of the film Transcendence.
In that movie, Johnny Depp plays a computer scientist who's mortally wounded by an anti-technology terrorist group that fears his work will lead to the subjugation of the human race. To save his life, the wife of Depp's character uploads his brain into an advanced computer system, and his intelligence rapidly develops far beyond that of the original Depp. AI Depp then goes on to engage in many nefarious actions that could definitely lead one to suspect that he/it is in fact trying to subjugate the human race. In the end (spoiler alert), AI Depp is defeated. However, this defeat is accompanied by a global electronic technology collapse — a glass half-full or half-empty ending depending on your point of view.
Before moving on, it has to be stated that the movie Transcendence makes absolutely no sense. It is as accurate to science and to reality in general as is a Marvel superhero movie. But regardless of this movie tie-in, the op-ed is worth examining, because in many ways it's a succinct representation of the arguments still used by those promoting AI Dystopianism as well as the manner in which those arguments are presented. As most of you reading this probably know, it has been superseded by a recent open letter that I’ll discuss in the next post.
The original op-ed starts with what it describes as a contemporary "arms race" of artificial intelligence driving us towards superintelligent machines. As evidence it points to unprecedented investment in AI as well as currently existing AI technology like self-driving cars, the computer system that won the game show Jeopardy!, and digital assistants like Siri, Google Now, and Cortana. The implication is that these achievements are steps up a ladder to superintelligence, and that we're getting closer and closer to the top rung through massive investment.
There’s a vast difference between today’s machine learning-based AI and artificial general intelligence (AGI), and it seems unlikely that general human-like intelligence will be achieved using anything like today’s machine learning technology. The recent release of GPT-4 has caused many to question that last proposition, but I believe it still holds true, and the reasons for this will be the subject of future posts.
The majority of computer scientists working in AI today readily admit that our current machine learning technology is not remotely close to general intelligence, and nobody has the slightest idea of how to bridge the gap. It's also worth noting that the vast majority of investment over the last ten years has gone into AI research and extremely little has gone into AGI research. Investors are interested in making better AI and near-term profits, not engaging in expensive, long-term AGI research.
Intelligence is Hard
Even the examples listed in the op-ed are hardly evidence that supports the overarching proposition. Fully autonomous cars, another topic to be discussed in a later post, have proved to be much more difficult to create than previously estimated. Nine years and billions of dollars' worth of direct investment after the publication of this essay, we're still relatively far away from having self-driving cars that could replicate the average person's ability to be plopped down anywhere in the country, day or night, rain or shine, and manage to get from one location to another.
The computer system that won at Jeopardy! was called Watson, and IBM's attempts to shoehorn the original technology into other areas proved fairly underwhelming. Siri, Google Now, Alexa, and Cortana were first released over the span of several years starting in 2010, and while they have all improved since initial release, they're still relatively limited in their functionality. None of them are in any real way a step towards human-level intelligence or anything close to it.
All these applications use substantially similar types of AI technology applied to different tasks, and this technology is one that's been developed and refined over many decades. As such, they represent new applications of this ongoing technological refinement rather than unique or revolutionary breakthroughs on the road to AGI. This isn't to diminish any of them as significant technological achievements, but simply to point out that they don't represent a series of steps along a concrete path to AGI.
Fuzzy Terms
Blurring the line between today's machine learning and the non-existent technology of general intelligence is a common rhetorical tactic that AI Dystopians have often used to make their case. It works well because most people are not technically inclined enough to recognize the difference between the two nor likely to realize that what they're being presented with is a classic Equivocation fallacy. The op-ed presents not only the direct conflation of AI and AGI described above but also some subtler variations. For example, reasonable concerns about military use of machine learning-driven weapons are conflated with superintelligent killer robots of the future. Concerns about worker displacement due to machine learning-based automation are used to hint at a future of humans relegated to serfdom under superintelligent machine overlords.
Such conflating of terms is just one of several rhetorical devices employed in the op-ed that have become mainstays of AI Dystopian discourse. Some others are the use of False Analogy, Hasty Generalization, Cherry Picking, and Appeal to Emotion fallacies; reference to purported oblivious or willfully negligent experts in the field; and a call to immediate action because no one is working to address the imminent danger. This last point is worth singling out because it reveals another trait the op-ed shares with many of the subsequent writings of AI Dystopians: self-refuting statements within the same essay.
For example, in the same sentence in which the claim is made that no one is taking these issues seriously, a list of prestigious organizations that are explicitly examining these issues is presented: the Centre for the Study of Existential Risk at the University of Cambridge, the Future of Humanity Institute at the University of Oxford, the Machine Intelligence Research Institute, and the Future of Life Institute. And these are not tiny, fly-by-night organizations; each has a multi-million-dollar endowment.
The list of organizations has grown even longer in the years since the op-ed was published, no doubt in part due to the sustained clamor from AI Dystopians. On top of that, most large companies engaged in AI research have their own AI ethics departments, including Google, Microsoft, Meta, and Amazon. One could certainly be skeptical of these efforts, but it’s simply untrue to say that no one in the field is paying attention to the dangers of AI.
Use Your Imagination
In its projection of future dangers, the op-ed ultimately boils down to nothing more than some very scary what-ifs. To illustrate the degree of risk we may be facing, it suggests that "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Of course, one can imagine many unpleasant future events. The makers of Transcendence certainly did exactly that.
In fact, science fiction movies and novels of the last two decades have concentrated almost exclusively on imagining all sorts of exceedingly unpleasant futures. But basing future actions on fanciful imaginings with no more foundation than movies like Transcendence seems somewhat foolhardy. For speculation to move out of the realm of fiction and into the realm of science, it must have supporting evidence and a theoretical foundation derived from that evidence. It also must be logically consistent and compatible with what we've already learned, with some degree of confidence, about the universe around us.
Expert Excesses
While the number of people who saw the original op-ed was fairly small, the appearance of Stephen Hawking's name on the author list helped propel it into the mainstream news. Not to diminish the stature of his fellow authors, but, to paraphrase a 1970s ad campaign, when Stephen Hawking talks about the end of humanity, people listen. Soon after it was published, the Daily Mail shouted, "Artificial intelligence 'could be the worst thing to happen to humanity': Stephen Hawking warns that rise of robots may be disastrous for mankind."
The BBC covered the essay under the headline, "Stephen Hawking: Beware smart machines" and accompanied the article with a de rigueur photo of the killer robot from the Terminator movies. Later that year, Hawking doubled down in an interview with the BBC, which was covered by numerous news outlets with headlines such as:
Stephen Hawking: Artificial intelligence could end up like Skynet
Stephen Hawking is still terrified of the AI revolution
Stephen Hawking: Humans evolve slowly, AI could stomp us out
Artificial intelligence could spell end of human race – Stephen Hawking
Since that first op-ed there has been a fairly steady stream of warnings on the dangers of superintelligent machines. Netflix, Amazon Prime, and YouTube are chock-full of hastily made documentaries on the coming age of superintelligent machine overlords that portend the end of the human race. The impression one gets is that the technology is advancing so rapidly that we may already be too late in addressing the problem posed by these incipient superintelligences.
The blame for such sensationalism cannot be placed solely on the shoulders of journalists and documentarians, though. The AI Dystopians are represented by a significant and vocal group of AI scientists, computer professionals, businesspeople, and other scientists, many of whom are considered experts in their fields.
Experts are a mixed bag when it comes to public opinion. We're very much in favor of them when it comes to surgery or flying airplanes, yet when they disagree with our deeply held beliefs, they become "experts" and are dismissed with a sniff as out-of-touch elitists. And even if we have little regard for any of them individually, we still feel compelled to pool their opinions and ask them to vote on the truth and what the future holds. Experts play a large role in the debate on AI and AGI, with some holding expertise in AI research, some in adjacent fields, and some in completely unrelated fields.
There is certainly nothing wrong with relying on experts or with experts themselves, yet there are many problems associated with our expectations of experts. This is particularly the case when it comes to predicting the future. There are also issues related to being an expert, more specifically what's referred to as the Overconfidence Effect, in which experts tend to have subjective confidence in their judgments that is often substantially greater than the objective accuracy of those judgments.
And experts who manage to avoid the Overconfidence Effect are still only experts in their field, and they are unlikely to hold a similar degree of expertise in a partially or wholly unrelated field. Some of the most vocal members of the AI Dystopian community are not experts in the field of artificial general intelligence research or even in related fields like computer science or cognitive science. The physicists Stephen Hawking and Max Tegmark fall into this category. Other voices in the debate are business leaders with some relation to AI or AI adjacent companies, such as Elon Musk and Bill Gates.
While these are certainly all very smart people, there are several things worth keeping in mind when evaluating their expert opinions. The Halo Effect is a well-established cognitive bias that describes the tendency of people to perceive the expertise someone has in one area as applicable to other areas. Celebrities are used as spokespeople for a variety of products and causes because of the Halo Effect. The advice of successful businesspeople is frequently taken as gospel in regard to areas far removed from the business in which they are successful.
Similarly, two popular rhetorical devices are the Appeal to Authority and Appeal to Accomplishment fallacies, in which an assertion is advanced as true simply due to the stature or accomplishments of the person making that assertion rather than any external facts or evidence. On the other side of the coin, we tend to discount the value of an expert opinion when the expert is perceived as an adversary to ourselves or causes we believe in. This is often referred to as Reactive Devaluation bias.
So while Stephen Hawking was obviously a brilliant physicist, his expertise in the study of black holes did not give him expertise in the workings of the brain or the functioning of future artificial general intelligence systems. Similarly, Bill Gates and Elon Musk are certainly very successful in their areas of technological and business expertise. But they are far removed from expertise in AGI in the same way that Charles Babbage was an expert in 19th century mechanical computer design yet ill-equipped to pontificate about the impact of the smartphone on human society in the 21st century.
Even when we narrow the field of experts to those whose area of expertise is actually artificial intelligence, we run into another fundamental problem: they don't agree with one another. Stuart Russell and Rodney Brooks are both very vocal about artificial intelligence. Stuart Russell co-wrote the most widely used textbook on artificial intelligence and is a professor at the University of California, Berkeley. Rodney Brooks was a professor at Stanford and the Massachusetts Institute of Technology and director of the MIT Computer Science and Artificial Intelligence Laboratory. He also co-founded the pioneering robotics companies iRobot (makers of Roomba) and Rethink Robotics.
Stuart Russell is one of the leading voices of AI Dystopian ideas while Rodney Brooks has frequently argued directly against those ideas. They both strongly disagree with Ray Kurzweil, prolific inventor, futurist, National Medal of Technology honoree, and Director of Engineering at Google overseeing AI projects. Ray Kurzweil is one of the leading figures in what might be called AI Utopianism, believing that AGI systems that surpass human intelligence will be developed by 2045.
Stuart Russell, Rodney Brooks, and Ray Kurzweil are all very smart people and they are all true AI experts. Yet, they have diametrically opposed views, and one cannot help but conclude that intelligence and even expertise are not the guiding factors when it comes to point of view on this topic. If we could definitively distinguish truth from untruth and reality from fantasy simply by reaching a particular level of cognitive reasoning, then everyone at or above that level would always agree on such determinations. But that is certainly not what we observe.
Instead, our mindsets are shaped by our life experiences, and both the nature and the nurturing of our minds affect our overall intellectual outlook. Our brains are imperfect cognitive organs, and our thoughts are all shaped, to varying degrees, by the failings of our intellects. Experts and laypeople alike are subject to the same constraints of human physiology.
Most important of all, though, is the irrefutable fact that there simply are no experts on the future.
Dancing Angels
Some people are drawn toward scary scenarios. Sometimes those people are right. Some people are drawn toward rosy scenarios. Sometimes those people are right. The only way to determine the value of predictions and warnings is to examine the evidence, logic, and reason that underlies them. If instead there is only unsupported speculation and rhetoric, one should be skeptical. The history of technological and scientific prognostication is packed with examples of failed optimism and failed pessimism, and the field of AI in particular is filled with noteworthy examples of both.
AI is a complex area of study, particularly with regard to the development of human-level artificial intelligence, and it's filled with significantly more unknowns than knowns. Arguing unknowns, while an entertaining pastime on a Discord server or at a dinner party, is not a particularly fruitful path towards illuminating reality or plotting out the future of humankind.
Such an endeavor, futilely searching for truth by wading through vast streams of ignorance, is reminiscent of scholasticism, a form of discourse practiced by medieval theologians. This discourse involved intricately reasoned debate over questions of dubious merit in which no meaningful parameters were known or could be known. (Perhaps the most famous is Thomas Aquinas' discussion of whether several angels can occupy the same place, later somewhat mockingly characterized as "How many angels can dance on the head of a pin?")
These were arguments in which there was virtually no knowledge that could be brought to bear on the topic being discussed. They were prime examples of the Unproven Basis fallacy, in which fundamental premises were left unexamined, and instead all energy was expended on debating gossamer wisps of speculation and imagination.
And yet, an AI Dystopian might reasonably ask: what's wrong with simply discussing AI Dystopian concerns and spreading those concerns through the mass media? Well, of course there is nothing inherently wrong with such discussion. In fact, discussion of future possibilities is vital for the advancement of human well-being and the flourishing of society. There's also nothing wrong with examining the dark side of those possibilities, as blind optimism usually ends poorly for the blind optimist.
But let's face it: people tend to notice and react more strongly to negative news than positive news. “If it bleeds, it leads” is a cliché that still governs much of journalism. When it comes to peddling papers or generating clicks, the only thing better than having an expert's opinion is having an expert's opinion on imminent catastrophe.
So how does one accurately assess the validity of the arguments made by those involved in and reporting on this subject?
A first step is examining the foundational ideas underlying the AI Dystopian conclusions, and that is one of the purposes of this blog. But it’s worth pointing out that one should be somewhat wary of the AI Dystopian message simply because of the way the message is being presented. Rhetorical fallacies and hyperbole are the hallmarks of belief systems rather than reasoned and fact-based speculation, and belief systems should not shape our current or future realities but should instead be shaped by them.