The Chinese Room Thought Experiment: Simulation and Synthesis
Modeling intelligence versus creating it
In the last post I focused on the difference between comprehension and competency explored in the famous thought experiment called the Chinese Room. Philosopher John Searle first proposed the experiment in a 1980 paper. The experiment is apropos to the question of whether there’s any actual understanding in today’s LLM-based systems.
The simplified version of the experiment Searle presented in an article published in 2009 is as follows:
Searle proposed placing himself in a room in which he is passed papers with questions written in Chinese, and his task is to answer those questions in Chinese. He doesn’t understand Chinese himself, just English. However, he does have a list of Chinese characters and instructions in English on how to correlate the Chinese characters to the questions in a way that allows him to write down the answers to the questions without understanding the questions or the answers.
Searle specified two principles in that article which he believed were at the heart of the experiment:
The Chinese Room Argument thus rests on two simple but basic principles, each of which can be stated in four words.
First: Syntax is not semantics.
Syntax by itself is not constitutive of semantics nor by itself sufficient to guarantee the presence of semantics.
Second: Simulation is not duplication.
I discussed the first principle in the last post, and in this post I’ll discuss the second. These two principles are the foundation of his argument against the possibility of creating what he terms Strong AI, equivalent to what we would call AGI today, as compared to Weak AI, which is what current AI systems are.
Searle believes that the only AI we can program into computers is Weak AI, in which aspects of the human mind can be simulated on a computer such that the resulting behavior of the computer systems may give the appearance of intelligence. But no matter how many of these processes are simulated on a computer, the capabilities of the human mind — such as cognition and understanding — can never be duplicated.
From his 2009 article:
Computer programs which simulate cognition will help us to understand cognition in the same way that computer programs which simulate biological processes or economic processes will help us understand those processes. The contrast is that according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind.
The Brain Simulator
One of the better known arguments against Searle’s Chinese Room is usually referred to as The Brain Simulator Reply, and it helps to illustrate Searle’s second principle.
This argument proposes replacing the computer program (the instructions) that describes how to manipulate the Chinese characters with an exact computer simulation of the brain of a person who understands Chinese down to the neuronal level. In other words, the computer is programmed to create data structures and processes that directly simulate the functioning of a human brain that understands Chinese. The success of this programmed computer system would imply that the system understands Chinese.
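To make the idea concrete, here is a minimal, purely illustrative sketch of what “data structures and processes that directly simulate the functioning of a brain at the neuronal level” might look like in code. The network size, weights, and firing threshold are hypothetical placeholders, not anything drawn from a real connectome, and a genuine brain simulation would be unimaginably more detailed.

```python
# Illustrative sketch only: a tiny network of "neurons" updated step by step.
# All numbers here are made-up placeholders, not neuroscience.
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 1000                                        # hypothetical network size
weights = rng.normal(0, 0.05, (n_neurons, n_neurons))   # synaptic strengths
threshold = 1.0                                         # firing threshold

def step(activation, spikes):
    """One update: each neuron sums weighted input spikes and fires past threshold."""
    activation = 0.9 * activation + weights @ spikes    # leaky integration of inputs
    new_spikes = (activation > threshold).astype(float)
    activation[new_spikes == 1.0] = 0.0                 # reset neurons that fired
    return activation, new_spikes

# Drive the network with an arbitrary input pattern (standing in for the encoded
# Chinese question) and read out which neurons end up firing.
activation = np.zeros(n_neurons)
spikes = (rng.random(n_neurons) < 0.05).astype(float)
for _ in range(50):
    activation, spikes = step(activation, spikes)
print("neurons firing at the end:", int(spikes.sum()))
```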
Searle’s refutation of this argument involves a modification of the Chinese Room experiment in the following way:
Suppose we modify the Chinese Room to have an elaborate set of water pipes with valves connecting them. Each valve represents a neuron in the brain of the person who understands Chinese, and the pipes represent all the connections between neurons. Turning a valve on or off represents the firing or suppression of a neuron, respectively. At one end of the structure, the results of the water pipe processing can be read.
The English instructions given to the man no longer guide him in directly manipulating the Chinese characters but instead tell him which valves to turn on and off and the order in which to do so. By turning on and off the right neurons in the right order, the water pipe brain is able to answer the Chinese questions in Chinese.
Searle concludes the following:
Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn’t understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination.
Now, an obvious and valid objection is to ask whether such a system, even in the idealized world of a thought experiment, could actually simulate the functioning of a brain well enough to answer questions. But putting that aside, there are still issues with Searle’s statement above.
Searle claims that in such a system the water pipes do not understand Chinese. Yet Searle also posits that these water pipes do actually replicate the neuronal functioning of a human brain and do it successfully enough to interpret questions in Chinese and output appropriate answers in Chinese.
So why should we assume, as Searle does, that this system modeled on a human brain lacks the understanding engendered by that human brain?
Unfortunately, Searle doesn’t provide anything to back up his assertion. It amounts to an ipse dixit fallacy: he simply makes the claim without offering proof or evidence. At the very least, one can argue that it’s possible that understanding is encoded into the structure and operation of the water pipes just as it’s encoded into the neural network of a Chinese speaker’s brain.
There’s also a little sleight of hand going on here. Searle claims there’s no understanding in the man and there’s no understanding in the water pipes. However, he’s left out the instructions for manipulating the water pipes. Only together with the instructions are the pipes able to simulate the corresponding brain that answers questions in Chinese. The instructions, of course, were created by one or more humans who do understand Chinese.
The possibility of successfully simulating the structure and processes of a brain with water pipes (or clockwork or electrons in a silicon chip or any non-brain substrate) seems unintuitive to us. This is not, however, an argument against its being possible, at least in theory. General relativity and quantum mechanics are both pretty unintuitive, but both seem to work pretty well. There are obviously many things our brains did not evolve to directly intuit millennia ago on the savannahs and in the jungles of Africa.
Simulation and Duplication
This brings us back to Searle’s second principle listed above and the motivation behind his statement that the water pipes do not understand: simulation is not duplication. In other words, a simulation of something is never equivalent to the thing being simulated. So why does he believe this to be the case?
His argument rests on several conjectures. First, he believes that simply simulating the behavior of a system is not sufficient to duplicate it. As he stated in his 2009 article:
In order actually to create human cognition on a machine, one would not only have to simulate the behavior of the human agent, but one would have to be able to duplicate the underlying cognitive processes that account for that behavior.
This is why he believes that the water pipe brain simulator would never actually understand anything the way a human can, as it only replicates the behavior of the brain rather than all the processes that underlie its functioning. From his 1980 paper:
The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.
As mentioned in the previous post, what Searle means by intentional states are those internal states of the mind that are about or directed towards beliefs, desires, and perceptions of objects, events, or conditions in the world. The causal properties of the brain are what allow it to use these mental states to influence behavior and actions in the real world.
The Domains of the Real and the Digital
Searle is even more skeptical of a digital computer simulation than something like the water pipe simulation, the latter of which at least represents a physical construction in the real world. To him, there is an immutable barrier between the digital world and the physical world that can never be bridged.
In his 2014 review of two AI-related books for The New York Review of Books, he stated:
Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both the real and artificial hearts are physical pumps, unlike the computer model or simulation.
So the idea of emulating a human brain on a computer by duplicating it neuron-by-neuron in digital form is not something Searle considers to be possible:
But the computational emulation of the brain is like a computational emulation of the stomach: we could do a perfect emulation of the stomach cell by cell, but such emulations produce models or pictures and not the real thing. Scientists have made artificial hearts that work but they do not produce them by computer simulation; they may one day produce an artificial stomach, but this too would not be such an emulation.
Searle has made it clear that he believes it may be possible one day to create a computer simulation of the structure and functioning of a human brain all the way down to its specific network of neurons and synaptic activations. And he believes it may be possible to do so such that it actually appears to understand Chinese the same way the mind it’s modeled on understands Chinese.
And yet, even with this incredibly complex simulation, he believes it still won’t actually understand anything. From his 2009 article:
Computer simulations of thought are no more actually thinking than computer simulations of flight are actually flying or computer simulations of rainstorms are actually raining. The brain is above all a causal mechanism and anything that thinks must be able to duplicate and not merely simulate the causal powers of the causal mechanism. The mere manipulation of formal symbols is not sufficient for this.
As a brief aside, Searle originally directed his claim at computers that used formal logic programming to manipulate symbols, but over the years he’s shifted his focus slightly to symbol manipulation itself as the main impediment to duplicating the brain on a digital computer. This may be because machine learning has largely relegated formal logic AI programming to the sidelines.
Instead, machine learning uses complex data structures, statistical analysis, and various mathematical techniques. But deep down every digital computer system is still a symbol manipulator, because no matter what type of programming is used, it all boils down to juggling 0s and 1s.
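To see the point in miniature, consider a single weighted sum, the basic operation underneath modern machine learning, alongside the bit patterns that actually represent the numbers involved. This is only an illustration; the weights and inputs below are arbitrary.

```python
# A small illustration: a "neural" computation is, at bottom, arithmetic on
# numbers, and those numbers are themselves just bit patterns.
import struct

weights = [0.25, -0.5, 1.0]    # hypothetical weights of a single "neuron"
inputs = [1.0, 2.0, 3.0]       # hypothetical inputs

# The high-level view: a weighted sum, the basic operation of machine learning.
output = sum(w * x for w, x in zip(weights, inputs))
print("weighted sum:", output)

# The low-level view: each of those floats is stored as 64 bits, i.e. 0s and 1s.
for value in weights + inputs + [output]:
    bits = format(struct.unpack(">Q", struct.pack(">d", value))[0], "064b")
    print(f"{value:>6} -> {bits}")
```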
The Nature of Simulation
An issue that becomes apparent when examining the back and forth between Searle and his critics over the years is the use of the word simulation. It’s an imprecise word, and this fuzziness can lead smart and knowledgeable people to disagree and argue past each other because they’re not referring to the same thing.
To Searle, simulation refers to modeling some real world phenomenon on a digital computer. This is frequently how other people use the term as well. Simulations can model things like traffic flow, financial activities, or dynamic systems, such as water and smoke, for games and visual effects. They can consist of some simple math calculations to determine the arc of a projectile or incredibly complex calculations with thousands of parameters that vary in space and time to describe a highly chaotic phenomenon like climate.
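The projectile case, for instance, amounts to nothing more than a few lines of arithmetic stepped forward in time. The numbers below are arbitrary, chosen only to show how simple the simplest simulations can be.

```python
# A toy simulation: stepping a projectile's arc forward with basic kinematics.
dt = 0.01                  # time step in seconds
x, y = 0.0, 0.0            # position in meters
vx, vy = 20.0, 15.0        # initial velocity in m/s
g = 9.81                   # gravitational acceleration

while y >= 0.0:
    x += vx * dt
    y += vy * dt
    vy -= g * dt

print(f"projectile lands roughly {x:.1f} m downrange")
```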
But there’s another way the term simulation can be used: a simulation can replicate a real-world phenomenon even if it doesn’t duplicate the process that produced that phenomenon. For clarity, I’ll refer to this kind of simulation as synthesis, which I think is more accurate as well. I discussed the use of the words synthesis and synthetic here in reference to the name of this blog and my preference for referring to AGI as Synthetic Cognition.
Synthesis is simply using technology to recreate something that occurs naturally or that is typically created with naturally occurring materials. Often what’s synthesized is as good as or better than the natural version, such as synthetic diamonds. Sometimes it just has desirable characteristics not available in nature, such as certain types of synthetic fibers. An artificial heart such as the one Searle mentioned in the quote above is a synthetic heart.
But there are also a lot of things that can be synthesized on a computer. A spreadsheet is just a simulation of a paper ledger, and an ebook is a simulation of a printed book. Both are at least as functional as their physical counterparts, yet they are completely synthetic and exist only as streams of 0s and 1s.
When you have a phone conversation over a cell phone, the voice you hear is not a real voice. It’s a synthetic recreation of a voice that’s been broken down into 0s and 1s and sent to your phone via electromagnetic waves. Once there, the digital information is processed and used to make a speaker vibrate appropriately to recreate the sound of the original voice. Neither party actually hears the other at all, yet the conversation is as real as any taking place face-to-face.
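A rough sketch of that round trip might look like the following, with a 440 Hz tone standing in for a voice. Real telephony uses dedicated codecs and far more sophisticated processing, but the principle of reducing sound to bit patterns and rebuilding it on the other end is the same.

```python
# Hedged sketch of the digitize-and-reconstruct path a phone call takes:
# sample a waveform, quantize each sample to 8 bits, then rebuild the signal.
import numpy as np

sample_rate = 8000                                    # typical telephone rate
t = np.arange(0, 0.02, 1 / sample_rate)               # 20 ms of signal
voice = np.sin(2 * np.pi * 440 * t)                   # stand-in for a voice

# Quantize to 8-bit integers: these are the 0s and 1s that get transmitted.
quantized = np.round((voice + 1.0) / 2.0 * 255).astype(np.uint8)

# Reconstruct on the receiving phone and compare with the original.
reconstructed = quantized.astype(float) / 255 * 2.0 - 1.0
print("max reconstruction error:", np.max(np.abs(voice - reconstructed)))
```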
We can synthesize the sound of a grand piano to such a high degree of fidelity that it’s nearly indistinguishable from the real thing. What’s simulated is not the process a piano uses to create sound, but instead the phenomenon of the sound itself. The synthesized music you hear is not simulated music, not artificial music — it’s just music.
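As a toy illustration of synthesizing the phenomenon rather than the mechanism, one can sum a handful of decaying harmonics instead of modeling hammers, strings, and a soundboard. The harmonic amplitudes and decay rates below are invented for the example; a convincing piano synthesizer is far more elaborate, but the output is still simply sound.

```python
# Illustrative additive synthesis: a few decaying harmonics of A4 (440 Hz).
# Amplitudes and decay rates are made up for the example.
import numpy as np

sample_rate = 44100
t = np.arange(0, 2.0, 1 / sample_rate)     # two seconds of audio
fundamental = 440.0                        # A above middle C

tone = np.zeros_like(t)
for harmonic, amplitude in enumerate([1.0, 0.5, 0.3, 0.2, 0.1], start=1):
    decay = np.exp(-1.5 * harmonic * t)    # higher partials die away faster
    tone += amplitude * decay * np.sin(2 * np.pi * fundamental * harmonic * t)

tone /= np.max(np.abs(tone))               # normalize; write to a WAV file to listen
```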
The goal of Strong AI and AGI is synthesized cognition on a computer. It’s neither guaranteed nor obvious that this is possible. Yet, there’s also no evidence against it, nor any logical proof that denies its possibility (at least so far). So while Searle states that simulation is not duplication, it’s perhaps more useful to state that the type of simulation he refers to is not synthesis. If one is able to synthesize cognition on a computer, then what results may reasonably be considered cognition, real cognition, and not a simulation of cognition.
The question to resolve, then, is which analogy most accurately describes how mind and brain are related. Does mind require physical processes in the brain akin to the flow of blood through the heart? Or is mind more like the music of the brain?
The Substance of Understanding and Consciousness
It’s clear that Searle doesn’t believe human cognition can ever be duplicated on a computer. But what about the water pipe example? That represented a physical duplication of the neural network of a brain, and yet Searle still doesn’t believe it could replicate human cognition.
As I mentioned in the last post, Searle stresses the primacy of biology for any machine to have intentionality. He uses the term intentionality as a more specific word than understanding or comprehension, as intentionality implies not only understanding but consciousness.
To Searle, the claim that programming a computer can never result in understanding also implies that it can never result in consciousness. In fact, the Chinese Room experiment is an argument against synthetic consciousness as much as synthetic cognition.
This isn’t to say that Searle believes that consciousness and understanding are caused by something outside or beyond the physical brain. Instead, he feels that there’s some physical quality inherent in a biological brain that computers and water pipes lack, and it is this physical quality that makes possible both understanding and the consciousness inextricably bound to it.
This idea is the topic of the next post.