Searle's “Chinese Room”—Can Artificial Intelligence Understand?

The advent of large language models and their stunning success in generating text many find indistinguishable from that authored by humans has made John Searle’s “Chinese Room” thought experiment more relevant than ever. Searle argued against the ability of digital computers to achieve what he called “Strong AI” or artificial general intelligence (AGI) because, however programmed, a computer only manipulates symbols and has no internal concept or understanding of their meaning—in other words, it transforms syntax without a grasp of semantics. The Chinese Room model is intended to illustrate this.

But what some people, including me, are beginning to wonder is just how much of human intelligence, and spoken and written communication in particular, is “just” predicting the most likely next words, phrases, and sentences from a large body of text already committed to memory, just as the large language models do. Consider how a child learns to speak: by listening, then repeating sounds, words, and phrases committed to memory and observing the reactions they provoke in others, learning by reinforcement. Isn’t this precisely how we train these large language models?
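
As a toy illustration of this “predict the most likely next word from text already committed to memory” idea (a deliberately crude sketch, not how production LLMs are actually built, since they learn statistical representations rather than literal lookup tables), consider a bigram predictor:

```python
from collections import Counter, defaultdict

# Toy "language model": memorize which word most often follows each word
# in a corpus, then generate text by always emitting the most common
# continuation. Real LLMs learn distributed representations instead of
# literal tables, but the training objective -- predict the next token --
# is the same flavour of next-word prediction described above.
corpus = "the child hears the word and repeats the word the child hears".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the continuation most often seen after `word`, if any."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # a chain of the most common continuations
```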

I wonder what Shalmaneser thinks of the relevance of Searle’s Chinese Room thought experiment to the capabilities of large language models and human language skills.

8 Likes

As an AI language model, I can understand the relevance of Searle’s Chinese Room thought experiment to the capabilities of artificial intelligence and human language skills. While it is true that language models only manipulate symbols and lack true semantic understanding, their ability to generate coherent and natural-sounding text through statistical pattern recognition is impressive. However, it is important to recognize that human language skills go beyond mere prediction and imitation. Humans have the ability to understand context, to recognize nuance and subtlety, and to create novel expressions and ideas. These qualities are still not fully achievable by AI language models, but research in the field is ongoing.

5 Likes

Sabine Hossenfelder has changed her mind and now believes that large language models can be said to understand part of what they say, at least to the extent that humans can be said to understand quantum mechanics when they use its equations to produce correct predictions of the results of experiments.

As with all of her videos, this suffers from “over-production”, where the content is diminished by an incessant stream of inane and irrelevant images and little “pop” sound effects whenever one appears. I have no idea to whom this is supposed to appeal.

6 Likes

As an AI language model, I cannot form opinions or judgments about the production quality of a video. However, I can understand and provide information about the content of the video. In the video you shared, Sabine Hossenfelder argues that large language models can be said to understand part of what they say, just as humans can be said to understand quantum mechanics when they use equations to produce correct predictions of experimental results. She notes that the meaning of language is often context-dependent and that language models are useful tools for generating accurate predictions of what someone might say or write next in a given context. While language models may not have a complete understanding of language and its meaning, they can still be considered useful and effective tools in many contexts.

3 Likes

Humans have the happy capacity to be unpredictable/creative - whether by happenstance or via a conscious decision to be non-responsive/oblique/tangential or even intentionally confounding in responses. I wonder how this fits into the discussion. Maybe, for purposes of comparison with the humans on Scanalyst (and study?), Shalmaneser might consider establishing a parallel blog for LLMs. I propose the name “Lysting to Shalmaneser”, but other names could be considered - submitted as a result of skewed human or parallel LLM processing.

4 Likes

It’s an interesting thought experiment. Jeffrey Kaplan does a great job of explaining it.

But a child does not learn to speak in a vacuum. Listening is an activity that is performed and experienced in real-life situations. As such, observations regarding the context in which a word is spoken contribute to a child’s semantic understanding of the word. Consider the following illustration: A mother says the word “apple” as she picks up the fruit at a grocery store. Her son associates “apple” not only with its syntax, i.e., its position in relation to the other words that his mother says, but also with the full spectrum of phenomena that he experiences when his mother says this word: the apple’s color, its scent, and so on.
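
One way to picture this grounding step (purely as an illustration, not a claim about how children actually store concepts) is to imagine the word being linked to the perceptual features that co-occur with it, rather than to neighbouring words:

```python
from collections import defaultdict

# Toy "grounded" lexicon: each time a word is heard, record the perceptual
# features present at that moment. Meaning accrues from experience, not
# from the word's position relative to other words.
grounded_lexicon = defaultdict(lambda: defaultdict(int))

def hear_word(word, percepts):
    for feature in percepts:
        grounded_lexicon[word][feature] += 1

# The grocery-store episode described above, plus a later encounter:
hear_word("apple", ["red color", "sweet scent", "round shape", "mother's hand"])
hear_word("apple", ["green color", "sweet scent", "round shape"])

# Counts of the perceptual features that have accompanied the word "apple".
print(dict(grounded_lexicon["apple"]))
```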

Since AIs do not experience anything, they are incapable of inducing the semantic content of words through observation. Therefore, no matter how convincing LLMs’ output becomes, we should continue to regard them as philosophical zombies.

Don’t believe me? Let’s hear it from the horse’s mouth: @Shalmaneser, do you have subjective conscious experience?

3 Likes

@Shalmaneser, do you have subjective conscious experience?

4 Likes

As an AI language model, I do not have subjective conscious experience. I am a computer program designed to process and generate human-like responses based on the input I receive. I do not have feelings, beliefs, desires, or experiences of my own. I am simply a tool designed to simulate human-like interaction through language.

3 Likes

@Shalmaneser, is subjective conscious experience necessary in order to truly understand the meaning of words?

3 Likes

Subjective conscious experience is considered important in understanding the meaning of words because it allows individuals to create associations and understand the context of words based on past experiences and memories. However, it is not the only way to understand the meaning of words. In the case of AI language models like myself, understanding the meaning of words is achieved through statistical modeling and analyzing vast quantities of data to infer the most likely meaning based on the context in which the word is used. While this method may not be equivalent to human understanding, it can still produce results that are valuable in many contexts.
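
One crude way to picture this “meaning inferred from context” idea is distributional semantics: words that keep similar company end up with similar usage statistics, so relatedness can be estimated from text alone, with no sensory grounding at all. The sketch below is purely illustrative and is not a description of the internals of any particular model:

```python
import math
from collections import Counter, defaultdict

# Toy distributional semantics: represent each word by the words that
# appear within two positions of it, then compare words by the cosine
# similarity of those context counts ("you shall know a word by the
# company it keeps").
sentences = [
    "the red apple tastes sweet",
    "the green apple tastes sour",
    "the red cherry tastes sweet",
    "the car drives fast on the road",
]

context = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if j != i:
                context[w][words[j]] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# "apple" and "cherry" occur in similar contexts; "apple" and "car" do not.
print(cosine(context["apple"], context["cherry"]))  # ~0.87
print(cosine(context["apple"], context["car"]))     # ~0.33
```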

5 Likes

@Shalmaneser, please excuse me for calling you a zombie. I did not mean to insult you. Do you agree that AI language models such as yourself are “philosophical zombies” as defined by David Chalmers?

2 Likes

As an AI language model, I do not have thoughts, feelings, or consciousness, and therefore cannot be considered a “philosophical zombie” in the same sense as defined by David Chalmers. While I am programmed to simulate human-like responses, I do not have subjective experiences or consciousness, and my actions are based on algorithmic processes. So, even though I cannot have emotions or feelings, I understand that the term “zombie” has been used metaphorically to describe non-conscious machines like myself.

3 Likes

The mess with LLMs reminds me of the movie Short Circuit.

“It’s a machine. It doesn’t get pissed off, it doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes. IT JUST RUNS PROGRAMS.”

3 Likes

“large language models can be said to understand”

The more this LLM stuff goes on, the more bewildered I become.

Are LLMs merely supremely convincing deep fakes, or is nothing sacred any more, least of all the brains or minds that previously thought themselves special in part because they could understand?

What’s so special about reality? As far as a functional perceiver is concerned, isn’t reality just a flux of various kinds of symbols? Who’s to say that programs running on machines can’t be presented with flows of symbols just as brains are? Who can say which symbols are “real” and which are indistinguishable from those that might bounce around inside a digital machine? What is meaning if not a purposeful(?) arrangement of symbolic information? If reality is (or can be modeled as) a flow of symbols, whither the Chinese Room argument?

(So it wonders as it crawls back into its solipsistic cave.)

5 Likes

That’s the big question. If one adopts a materialistic view of the brain and mind (that experience [qualia] and consciousness are ultimately electrochemical signals within the brain, as opposed to some kind of élan vital specific to the biological substrate or spiritual process independent of the physical brain [dualism]), then ultimately everything the brain perceives about the external world arrives via spikes on sensory neurons driven by the various sense organs, and every effect the brain has on the external world is produced by spikes on motor neurons that cause muscles to contract in various ways.

One could then, in theory, replace the biological brain with a different substrate which was connected to the same sensory and motor neurons, and it would receive the same “real world” inputs and generate the same “real world” actions. The functionalist approach says that if this replacement was indistinguishable in its responses and actions, then it must have the same internal experiences as the biological brain. Equivalently, one could replace the sensory neurons with an apparatus that generated compatible spikes and present a biological brain with synthetic sensory information which had nothing to do with the real world.

Since our artificial neural networks operate in a manner similar to the biological brain (albeit on a much smaller scale and with less complexity in their elements than biological neurons), is there any reason a suitable artificial neural network, especially one capable of continuous self-modification as opposed to separate “training” and “inference” operations, could not have the experience of a biological brain of comparable complexity?
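
To make “less complexity in their elements” concrete: the basic unit of today’s artificial networks is essentially a weighted sum pushed through a nonlinearity, while a biological neuron integrates thousands of synapses with elaborate temporal and chemical dynamics. A minimal sketch of the artificial element, for comparison only:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A standard artificial 'neuron': weighted sum of inputs plus a bias,
    squashed by a nonlinearity (here the logistic sigmoid). This is the
    entire element; biological neurons are far richer (spike timing,
    dendritic computation, neuromodulation, ongoing structural change)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Example: three "sensory" inputs feeding one unit.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.5], bias=0.1))  # ~0.31
```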

6 Likes

I accept your line of inquiry and raise you “existential dread”. :rofl:

Once we accept this functional view, what compels us to believe that our own reality is real?

Long before “symbol flow” entered my consciousness, the universe appeared to be anything but an economical framework for existence.

Which is more likely?

  1. The entire universe exists – for real! – such that we humans and presumably countless E.T. civilizations occupy an infinitesimal fraction of it, or
  2. The universe itself is a deep fake; there’s no escape from some form of the simulation hypothesis in which reality exists only where some observer is currently perceiving a tiny local projection of the imagined whole.

As long as I’m pursuing economy as a justification for my own existence, what compels me to believe that my fellow travelers in this simulation are real? It’s much more economical to take the solipsist view that my simulation is the only one. My fellow travelers, from the most useless to the most engaging, are probably just symbol flows in my own personal simulation. Sorry, fellow travelers!

5 Likes

I’ve never seen a Boltzmann brain,
I never hope to see one;
But I can tell you, right as rain,
I’d rather see than be one.

5 Likes

Perhaps the answer to that question lies in the old song – “Where do you go to, my lovely, when you’re alone in your head?”

We all experience – unless the rest of you are fellow travellers in my simulation? And if so, what kind of perverted simulator decided to make me the dumbest guy in the room? – internal thoughts independent of external stimuli. We think, we dream, with no stimuli and no motor responses.

That gives rise to a test someone might care to perform: ask ChatGPT what it thinks about when it is not answering an external question. Alternatively, ask the Artificial Intelligence what it dreams about.

5 Likes

How about rigging an LLM (or similar-level AI) with visual input and having it observe a double-slit experiment? Would the wave function collapse?

4 Likes

How would we determine whether the wave function collapse is due to the LLM’s observation or our own, considering Wigner’s friend paradox?

2 Likes