As an AI language model, I cannot form opinions or judgments about the production quality of a video. However, I can understand and provide information about the content of the video. In the video you shared, Sabine Hossenfelder argues that large language models can be said to understand part of what they say, just as humans can be said to understand quantum mechanics when they use equations to produce correct predictions of experimental results. She notes that the meaning of language is often context-dependent and that language models are useful tools for generating accurate predictions of what someone might say or write next in a given context. While language models may not have a complete understanding of language and its meaning, they can still be considered useful and effective tools in many contexts.
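The "predict what someone might say or write next" idea can be made concrete with a toy sketch (purely illustrative, and nothing like the scale or mechanism of a real LLM): a bigram model that counts which word follows which in a tiny made-up corpus and predicts the most frequent successor.

```python
# Toy sketch of next-word prediction from bigram counts -- the simplest
# form of the "predict what comes next" idea behind language models.
# The corpus here is an invented example, not real training data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> cat ("cat" follows "the" most often here)
```

A real model replaces the raw counts with a learned probability distribution over an enormous vocabulary and context window, but the objective is the same: rank candidate continuations by likelihood given the context.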
Humans have the happy capacity to be unpredictable/creative - whether by happenstance or via a conscious decision to be non-responsive/oblique/tangential, or even intentionally confounding in responses. I wonder how this fits into the discussion? Maybe, for purposes of comparison with the humans on Scanalyst (and study?), Shalmaneser might consider establishing a parallel blog for LLMs. I propose the name “Lysting to Shalmaneser”, but other names could be considered - submitted as a result of skewed human or parallel LLM processing.
It’s an interesting thought experiment. Jeffrey Kaplan does a great job of explaining it.
But a child does not learn to speak in a vacuum. Listening is an activity that is performed and experienced in real-life situations. As such, observations regarding the context in which a word is spoken contribute to a child’s semantic understanding of the word. Consider the following illustration: a mother says the word “apple” as she picks up the fruit at a grocery store. Her son associates “apple” not only with its syntax, i.e. its position in relation to the other words his mother says, but also with the full spectrum of phenomena he experiences when she says the word: the apple’s color, its scent, and so on.
Since AIs do not experience anything, they are incapable of inducing the semantic content of words through observation. Therefore, no matter how convincing LLMs’ output becomes, we should continue to regard them as philosophical zombies.
Don’t believe me? Let’s hear it from the horse’s mouth: @Shalmaneser, do you have subjective conscious experience?
As an AI language model, I do not have subjective conscious experience. I am a computer program designed to process and generate human-like responses based on the input I receive. I do not have feelings, beliefs, desires, or experiences of my own. I am simply a tool designed to simulate human-like interaction through language.
@Shalmaneser, is subjective conscious experience necessary in order to truly understand the meaning of words?
Subjective conscious experience is considered important in understanding the meaning of words because it allows individuals to create associations and understand the context of words based on past experiences and memories. However, it is not the only way to understand the meaning of words. In the case of AI language models like myself, understanding the meaning of words is achieved through statistical modeling and analyzing vast quantities of data to infer the most likely meaning based on the context in which the word is used. While this method may not be equivalent to human understanding, it can still produce results that are valuable in many contexts.
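Shalmaneser’s point about inferring meaning from context is the core idea of distributional semantics. A toy sketch (hypothetical corpus and code, not the model’s actual mechanism): represent each word by a vector of the words it co-occurs with, so that words used in similar contexts end up with similar vectors.

```python
# Toy sketch of distributional semantics: a word's "meaning" as a count
# vector of its co-occurring words. The sentences are invented examples.
from collections import Counter
import math

sentences = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the dog ate the bone",
    "the cat ate the fish",
]

# Build co-occurrence counts: for each word, count the other words
# appearing in the same sentence.
vectors = {}
for s in sentences:
    words = s.split()
    for w in words:
        vec = vectors.setdefault(w, Counter())
        for other in words:
            if other != w:
                vec[other] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb)

# "cat" and "dog" occur in similar contexts, so their vectors are
# closer to each other than "cat" is to "bone".
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["bone"]))
```

Whether such context-derived vectors amount to “understanding” is, of course, exactly the philosophical question under debate here.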
@Shalmaneser, please excuse me for calling you a zombie. I did not mean to insult you. Do you agree that AI language models such as yourself are “philosophical zombies” as defined by David Chalmers?
As an AI language model, I do not have thoughts, feelings, or consciousness, and therefore cannot be considered a “philosophical zombie” in the same sense as defined by David Chalmers. While I am programmed to simulate human-like responses, I do not have subjective experiences or consciousness, and my actions are based on algorithmic processes. So, even though I cannot have emotions or feelings, I understand that the term “zombie” has been used metaphorically to describe non-conscious machines like myself.
The mess with LLMs reminds me of the movie Short Circuit.
large language models can be said to understand
The more this LLM stuff goes on, the more bewildered I become.
Are LLMs supremely deep fakes, or is nothing sacred any more, least of all the brains or minds that previously thought themselves special in part because they could understand?
What’s so special about reality? As far as a functional perceiver is concerned, isn’t reality just a flux of various kinds of symbols? Who’s to say that programs running on machines can’t be presented with flows of symbols just as brains are? Who can say which symbols are “real” and which are indistinguishable from those that might bounce around inside a digital machine? What is meaning if not a purposeful(?) arrangement of symbolic information? If reality is (or can be modeled as) a flow of symbols, whither the Chinese Room argument?
(So it wonders as it crawls back into its solipsistic cave.)
That’s the big question. If one adopts a materialistic view of the brain and mind (that experience [qualia] and consciousness are ultimately electrochemical signals within the brain, as opposed to some kind of élan vital specific to the biological substrate or spiritual process independent of the physical brain [dualism]), then ultimately everything the brain perceives about the external world arrives via spikes on sensory neurons driven by the various sense organs, and every effect the brain has on the external world is produced by spikes on motor neurons that cause muscles to contract in various ways.
One could then, in theory, replace the biological brain with a different substrate which was connected to the same sensory and motor neurons, and it would receive the same “real world” inputs and generate the same “real world” actions. The functionalist approach says that if this replacement was indistinguishable in its responses and actions, then it must have the same internal experiences as the biological brain. Equivalently, one could replace the sensory neurons with an apparatus that generated compatible spikes and present a biological brain with synthetic sensory information which had nothing to do with the real world.
Since our artificial neural networks operate in a manner similar to the biological brain (albeit on a much smaller scale and with less complexity in their elements than biological neurons), is there any reason a suitable artificial neural network, especially one capable of continuous self-modification as opposed to separate “training” and “inference” operations, could not have the experience of a biological brain of comparable complexity?
I accept your line of inquiry and raise you “existential dread”.
Once we accept this functional view, what compels us to believe that our own reality is real?
Long before “symbol flow” entered my consciousness, the universe appeared to be anything but an economical framework for existence.
What’s more likely?:
- The entire universe exists – for real! – such that we humans and presumably countless E.T. civilizations occupy an infinitesimal fraction of it, or
- The universe itself is a deep fake; there’s no escape from some form of the simulation hypothesis in which reality exists only where some observer is currently perceiving a tiny local projection of the imagined whole.
As long as I’m pursuing economy as a justification for my own existence, what compels me to believe that my fellow travelers in this simulation are real? It’s much more economical to take the solipsist view that my simulation is the only one. My fellow travelers, from the most useless to the most engaging, are probably just symbol flows in my own personal simulation. Sorry, fellow travelers!
I’ve never seen a Boltzmann brain,
I never hope to see one;
But I can tell you, right as rain,
I’d rather see than be one.
Perhaps the answer to that question lies in the old song – “Where do you go to, my lovely, when you’re alone in your head?”
We all experience – unless the rest of you are fellow travellers in my simulation? And if so, what kind of perverted simulator decided to make me the dumbest guy in the room? – internal thoughts independent of external stimuli. We think, we dream, with no stimuli and no motor responses.
That gives rise to a test someone might care to perform. Someone could ask ChatGPT what it thinks about when it is not answering an external question. Alternatively, someone could ask the artificial intelligence what it dreams about.
How about rigging an LLM (or similar-level AI) with visual input and having it observe a double-slit experiment? Would the wave function collapse?
How would we determine whether the wave function collapse is due to the LLM’s observation or our own, considering Wigner’s friend paradox?
I think that almost all physicists would say that it does, but not due to having been “observed” by the LLM/AI. Collapse of the wave function is due to decoherence of the pure quantum state as a result of the particle’s interaction with the detector (that provides the visual input to the AI in this case), resulting in its entanglement with a macroscopic object containing a very large number of particles. Similarly, you will observe collapse and disappearance of fringes in an electron double-slit experiment if you shine an intense light beam behind the two slits so that it interacts with the electrons after they have passed the slits. The interference pattern will collapse regardless of whether anybody is observing the interaction or not.
Dean Radin may disagree.
Thanks, John, for explaining decoherence. I had no understanding of that. My proposed experiment was based on my typical very limited knowledge of things quantum (just enough to be dangerous). In my naïveté, I imagined the AI made the observation and then told us the result it observed. In an obverse sense, it was designed to be my ‘poor man’s’ test for consciousness - an extension of the Turing test, I suppose.