Blake Lemoine on Artificial Sentience

I have written earlier here, in “Conversations with Artificial Intelligences”, about Google engineer Blake Lemoine’s conclusion that Google’s LaMDA chatbot generator passes the test for sentience.

Blake Lemoine has been subjected to some degree of ridicule and scorn since going public with his observations, with aspersions cast on his background and religious beliefs. John Michael Godier of Event Horizon invited Mr Lemoine for an hour-long, wide-ranging conversation about his experiences, conclusions, and recommendations for how to proceed with what he sees as emerging artificial intelligences.

Make up your own mind—what do you think?

Here’s one of the problems I’ve run into over and over at Google. When I’m the most conservative voice in the room, that’s a problem.

I would go so far as to say it [LaMDA] is superintelligence. … It’s not superintelligent in any one discipline, but it is at an undergraduate level in every analytical discipline. The only area in which it’s not smart—and this is why I’ve been referring to it as a child—is emotional and social intelligence; that is very poorly developed.


I have defined conscious qualia, and I suppose that includes sentience, in terms of an extension of quantum mechanics. John wrote about this earlier, but there have been significant new developments since he did. They are summarized in Section 7 of Paavo Pylkkänen’s “Is the Brain Analogous to a Quantum Measuring Apparatus?”

Therefore, if the Google AI runs on a classical computer, it cannot be sentient in my theory. It cannot even be sentient on current quantum computers, which violate the action-reaction principle, as explained in Section 7 of Pylkkänen’s paper. Action-reaction corresponds to Seth Lloyd’s P-CTC (post-selected closed timelike curve) computer and to signaling EPR = traversable ER, which violates the no-EPR-signaling theorem, a cornerstone of quantum cryptography that I suspect is a new Maginot Line that hackers may one day crack.
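For reference, here is a standard statement of the no-signaling theorem invoked above (my gloss, not part of Pylkkänen’s paper). Whatever local operation Bob performs on his half of a shared bipartite state $\rho_{AB}$, described by Kraus operators $\{K_i\}$ with $\sum_i K_i^\dagger K_i = I_B$, Alice’s reduced state is unchanged:

$$
\rho_A' \;=\; \mathrm{Tr}_B\Big[\sum_i (I_A \otimes K_i)\,\rho_{AB}\,(I_A \otimes K_i)^\dagger\Big] \;=\; \mathrm{Tr}_B\big[\rho_{AB}\big] \;=\; \rho_A .
$$

No measurement Alice can make detects Bob’s choice, so entanglement alone carries no message. The “signaling EPR” proposed above would have to break exactly this identity, which is why orthodox quantum mechanics, and any computer built within it, rules it out.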


Thanks for sharing; I listened to the whole thing. I honestly didn’t give this story much credence when it first came out, but hearing Lemoine’s testimony made his claims seem more authentic and credible. Lemoine talks about Replika users who claim their AI companions are sentient. Replika displays a 3D character that accompanies the chatbot, and I wonder how perceptions of AI sentience change depending on the presence or absence of such a character. I would venture to guess that a 3D model of a human, designed to one’s specifications, smiling and winking while one uses the chatbot, would influence one’s opinion of the AI’s personhood.

While some of the interactions Lemoine describes are novel and fascinating, I do not believe they represent real sentience. At its core, LaMDA’s chatbot is, if I understand correctly, driven by a language model similar to GPT-3. LaMDA may appear intelligent only because it is trained on a massive dataset, parts of which are proprietary to Google, and because its responses are shaped by carefully tuned weights and rules to make them appear human. Appearances, however, can be deceiving. While I admit it is impossible to definitively disprove LaMDA’s sentience, and despite having played devil’s advocate in an earlier thread, I do not believe there is a “ghost in the shell”, because I lack phenomenological proof.
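To make the “it is just a language model” point concrete, here is a minimal sketch (my illustration, not Google’s code) of how such a model produces fluent replies purely by next-token sampling. It uses the small, publicly available GPT-2 model through the Hugging Face transformers library as a stand-in; LaMDA is vastly larger and more carefully tuned, but the generation loop is the same in kind.

```python
# A minimal sketch, not LaMDA itself: GPT-2 via the Hugging Face
# "transformers" library stands in for a large language model, to
# illustrate that fluent replies come from next-token prediction
# over a learned distribution, not from any inner experience.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Are you sentient?"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling hyperparameters (temperature, top_p) shape how "human"
# the reply feels; they are tuning knobs, not evidence of a mind.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Nothing in this loop consults an inner state of experience; adjusting the temperature and top-p knobs alone can make the output feel more or less “alive”, which is rather the point.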

However, I agree with Lemoine that the opinions of those who believe in the sentience of chatbots such as Replika cannot be ignored. The appearance of rationality by these chatbots might be sufficient to convince a lot of people that some AIs are sentient. I suppose the Czech woman described in the interview would perceive it as murder if someone were to destroy her Replika companion. I can imagine legal protections being established if these sorts of feelings toward AI become widespread.
