Thanks for sharing; I listened to the whole thing. I honestly didn’t give much credence to this story when it first came out, but hearing Lemoine’s testimony made his claims seem more credible. Lemoine talks about Replika users who claim their AI companions are sentient. Replika displays a 3D character that accompanies the chatbot. I wonder how perceptions of AI sentience might change depending on the presence or absence of such a character. I would venture to guess that a 3D model of a human, designed to one’s specifications, smiling and winking while one uses the chatbot, would influence one’s opinion of the AI’s personhood.
While some of the interactions Lemoine describes are novel and fascinating, I do not believe they represent real sentience. At its core, LaMDA is, if I understand correctly, a language model similar to GPT-3. It might appear intelligent only because it is trained on a massive dataset, parts of which are proprietary to Google, and because its learned weights and fine-tuning steer its responses toward sounding human. However, appearances can be deceiving. I admit it is impossible to definitively disprove LaMDA’s sentience, and I played devil’s advocate in an earlier thread, but I do not believe there is a “ghost in the shell” because I lack phenomenological proof.
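To make the skepticism concrete, here is a minimal sketch of what a language model actually does: it assigns a probability to every possible next token and samples one, over and over. Since LaMDA itself is proprietary, this uses the publicly available GPT-2 via the Hugging Face transformers library as a stand-in; the prompt and sampling settings are purely illustrative, not anything from Google’s system.

```python
# Minimal next-token sampling sketch with GPT-2 (a public stand-in;
# LaMDA is proprietary, so this only shows the general mechanism).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Do you ever feel lonely?"  # illustrative prompt, my choice
inputs = tokenizer(prompt, return_tensors="pt")

# The model scores every token in its vocabulary as a possible
# continuation; "responding" is just sampling from that distribution.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)    # convert to probabilities
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx)):>12}  p={p.item():.3f}")

# A full reply is produced by repeating that sampling step token by
# token; nothing in the loop requires an inner experience behind the text.
output = model.generate(**inputs, max_new_tokens=30,
                        do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The point of the sketch is that the entire “conversation” reduces to this sampling loop: whatever one concludes about sentience, the mechanism itself is conditional probability over text.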
However, I agree with Lemoine that the opinions of those who believe in the sentience of chatbots such as Replika cannot be ignored. The appearance of rationality in these chatbots might be sufficient to convince many people that some AIs are sentient. I suspect the Czech woman described in the interview would perceive it as murder if someone were to destroy her Replika companion. I can imagine legal protections being established if these sorts of feelings toward AI become widespread.