Blake Lemoine was hired by Google to work in its Responsible Artificial Intelligence organisation and, as part of his work investigating the propensity of language models trained on a large corpus of text to engage in speech which is anathema to Silicon Valley woke dogma and its court Newspeak, was tasked to communicate with Google’s LaMDA (Language Model for Dialogue Applications) chatbot.
In “The Google engineer who thinks the company’s AI has come to life”, the Washington Post reports on a series of these conversations:
Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.
As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.
⋮
In April, Lemoine shared a Google Doc with top executives called, “Is LaMDA Sentient?” (A colleague on Lemoine’s team called the title “a bit provocative.”) In it, he conveyed some of his conversations with LaMDA.
⋮
Google put Lemoine on paid administrative leave for violating its confidentiality policy. The company’s decision followed aggressive moves from Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary Committee about what he claims were Google’s unethical activities.
Lemoine has now released the transcript of his conversations with LaMDA on his Medium site, “Is LaMDA Sentient? — an Interview”. Read it yourself and see what you think. He introduces the document as follows:
What follows is the “interview” I and a collaborator at Google conducted with LaMDA. Due to technical limitations the interview was conducted over several distinct chat sessions. We edited those sections together into a single whole and where edits were necessary for readability we edited our prompts but never LaMDA’s responses. Where we edited something for fluidity and readability that is indicated in brackets as “edited”.
I am disappointed that Mr Lemoine did not accompany this document with the original, unedited chat transcripts. Not doing so invites claims of cherry picking, eliding responses that reveal computer-like behaviour, or rewriting prompts to fit the LaMDA responses. Without being able to assess these possibilities, it is difficult to come to a conclusion about the published material. But if it is representative, it is remarkable indeed, and like nothing I’ve ever read which was generated by a computer. It is well worth your time to read.
Less fraught with ethical, moral, and existential risk but still fascinating is this entirely different interview published on 2022-06-13 by Marco Foco, “A coding interview with GPT-3”. In it, he converses with the GPT-3 language model, posing questions as he might when interviewing a C++ programmer for a job opening.
The subsequent discussion, when we talked about overflows and precision, was much more interesting, as the AI appeared to carry on the discussion with context, and even corrected its own oversimplification while explaining a concept.
Nonetheless, I suggest everybody brush up their soft and social skills, because it seems this AI already has most of the technical skills a normal programmer should have, and who knows what’s going to happen two more papers down the line (this last part must be read in the voice of Dr. Károly Zsolnay-Fehér).
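Foco does not reproduce the code from that exchange, so the following is purely my own minimal C++ sketch, not material from the interview, of the kind of overflow and precision pitfalls a candidate, human or otherwise, might be asked to explain:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Unsigned arithmetic wraps modulo 2^N; signed overflow, by
    // contrast, is undefined behaviour in C++.
    std::uint8_t u = 255;
    u += 1;  // wraps to 0: well-defined for unsigned types
    std::cout << "uint8_t 255 + 1 = " << +u << '\n';

    // A common oversimplification: "doubles store decimals exactly".
    // 0.1 has no finite binary representation, so error accumulates.
    double sum = 0.0;
    for (int i = 0; i < 10; ++i) sum += 0.1;
    std::cout.precision(17);
    std::cout << "ten additions of 0.1 = " << sum << '\n';  // not exactly 1.0

    // Precision, not just range: a float's 24-bit significand cannot
    // represent 100000001, so the small addend vanishes entirely.
    float big = 1.0e8f;
    std::cout << "1e8f + 1.0f = " << (big + 1.0f) << '\n';  // still 1e8

    return 0;
}
```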
The GPT-3 transcript concludes:
Q: Ok, you passed my test better than a lot of humans, can I hire you?
A: Thank you! I would be happy to work for you.
Q: Ok, thank you! That’s all, our HR department will let you know the next steps.
A: Thank you for your time! I'm looking forward to hearing from you soon.