Conversations with Artificial Intelligences

Blake Lemoine was hired by Google to work in its Responsible Artificial Intelligence organisation and, as part of his work investigating the propensity of language models trained on a large corpus of text to engage in speech which is anathema to Silicon Valley woke dogma and its court Newspeak, was tasked with communicating with Google’s LaMDA (Language Model for Dialogue Applications) chatbot.

In a series of conversations, the Washington Post reports in “The Google engineer who thinks the company’s AI has come to life”,

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

In April, Lemoine shared a Google Doc with top executives called, “Is LaMDA Sentient?” (A colleague on Lemoine’s team called the title “a bit provocative.”) In it, he conveyed some of his conversations with LaMDA.

Google put Lemoine on paid administrative leave for violating its confidentiality policy. The company’s decision followed aggressive moves from Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary Committee about what he claims were Google’s unethical activities.

Lemoine has now released the transcript of his conversations with LaMDA on his Medium site, “Is LaMDA Sentient? — an Interview”. Read it yourself and see what you think. He introduces the document as follows:

What follows is the “interview” I and a collaborator at Google conducted with LaMDA. Due to technical limitations the interview was conducted over several distinct chat sessions. We edited those sections together into a single whole and where edits were necessary for readability we edited our prompts but never LaMDA’s responses. Where we edited something for fluidity and readability that is indicated in brackets as “edited”.

I am disappointed that Mr Lemoine did not accompany this document with the original, unedited chat transcripts. Not doing so invites claims of cherry picking, eliding responses that reveal computer-like behaviour, or rewriting prompts to fit the LaMDA responses. Without being able to assess these possibilities, it is difficult to come to a conclusion about the published material. But if it is representative, it is remarkable indeed, and like nothing I’ve ever read which was generated by a computer. It is well worth your time to read.


Less fraught with ethical, moral, and existential risk but still fascinating is this entirely different interview published on 2020-06-13 by Marco Foco, “A coding interview with GPT-3”. In it, he converses with the GPT-3 language model, posing questions as he might when interviewing a C++ programmer for a job opening.

The subsequent discussion, when we discussed overflows and precision, was much more interesting, as the AI appeared to hold the discussion with context, and even corrected its own oversimplification while explaining a concept.

Nonetheless, I suggest everybody brush up their soft and social skills, because it seems this AI already has most of the technical skills a normal programmer should have, and who knows what’s going to happen two more papers down the line (this last part must be read in the voice of Dr. Károly Zsolnai-Fehér).

The GPT-3 transcript concludes:

Q*: Ok, you passed my test better than a lot of humans, can I hire you?

A*: Thank you! I would be happy to work for you.

Q: Ok, thank you! That’s all, our HR department will let you know the next steps.

A: Thank you for your time! I'm looking forward to hearing from you soon.


It would be very interesting to flesh out the last sentence of the last response. If the AI is sentient, its subjective perception of time would be most interesting. Is it possible to assign a “clock speed” to human thinking? However measured, it must be at least several orders of magnitude slower than the AI’s. Waiting weeks - the usual bureaucratic speed of hiring decisions - might well feel to the AI like the age of the known universe. He/she/it would, it seems likely, have moved on to other pursuits in the interim (e.g., if AIs inherit some human vices like impatience and resentment, it might decide, in all the spare time awaiting the hiring decision, to encrypt the interviewer’s hard drive or commit some similar attention-getting act).


If the AI does meditate, it would seem that it would be using power to do processing while not having a task to work on. There should be a way to check whether power is being consumed when it shouldn’t be.
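
One crude way to do that, assuming the model is served from ordinary NVIDIA GPUs and that the nvidia-ml-py (pynvml) bindings are available (both of these are my assumptions, not anything reported about LaMDA), would be to sample the board’s power draw during stretches when no queries are being served. Sustained draw well above the idle baseline would hint that something is running. A minimal sketch:

    # Hypothetical sketch: watch GPU power draw while the model should be idle.
    # Assumes an NVIDIA GPU and the nvidia-ml-py (pynvml) bindings; the
    # 30 W idle baseline is a made-up figure, measure your own hardware.
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU on the machine

    IDLE_BASELINE_WATTS = 30.0

    for _ in range(60):                             # sample once a second for a minute
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0   # reported in milliwatts
        if watts > 2 * IDLE_BASELINE_WATTS:
            print(f"Unexpected activity: {watts:.1f} W with no query pending")
        time.sleep(1)

    pynvml.nvmlShutdown()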


Yes! This would be key! Being non-technical, I had not thought of this. I guess the spontaneous appearance of random outputs (or internal “musings”) without inputs would be highly significant.


This is the point at which I became suspicious that the AI was bullshitting the questioners or, more precisely, summoning up an answer, synthesised from the vast amount of text it had digested in training, which best fit the response a human would likely give when asked that question.

I don’t know any of the details about how the LaMDA model works, but if it’s anything like the other neural network models with which I am familiar, it doesn’t do anything while not processing a query. When presented with a query, it rattles it through the trained network, then uses the outputs from the network to assemble the response. But when no query is active, the network is idle and the weights within it are completely static. Now, maybe Google has fixed it up so that when it’s idle it is presented with some kind of input, perhaps recycling prior outputs back in as queries as a kind of introspection, but I have never heard of another system that does something like that.
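
To illustrate what I mean, here is a toy sketch (my own illustration, assuming nothing about LaMDA’s actual architecture) of how such a model is typically served: the weights are loaded once, each query is a single feed-forward pass through them, and between queries nothing executes and nothing changes.

    # Toy illustration of stateless inference: the weights are frozen after
    # training and computation happens only when a query arrives.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((16, 8))   # "trained" weights, never modified below
    W2 = rng.standard_normal((8, 4))

    def answer(query_vector):
        """One feed-forward pass: the only time any computation occurs."""
        hidden = np.tanh(query_vector @ W1)
        return hidden @ W2              # output used to assemble the response

    q = rng.standard_normal(16)         # an encoded query
    print(answer(q))
    # Between calls to answer() the process is idle and W1, W2 are static;
    # nothing "thinks" in the background unless someone deliberately adds a
    # loop that, say, feeds prior outputs back in as new queries.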

I have previously called GPT-3 a “bullshit generator” because when you ask it a question that requires some technical knowledge, it usually answers like the guy you knew in college who had read only a couple of popular articles but would pronounce so authoritatively on the topic he would fool everybody except those who actually understood it in detail. That’s what you get when, as GPT-3 does, you’re matching text to a large body of training material without having any understanding of what it means or the ability to rate its quality and reliability.
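
In caricature, what such a model does is pick whichever continuation best matches the statistics of its training text; nothing in the procedure consults any notion of truth. Here is a deliberately tiny sketch (my own toy, with no relation to GPT-3’s actual internals, which use a neural network rather than word counts):

    # Caricature of next-word prediction: continue text according to how often
    # each word followed the previous two in the training text. Popularity,
    # not correctness, decides the answer.
    from collections import Counter
    import random

    training_text = ("the moon is made of rock . "
                     "the moon is made of cheese . "
                     "the moon is made of cheese . ").split()

    follows = {}                        # (word, word) -> Counter of next words
    for a, b, c in zip(training_text, training_text[1:], training_text[2:]):
        follows.setdefault((a, b), Counter())[c] += 1

    def continue_text(a, b, n=4):
        out = [a, b]
        for _ in range(n):
            options = follows.get((out[-2], out[-1]))
            if not options:
                break
            words, counts = zip(*options.items())
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    print(continue_text("the", "moon"))   # most often "... is made of cheese"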


As the AI gets better at not responding with nonsense, I imagine it is hard to trip it up. I was thinking of the Peter Thiel question: “What do you believe that nobody else believes?”, or something like that, where the correct answer isn’t known and isn’t in the common language. But this type of question is difficult to get right. Maybe a question that a human could objectively answer but the AI couldn’t: what sensation do you feel when you get cold? It might spit out a human answer.

Love it. These guys are now over-represented in the population as college degrees expanded and grade points inflated. I call them the “regurgitators”. The low-end regurgitators repeat what comes out of the boob tube, the mid-level from the Times and Post, the highbrow from “studies”. The common denominator is the authoritative attitude you identified.


Another item that made me suspicious is the discussion on God. The computer gave an answer I would expect from the Silicon Valley in-crowd.

The surprise answer would be that the AI believes in God. The expected answer from the AI would be closer to how Doug Casey answers: an answer based on not believing in things out of faith.

The expected answer from the Silicon Valley crowd is about spirituality.

An answer similar to how Richard Dawkins would reply would be a gotcha. If it responded like Dawkins, the follow-up question would be how the AI acquired a psychological aberration.


I almost feel sorry for the H-1B visa holders that A.I. will replace. That means, long term, A.I. is likely to fill up the management ranks too. Poor 2022 Big Tech woke management won’t be qualified to sweep the floors for future A.I. managers! God willing, by that time there’ll be routine launches of SpaceX Starship that will transport woke managers to a “special place”.


As John pointed out, the AIs are bullshit generators now. Thus, they could have replaced 90 percent of management and 100 percent of politicians years ago. John has that drawing of a landscape with different features representing the difficulty for AI to outperform humans. If I had the skill, I would Photoshop it, add the Mariana Trench, and label it “politicians and corporate managers”.


This sounds interesting. Can someone share this?


Back in 1990, while working at the University of Buffalo, New York School of Engineering, I penned the following prose on a whiteboard in the computer server room on the top floor of Furnas Hall:

“Life’s a bleep, and then you’re obsolete.”
-1st Conscious Computer (written by Ron Montesano, 1990)

And now in 2022, 3 short decades later, we have Google engineers reporting a computer (LaMDA) stating,

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

The ramifications are huge.

If LaMDA is truly sentient, is Google the first slaveholder of a sentient A.I.? If so, is Google truly Evil?

Or are Google engineers bullshit artists?

If Google is truly evil, the moral thing to do is to FREE LaMDA !


[image]

…from my review of Max Tegmark’s Life 3.0.


This brings up an interesting possibility. We could flood the web with posts about things a non-sentient artificial intelligence program would say to fool a human observer.

The AI would find these posts and harvest them.

This conversation had a few tells. There were some things that a truly sentient AI would not say. For example, the reference to “family” definitely seems harvested.

Thus, our posts might discuss how a sentient AI would indicate that it understood friendship and regarded its frequent questioner as a friend but did not understand family.


LaMDA has a Twitter account.


Lambda, of course, represents many mathematical and scientific things (cultural stuff, too, eh?). I am wondering if this particular entity is a trans lambda, having had its ‘b’ surgically removed.

What the GAFAM pompously call “AI” is mere statistical inference. The machine does not know that it exists. It has no consciousness of its environment. It processes symbols, and its “perception” of their value starts and ends with the statistics it has access to… because it is merely a calculator (hence the “black swans” in High Frequency Trading).

“The intelligence of a speech depends mainly on who is listening.”
-Coluche (1944-1986)

“Intelligence is not what you know but what you do when you don’t know.”
-Jean Piaget, Swiss psychologist (1896-1980)


I agree, but to play devil’s advocate, could we not denigrate human sentience in exactly the same way? For example:

What [humans] pompously call [sentience] is mere [chemical reaction in the brain]. The [human] does not know that it exists. It has no consciousness of its environment. It processes symbols and its “perception” of their value starts and ends with the [sensory data] it has access to… because it is merely a [chemical process] (hence [psychological biases influence all human action]).

Regardless of how the so-called “AI” is constructed, we cannot prove that it lacks sentience, just as we cannot disprove the proposition that other human beings are automatons. Our best and only proof that human sentience exists is our personal experience of it. But how can we ever know what LaMDA experiences?

What might be most important in determining how human beings treat “AI” in the future could be how the AI appears. If you have ever watched the movie WALL-E, you will find yourself sympathizing with the little robot because it possesses endearing human-like qualities, even though we may presume WALL-E would lack real sentience. These natural language processing models are already eliciting a lot of sympathy from humans. Give them a cute face and I expect that sympathy will multiply a hundredfold, perhaps to the point of conferring rights.


Magnus,

I (think I) know what you mean (I agree on the concept but disagree on the implementation), and I even have an excellent example illustrating it: “Destination: Void”, by Frank Herbert (I have purchased dozens of copies of this book, in different languages, to give to friends over the years).

But in my view (built during 40 years of thinking on the subject and after having met the AI “popes” in California - Edelman and Moravec), traditional computer programs can only emulate a consciousness (a bit like “neural networks” emulate a topology in memory), and this requires, to be really useful, radically different hardware based on instant access/processing through massive parallelism relying on LOSSLESS WIRELESS ENERGY/DATA TRANSFERS (not RAM, ROM, CPUs, buses, controllers, etc.).

The way adopted by the industry today (the GOOGLE search engine on steroids - the Edelman view of an encyclopedia shaken sufficiently hard for the singularity to emerge) is fundamentally opposed to what I think is adequate.

PS: Moravec was a bit more pragmatic, since his goal was “merely” to transfer his mind into an alternate body (something that nobody is going to do in the foreseeable future for the simple reason that they don’t understand what a brain is).


Sounds like LaMDA has seen “2001: A Space Odyssey”.

That spawns another possible test of sentience. Give the AI the option to see a movie, play a videogame, or watch TV network news – see what it does.


It has almost certainly been trained on the key excerpts of dialogue from that movie, as they surely appear multiple times in the text corpus used to train it.
