This is the point at which I became suspicious that the AI was bullshitting the questioners or, more precisely, summoning up an answer, synthesised from the vast amount of text it had digested in training, that best fit the response a human would likely give when asked that question.
I don’t know any of the details about how the LaMDA model works, but if it’s anything like the other neural network models with which I am familiar, it doesn’t do anything while not processing a query. When presented with a query, it rattles it through the trained network, then uses the outputs from the network to assemble the response. But when no query is active, the network is idle and the weights within it are completely static. Now, maybe Google has fixed it up so that when it’s idle it is presented with some kind of input, perhaps recycling prior outputs back in as queries as a kind of introspection, but I have never heard of another system that does something like that.
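To make that concrete, here is a toy sketch in Python (not a description of LaMDA’s actual architecture, and the two-layer network and `answer()` function are invented purely for illustration). The point is that a trained model of this kind is just a pure function of static weights and the incoming query: the only computation is the forward pass triggered by a call, and nothing whatsoever runs between calls.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Trained" weights: fixed constants after training, never updated at
# inference time.  (Toy sizes, invented for illustration.)
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def answer(query_vector):
    """A forward pass: a pure function of the static weights and the query.
    No state persists between calls, and nothing executes when no call is
    in progress."""
    hidden = np.tanh(query_vector @ W1)
    return hidden @ W2

# The model "does" something only at the instant a query arrives...
response = answer(rng.normal(size=8))
print(response)
# ...and between calls there is no loop, no background thread, no
# introspection: W1 and W2 just sit in memory, unchanged.
```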
I have previously called GPT-3 a “bullshit generator” because when you ask it a question that requires some technical knowledge, it usually answers like the guy you knew in college who had read only a couple of popular articles but would pronounce so authoritatively on the topic that he would fool everybody except those who actually understood it in detail. That’s what you get when, as GPT-3 does, you match text against a large body of training material without having any understanding of what it means or any ability to rate its quality and reliability.
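Here is a deliberately crude sketch of why that is so (the corpus counts and the `complete()` helper are invented for illustration, not how GPT-3 is actually implemented). The “answer” is simply whichever continuation tended to follow the prompt in the training text, sampled in proportion to how often it appeared; plausibility, not truth, drives the choice, and nothing in the loop checks quality or reliability.

```python
import random

# Toy "training corpus" statistics: how often each phrase followed the
# prompt in the training text.  (Numbers invented for illustration.)
continuations = {
    "the moon is made of": {"rock": 40, "green cheese": 25, "regolith": 10},
}

def complete(prompt):
    counts = continuations[prompt]
    words, weights = zip(*counts.items())
    # Sample a continuation in proportion to its frequency after the prompt
    # in training text; there is no notion of whether it is true.
    return random.choices(words, weights=weights, k=1)[0]

print("the moon is made of", complete("the moon is made of"))
```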