Ever since I started fooling with GPT-2, I have been calling language model synthesis programs “bullshit generators”. Here is how I explained it in a comment here on 2022-06-15:
I have previously called GPT-3 a “bullshit generator” because when you ask it a question that requires some technical knowledge, it usually answers like the guy you knew in college who had read only a couple of popular articles but would pronounce so authoritatively on the topic that he would fool everybody except those who actually understood it in detail. That’s what you get when, as GPT-3 does, you’re matching text to a large body of training material without having any understanding of what it means or any ability to rate its quality and reliability.
If you think about what a human master of bullshit does, it’s remarkably similar: based upon a prompt on the topic, assemble a stream of phrases which sound like what an authority would say, perhaps citing other authorities (made up, or cited without ever having read them). GPT-3, having been trained on a huge corpus of text and using a language model that attempts to mimic text from the corpus, does an excellent job of this. This kind of bullshit can be very persuasive to those who aren’t “read in” to the details being discussed. It can also be very funny to one versed in the subject matter.
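For readers who want to see the mimicry in action, here is a minimal sketch (my own illustration, not part of the quoted comment) using the Hugging Face transformers library and the small GPT-2 model. The model simply extends a prompt token by token, choosing each word for its statistical plausibility given the training corpus; nothing in the process checks whether the confident-sounding result is true.

```python
# Illustrative sketch only: have GPT-2 produce an authoritative-sounding
# continuation of a prompt, with no regard for factual accuracy.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Any expert-sounding prompt will do; this one is made up for the demo.
prompt = "The long-term economic impact of quantum computing will be"
inputs = tokenizer(prompt, return_tensors="pt")

# Each new token is sampled for plausibility given the training corpus;
# the model has no notion of whether the resulting claims are correct.
output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run it a few times and you will get fluent, confident paragraphs that differ on every run, which is exactly what you would expect from a process that mimics the sound of authority rather than reporting anything it knows.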