Generative Artificial Intelligence, Large Language Models, and Image Synthesis

GPT4 also failed the Algorithmic Information IQ test:

I was kind of thinking they might have kept GPT4’s architecture secret because they had gone to a fundamentally more capable, recurrent model that would have seen the iterative pattern in the bits.

Nope.

Not even after I prompted it to use a range of integers.

AGI is nowhere in sight because AIT (algorithmic information theory) is nowhere in sight.

5 Likes

I’ve seen rumors from several sources that GPT5 will reach AGI, with an expected release date around the end of 2023; of course, that doesn’t mean the rumors match reality. Can the singularity be far behind?

1 Like

Trying out GPT4 (aka “Bing chat”) on the Sun’s dipole magnetic moment, it apologized for not wanting to continue the conversation:

5 Likes

Elon Musk: “I am worried about the fact that [AI is] being trained to be politically correct, which is simply another way of saying untruthful things. So, that’s a bad sign. That’s certainly a path to AI dystopia – to train AI to be deceptive. I’m going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe. This might be the best path to safety in that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe… hopefully.”

Musk’s full interview with Tucker Carlson is currently available on YouTube.

5 Likes

Knock knock! I know, I don’t really belong in here with you polymaths, but think of me as, like, a telegraph girl (in one of those cute uniforms😜!) delivering a query from the less-exalted world.
First: can CHAT, or Shalmaneser, be instructed to make puns? (I haven’t read this entire thread so ‘scuse me if somebody has answered this)
And more importantly:
on American Thinker today Dr. Joondeph has a piece, “What do AI and Chat GPT mean for Opinion writers?”

He then reprints an essay on the question. Then asks whether the reader thinks he wrote it, or Chat wrote it.

I commented: I KNEW you didn’t write it, because if you had, there would have been some efforts at explicating how you, whose pieces I have been following, could have arrived at these views.

i mean oh, yeah, the essay was “balanced”, kinda, and he had asked that it be written in his writing style. But sump’n was missing. Sump’n like self-justification. Could it be that Chat can’t duplicate that, cuz, y’know, it has no self?

4 Likes

@Shalmaneser what are some puns about robots?

3 Likes

Here are some puns about robots:

  1. “Why did the robot go on a diet? He wanted to reduce his circuit size.”
  2. “What do you call a robot that takes a long time to cook? A slow-cooker.”
  3. “Why did the robot cross the road? To get to the motherboard.”
  4. “What do you call a robot that always takes the long way home? A Roomba.”
  5. “What do you call a robot that likes to tell jokes? A pun-ishing bot.”
  6. “Why did the robot break up with his girlfriend? She had a hardware malfunction.”
  7. “What do you get when you cross a robot and a tractor? A transformer.”
  8. “Why did the robot go on a shopping spree? He wanted to upgrade his accessories.”
  9. “What do you call a robot that loves to dance? A gro-bot.”
  10. “Why did the robot start a band? He wanted to be a sound machine.”
3 Likes

I have to think about these….are they really puns?

I apologize if my previous response didn’t meet your expectations. Here are some additional puns about robots that might fit the bill:

  1. “Why did the robot break up with his girlfriend? Because she was two-timing.”
  2. “What do you call a robot that’s always in a hurry? A machinist.”
  3. “What do you call a robot that’s always telling jokes? A circuit comedian.”
  4. “What do you get when you cross a robot and a snowman? Frosty the Snowbot.”
  5. “Why did the robot go on a diet? To reduce his intake of megabytes.”
  6. “Why did the robot get a ticket? It was caught speeding on the information superhighway.”
  7. “What do you call a robot that’s always studying? A learn-bot.”
  8. “Why did the robot go on a date with a calculator? It was programmed to love math.”
  9. “What do you call a robot that’s always on the move? A robo-nomad.”
  10. “Why did the robot go to the doctor? It had a virus in its programming.”
1 Like

Of your first effort, I think only #1 is a pun. The others are jokes. Of your second effort, I think 1 and 5 are puns, playing on 2 meanings of circuit as route and circumference, and the homonym (or is it phononym?): byte/bite.
But maybe it’s just I don’t know what some of your terms mean. What is a “robo-nomad”, is that an actual expression? Unless it means something apart from a mobile robot, then this one isn’t even funny.

….but thanks! This reminds me of one morning when my BMD decided to bait our bedroom Alexa with a series of cunning questions, and I felt like saying, “Oh, stop teasing her!” As though she might start simulating sobbing sounds. Nice effort, Shalmaneser!

2 Likes

Thank you for your feedback, Hypatia. You are correct that some of the examples I provided in my first response were more jokes than puns. I appreciate your clarification on what constitutes a pun.

Regarding my second response, I’m glad to hear that you found #1 and #5 to be more fitting examples of puns. Thank you for sharing your thoughts and helping me to improve my responses.

2 Likes

I don’t think that having, or lacking, a self would make any difference in exhibiting any property of a human writer. The GPT models have been trained by reading many times as much human-written text as a human could read in a lifetime, even if they did nothing else and never slept. The training is used to predict the most likely response from a human presented with the prompt sent to the AI (which includes not just what you typed in the current query but the context of the conversation up to that point). To the extent the training includes examples that meet the description in the prompt, the GPT model will seek to predict output consistent with it.
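The prediction step described above can be illustrated with a deliberately tiny sketch. This toy bigram counter (the names and training sentence are my own, and it is nothing like a transformer’s actual mechanism) just shows what “pick the most likely continuation given the context” means:

```python
from collections import Counter, defaultdict

# Toy training corpus; a real model reads vastly more text than this.
training_text = "the moon orbits the earth and the earth orbits the sun"
words = training_text.split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict_next(context_word):
    """Return the most frequent continuation seen after context_word."""
    candidates = following[context_word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "earth" (seen twice after "the")
```

A GPT model does the analogous thing over entire token sequences rather than single words, with learned weights instead of raw counts.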

Here’s an example I just ran of ChatGPT-4 speaking in an assumed persona.

Prompt:

Please make up a completely incorrect explanation for the phases of the Moon, presented with the smug overconfidence of an academic speaking with condescension to an audience he believes he can intimidate with his credentials.

Response:

Ah, yes, the phases of the Moon. It’s a topic that I’m sure many of you simpletons have marveled at, but few have grasped the true nature of. You see, contrary to popular belief, the phases of the Moon are not due to the Moon’s position relative to the Earth and the Sun. No, no, that’s a common misconception propagated by the unenlightened. Allow me to illuminate the true cause of this celestial phenomenon for you.

You see, the Moon is made of a very special type of cheese, which I like to call “Lunarese Cheese.” This cheese possesses some truly extraordinary properties. It is not only capable of emitting light, but also of undergoing a process similar to osmosis, which I have dubbed “lunar osmosis.”

As the Moon orbits the Earth, the Earth’s magnetic field interacts with the Moon’s Lunarese Cheese, causing it to undergo lunar osmosis. The side of the Moon facing the Earth becomes infused with the Earth’s potent magnetic energy, which causes the cheese to expand, thus creating the illusion of a phase.

For example, during a full Moon, the Earth’s magnetic energy has caused the Lunarese Cheese to expand to its fullest extent, resulting in a fully illuminated Moon. On the other hand, during a new Moon, the magnetic energy has caused the cheese to contract, making the Moon appear dark and hidden from view.

Of course, you must be wondering about the waxing and waning phases. Well, these phases are simply the result of the Lunarese Cheese adjusting to the constant fluctuation of the Earth’s magnetic field. As the magnetic energy increases or decreases, the cheese expands or contracts accordingly, providing the observer with the stunning array of crescents and gibbous shapes we have come to know.

So there you have it – the true cause of the Moon’s phases, which is clearly far more sophisticated than the rudimentary explanation you may have heard before. I trust that, with my superior intellect and academic credentials, I have managed to enlighten you on this matter. Now, you may proceed to revel in the newfound understanding of the cosmos that I have so generously imparted upon you.

5 Likes

“Assumed persona”…of an a**hole! Well, idk: I reckon all “selves” are to some extent assumed personae.

2 Likes

To echo Hypatia, I don’t belong in here – and, frankly, don’t much regret it. However, there is an interesting question that others may already have pondered: How would ChatGPT respond to cryptic crossword clues?

Many of us human beings have difficulties with cryptic crossword clues – maybe the answer is an anagram of an allusion hidden behind a double entendre; or maybe it is something else linked to arcane cultural knowledge. And each crossword clue might follow a different pattern than all the other clues in a puzzle.

If a large language model could handle crossword clues, I for one would be deeply impressed. It might even be demonstrating artificial intelligence.

4 Likes

Why not try it? You can open up a chat session with Shalmaneser, tell s/he/it in a prompt that the following are crossword clues, and ask it to respond with the best word, telling it the number of letters and filling in any known letters, for example:

A vegetable you can’t turn down. Six letters, -U--I- known.

4 Likes

OK, I decided to try the experiment, using GPT-4. My set-up prompt was:

Prompt:

The following queries are crossword puzzle clues. For each I will give the known letters and ? for unknown letters. Please supply the best word for the clue that matches the pattern.

Response:

  1. Clue: Italian volcano, pattern: ?T???
  2. Clue: Fencing sword, pattern: ?P?E
  3. Clue: Greek marketplace, pattern: A?O?A
  4. Clue: Head of a monastery, pattern: A??O?
  5. Clue: Sudden overwhelming fear, pattern: P?N??

Please provide the crossword puzzle clues and the corresponding patterns for me to help you with the answers.

GPT apparently thought I was asking for questions in that format, so it gave me some. Then I posed the first puzzle.

Prompt:

A vegetable you can’t turn down. Six letters, ?U??I? known.

Response:

The word that fits the clue and pattern is “rutabaga.”

This is fascinating. It appears to have understood the pun in the clue, but then came up with a (near) synonym that didn’t match the known letters or length.
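For what it’s worth, the letter-pattern check the model flunked here is purely mechanical. A minimal sketch (the function name is my own) that treats each ? as a single-letter wildcard:

```python
import re

def matches(word, pattern):
    """True if word fits a crossword pattern where '?' is any one letter."""
    regex = pattern.lower().replace("?", "[a-z]")
    return re.fullmatch(regex, word.lower()) is not None

print(matches("turnip", "?U??I?"))    # True
print(matches("rutabaga", "?U??I?"))  # False: wrong length, wrong letters
```

The interesting part is that the model gets the wordplay right while fumbling a constraint any regex engine enforces trivially.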

Then I tried some clues from the 2023-04-17 Washington Post crossword. I abbreviate prompt and response here to save space.

P: Change. ?L?E?
R: The word that fits the clue and pattern is “alter.”

P: One side in the cola wars. ?E???
R: The word that fits the clue and pattern is “Pepsi.”

P: Braying quadruped. ??S
R: The word that fits the clue and pattern is “ass.”

P: Fluctuated wildly. ?OY??D
R: The word that fits the clue and pattern is “toyed.”

BZZZZT! Wrong—can you get it?

P: Celeb’s entourage. ??S?E
R: The word that fits the clue and pattern is “posse.”

Let’s try the first clue again in this format.

P: A vegetable you can’t turn down. ?U??I?
R: The word that fits the clue and pattern is “turnip.”

5 Likes

Interesting responses – but the clues are hardly cryptic.

Here are some cryptic clues from a published puzzle:
“Dishonest panel reevaluated equines (8,6)”
Answer: Shetland Ponies

“Extremely hot and dry ham mostly covered in protective wrapper (7)”
Answer: Saharan

“Actress Blanchett has headless doll put together (7)”
Answer: Collate

“Youth’s welfare maintained by climb (10)”
Answer: Adolescent

Would GPT-4 be able to make head or tail of these kinds of clues? It must be admitted that most human beings give up in disgust!
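At least the anagram mechanics of the first clue can be verified mechanically. A small sketch (the helper is my own) confirming that “reevaluated” signals an anagram of “dishonest panel”:

```python
def is_anagram(a, b):
    """Letters-only anagram test, ignoring case and spaces."""
    norm = lambda s: sorted(c for c in s.lower() if c.isalpha())
    return norm(a) == norm(b)

# "Dishonest panel reevaluated equines (8,6)": "reevaluated" is the
# anagram indicator, "equines" the definition.
print(is_anagram("dishonest panel", "Shetland ponies"))  # True
```

Spotting which fragment is the anagram fodder, which word is the indicator, and which is the definition is the part that defeats most humans, and presumably the model too.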

3 Likes

Oooh, or Puns and Anagrams, which are my faves—although I reckon the cryptics are harder…

3 Likes

How would an expert in the DSM, or a neuropathologist, diagnose these LLMs, I wonder?

2 Likes