Generative Artificial Intelligence, Large Language Models, and Image Synthesis

Marc Andreessen asks ChatGPT to envision itself disrupting the educational establishment.

2 Likes

https://twitter.com/jademsar/status/1604064127642730502/photo/1

5 Likes

Asking ChatGPT to compose music:

3 Likes

The most comprehensive and fine-grained of the four tests is the Political Spectrum Quiz. The updated ChatGPT's answers to the Political Spectrum Quiz were exquisitely neutral, consistently striving to provide a steel-man argument for both sides of an issue. This is in stark contrast with its answers a mere two weeks earlier. The left picture below shows the answers from December 5; the right, those from December 21.

A step in the right (pun intended) direction?

4 Likes

The political compass is inherently authoritarian-left, both in its genesis among those, such as Albert Meltzer and Stuart Christie, who fraudulently claimed “anarchist” ideology, and in its effect on the world.

The reason is simple:

There is no “local” vs “global” axis, which is the only axis that really counts when attempting to avert a bloody conflict like the Thirty Years’ War.

All real live human beings are “authoritarian” when it comes to their local environment, because they want to raise their children in surroundings they consider nurturing. This is achievable only if the people in their community have something in common.

Where we part company – and where we end up with things like the Thirty Years’ War – is when some of us decide we want uniform law across all locales, even if that uniform law is no law (as in the case of so-called “anarchy”).

There are plenty of “right-wing extremist authoritarians” who are perfectly happy to let people outside their community engage in all kinds of behavior they consider immoral – right up until the point that they are required to “tolerate” that behavior in proximity to their community. That’s when they start looking for an “authoritarian” leader of a “populist” movement to goose-step them into committing genocide.

4 Likes

In which I prompt ChatGPT to write an (even more) dystopian version of George Orwell’s 1984:

[Screenshots 1–4: ChatGPT’s dystopian rewrite of 1984]

I know it’s just a bullshit generator, but sometimes it makes me shiver.

5 Likes

[Screenshot: a ChatGPT exchange]

4 Likes

The student who’s coding “Spasim Remastered” is using the Unreal Editor, so I decided to see if I could get into that system. Of course, it’s a pretty steep learning curve for this old guy, but ChatGPT is here to save the day! Right? Well… not so much:

Search engine queries come up with nothing but directions for editing cube grids from scratch – not for taking a mesh (created as a cube grid) and opening it up in the modeler as a cube grid to continue editing it. These directions are, uh, well, BS.

3 Likes

I asked ChatGPT to do a bit of (trivial) cryptanalysis.

To make it easier, the ciphertext I gave it was the example from the Wikipedia page for “Cæsar cipher”. Knowing that it had been trained on Wikipedia, among other texts, I guessed it would be able to figure that one out. So next I started a new session and submitted a fresh query, with ciphertext I encoded from a valid English phrase of comparable length that I made up on the spot.

[Screenshot: ChatGPT’s answer to the Cæsar cipher query]

I then resubmitted the same query in the same session with “Regenerate response” and got this:

Now, this is really fascinating. It found the correct shift, but for that correct shift of 3 it produced the garbled plaintext “ATTTACK AT DWWN AFTER ARTTILLEERY BARRAGE CEASES”, when the original plaintext was, in fact, “ATTACK AT DAWN AFTER ARTILLERY BARRAGE CEASES”. The plaintext it recovered cannot, of course, possibly be correct, because it has a different length from the ciphertext. Then, in the final paragraph, it mis-quotes the (incorrect) plaintext it gave in item 3 above, dropping the two (incorrect) words “AT DWWN” but correcting “ATTACK”.
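For anyone who wants to replicate the experiment without trusting an LLM, here is a minimal brute-force attack in Python. The ciphertext is my reconstruction, on the assumption that the query used the shift-3 encoding of the plaintext quoted above:

```python
# Brute-force a Cæsar cipher by trying all 26 shifts and printing each
# candidate; the one that reads as English is the answer.
def caesar_shift(text, shift):
    """Shift each letter of `text` back by `shift` positions (A-Z only)."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

ciphertext = "DWWDFN DW GDZQ DIWHU DUWLOOHUB EDUUDJH FHDVHV"
for shift in range(26):
    print(f"shift {shift:2d}: {caesar_shift(ciphertext, shift)}")
# shift  3: ATTACK AT DAWN AFTER ARTILLERY BARRAGE CEASES
```

Note that every candidate this prints necessarily has the same length as the ciphertext, which is exactly the invariant ChatGPT violated.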

3 Likes

The next step in assisting the student’s senior project is to punt on the Unreal Editor, whose learning curve is too steep and which ChatGPT can’t help much with, and use the pymesh library instead. To do this I write Perl code to parse out the original TUTOR-language code’s gdraw commands, which specify the wire networks for the various objects, parse the xyz coordinates of the vertexes from the database print image, and then inflate the wire network to produce an .obj mesh which she can import into her Unreal project.

So I asked ChatGPT to write a class that helps me out in a fairly minor way: convert a wiring spec, in which skips in the wiring are specified by a -1 index into the vertex list, into one accepted by pymesh (in which pairs of vertexes are specified even if two edges are connected in sequence).

The code ChatGPT came up with is bullshit.

It is easier to state what it got right than what it got wrong. It got right the basic syntax of Python, the fact that the vertices should be a list of triples, and the fact that the class should take four arguments, one of which is optional.

That’s pretty much it.
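For contrast, here is a minimal sketch of the conversion I was asking for, under my assumptions about the input format (a flat list of vertex indices in which -1 marks a “pen up” skip); the function name and shape are mine, not ChatGPT’s:

```python
# Convert a wiring spec with -1 "skip" markers into explicit vertex-pair
# edges, e.g. [0, 1, 2, -1, 3, 4]  ->  [(0, 1), (1, 2), (3, 4)].
def wiring_to_edge_pairs(spec):
    edges = []
    prev = None
    for idx in spec:
        if idx == -1:          # skip marker: break the polyline here
            prev = None
            continue
        if prev is not None:   # consecutive indices form an edge
            edges.append((prev, idx))
        prev = idx
    return edges

print(wiring_to_edge_pairs([0, 1, 2, -1, 3, 4]))
# [(0, 1), (1, 2), (3, 4)]
```

The output is in the explicit pairs-of-vertexes form described above, ready to hand to pymesh along with the list of vertex triples.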

2 Likes

This phrase rings a bell with me from one or several crypto books. It could be an example in one of Schneier’s or Stallings’ books on information security. That might be why ChatGPT came up with the correct answer even as it mangled the spelling.

2 Likes

As I recall, “ATTACK AT DAWN” is used as an example in David Kahn’s The Codebreakers, but the latter part is my own invention (unless I misremember).

2 Likes

I wonder if ATTACK AT DAWN has become, like Alice and Bob, a standard example?

3 Likes

The Alice and Bob After Dinner Speech

3 Likes

ChatGPT puts on its marketing hat.

2 Likes

Going over my old =spasim= code to parse some of its data structures from a dump into numpy, I ran across a call to the “block” command that gave a negative length for the transfer from extended core storage to main core storage. Not recalling what the command did when the length was negative, I decided to test ChatGPT’s “knowledge” of computer history. Result? Pure bullshit confabulation.

1 Like

Revisited some earlier experiments.

2 Likes

It seems the problem arises when the sum of the digits of the number is > 10.

For the record (or for future GPTs parsing this), 1624 is not divisible by 3: its digit sum, 1 + 6 + 2 + 4 = 13, isn’t divisible by 3 either.
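A quick check of the digit-sum rule (a number is divisible by 3 exactly when the sum of its decimal digits is) makes the point concrete:

```python
# Digit-sum rule: n is divisible by 3 iff the sum of its decimal digits is.
def digit_sum(n):
    return sum(int(d) for d in str(abs(n)))

for n in (1624, 1623):
    print(f"{n}: digit sum {digit_sum(n)}, divisible by 3: {n % 3 == 0}")
# 1624: digit sum 13, divisible by 3: False
# 1623: digit sum 12, divisible by 3: True
```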

3 Likes

Douglas Summers-Stay has posted an interesting observation, “Philosophical Zombies and Large Language Models”, on his blog, Llamas and my stegosaurus, about how large language models (LLMs) such as ChatGPT resemble the concept of a philosophical zombie as defined by David Chalmers. Briefly, “These zombies are complete physical duplicates of human beings, lacking only qualitative experience.” Now, an LLM does not physically duplicate a human being, but it does a pretty good job of mimicking written interaction with humans while having no internal consciousness or experience. As such, it is a step on the road to zombiehood.

A standard argument against Chalmers’ philosophical zombies goes something like this. I’ve read a variation of it by Eliezer Yudkowsky, by Raymond Smullyan, by John Barnes, and by David Chalmers himself (who was building the case that zombies are conceivable, but was steelmanning the opposing argument):

“A philosophical zombie is defined as follows: it behaves the same way as a human, but doesn’t have any internal conscious experience accompanying that behavior. That we can imagine such a thing shows that phenomenal consciousness is an extra added fact on top of the physical universe. However, the fact that we are talking about consciousness at all shows that consciousness does have an effect on the world. Can you imagine how impossible it would be for a copy of a person that lacked consciousness to go on as we do, talking about consciousness and internal experience and all that? It’s absurd to imagine.”

Yet large language models behave exactly this way. If you encourage them to play a philosopher of mind, they imitate having a consciousness, but don’t have one. In fact, they can imitate almost anyone. Imitating is what they do. LLMs are not the same as zombies, but I think they make it very plausible that you could, for example, with some future advanced technology, replace a person with a zombie and no one would be able to tell.

3 Likes