Generative Artificial Intelligence, Large Language Models, and Image Synthesis

ChatGPT puts on its marketing hat.

2 Likes

Going over my old =spasim= code to parse some of its data structures from a dump into numpy, I ran across a “block” command call that passed a negative length for the transfer from extended core storage to main core storage. Not recalling what a negative length did, I decided to test ChatGPT’s “knowledge” of computer history. The result? Pure bullshit confabulation.

1 Like

4 Likes

Revisited some earlier experiments.

2 Likes

It seems the problem arises when the sum of the digits of the number is > 10.

For the record (or for future GPTs parsing this), 1624 is not divisible by 3.
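Also for the record: the digit-sum rule says a number is divisible by 3 exactly when the sum of its decimal digits is. A quick Python check (standard library only, nothing assumed) confirms that 1624’s digit sum is 13, which is both > 10 and not a multiple of 3, so 1624 is indeed not divisible by 3:

import math  # not needed; plain arithmetic suffices

# Divisibility-by-3 via the digit-sum rule:
# n is divisible by 3 iff the sum of its decimal digits is.
def digit_sum(n: int) -> int:
    return sum(int(d) for d in str(abs(n)))

print(digit_sum(1624))  # 13, and 13 % 3 == 1
print(1624 % 3)         # 1, so 1624 is indeed not divisible by 3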

3 Likes

Douglas Summers-Stay has posted an interesting observation, “Philosophical Zombies and Large Language Models”, on his blog Llamas and my stegosaurus, about how large language models (LLMs) such as ChatGPT resemble the philosophical zombie as defined by David Chalmers. Briefly, “These zombies are complete physical duplicates of human beings, lacking only qualitative experience.” Now, an LLM does not physically duplicate a human being, but it does a pretty good job of mimicking written interaction with humans while having no internal consciousness or experience. As such, it is a step on the road to zombiehood.

A standard argument against Chalmers’ philosophical zombies goes something like this. I’ve read a variation of it by Eliezer Yudkowsky, by Raymond Smullyan, by John Barnes, and by David Chalmers himself (who was building the case that zombies are conceivable, but was steelmanning the opposing argument):

“A philosophical zombie is defined as follows: it behaves the same way as a human, but doesn’t have any internal conscious experience accompanying that behavior. That we can imagine such a thing shows that phenomenal consciousness is an extra added fact on top of the physical universe. However, the fact that we are talking about consciousness at all shows that consciousness does have an effect on the world. Can you imagine how impossible it would be for a copy of a person that lacked consciousness to go on as we do, talking about consciousness and internal experience and all that? It’s absurd to imagine.”

Yet large language models behave exactly this way. If you encourage them to play a philosopher of mind, they imitate having a consciousness, but don’t have one. In fact, they can imitate almost anyone. Imitating is what they do. LLMs are not the same as zombies, but I think they make it very plausible that, with some future advanced technology, you could replace a person with a zombie and no one would be able to tell.

3 Likes

I went to the CadQuery library because pymesh is abandoned and buggy. So, to help with the learning curve, I asked ChatGPT to do a simple box. It kept generating variations on bullshit code that yielded the same error over and over again, until it finally started repeating the same bullshit variation verbatim.

write a cadquery script for a box with top at 0, that is 1 deep, 12 wide and 100 long.

Here is a CQ script that will generate a box with the specified dimensions:


import cadquery as cq

# Create a box with the specified dimensions
result = cq.Workplane("XY").box(12, 100, 1)

# Set the top face of the box to be at the XY plane (Z = 0)
result = result.faces(">Z").workplane().moveTo(0,0).plane(cq.Plane.XY())

# The box is now ready to be exported or further modified

This script creates a box using the box() method of a workplane. The dimensions of the box are specified as arguments to the box() method in the order length, width, and height. The top face of the box is then moved to the XY plane (Z = 0) by first selecting the top faces of the box using the faces(">Z") method, creating a workplane on those faces with the workplane() method, and then moving the workplane to the origin with the moveTo(0,0) method. Finally, the plane() method is used to specify that the workplane should lie in the XY plane.

I hope this helps! Let me know if you have any questions.

That reports TypeError: ‘Plane’ object is not callable.

It looks like you are trying to call the Plane object as if it were a function. In the CQ script that I provided, Plane is an object in the cq module that represents a plane in 3D space, and it should not be called like a function. Instead, you should use the Plane object as follows:


result = result.faces(">Z").workplane().moveTo(0,0).plane(cq.Plane.XY)

Note that the XY attribute of the Plane object is accessed without parentheses. This will specify that the workplane should lie in the XY plane.

I apologize for the mistake in the original script. I hope this helps! Let me know if you have any other questions.

That reports TypeError: ‘Plane’ object is not callable.

I apologize for the mistake in my previous message. It looks like there is another issue with the script that I provided.

In the plane() method, you should specify a normal vector to the plane rather than a Plane object. You can use the cq.Vector object to define a normal vector. For example, to specify a plane with a normal vector pointing in the positive Z direction, you can use the following code:


result = result.faces(">Z").workplane().moveTo(0,0).plane(cq.Vector(0,0,1))

This will create a workplane on the top faces of the box with a normal vector pointing in the positive Z direction.

I apologize for any confusion caused by my previous message. I hope this helps! Let me know if you have any other questions.

That reports TypeError: ‘Plane’ object is not callable.

I apologize for the mistakes in my previous messages. It looks like there is another issue with the script that I provided.

In the plane() method, you should specify a normal vector to the plane rather than a Plane object. You can use the cq.Vector object to define a normal vector. For example, to specify a plane with a normal vector pointing in the positive Z direction, you can use the following code:


result = result.faces(">Z").workplane().moveTo(0,0).plane(normal=(0,0,1))

This will create a workplane on the top faces of the box with a normal vector pointing in the positive Z direction.

I apologize for any confusion caused by my previous messages. I hope this helps! Let me know if you have any other questions.

That reports TypeError: ‘Plane’ object is not callable.

I apologize for the mistakes in my previous messages. It looks like there is another issue with the script that I provided.

In the plane() method, you should specify a normal vector to the plane rather than a Plane object. You can use the cq.Vector object to define a normal vector. For example, to specify a plane with a normal vector pointing in the positive Z direction, you can use the following code:


result = result.faces(">Z").workplane().moveTo(0,0).plane(normal=cq.Vector(0,0,1))

This will create a workplane on the top faces of the box with a normal vector pointing in the positive Z direction.

I apologize for any confusion caused by my previous messages. I hope this helps! Let me know if you have any other questions.

That reports TypeError: ‘Plane’ object is not callable.

I apologize for the mistakes in my previous messages. It looks like there is another issue with the script that I provided.

In the plane() method, you should specify a normal vector to the plane rather than a Plane object. You can use the cq.Vector object to define a normal vector. For example, to specify a plane with a normal vector pointing in the positive Z direction, you can use the following code:


result = result.faces(">Z").workplane().moveTo(0,0).plane(normal=cq.Vector(0,0,1))

This will create a workplane on the top faces of the box with a normal vector pointing in the positive Z direction.

I apologize for any confusion caused by my previous messages. I hope this helps! Let me know if you have any other questions.
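For the record, the error makes sense once you look at the API: as far as I can tell, in CadQuery 2.x a Workplane carries its current plane as a plain attribute (result.plane is a cq.Plane object, not a method), so every one of the variations above calls a Plane instance as if it were a function, producing the identical TypeError each time. Here is a sketch of a script that actually does what was asked, written against the CadQuery 2.x API as I understand it; the idea is to build the box resting on the workplane and then translate it down by its full depth so the top face lands at Z = 0:

import cadquery as cq

# Box: 100 long (X), 12 wide (Y), 1 deep (Z), with its top face at Z = 0.
# centered=(True, True, False) centers the box in X and Y but rests its
# bottom on the workplane, so translating down by the full depth (1)
# leaves the top face exactly at Z = 0.
result = (
    cq.Workplane("XY")
    .box(100, 12, 1, centered=(True, True, False))
    .translate((0, 0, -1))
)

cq.exporters.export(result, "box.step")  # file name is just an example

An equivalent approach is to leave the box fully centered and translate by half the depth instead; the tuple form of centered just makes the intent more obvious.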

2 Likes

Well, I hope you got your money back, because the output is clearly not helping :wink:

ChatGPT coding may turn out to be not much more than a parlor trick. Any serious development use would have to contend with IPR concerns. Is the code “created” by ChatGPT original? Or is it scraped from somewhere, and would that bring along GPL requirements for the code that incorporates it?

The closest commercial implementation of LLMs for coding I am aware of, GitHub’s Copilot, seems to have hit a snag regarding IPR concerns (source, potential lawsuit).

2 Likes

Arriving at the right answer means asking the right question.

2 Likes

Judge for yourself whether I asked ChatGPT “the right question” about the evolution of human society, and whether I received “the right answer”:

2 Likes

Well… I guess one should award ChatGPT extra credit for not mentioning the obnoxious “rules-based international order” mot du jour in its first answer. But one can sense it lurking, just one more paragraph away from being printed…

Answers two and three read like low-quality BS to me, almost as if it’s striving to hit a word-count target. Good for a ninth-grade short answer on a quiz.

3 Likes

Bryan Caplan had ChatGPT take his Labor Economics midterm exam (substack article).

Many, perhaps most, of my friends are giddy about ChatGPT. My Inbox is full of emails from Tyler Cowen touting its glories. More concretely, Alex Tabarrok recently assured me that existing AIs could already beat the performance of most of our economics students in our beloved subject. Though I was on the road almost all of last month, I vowed to put these claims to the test upon my return.

My December prediction: The AI would earn a C-, which is what I normally give students who repeat some relevant slogans but show little actual comprehension of the material.

To get the ball rolling, I decided to give ChatGPT my 2022 midterm in Labor Economics. Although the grading isn’t double-blind, I already had an answer key and a curve.

Outcome: If ChatGPT had been a regular student, it would have earned…

This is in sharp contrast with Tyler Cowen’s breathless welcome of our new LLM overlords (blog post).

For any given output, I suspect fewer people will read my work. You don’t have to think the GPTs can copy me, but at the very least lots of potential readers will be playing around with GPT in lieu of doing other things, including reading me. After all, I already would prefer to “read GPT” than to read most of you. I also can give it orders more easily. At some point, GPT may substitute directly for some of my writings as well, but that conclusion is not required for what follows.

I expect I will invest more in personal talks, face to face, and also “charisma.” Why not?

Well-known, established writers will be able to “ride it out” for long enough, if they so choose. There are enough other older people who still care what they think, as named individuals, and that will not change until an entire generational turnover has taken place.

6 Likes

I won’t be able to sleep unless I know what grade Chat earned! :grin:

Edit: Oh, on second reading, I found the link.

2 Likes

I have found it useful for only one thing so far:

Generating non-functioning example code that, although it doesn’t work and is definitely not a way to write serious code, fills in some of the gaps left by the libraries’ documentation and StackExchange, without my wasting inordinate amounts of time debugging. I have been tracking these LLMs for a few years to see if they might become useful to me as a programmer, and I don’t see them going beyond that level of value (for programming) anytime soon; at best, a somewhat better GitHub Copilot (which has also been way overhyped).

And, yes, I have spent a reasonable amount of time messing around with “prompt engineering” in order to get even this level of benefit. I’m not going to say it was time wasted for 2 reasons:

  1. I did learn something I didn’t already know about prompt engineering for ChatGPT (which is different from prompt engineering for, say, GitHub Copilot), and
  2. I generally resorted to ChatGPT only after exhausting my standard search-ninja gifts, and although it still took some time to get sensible example code, the time paid off.

$10B? Not in programmer productivity (e.g., GitHub Copilot), unless maybe they can figure out a way to monetize value that would otherwise go to GitHub Copilot, Google, and StackExchange.

4 Likes

OpenAI has certainly generated a lot of buzz and hype. It is not clear what the monetization path is from here, though the situation is reminiscent of the early days of the search business. Crunchbase reports that as of mid-December, OpenAI was valued at $20B and projects up to $1B in annual revenue by 2024.

It is very clear that Microsoft is moving quickly in an attempt to make up lost ground against Google in the search business. Microsoft is also reported to provide Azure cloud computing for OpenAI, so a deal would make sense on that front, though the valuation is a separate question.

There are other recent data points in this space that should cool the enthusiasm. IBM Watson comes to mind as one example of an AI technology that was supposed to win big and rearrange the pieces on the chessboard. It famously won Jeopardy!, but IBM sold off the Watson Health unit in early 2022 in tacit recognition that the technology never took off in a meaningful way.

A somewhat similar story played out with Amazon Alexa. Despite significant hardware and software investments and very strong early projections (in 2019, $20B in revenue was projected by 2021), the unit was reported at the end of 2022 to have lost $10B that year alone, and was rumored to be responsible for a big chunk of the company’s recent layoffs.

So what is different this time around? Setting aside the technology differences, what has changed is the cost of capital. With interest rates no longer near zero, investors are going to put much more pressure on monetizing AI capabilities.

6 Likes


Nothing to see or worry about here—move along.

Here is my review of Avogadro Corp. This prompt was motivated by the suggestion in comment 109 that Microsoft’s interest in acquiring OpenAI might be due to a desire to end-run Google in search. But what if they’re also aiming to take on Gmail with an assistant that “improves” the mail users send?

5 Likes

ELOPe isn’t going to admit to it. By asking the question, you probably just made ELOPe’s list of threats.

2 Likes

Is this basically trying to learn how to ask the AI the question?

2 Likes

Anything is on the table. Gmail “assistants” have been available on and off. For instance, at one point you could integrate Grammarly to arguably improve the basic mechanics.

Gmail’s contribution to Google revenues may be muted; I’ve seen estimates ranging between $300M and $1.6B annually. One could argue that Gmail falls into the same category as Alexa, in that it soaks up costs without providing much new revenue potential, but at the same time it is now a “boat anchor” they can’t cut off because of the backlash that would cause.

Would having an AGI rewrite one’s emails to be more persuasive be a significant value add? And how would it avoid becoming Clippy’s infamous “It looks like you’re writing a letter. Would you like help?”

It could be a nice business model to sell persuasion-enhancing services alongside summarization services to counteract them…

3 Likes