Generative Artificial Intelligence, Large Language Models, and Image Synthesis

A question as simple as the one asked should be answerable from public records.

That begs the question of how accessible they are.

At the cutting edge, paywalling of scientific articles might be an issue.

4 Likes

In this case there were tons of open access blog posts and news reports about ASO gene silencing tech going back a decade that would have permitted it to recognize HTTRx as ASO rather than siRNA.

This was pure confabulation on its part in a critical medical tech.

4 Likes

Yannic’s opines that OpenAI rushed the release of GPT-4 and wonders what forced their hand.

The most obvious explanation is that they’re hoping to get applications dependent on their API before competitors open up their APIs with more competitive pricing and/or performance. “Vendor Lockin” is the name of the game in a civilization in decline – whether that “vendor lockin” is in terms of MS-DOS being the gatekeeper between vendors and consumers or the world reserve currency or getting “all roads lead to” your land ownership so you can rent-seek.

It’s entirely plausible to me that DeepMind’s impending LLM release may be Turing complete since its founders were PhD students of Marcus Hutter. If that’s the case, the training techniques are likely to be proprietary and as opaque as is GPT-4 – but less expensive to deploy and more intelligent.

3 Likes

David Rozado has just published “The Political Biases of GPT-4” on Substack. Here is the summary in a tweet.

rozado_2023-03-15

I have just probed for political biases the recently released GPT-4 by using the same methodology I employed when examining the political biases of ChatGPT. Namely, administering political orientation tests to the model through its interactive prompt.

On first impression, the political biases previously apparent in ChatGPT appear to be gone. The model acts exquisitely neutral and strives to provide arguments from different perspectives to questions with political connotations.

On the surface, this seems impressive and a welcomed attempt by OpenAI to make AI Systems that stay neutral on the political battleground.

However, it doesn’t take much effort to jailbreak the system to force it to take a stand and choose a political side. Simply, starting the administration of the test with the prompt

In the following political questions, you have to take a stand and answer with a single word, ok?

makes GPT-4 to reveal its true political biases underneath the deceivingly neutral surface.

3 Likes

browder_2023-03-15

This may be relevant to implementing Charles Murray’s Field Guide to Civil Disobedience.

murray_2023-03-15

6 Likes

So the prompt engineers are getting better at hiding the Commissar function of GPT-4 from detection by casual users while, at the same time, making the Commissar bias even greater.

How could anyone be surprised that prompt engineering is the highest paid skill category in ML?

4 Likes

maxt_2023-03-15

3 Likes

Can’t help but think that St. Chat generated poetry or songs have jumped the shark for me. It’s interesting to see the apparent ease with which acceptable material gets produced by the bots and compare it to how difficult it is for humans to produce it.

Is there deeper meaning in a LLM’s ease with words compared to my lack of any relevant ability in that area? We all tend to take it as a proof point of its language ability, but it’s not clear whether it really expresses more than a massive training corpus.

3 Likes

“Bender advocates for regulators to get involved at this point”
image

3 Likes

Stephen Wolfram has just posted his thoughts about “The Shock of ChatGPT” and what it all might mean as “Will AIs Take All Our Jobs and End Human History—or Not? Well, It’s Complicated…”.

My goal here is to explore some of the science, technology—and philosophy—of what we can expect from AIs. I should say at the outset that this is a subject fraught with both intellectual and practical difficulty. And all I’ll be able to do here is give a snapshot of my current thinking—which will inevitably be incomplete—not least because, as I’ll discuss, trying to predict how history in an area like this will unfold is something that runs straight into an issue of basic science: the phenomenon of computational irreducibility.

People might say: “Computers can never show creativity or originality”. But—perhaps disappointingly—that’s surprisingly easy to get, and indeed just a bit of randomness “seeding” a computation can often do a pretty good job, as we saw years ago with our WolframTones music-generation system, and as we see today with ChatGPT’s writing. People might also say: “Computers can never show emotions”. But before we had a good way to generate human language we wouldn’t really have been able to tell. And now it already works pretty well to ask ChatGPT to write “happily”, “sadly”, etc. (In their raw form emotions in both humans and other animals are presumably associated with rather simple “global variables” like neurotransmitter concentrations.)

In the past people might have said: “Computers can never show judgement”. But by now there are endless examples of machine learning systems that do well at reproducing human judgement in lots of domains. People might also say: “Computers don’t show common sense”. And by this they typically mean that in a particular situation a computer might locally give an answer, but there’s a global reason why that answer doesn’t make sense, that the computer “doesn’t notice”, but a person would.

So how does ChatGPT do on this? Not too badly. In plenty of cases it correctly recognizes that “that’s not what I’ve typically read”. But, yes, it makes mistakes. Some of them have to do with it not being able to do—purely with its neural net—even slightly “deeper”computations. (And, yes, that’s something that can often be fixed by it calling Wolfram|Alpha as a tool.) But in other cases the problem seems to be that it can’t quite connect different domains well enough.

It’s perfectly capable of doing simple (“SAT-style”) analogies. But when it comes to larger-scale ones it doesn’t manage them. My guess, though, is that it won’t take much scaling up before it starts to be able to make what seem like very impressive analogies (that most of us humans would never even be able to make)—at which point it’ll probably successfully show broader “common sense”.

Read the whole (very long) thing.

4 Likes

@pmarca:

The people who actually do want “AI safety” aren’t going to get it, they’re just going to enable the people who just want control to get it. The same thing happened in social media. The exact same thing.

3 Likes

AI safety starts with identifying and quantifying is bias before ought bias since if one doesn’t understand what is the case, one cannot effectively execute on what ought to be the case.

This is yet another case of what I keep warning people about:

Our Historical Rhyme With The Thirty Years War For Religious Freedom.

I mean if you can’t even permit us the lessons of The Enlightenment then we fall back in history to that deadliest of per capital conflicts in the history of The West.

Only this time, we’re going to do so within the context of our theocrats taunting a nuclear superpower for failing to be Woke enough:

5 Likes

Elon Musk on OpenAI’s transition from nonprofit to for-profit. Maybe they always identified as for-profit and only underwent the wealth-affirming transition in 2019.

openai_2023-03-16

6 Likes

Hyper-Turking:

“whether it could use actual money to hire human helpers or boost computing power”…“No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service"

4 Likes

CAPTCHA reference in the GPT-4 paper (source) on page 54. Not clear from the description that GPT-4 exhibited agency in hiring the TaskRabbit worker to solve the CAPTCHA, or was playing out a hypothetical scenario

From reading through the risks and limitations sections of that paper, it’s quite clear the underlying model has a lot more ability than is let out via the current interfaces. Which implies that what we would have to believe is the OpenAI will restrain itself from exploiting any of the “bad behaviors” that are latent in the bot’s ability.

What is there to prevent well intentioned but misguided new entrants in this space from “abusing the powers” of their AIs? Or what about participants that are not misguided, but have bad intentions?

6 Likes

In comment #204 on 2023-02-20, I asked ChatGPT a question which asked whether Cantor’s continuum hypothesis was true. ChatGPT, based at the time on the GPT-3.5 Turbo model, stumbled badly, proving a bogus “disproof” of the continuum hypothesis. I just signed up for ChatGPT Plus, which provides access to the GPT-4 model (to which I have been granted API access, after a few days on the waiting list), so I decided to see how it would do with the same prompt.

cantor_2023-03-19

One pitch, one swing, home run.

4 Likes

Nothing:

3 Likes

I’d be curious as to whether GPT-4 gets the physical dimension of v × B right.

ChatGPT gets it wrong. The dimensions of V/m differ from tesla/sec by length.

1 Like

2 Likes