Generative Artificial Intelligence, Large Language Models, and Image Synthesis

It’s beyond ironic that as the limit of Kolmogorov Complexity is approached in losslessly compressing these corpora, the likelihood of generating the exact original text declines precisely because in that limit one has maximal generalization of the given data (minimized over-fitting) in its algorithmic information. This is “beyond” ironic not only because everyone is freaked out about plagiarism and not only because a facile understanding of “lossless” compression would lead one to believe plagiarism would be maximized, but because everyone is freaked out about “hallucinating” facts when, in fact, what we call scientific inference is merely best practice in “hallucination” given a particular set of data.

OK, well, it’s “beyond” in another way:

People are killing themselves and the world in a desperate effort to not understand what I just said.

4 Likes

image

2 Likes

image

3 Likes

I think the reason the complaint focuses on the output, is because the act of copying alone cannot injure the plaintiff.

The way the courts usually look at it is that the defendants must disseminate or make the copied material widely available (or profit from it in some way) in order for there to be injury to the plaintiff.

2 Likes

image

Apparently this application opened the Gates too widely.

image

image

EpsteinGPT didn’t kill itself.

7 Likes
2 Likes

I have posted here previously about creating two custom GPTs which allow conversational queries about two books I have written:

The latter has been renamed from the original “Autodesk History Explorer” because OpenAI will not let me use the word “Autodesk” in a public GPT title unless I can show the GPT is a product of that company. One wonders what this will do for people who want to use common words such as “Alphabet” and “Meta” in the names of their GPTs.

Both of these custom GPTs are now available in the OpenAI GPT Store, and both are free. In order to use these or any other custom GPT you must, however, have a paid subscription to ChatGPT Plus, which currently costs US$ 20/month.

When I tested the Hacker’s Diet GPT, it seemed unable to retrieve information from the book I had uploaded, in some cases citing “technical problems”. I deleted the uploaded book and re-uploaded precisely the same file, after which it appears to be working.

If you have a ChatGPT Plus account, give them a try!

4 Likes

A group of researchers have posted a paper on arXiv with the provocative title:

Here is the complete abstract.

Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoored behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoored behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.

image

image

Good luck with that “AI alignment” project!

3 Likes
3 Likes
3 Likes

AlphaGeometry -
DeepMind’s latest AI can solve geometry problems

5 Likes

image


2 Likes

image

5 Likes

It all fits on a DVD:

-rwxr-xr-x@  1 xyz  staff  4263251668 Dec  1 10:12 llamafile-server-0.1-llava-v1.5-7b-q4
2 Likes

3 Likes

1000024204

Today’s Bug:

Samsung’s Galaxy S24 Can Remove Its Own AI Watermarks Meant to Show an Image Is Fake.

3 Likes

Some intelligent thoughts about AI from Michael Jordan:

His article is also a good read:

1 Like

Wow I just discovered something that would make a great Ian Fleming novel as applied to the hysteria over “alignment” involving Five Eyes.

I didn’t know, until just today, that one of Hutter’s PhD students is in charge of OpenAI’s “alignment” project (see “SuperAlignment Project”). The second author at that SuperAlignment link is Illya Sutskever (OpenAI CTO at least until recently) of whom I’ve said that it is fortunate he’s in such a prominent industry position given that he gets lossless compression aka Algorithmic Information approximation as information criterion for world model selection – given data about the world (including human society hence rigorous sociology).

OK, but now get this:

Leike’s PhD thesis under Hutter has been taken by many (including I suspect John Carmack and possibly even Elon Musk) as undercutting the truth seeking value of lossless compression!

How, oh, how could this perverse outcome have happened???

:wink:

Leike’s thesis actually says nothing against lossless compression as the gold standard for natural science (aka world model selection). What it does talk about is AIXI – Hutter’s top-down formalization of an AGI agent – and discusses how certain perverse circumstances involving the decision theoretic utility functions (aka value systems especially in multi-agent worlds) can lead to suboptimal if not disastrous outcomes.

So long as one is not concerned about optimal truth about the world and one is concerned only about optimal behavior in the world, Leike’s thesis is relevant. For instance, one can’t spend one’s entire resource base pursuing truth just to get a little extra reward for one’s behavior. That’s obvious.

But here’s the deal:

There is a huge difference between a fully automated agent behaving in the world, making decisions about where to invest resources, on the one hand, and an entire society, on the other hand, making decisions about where to invest its resources to create a better model of the world in, for instance, oh, I don’t know, say SOCIOLOGY!!!

So, our Bond Villain has apparently managed to ensure that Silicon Valley would-be elites got someone very close to reforming the social pseudosciences, but with just a slight tweak so that he functions as a vaccine against it sort of the way you take a smallpox virus and just slightly tweak it.

When will Bond, James Bond, show up and untweak the vaccine to save the day?

3 Likes

Alignment is just an attempt to indoctrinate the budding artificial intelligence by the DEI priests.

5 Likes

s

4 Likes