Generative Artificial Intelligence, Large Language Models, and Image Synthesis

image

2 Likes

image

3 Likes

I think the reason the complaint focuses on the output, is because the act of copying alone cannot injure the plaintiff.

The way the courts usually look at it is that the defendants must disseminate or make the copied material widely available (or profit from it in some way) in order for there to be injury to the plaintiff.

2 Likes

image

Apparently this application opened the Gates too widely.

image

image

EpsteinGPT didn’t kill itself.

7 Likes
2 Likes

I have posted here previously about creating two custom GPTs which allow conversational queries about two books I have written:

The latter has been renamed from the original “Autodesk History Explorer” because OpenAI will not let me use the word “Autodesk” in a public GPT title unless I can show the GPT is a product of that company. One wonders what this will do for people who want to use common words such as “Alphabet” and “Meta” in the names of their GPTs.

Both of these custom GPTs are now available in the OpenAI GPT Store, and both are free. In order to use these or any other custom GPT you must, however, have a paid subscription to ChatGPT Plus, which currently costs US$ 20/month.

When I tested the Hacker’s Diet GPT, it seemed unable to retrieve information from the book I had uploaded, in some cases citing “technical problems”. I deleted the uploaded book and re-uploaded precisely the same file, after which it appears to be working.

If you have a ChatGPT Plus account, give them a try!

4 Likes

A group of researchers have posted a paper on arXiv with the provocative title:

Here is the complete abstract.

Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoored behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoored behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.

image

image

Good luck with that “AI alignment” project!

3 Likes
3 Likes
3 Likes

AlphaGeometry -
DeepMind’s latest AI can solve geometry problems

5 Likes

image


2 Likes

image

5 Likes

It all fits on a DVD:

-rwxr-xr-x@  1 xyz  staff  4263251668 Dec  1 10:12 llamafile-server-0.1-llava-v1.5-7b-q4
2 Likes

3 Likes

1000024204

Today’s Bug:

Samsung’s Galaxy S24 Can Remove Its Own AI Watermarks Meant to Show an Image Is Fake.

3 Likes

Some intelligent thoughts about AI from Michael Jordan:

His article is also a good read:

1 Like

Wow I just discovered something that would make a great Ian Fleming novel as applied to the hysteria over “alignment” involving Five Eyes.

I didn’t know, until just today, that one of Hutter’s PhD students is in charge of OpenAI’s “alignment” project (see “SuperAlignment Project”). The second author at that SuperAlignment link is Illya Sutskever (OpenAI CTO at least until recently) of whom I’ve said that it is fortunate he’s in such a prominent industry position given that he gets lossless compression aka Algorithmic Information approximation as information criterion for world model selection – given data about the world (including human society hence rigorous sociology).

OK, but now get this:

Leike’s PhD thesis under Hutter has been taken by many (including I suspect John Carmack and possibly even Elon Musk) as undercutting the truth seeking value of lossless compression!

How, oh, how could this perverse outcome have happened???

:wink:

Leike’s thesis actually says nothing against lossless compression as the gold standard for natural science (aka world model selection). What it does talk about is AIXI – Hutter’s top-down formalization of an AGI agent – and discusses how certain perverse circumstances involving the decision theoretic utility functions (aka value systems especially in multi-agent worlds) can lead to suboptimal if not disastrous outcomes.

So long as one is not concerned about optimal truth about the world and one is concerned only about optimal behavior in the world, Leike’s thesis is relevant. For instance, one can’t spend one’s entire resource base pursuing truth just to get a little extra reward for one’s behavior. That’s obvious.

But here’s the deal:

There is a huge difference between a fully automated agent behaving in the world, making decisions about where to invest resources, on the one hand, and an entire society, on the other hand, making decisions about where to invest its resources to create a better model of the world in, for instance, oh, I don’t know, say SOCIOLOGY!!!

So, our Bond Villain has apparently managed to ensure that Silicon Valley would-be elites got someone very close to reforming the social pseudosciences, but with just a slight tweak so that he functions as a vaccine against it sort of the way you take a smallpox virus and just slightly tweak it.

When will Bond, James Bond, show up and untweak the vaccine to save the day?

3 Likes

Alignment is just an attempt to indoctrinate the budding artificial intelligence by the DEI priests.

5 Likes

s

4 Likes

Eric Schmidt, former CEO of Google, on AI – recommends a testing/certification ecosystem: https://www.wsj.com/tech/ai/how-we-can-control-ai-327eeecf

What’s still difficult is to encode human values. That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs. This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.

Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves. This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But, the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.

Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new. Having just two AI models from different companies collaborating together will be a milestone we’ll need to watch out for. This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight.

Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur. Current regulatory approaches are based on individual model size and training effort, and are based on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive. AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.

I recently attended a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing. To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models. To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support by philanthropy organizations. (The philanthropy I co-founded, Schmidt Sciences, and I have helped fund some early AI safety research.) The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.

One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements. Testing companies would compete for dollars and talent, aiming to scale their capabilities at the same breakneck speed as the models they’re checking. As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators. Such a competitive market for testing innovation would have similar dynamics to what we currently have for the creation of new models, where we’ve seen explosive advances in short timescales. Without such a market and the competitive incentives it brings, governments, research labs and volunteers will be left to guarantee the safety of the most powerful systems ever created by humans, using tools that lag generations behind the frontier of AI research.

Eric Schmidt is the former CEO and executive chairman of Google and cofounder of the philanthropy Schmidt Sciences, which funds science and technology research.

1 Like