"Pause Giant AI Experiments"

I’ve long been of the opinion that the majority of cubicle-dwelling “knowledge workers” are gainfully employed as a proxy for maintaining the status quo: keeping the right-hand side of the Gaussian intelligence distribution busy and off the streets, thus avoiding rocking the boat and displacing the social order.

When David Graeber’s Bullshit Jobs work came out, I thought the first half of his thesis implicitly supported this view.

8 Likes

Seriously, it would not be Arnold Schwarzenegger that Skynet sent back to the present to destroy us all. It would be Eliezer Yudkowsky. The Eliezer Terminator’s mission is to suck all the oxygen out of the room just as the Hutter Prize, in combination with AIXI, could have provided a way of clarifying the distinction between “is” and “ought” in a mathematically rigorous manner, so that people could start to think more rigorously about what “alignment” really is.

And, no, I’m NOT talking about “the utility function” of the “AGI” in the loose terminology used by the LessWrong crowd. AIXI and its progeny (including Schmidhuber’s variant, which replaces the size prior with the speed prior) constitute the only* formal definition of AGI out there (i.e., NOT “not even wrong”), and it offers a very clear and simple way of analyzing the problem:

AIXI = SDT ⊗ AIT

The utility function resides in SDT (Sequential Decision Theory).

Before you can think clearly about what “the utility function” is, you must be able to factor out what it is not. It is not AIT (Algorithmic Information Theory). So, to first order, what you do is figure out how to go all Butlerian Jihad on SDT and let AIT proceed, since AIT is nothing more than the refinement of the natural sciences for the information age: the algorithmic information age. To second order, you can even permit the Sequential aspect of SDT and leave the Decision up to the human symbiont, who chooses from a range of options, each accompanied by estimates of externalities based on the symbiont’s prior decisions.
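For readers who want the formal object on the table, here (from memory, so treat notational details as approximate) is Hutter’s AIXI action-selection rule. The outer expectimax over reward sums is the SDT half; the inner weighting over programs run on a universal Turing machine is the AIT half:

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[r_k + \cdots + r_m\bigr] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

where the $a_i$ are actions, the $o_i r_i$ are observation/reward pairs, $m$ is the horizon, $U$ is a universal Turing machine, and $\ell(q)$ is the length of program $q$. Everything about “ought” lives in the bracketed reward sum; everything about “is” lives in the program-length prior.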

Now, perhaps what Eliezer is really after is the castration of the natural sciences. This would certainly make sense given what the social sciences would probably say about people like him if the social pseudosciences were replaced by the corresponding sciences.

PS: At several points in the interview with Fridman, The Eliezer Terminator refers to “prizes” that may be offered for advancing science, but he never refers to Algorithmic Information Theory’s framework for advancing science: pick a dataset (any dataset the contestants/disputants agree includes support for their pet theories), and then fund a lossless compression prize with all the money that would ordinarily go to theoreticians (as opposed to the experimentalists/observers).

The problem with this, of course, is that there is “insufficient room for graft and corruption” or virtue signaling by philanthropists, etc. Every dollar paid out would very likely advance theoretical explanatory power per bit of theory.
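As a purely illustrative sketch of how such a prize could be administered, here is the scoring logic in Python. The file layout, function names, and payout rule are my assumptions, loosely modeled on the Hutter Prize’s “size of the self-contained decompressor plus compressed payload” metric; they are not the actual rules of any existing prize.

```python
import os

def score(decompressor_path: str, archive_path: str) -> int:
    """Figure of merit: total bytes of a self-contained decompressor plus its
    compressed payload (smaller is better).  Paths are hypothetical."""
    return os.path.getsize(decompressor_path) + os.path.getsize(archive_path)

def payout(prize_pool: float, previous_record: int, new_score: int) -> float:
    """Pay a fraction of the pool proportional to the relative improvement
    over the standing record (illustrative payout rule only)."""
    if new_score >= previous_record:
        return 0.0
    return prize_pool * (previous_record - new_score) / previous_record
```

Every dollar paid out is then tied to a measurable reduction in the number of bits needed to reproduce the agreed-upon data exactly, which is the sense in which explanatory power per bit of theory is being purchased.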

* All the pedantry-signalers at LessWrong can do in response to AIXI is nitpick at such things as “choice of Turing machine for AIT” or “incomputability”, etc., which amounts to straining at gnats while swallowing camels. Aside from the fact that there are scholarly answers to these gnats, that isn’t the point. The scholarly answers are obvious to the most casual observer, yet the scholarship is summarily ignored or subjected to further “critique” with even less merit, all under the haze of fearmongering about the “existential risk of misalignment”. Well, duh, if you’re that worried about “misalignment”, why are you subjecting the field to limbic writhing rather than going with the best theory out there and rigorously defining your terms as AIXI has? Why offer nothing but bullshit critique when you have nothing of your own to compare it with? Why shouldn’t we dissect you to find the Skynet chips?

7 Likes

I think you are giving Eliezer much too much credit.

PS: Quoting John’s post makes it look like he expresses the opinion that St Chat’s AGI descendants will necessarily kill all of us. I looked up the quote in the context of his post, and it’s actually a quote from Eliezer’s Time article, not John’s position.

7 Likes

I had no confusion on this point, which is why I didn’t bother to emphasize it as you just did. Perhaps to others it wasn’t as clear.

On 2023-04-17 in a post on the Open Philanthropy Web site, Luke Muehlhauser, “Senior Program Officer, AI Governance and Policy”, proposed “12 tentative ideas for US AI policy”, noting that “These are my own tentative opinions, not Open Philanthropy’s”.

This is a witches’ brew of bad/stupid ideas, reminiscent of when the U.S. thought it could continue to snoop on everybody in the world, including its own citizens, by declaring encryption technology a “munition” and mandating back-doors and “key escrow” in all implementations of encryption (see “Clipper chip” if you don’t remember that farcical era and “Crypto Wars” for links to different episodes in the long, twilight struggle). Here are the bullet points being proposed; see the complete document for details.

  1. Software export controls
  2. Require hardware security features on cutting-edge chips
  3. Track stocks and flows of cutting-edge chips, and license big clusters
  4. Track and require a license to develop frontier AI models
  5. Information security requirements
  6. Testing and evaluation requirements
  7. Fund specific genres of alignment, interpretability, and model evaluation R&D
  8. Fund defensive information security R&D
  9. Create a narrow antitrust safe harbor for AI safety & security collaboration
  10. Require certain kinds of AI incident reporting
  11. Clarify the liability of AI developers for concrete AI harms
  12. Create means for rapid shutdown of large compute clusters and training runs

The author apparently believes that imposing this Soviet-style regime on researchers and companies in the U.S. will work because (footnote 1):

Many of these policy options would plausibly also be good to implement in other jurisdictions, but for most of them the US is a good place to start (the US is plausibly the most important jurisdiction anyway, given the location of leading companies, and many other countries sometimes follow the US), and I know much less about politics and policymaking in other countries.

The possibility that this will cause AI research to leave the U.S. for free jurisdictions and/or forfeit control of AI to those pursuing it in, say, China, is not discussed. Presumably, he’d be fine with living in a world where AI was developed and controlled by the Chinese Communist Party.

“Open Philanthropy”, the author’s employer, is identified with the cult of “effective altruism”.

10 Likes

On 2023-04-17, Together announced that its RedPajama project had released a reproduction of the 1.2 trillion token LLaMA training dataset.

[Image: together_2023-04-17]

The most capable foundation models today are closed behind commercial APIs, which limits research, customization, and their use with sensitive data. Fully open-source models hold the promise of removing these limitations, if the open community can close the quality gap between open and closed models. Recently, there has been much progress along this front. In many ways, AI is having its Linux moment. Stable Diffusion showed that open-source can not only rival the quality of commercial offerings like DALL-E but can also lead to incredible creativity from broad participation by communities around the world. A similar movement has now begun around large language models with the recent release of semi-open models like LLaMA, Alpaca, Vicuna, and Koala; as well as fully-open models like Pythia, OpenChatKit, Open Assistant and Dolly.

We are launching RedPajama, an effort to produce a reproducible, fully-open, leading language model. RedPajama is a collaboration between Together, Ontocord.ai, ETH DS3Lab, Stanford CRFM, Hazy Research, and MILA Québec AI Institute. RedPajama has three key components:

  1. Pre-training data, which needs to be both high quality and have broad coverage
  2. Base models, which are trained at scale on this data
  3. Instruction tuning data and models, which improve the base model to make it usable and safe

Today, we are releasing the first component, pre-training data.

You can download the complete dataset, or a smaller random sample, from Hugging Face. The full dataset is 5 TB uncompressed and 3 TB as a compressed download. The software used to prepare the dataset may be downloaded from GitHub; it allows anybody with high-speed Internet access and a great deal of patience to independently reproduce the dataset.
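For anyone wanting to poke at the data, here is a minimal sketch using the Hugging Face datasets library. The dataset identifier and the “text” column name are assumptions on my part; confirm them on the Hugging Face page linked above before relying on them.

```python
# Minimal sketch: load the RedPajama random sample from Hugging Face.
# The dataset identifier and the "text" column name are assumptions;
# check the dataset card before relying on them.
from datasets import load_dataset

sample = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", split="train")

print(sample)                    # row count and column names
print(sample[0]["text"][:500])   # first 500 characters of the first document
```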

Here are the sources of data in the dataset, with token counts compared to those in LLaMA.

[Image: together_a_2023-04-18: table of RedPajama data sources and token counts compared to LLaMA]

With the pre-training data now published, here are the next steps:

Having reproduced the pre-training data, the next step is to train a strong base model. As part of the INCITE program, with support from Oak Ridge Leadership Computing Facility (OLCF), we are training a full suite of models, with the first becoming available in the coming weeks.

With a strong base model in hand, we are excited to instruction tune the models. Alpaca illustrated the power of instruction tuning – with merely 50K high-quality, diverse instructions, it was able to unlock dramatically improved capabilities. Via OpenChatKit, we received hundreds of thousands of high-quality natural user instructions, which will be used to release instruction-tuned versions of the RedPajama models.
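For context on what “instruction tuning data” looks like in practice, Alpaca-style datasets are typically lists of records pairing an instruction (and optional input) with a desired output. The example below is purely illustrative; it is not the actual RedPajama or OpenChatKit schema or data.

```python
# Illustrative Alpaca-style instruction-tuning records (schema and contents
# invented for illustration; not the actual RedPajama or OpenChatKit data).
instruction_data = [
    {
        "instruction": "Summarize the following paragraph in one sentence.",
        "input": "RedPajama is an effort to produce a reproducible, fully-open language model...",
        "output": "RedPajama aims to build a leading language model entirely in the open.",
    },
    {
        "instruction": "Translate 'open source' into French.",
        "input": "",
        "output": "code source ouvert",
    },
]
```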

7 Likes

[Image: tegmark_2023-04-20]

5 Likes

[Image: hal_2023-04-20]

8 Likes

SemiAnalysis has obtained and published an internal memo from a Google researcher titled “We Have No Moat, And Neither Does OpenAI”, about which they state:

The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We simply are a vessel to share this document which raises some very interesting points.

The memo argues that Google and OpenAI are losing the race toward artificial general intelligence (AGI) to open source projects, and that (my phrasing) developments on the open source AI front have accelerated in the last two months at a rate reminiscent of the run-up to a Singularity.

We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.

I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today. Just to name a few:

While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:…

Here is a chart illustrating the rate at which open source is catching up to the big proprietary models.

In many ways, this shouldn’t be a surprise to anyone. The current renaissance in open source LLMs comes hot on the heels of a renaissance in image generation. The similarities are not lost on the community, with many calling this the “Stable Diffusion moment” for LLMs.

In both cases, low-cost public involvement was enabled by a vastly cheaper mechanism for fine tuning called low rank adaptation, or LoRA, combined with a significant breakthrough in scale (latent diffusion for image synthesis, Chinchilla for LLMs). In both cases, access to a sufficiently high-quality model kicked off a flurry of ideas and iteration from individuals and institutions around the world. In both cases, this quickly outpaced the large players.

These contributions were pivotal in the image generation space, setting Stable Diffusion on a different path from Dall-E. Having an open model led to product integrations, marketplaces, user interfaces, and innovations that didn’t happen for Dall-E.

The effect was palpable: rapid domination in terms of cultural impact vs the OpenAI solution, which became increasingly irrelevant. Whether the same thing will happen for LLMs remains to be seen, but the broad structural elements are the same.

It’s out there, it’s replicating and building on itself, and there’s nothing that can be done to control it.

Individuals are not constrained by licenses to the same degree as corporations

Much of this innovation is happening on top of the leaked model weights from Meta. While this will inevitably change as truly open models get better, the point is that they don’t have to wait. The legal cover afforded by “personal use” and the impracticality of prosecuting individuals means that individuals are getting access to these technologies while they are hot.

“An entire planet’s worth of free labour…”

Paradoxically, the one clear winner in all of this is Meta. Because the leaked model was theirs, they have effectively garnered an entire planet’s worth of free labor. Since most open source innovation is happening on top of their architecture, there is nothing stopping them from directly incorporating it into their products.
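The mechanism the memo credits for all this cheap iteration, low rank adaptation (LoRA), is simple: freeze the pretrained weights and learn only a small low-rank correction, which is why fine tuning now fits on consumer hardware. Here is a minimal PyTorch sketch of the idea; it is my own illustration, not code from the memo or from the original LoRA paper, and the rank and scaling values are arbitrary defaults.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a trainable low-rank
    update: y = W x + (alpha / r) * B A x, where A is r x in and B is out x r."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # pretrained weights stay fixed
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Only A and B are trained: a tiny fraction of the base layer's parameters.
layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)   # 2 * 8 * 4096 = 65,536 versus ~16.8 million frozen weights
```

Because only the small adapter matrices are updated, a single consumer GPU (or even a laptop) can fine tune a model whose full-weight training would require a data-center cluster, which is the economic point the memo is making.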

Here is the timeline of recent events in the explosive growth of open source AI. Each of these is explained in a paragraph in the memo.

  • Feb 24, 2023 - LLaMA is Launched
  • March 3, 2023 - The Inevitable Happens (LLaMA leaked to the public)
  • March 12, 2023 - Language models on a Toaster (LLaMA running on Raspberry Pi)
  • March 13, 2023 - Fine Tuning on a Laptop (Stanford releases Alpaca, with tuning on a consumer GPU)
  • March 18, 2023 - Now It’s Fast - Tuning demonstrated on a MacBook with no GPU
  • March 19, 2023 - A 13B model achieves “parity” with Bard (Vicuna trained for US$ 300, with GPT-4 used to judge quality)
  • March 25, 2023 - Choose Your Own Model (Nomic’s GPT4All integrates model and ecosystem)
  • March 28, 2023 - Open Source GPT-3 (Cerebras trains GPT-3 from scratch)
  • March 28, 2023 - Multimodal Training in One Hour (LLaMA-Adapter tunes in one hour, with 1.2 million parameters)
  • April 3, 2023 - Real Humans Can’t Tell the Difference Between a 13B Open Model and ChatGPT (Berkeley launches Koala, trained entirely with free data for a training cost of US$ 100)
  • April 15, 2023 - Open Source RLHF at ChatGPT Levels (Open Assistant launches a model and dataset for Reinforcement Learning from Human Feedback [RLHF]. Alignment is now in the wild.)

11 Likes

Goodness Gracious! Why didn’t Kurzweil warn us that parameter distillation would be so effective?

snark

It all goes back to taxing economic activity rather than net assets. You can’t afford hood ornaments like Kurzweil to be your AI rabbi unless you have monopoly rents settling into your opioid receptors.

5 Likes

Good one!

4 Likes

At 17:00 UTC on 2023-05-11, Zach Weissmueller of Reason magazine will moderate a debate on YouTube between Jaan Tallinn, a founder of Skype and co-founder of the Future of Life Institute, which promoted the open letter calling for a pause in AI experimentation, and Robin Hanson, professor of economics at George Mason University, on whether we should fear the development of artificial intelligence and support measures to pause, restrain, or regulate it.

Join Reason’s Zach Weissmueller for a discussion of an open letter calling for an immediate pause to artificial intelligence research on large language models like GPT-4 with economist Robin Hanson and tech investor Jaan Tallinn, who was part of the software team responsible for creating Skype and a co-founder of the Future of Life Institute that organized and published the open letter.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” reads an open letter organized by the Future of Life Institute and endorsed by over 27,000 signatories, including Elon Musk and Apple co-founder Steve Wozniak.

Since the publication of the letter on March 22, the White House has summoned the leaders of the nation’s top artificial intelligence companies for a discussion about regulation. Senate majority leader Chuck Schumer (D–N.Y.) is “taking early steps toward legislation to regulate artificial intelligence technology,” according to reporting from Axios. Sam Altman, CEO of OpenAI, the company responsible for ChatGPT, has said that optimizing artificial intelligence for the good of society “requires partnership with government and regulation.”

But economist Robin Hanson worries that too much of today’s fear of artificial intelligence is a more generalized “future fear” that will imperil technological progress likely to benefit humanity.

“Most people have always feared change. And if they had really understood what changes were coming, most probably would have voted against most changes we’ve seen,” Hanson wrote in a recent post on the topic. “For example, fifty years ago the public thought they saw a nuclear energy future with unusual vividness, and basically voted no.”

Join Reason’s Zach Weissmueller this Thursday at 1 p.m. Eastern [17:00 UTC] for a discussion of the risks and rewards of A.I. with Hanson, an associate professor at George Mason University and research associate at the Future of Humanity Institute at Oxford, and Jaan Tallinn, a tech investor, part of the software team responsible for the technology behind Skype, and co-founder of the Future of Life Institute, which organized and published the open letter calling for a pause.

5 Likes