Andrew Ng says it
Yann LeCun says it too:
Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment.
They are the ones who are attempting to perform a regulatory capture of the AI industry.
You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.

If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI.
The vast majority of our academic colleagues are massively in favor of open AI R&D. Very few believe in the doomsday scenarios you have promoted.
You, Yoshua, Geoff, and Stuart are the singular-but-vocal exceptions.

Like many, I very much support open AI platforms because I believe in a combination of forces: people’s creativity, democracy, market forces, and product regulations.
I also know that producing AI systems that are safe and under our control is possible. I’ve made concrete proposals to that effect.
This will all drive people to do the Right Thing.

You write as if AI is just happening, as if it were some natural phenomenon beyond our control.
But it’s not. It’s making progress because of individual people that you and I know. We, and they, have agency in building the Right Things.
Asking for regulation of R&D (as opposed to product deployment) implicitly assumes that these people and the organizations they work for are incompetent, reckless, self-destructive, or evil. They are not.

I have made lots of arguments that the doomsday scenarios you are so afraid of are preposterous. I’m not going to repeat them here. But the main point is that if powerful AI systems are driven by objectives (which include guardrails), they will be safe and controllable because we set those guardrails and objectives.
(Current auto-regressive LLMs are not driven by objectives, so let’s not extrapolate from their current weaknesses.)

Now about open source: your campaign is going to have the exact opposite effect of what you seek.
In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them.
Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.
This requires that contributions to those platforms be crowd-sourced, a bit like Wikipedia.
That won’t work unless the platforms are open.

The alternative, which will inevitably happen if open-source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platforms and hence control people’s entire digital diet.
What does that mean for democracy?
What does that mean for cultural diversity?

THIS is what keeps me up at night.