“Regulate Us!”—U.S. Senate Judiciary Committee Hearing on Artificial Intelligence

On 2023-05-16 the U.S. Senate Judiciary Committee held a hearing on artificial intelligence (AI) and possible regulation thereof. Witnesses testifying were Sam Altman, chief executive officer of OpenAI, developer of ChatGPT, GPT-4, DALL-E, and other AI products; Gary Marcus, a retired New York University professor and co-founder of Robust.AI, developer of machine learning software for autonomous robots; and Christina Montgomery, vice president and “Chief Privacy and Trust Officer” for IBM.

The almost three hours of testimony, questions, and answers from the solons in attendance might be summed up as “artificial intelligence meets natural stupidity”, as legislator after legislator recounted their own experiences “using ‘AI’ ”. When speaking of the prospect of international regulation of AI, several Senate-bots cited CERN as the international body that “regulates nuclear energy”, apparently confusing a particle physics laboratory with the International Atomic Energy Agency.

Most stunning, until you think a bit about the motivations, was the unanimity among the industry panelists that their work should be subject to regulation, licensing, and prohibition or surveillance of work deemed “hazardous”. The U.S. Food and Drug Administration, responsible for the deaths of millions of people through its heavy-handed regulation and tortuous approval process for new drugs and medical procedures, was cited as a model for an agency charged with AI regulation; Marcus testified that this should be a “cabinet-level” organization. The panelists spoke approvingly of the European Union’s proposed “AI Act”, discussed here on 2023-05-16 in “EU AI Act To Target US Open Source Software”.

As that post noted, heavy-handed regulation and licensing of artificial intelligence software would drive a stake through the burgeoning open source AI software development now underway, or else drive it underground and/or to jurisdictions more tolerant of potentially disruptive technologies. This, of course, would serve to entrench the present large players in the sector such as OpenAI (a stalking horse for Microsoft), Google/DeepMind, and IBM, while giving coercive government a small number of high profile targets to command, rather than a broadly-based international community of independent developers contributing to open source in a fiercely competitive market. In such a community, developers look over one another’s shoulders for risks, vulnerabilities, and unforeseen consequences which, under a licensing regime, would remain confined within the walls of the oligopoly of regulated and licensed AI developers. This, as a moment’s reflexion will make obvious, dramatically increases the existential risk from AI: it makes more likely the emergence of a “singleton” or oligarchy of AIs seizing control, an outcome which would be highly improbable in a world with thousands of independently developed AIs contending for customer satisfaction.

It is beyond irony and parody that Altman’s company, OpenAI, was founded in 2015 by a group including Elon Musk to reduce the risk from closed, proprietary development of AI systems, promising to “freely collaborate” with other institutions and researchers by making its patents and research open to the public, hence the name. This noble goal came to an end in 2019, when the company created a for-profit (“capped profit”) subsidiary and accepted a US$ 1 billion investment from Microsoft, known worldwide for the openness of its products, for the care taken in testing them before broad-based deployment, and for protecting customers from risks due to vulnerabilities in its software. In January 2023, Microsoft kicked in another US$ 10 billion over ten years. It was later revealed that OpenAI had provided Microsoft early access to GPT-4 for use in its Bing search engine before announcing the model to its own customers.

When OpenAI was founded in 2015, Elon Musk said,

“What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity.”

Musk acknowledged that “there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about”; nonetheless, the best defense is “to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.”

So much for that.


The surveillance state would, of course, chase this kind of “AI Terrorism” down with accusations of “white supremacy” or “right wing extremism”, and receive the full-throated support of (Good Lord, I can’t come up with a disgusting enough phrase for) “the IT industry”. The exceptions will be the Fortune 500, government, and other institutions that can afford to take lawsuits protecting their interests to the Supreme Court.

I tried warning people about the consequences of failing to replace the 16th Amendment with a tax on net assets: attracting the most highly evolved rent seekers from around the world to make like the Face Hugger from Alien, bashing through all border controls, most egregiously using Affirmative Action, extended to all non-white immigrants in the 1970s, to shatter any “glass ceilings” and take over Western Civilization. My God, I can’t tell you how much I hate “Libertarians” for their part in this, even though I know it was probably their loss of agency to more subtle and malign forces, possessing them to do this to us and to themselves. It’s difficult to integrate the full force of the societal trauma at all levels of even individual consciousness and keep things in perspective about “Reason” magazine et al.


Andrew Ng says it

Yann LeCun says it too:

Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment.
They are the ones who are attempting to perform a regulatory capture of the AI industry.
You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.

If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI.

The vast majority of our academic colleagues are massively in favor of open AI R&D. Very few believe in the doomsday scenarios you have promoted.
You, Yoshua, Geoff, and Stuart are the singular-but-vocal exceptions.

Like many, I very much support open AI platforms because I believe in a combination of forces: people’s creativity, democracy, market forces, and product regulations.
I also know that producing AI systems that are safe and under our control is possible. I’ve made concrete proposals to that effect.
This will all drive people to do the Right Thing.

You write as if AI is just happening, as if it were some natural phenomenon beyond our control.
But it’s not. It’s making progress because of individual people that you and I know. We, and they, have agency in building the Right Things.
Asking for regulation of R&D (as opposed to product deployment) implicitly assumes that these people and the organizations they work for are incompetent, reckless, self-destructive, or evil. They are not.

I have made lots of arguments that the doomsday scenarios you are so afraid of are preposterous. I’m not going to repeat them here. But the main point is that if powerful AI systems are driven by objectives (which include guardrails), they will be safe and controllable because we set those guardrails and objectives.
(Current Auto-Regressive LLMs are not driven by objectives, so let’s not extrapolate from their current weaknesses.)

Now about open source: your campaign is going to have the exact opposite effect of what you seek.
In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them.
Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.
This requires that contributions to those platforms be crowd-sourced, a bit like Wikipedia.
That won’t work unless the platforms are open.

The alternative, which will inevitably happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platforms and hence control people’s entire digital diet.
What does that mean for democracy?
What does that mean for cultural diversity?

THIS is what keeps me up at night.
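
LeCun’s distinction between objective-driven systems and current LLMs can be made concrete with a minimal sketch. Everything below (the `Action` type, the power-draw guardrail, the toy numbers) is invented for illustration and is not code from any real system; it simply shows what “driven by objectives (which include guardrails)” means: the designers’ hard constraints filter the action set before the objective is optimized, so the system cannot trade safety for task progress.

```python
from dataclasses import dataclass

# Hypothetical illustration of "objective-driven" action selection:
# optimize a designer-set objective, but only over actions that
# satisfy hard, designer-set guardrail constraints.

@dataclass
class Action:
    name: str
    power_draw_kw: float   # resource the guardrail constrains (invented example)
    task_progress: float   # how much the action advances the task objective

def passes_guardrails(action: Action) -> bool:
    # Hard constraint chosen by the designers, checked before optimization.
    return action.power_draw_kw <= 10.0

def choose(actions: list[Action]) -> Action:
    # Optimize the objective only over guardrail-admissible actions.
    admissible = [a for a in actions if passes_guardrails(a)]
    if not admissible:
        raise RuntimeError("no action satisfies the guardrails; do nothing")
    return max(admissible, key=lambda a: a.task_progress)

actions = [
    Action("fast-but-greedy", power_draw_kw=50.0, task_progress=0.9),
    Action("slow-and-safe", power_draw_kw=5.0, task_progress=0.6),
]
print(choose(actions).name)  # prints "slow-and-safe"
```

The safety claim in the quote lives entirely in `passes_guardrails` and the objective, both of which people set; an auto-regressive LLM has no such action-selection loop to constrain, which is why LeCun cautions against extrapolating from current LLM weaknesses.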


The workings of “Effective Altruists” who have a $100M annual influence budget and dominate the conversation: