On 2023-05-16 the U.S. Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing on artificial intelligence (AI) and its possible regulation. The witnesses were Sam Altman, chief executive officer of OpenAI, developer of ChatGPT, GPT-4, DALL-E, and other AI products; Gary Marcus, professor emeritus at New York University and co-founder of Robust.AI, a developer of machine learning software for autonomous robots; and Christina Montgomery, vice president and “Chief Privacy and Trust Officer” of IBM.
The almost three hours of testimony, questions, and answers from the solons in attendance might be summed up as “artificial intelligence meets natural stupidity”, as legislator after legislator recounted their own experiences “using ‘AI’ ”. When speaking of the prospect of international regulation of AI, several Senate-bots cited CERN as the international body that “regulates nuclear energy”, apparently confusing a particle physics laboratory with the International Atomic Energy Agency.
Most stunning, until you think a bit about the motivations, was the unanimity among the industry panelists that their work should be subject to regulation, licensing, and prohibition or surveillance of work deemed “hazardous”. The U.S. Food and Drug Administration, responsible for the deaths of millions of people through its heavy-handed regulation and tortuous approval process for new drugs and medical procedures, was cited as a model for an agency charged with AI regulation; Marcus testified that this should be a “cabinet-level organization”. The panelists spoke approvingly of the European Union’s proposed “AI Act”, discussed here on 2023-05-16 in “EU AI Act To Target US Open Source Software”.
As that post noted, heavy-handed regulation and licensing of artificial intelligence software would drive a stake through the burgeoning open source AI software development now underway, or else drive it underground and/or to jurisdictions more tolerant of potentially disruptive technologies. This, of course, would serve to entrench the present large players in the sector, such as OpenAI (a stalking horse for Microsoft), Google/DeepMind, and IBM, while giving coercive government a small number of high-profile targets to command. It would replace a broad-based international community of independent developers, contributing to open source in a fiercely competitive market and looking over one another’s shoulders for risks, vulnerabilities, and unforeseen consequences, with an oligopoly of regulated and licensed AI developers within whose walls such problems would remain hidden. This, as a moment’s reflexion will make obvious, dramatically increases the existential risk from AI, since it makes more likely the emergence of a “singleton” or oligarchy of AIs seizing control, an outcome which would be highly improbable in a world with thousands of independently developed AIs contending for customer satisfaction.
It is beyond irony and parody that Altman’s company, OpenAI, was founded in 2015 by a group including Elon Musk to reduce the risk from closed, proprietary development of AI systems, promising to “freely collaborate” with other institutions and researchers by making its patents and research open to the public, hence the name. This noble goal came to an end in 2019, when the company created a for-profit (nominally “capped-profit”) subsidiary and accepted a US$ 1 billion investment from Microsoft, known worldwide for the openness of its products, the care taken in testing them before broad-based deployment, and for protecting customers from risks due to vulnerabilities in its software. In January 2023, Microsoft kicked in another US$ 10 billion over ten years. It was later revealed that OpenAI had given Microsoft early access to GPT-4 for use in the Bing search engine before making it available to OpenAI’s own customers.
When OpenAI was founded in 2015, Elon Musk said,
“What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity.” Musk acknowledged that “there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about”; nonetheless, the best defense is “to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.”
So much for that.