What a difference a month makes! On 2023-04-03 a post here, “Geoffrey Hinton, Neural Network Pioneer, on Artificial Intelligence”, presented a CBS Saturday Morning interview with artificial intelligence pioneer Geoffrey Hinton, who has divided his time for the last ten years between the University of Toronto and Google’s “Google Brain” project. Hinton was one of the first artificial intelligence (AI) researchers to advocate artificial neural networks and was one of the developers and promoters of the backpropagation technique for training these networks, which has become central to building the large language models and generative AI systems announced in the last two years.
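Since backpropagation is the thread connecting Hinton's early work to today's large language models, here is a minimal sketch of the idea, assuming a toy two-layer network with made-up data and learning rate; it is only an illustration of the chain-rule gradient computation, not code from Hinton, Google, or any production system.

```python
# Minimal backpropagation sketch: forward pass, chain-rule gradients, gradient descent.
# Network size, data, and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))               # one input example with 4 features
y = np.array([[1.0]])                     # target output

W1 = rng.normal(scale=0.5, size=(3, 4))   # hidden-layer weights
W2 = rng.normal(scale=0.5, size=(1, 3))   # output-layer weights
lr = 0.1

for step in range(100):
    # Forward pass
    h = np.tanh(W1 @ x)                   # hidden activations
    y_hat = W2 @ h                        # network output
    loss = 0.5 * float((y_hat - y) ** 2)

    # Backward pass: propagate the error derivative through each layer
    d_yhat = y_hat - y                    # dL/dy_hat
    dW2 = d_yhat @ h.T                    # gradient for output weights
    d_h = W2.T @ d_yhat                   # error flowing back into the hidden layer
    dW1 = (d_h * (1 - h ** 2)) @ x.T      # chain rule through tanh

    # Gradient-descent update
    W2 -= lr * dW2
    W1 -= lr * dW1

print(f"final loss: {loss:.6f}")
```

Training a modern generative model is this same loop scaled up by many orders of magnitude in parameters and data, which is why the technique remains central to the systems discussed below.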
In the interview, he was largely dismissive of apocalyptic scenarios regarding the development and deployment of AI systems and cited Google as a company taking great care to guard against risks posed by this technology.
That was then. Now, on 2023-05-01, he has announced that he has quit his job at Google and, in an interview with The New York Times, describes why and how he has changed his mind about AI research and deployment.
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html
On Monday [2023-05-01], however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.
Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
After discussing the open letter advocating a pause in training large AI systems (see “Pause Giant AI Experiments”, posted here on 2023-03-29) and a second letter from the Association for the Advancement of Artificial Intelligence, the article notes:
Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
⋮
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
⋮
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
⋮
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.
Recall that when Ray Kurzweil, who has worked at Google since 2012, predicted in 2014 that human-level artificial general intelligence would be developed by 2029, few people took him, or his predictions, seriously:
It’s hard to know where to start with Ray Kurzweil. With the fact that he takes 150 pills a day and is intravenously injected on a weekly basis with a dizzying list of vitamins, dietary supplements, and substances that sound about as scientifically effective as face cream: coenzyme Q10, phosphatidylcholine, glutathione?
With the fact that he believes that he has a good chance of living for ever? He just has to stay alive “long enough” to be around for when the great life-extending technologies kick in (he’s 66 and he believes that “some of the baby-boomers will make it through”). Or with the fact that he’s predicted that in 15 years’ time, computers are going to trump people. That they will be smarter than we are. Not just better at doing sums than us and knowing what the best route is to Basildon. They already do that. But that they will be able to understand what we say, learn from experience, crack jokes, tell stories, flirt. Ray Kurzweil believes that, by 2029, computers will be able to do all the things that humans do. Only better.
Now they’re starting to pay attention.