Geoffrey Hinton, Neural Network Pioneer, on Artificial Intelligence

During the 1980s, when almost all research in artificial intelligence was concentrated on symbolic logic systems and expert systems, Geoffrey Hinton, along with David Rumelhart and Ronald J. Williams, was among the few who argued that the most promising path to machine intelligence was artificial neural networks that learn and retrieve information the way the brain does: by training and experience. They developed, demonstrated, and popularised [PDF] the backpropagation algorithm for training multi-layer neural networks, which remains central to training today’s large-scale machine learning systems. Since 2013, Hinton has divided his time between the University of Toronto and the Google Brain deep learning project.
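(For readers who have never seen it in the small: backpropagation is just the chain rule applied layer by layer, followed by gradient descent on the weights. Here is a minimal sketch in Python; the toy XOR task, architecture, and hyperparameters are my own illustration, not the setup from the original paper.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, the classic task a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, one sigmoid output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate error derivatives back through the layers
    # (chain rule; squared-error loss, sigmoid derivative s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]] for most seeds
```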

In this interview, he discusses how he came to advocate neural networks, why it took more than three decades to demonstrate their capability, the nature of the intelligence demonstrated by today’s large language models and generative image synthesis systems, how close we are or aren’t to artificial general intelligence, whether that development poses existential risks to life on Earth, the prospects for avoiding those risks, and the consequences for the economy and employment of the wide-scale deployment of these technologies.

5 Likes

Hinton’s take is quite sensible. Berkeley’s Stuart Russell, on the other hand, seems a bit unhinged to me, yet his authority is freaking out pretty solid researchers in AI.

There’s also quite a bit of buzz around David Chapman’s version of AI pessimism.

2 Likes

What a difference a month makes! On 2023-04-03 a post here, “Geoffrey Hinton, Neural Network Pioneer, on Artificial Intelligence”, presented a CBS Saturday Morning interview with artificial intelligence pioneer Geoffrey Hinton, who has divided his time for the last ten years between the University of Toronto and Google’s “Google Brain” project. Hinton was one of the first artificial intelligence (AI) researchers to advocate artificial neural networks and was one of the developers and promoters of the backpropagation technique for training these networks, which has become central to building the large language models and generative AI systems announced in the last two years.

In the interview, he was largely dismissive of apocalyptic scenarios regarding the development and deployment of AI systems and cited Google as a company taking great care to guard against risks posed by this technology.

That was then. Now, on 2023-05-01, he has announced that he has quit his job at Google and, in an interview with The New York Times, describes why and how he has changed his mind about AI research and deployment.

https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

On Monday [2023-05-01], however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

After discussing the open letter advocating a pause in training large AI systems (see “Pause Giant AI Experiments”, posted here on 2023-03-29) and a second letter from the Association for the Advancement of Artificial Intelligence, the article notes:

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

Recall that when Ray Kurzweil, who has worked at Google since 2012, predicted in 2014 that human-level artificial general intelligence would be developed by 2029, few people took him, or his predictions, seriously:

It’s hard to know where to start with Ray Kurzweil. With the fact that he takes 150 pills a day and is intravenously injected on a weekly basis with a dizzying list of vitamins, dietary supplements, and substances that sound about as scientifically effective as face cream: coenzyme Q10, phosphatidylcholine, glutathione?

With the fact that he believes that he has a good chance of living for ever? He just has to stay alive “long enough” to be around for when the great life-extending technologies kick in (he’s 66 and he believes that “some of the baby-boomers will make it through”). Or with the fact that he’s predicted that in 15 years’ time, computers are going to trump people. That they will be smarter than we are. Not just better at doing sums than us and knowing what the best route is to Basildon. They already do that. But that they will be able to understand what we say, learn from experience, crack jokes, tell stories, flirt. Ray Kurzweil believes that, by 2029, computers will be able to do all the things that humans do. Only better.

Now they’re starting to pay attention.

5 Likes

The merger of Google Brain with DeepMind probably contributed to this, and DeepMind has had some substantial wins recently, such as protein folding, not just cute tricks like beating world champions at Go and chess. Marcus Hutter is a senior scientist at DeepMind and is not nearly as likely to be swept away by less-than-rigorous thinking about AGI, since he actually has a formal theory of AGI, unlike Hinton. He cut his teeth on the “artificial scientist” problem under Schmidhuber and so is further immunized against the is/ought confusion that has sent the field into hysterics.
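For context, that formal theory is AIXI. Roughly (my gloss; see Hutter’s book Universal Artificial Intelligence for the precise formulation), the AIXI agent picks each action to maximize expected total reward under a Solomonoff-style mixture over all computable environments:

$$a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[ r_t + \cdots + r_m \bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here $U$ is a universal Turing machine, $q$ ranges over programs (candidate environment models), $\ell(q)$ is program length, and the $o_i$ and $r_i$ are observations and rewards, so shorter (simpler) environment models get exponentially more weight. It is uncomputable, but it gives “optimal general agent” a definition you can prove theorems about, which is exactly the rigor the current debate lacks.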

Moreover, he’s editing an upcoming issue on Algorithmic Information Theory (mathematical formalization of natural science) and Machine Learning (its implementation):

From: Marcus Hutter marcus.hutter@gmx.net
Date: Wed, Apr 19, 2023 at 1:43 AM
Subject: [AIT] Special issue on AIT and Machine Learning
To: ait0@googlegroups.com

Dear all,

Just a quick note that the special issue in Physica D on AIT & ML is still open, and that reviews/surveys are particularly welcome in addition to original research papers.

Manuscripts should be submitted using the online submission system of the journal at Editorial Manager®. Authors should select article type ‘VSI: ML & AIT’ during the submission. The author guidance can be found at https://www.elsevier.com/journals/physica-d-nonlinear-phenomena/0167-2789/guide-for-authors.

Guest editors:
Managing Guest Editor
Dr. Boumediene Hamzi
The Alan Turing Institute
bhamzi@turing.ac.uk
Co-Guest Editors
Dr. Marcus Hutter
DeepMind
mhutter@deepmind.com
Dr. Kamaludin Dingle
GUST
Dingle.K@gust.edu.kw
Dr. Ard Louis
Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Oxford, UK
ard.louis@physics.ox.ac.uk
Manuscript submission information:
Manuscript Submission Deadline: 30-Jun-2023

More details are at https://www.sciencedirect.com/journal/physica-d-nonlinear-phenomena/about/call-for-papers#call-for-paper-on-special-issue-machine-learning-and-algorithmic-information-theory

Also, please disseminate this call for papers among your colleagues.

Looking forward to your submissions!
best wishes,
Marcus
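For a concrete taste of the AIT-meets-ML theme, here is a minimal sketch (my illustration, not anything from the call for papers) of the normalized compression distance: it stands in for the uncomputable Kolmogorov complexity K(x) with the length of a real compressor’s output, giving a parameter-free similarity measure that has been used for clustering and classification.

```python
import zlib

def C(x: bytes) -> int:
    """Approximate the Kolmogorov complexity K(x) by compressed length."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    a computable stand-in for the normalized information distance."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

# Made-up examples: similar texts should score lower than dissimilar ones.
a = b"the quick brown fox jumps over the lazy dog " * 4
b = b"the quick brown fox leaps over the sleepy dog " * 4
c = bytes(range(256)) * 2  # hard-to-compress byte ramp
print(f"NCD(a, b) = {ncd(a, b):.3f}")  # relatively low
print(f"NCD(a, c) = {ncd(a, c):.3f}")  # relatively high
```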

3 Likes

“What does the winning world even look like here? … How do we get to the point that the United States and China signed a treaty whereby they would both use nuclear weapons against Russia if Russia builds a GPU cluster that was too large?”

This is what comes of people fearing the truth coming out:

Nuke the truth.

1 Like

Dr. Hinton should have no regrets. He and his fellow researchers were trying to do good for the world and their fellow beings. But in the wrong hands, the same AI could have serious repercussions, perhaps even death. Bad people will always take something good and do bad things with it.

3 Likes

Also, I’d take anything published by the NYT with a grain of salt.


4 Likes