Godfather of AI Speaks Out Against His Own Technology

Geoffrey Hinton, the “godfather” of artificial intelligence (AI), has stepped down from Google to warn the public about the dangers of AI and what further advancements may mean for the future of technology.

Despite what his nickname suggests, Dr. Hinton didn’t create AI itself; rather, he pioneered the neural network research that underpins modern AI systems. In 2012, Hinton and two of his graduate students built a system that could identify objects in a photo with astonishing accuracy, something AI had never been able to do before. Hinton then joined Google to continue working on his wildly successful project, which initially seemed harmless.

“I used to think it would be 30 to 50 years from now [that AI would overtake humans],” Hinton said to Wired. “Now I think it’s more likely to be five to 20.”

While Hinton had a key role at Google, his decision to walk away wasn’t due to conflicts with the company, but rather to avoid drawing criticism to it when he spoke out against AI.

“In the NYT today, Cade Metz implies that I left Google so that I could criticize Google,” Hinton tweeted. “Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”

Throughout 2023 in particular, a range of AI systems have been introduced to the public, allowing users to create almost anything they want with a single click. In March, for example, OpenAI released GPT-4, a new version of the model behind ChatGPT that is considered far more capable than the original. It can instantly calculate someone’s tax liability and even ace the Uniform Bar Exam. Shortly after its release, more than 1,000 technology researchers signed an open letter calling for a pause on such systems, citing the risks posed by how readily they can accomplish almost any given task.

“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI,” Jeff Dean, lead scientist for Google AI, said in a statement to U.S. media. “We’re continually learning to understand emerging risks while also innovating boldly.”

Despite the harms that can come from new AI developments, many people are reluctant to halt the progress researchers have worked hard for. That includes Hinton, who believes we shouldn’t completely turn our backs on these systems.

I used to think it would be 30 to 50 years from now [that AI would overtake humans]. Now I think it’s more likely to be five to 20.

— Geoffrey Hinton

“A lot of the headlines have been saying that I think it should be stopped now, and I’ve never said that,” Hinton said to Wired. “First of all, I don’t think that’s possible, and I think we should continue to develop it because it could do wonderful things. But we should put equal effort into mitigating or preventing the possible bad consequences.”

As AI continues to grow and more experts speak out against it, a middle ground must be found to limit the damage AI could cause if it keeps advancing at this rate.

“There’s an enormous upside from this technology, but it’s essential that the world invests heavily and urgently in AI safety and control,” Matt Clifford, the chairman of the UK’s Advanced Research and Invention Agency, said to the BBC.