‘Godfather of AI’ Resigns from Google, Warns of the Danger of Artificial Intelligence

Geoffrey Hinton, chief scientific adviser at the Vector Institute (Cole Burston/Bloomberg)

Renowned AI pioneer Geoffrey Hinton has resigned from his position at Google in order to speak freely about the risks of generative AI, the technology behind popular chatbots like ChatGPT and Google Bard, the New York Times reports.


Google CEO Sundar Pichai (Photo by LLUIS GENE/AFP via Getty Images)

In a recent in-depth interview, Dr. Hinton expressed regret over his life’s work, which formed the basis for the AI systems now deployed by major tech companies. He stated, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.” Industry leaders believe that generative AI could drive important advances in a variety of fields, including drug research and education, but there is growing concern about the risks the technology might present.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said. He emphasized the potential for generative AI to contribute to the spread of misinformation, displace jobs, and even threaten humanity in the long term.

Hinton, who is frequently referred to as “the Godfather of AI,” has had a distinguished career in the field spanning decades. His early research on neural networks, dating back to the 1970s, culminated in a groundbreaking development in 2012, when he and his graduate students at the University of Toronto built systems capable of identifying objects in tens of thousands of photos. This accomplishment paved the way for cutting-edge AI innovations including chatbots like ChatGPT and Google Bard.

However, Hinton thinks that as AI systems have grown more powerful, so have the risks associated with them. “Look at how it was five years ago and how it is now,” he said. “Take the difference and propagate it forwards. That’s scary.” He points to the intensifying rivalry between Google and Microsoft as a reason it might be impossible to stop the creation and application of potentially harmful AI technology.

Dr. Hinton’s top concern is the possibility that the internet will be overrun with false information, making it challenging for the average person to distinguish between fact and fiction. “My fear is that the internet will be inundated with false photos, videos, and text, and the average person will not be able to know what is true anymore,” he explained.

Hinton is also concerned about how AI will affect the job market, especially for positions requiring repetitive work like paralegals, personal assistants, and translators. “It takes away the drudge work,” he said. “It might take away more than that.”

Hinton worries about the long-term development of autonomous weapons and the potential for AI to surpass human intelligence. “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Read more at the New York Times here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
