Geoffrey Hinton, a pivotal figure in the development of artificial intelligence known as the “godfather of AI,” recently discussed the technology’s double-edged nature, highlighting its remarkable capabilities alongside the looming uncertainties and ethical challenges that humanity must navigate.
CBS News reports that Geoffrey Hinton, a British computer scientist and cognitive psychologist renowned for his groundbreaking work on artificial neural networks, recently shared his insights into the future of artificial intelligence, an industry that has seen exponential growth and integration across many sectors of society. Hinton, who has earned the moniker “the Godfather of AI,” delves into the web of possibilities, benefits, and potential pitfalls that AI presents to humanity.
Hinton’s career in AI has been foundational. His work, particularly on the learning algorithms of artificial neural networks, paved the way for AI systems that can comprehend, learn, and make decisions based on their experiences. “No, it wasn’t [designed by people]. What we did was, we designed the learning algorithm. That’s a bit like designing the principle of evolution,” Hinton explained, emphasizing that while the learning algorithm is crafted by humans, the networks that emerge from its interaction with data operate in complex ways that are not fully understood even by their creators.
Hinton does not shy away from shedding light on the darker aspects and uncertainties surrounding AI. He candidly expresses, “We’re entering a period of great uncertainty where we’re dealing with things we’ve never done before. And normally the first time you deal with something totally novel, you get it wrong. And we can’t afford to get it wrong with these things.”
One of the most pressing concerns Hinton raises is the autonomy of AI systems, particularly their potential ability to write and modify their own computer code. This, he suggests, is an area where control may slip from human hands, with consequences that are not fully predictable. Furthermore, as AI systems continue to absorb information from various sources, they become increasingly adept at manipulating human behaviors and decisions. Hinton warns, “I think in five years time it may well be able to reason better than us.”
Earlier this year, these fears prompted Hinton to resign from Google. As Breitbart News previously reported:
In a recent in-depth interview, Dr. Hinton expressed regret over his life’s work, which formed the basis for the AI systems used by major tech companies. He stated, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.” Industry leaders believe that generative AI could yield important advances in a variety of fields, including drug research and education, but there is growing concern about the risks the technology might present.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said. He emphasized the potential for generative AI to contribute to the spread of misinformation, displace jobs, and even threaten humanity in the long term.
Read more at CBS News here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.