Former OpenAI Researcher: AI at Human Level Means a ‘50/50 Chance of Doom’

Terminator 2 hand in Cyberdyne Systems
CBS Photo Archive/Getty

A former OpenAI researcher, Paul Christiano, has expressed serious concerns about the potential risk that AI poses to humanity, estimating a 10-20 percent chance of an AI takeover resulting in a high number of human fatalities. According to Christiano, if AI reaches “human level” thinking, the human race approaches “a 50/50 chance of doom.”

Business Insider reports that Paul Christiano, a former researcher for OpenAI, has expressed his concerns about the threat that artificial intelligence could pose to humanity, estimating that there is a 10-20 percent chance that an AI takeover will result in a significant number of fatalities.

Christiano, who currently heads the nonprofit Alignment Research Center and formerly oversaw the language model alignment team at OpenAI, said during an appearance on the Bankless podcast, “I think maybe there’s something like a 10-20 percent chance of AI takeover, [with] many [or] most humans dead. I take it quite seriously.” The hosts, David Hoffman and Ryan Sean Adams, quizzed Christiano on the likelihood of a “full-out Eliezer Yudkowsky doom scenario,” referring to Yudkowsky, an AI researcher who has been warning about the dangers of the technology for more than two decades.

The researcher made it clear that he disagreed with Yudkowsky about how quickly AI technology is likely to advance. “Eliezer is into this extremely fast transformation once you develop AI,” he said. “I have a little bit less of an extreme view on that.”

Christiano instead envisions a more gradual, years-long transition from AI systems having a significant impact to a period of accelerating change, followed by still faster acceleration. “I think once you have that view then, sort of, a lot of things may feel like AI problems because they happen very shortly after you build AI,” he explained.

He predicts that as soon as AI systems with human-level capabilities are developed, the probability of doom could increase to 50/50. “Overall, maybe you’re getting more up to a 50/50 chance of doom shortly after you have AI systems that are human level,” he warned.

Christiano’s worries are consistent with those of other AI professionals who recently signed an open letter calling for a six-month pause on the development of advanced AI systems in order to address potential risks.

Read more at Business Insider here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
