‘Probability of Doom:’ Silicon Valley Invents Metric to Measure Chance AI Kills Us All


In Silicon Valley these days, it’s not uncommon to be asked, even by a stranger, “What’s your probability of doom?” when discussing artificial intelligence. The area’s progressive brainiacs have created a metric called “p(doom)” to quantify the chance that AI destroys humanity.

According to a recent report from the New York Times, p(doom), short for “probability of doom,” refers to the percent chance someone believes artificial intelligence will bring about human extinction or some other existential catastrophe. The term has gone mainstream amid growing fears over rapidly advancing AI capabilities.

Sam Altman, chief executive officer of OpenAI Inc. Photographer: David Paul Morris/Bloomberg via Getty Images

While sci-fi fans have long theorized about robot takeovers, recent AI achievements like ChatGPT passing the bar exam have made the threat seem more imminent. AI luminaries are also sounding alarms, with Yoshua Bengio estimating a 20 percent p(doom) and “Godfather of AI” Geoffrey Hinton putting it at 10 percent in the next 30 years if AI remains unregulated.

The p(doom) statistic reveals how tech insiders view AI’s potential risks and weigh utopian possibilities against dystopian outcomes. Optimists like Aaron Levie of Box peg it at nearly zero, while pessimists put it at over 90 percent. Even a figure like 15 percent, the estimate of FTC chair Lina Khan, signals deep concern.

Some see discussing p(doom) as largely theoretical. “It comes up in almost every dinner conversation,” Levie says. Others consider it vital for guiding research and policy. OpenAI’s former interim CEO Emmett Shear’s estimate of 50 percent worried some employees, who feared he would slow the company’s progress.

But critics of p(doom) note that AI’s risks depend partly on how the technology is governed. It is also unclear what constitutes acceptable odds when the stakes are existential; a 15 percent chance of a civilization-ending scenario is far from reassuring.

Researcher Ajeya Cotra commented: “I know some people who have a p(doom) of more than 90 percent, and it’s so high partly because they think companies and governments won’t bother with good safety practices and policy measures. I know others who have a p(doom) of less than 5 percent, and it’s so low partly because they expect that scientists and policymakers will work hard to prevent catastrophic harm before it occurs.”
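
Cotra’s point can be read as a simple application of the law of total probability: a headline p(doom) figure blends conditional estimates under different assumptions about how well AI ends up being governed. Below is a minimal, purely illustrative Python sketch; the scenario weights and conditional probabilities are hypothetical and do not come from the report.

# Illustrative only: the numbers below are hypothetical, not estimates from the report.
# Computes an overall p(doom) as a weighted average of conditional estimates
# (law of total probability over governance scenarios).
scenarios = {
    # scenario: (probability the scenario occurs, p(doom) given that scenario)
    "strong safety practices and policy": (0.6, 0.03),
    "weak safety practices and policy": (0.4, 0.40),
}
p_doom = sum(weight * conditional for weight, conditional in scenarios.values())
print(f"Overall p(doom): {p_doom:.0%}")  # 0.6*0.03 + 0.4*0.40 = 0.178, prints "18%"

The same arithmetic shows why estimates diverge so widely: shifting the assumed odds of good governance, or the assumed risk under bad governance, swings the headline number from low single digits to well past 50 percent.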

Read more at the New York Times here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
