1,000 AI Experts and Tech Leaders Call for Temporary Halt in Advanced AI Development

Apple co-founder Steve Wozniak (Alberto E. Rodriguez/Getty)

More than 1,000 AI experts and tech executives, including Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, have called for a temporary halt to the advancement of AI technology until safeguards can be put in place.

Forbes reports that an open letter calling for a temporary halt in the development of advanced AI systems has been signed by more than 1,000 AI experts and tech executives, including titans of the sector such as Elon Musk and Steve Wozniak. The letter, written by the Future of Life Institute, calls for a pause of at least six months on AI systems more powerful than OpenAI's GPT-4 in order to develop stronger governance and shared safety protocols.

Elon Musk, chief executive officer of Tesla Inc., speaks via video link during the Qatar Economic Forum in Doha, Qatar, on Tuesday, June 21, 2022. Photographer: Christopher Pike/Bloomberg

OpenAI co-founder Sam Altman, whose company created ChatGPT (TechCrunch/Flickr)

The letter was published by the Future of Life Institute, a nonprofit organization dedicated to steering transformative technology toward benefiting life and away from large-scale risks. Signatories include prominent researchers from Google-owned DeepMind and well-known machine learning authorities such as Yoshua Bengio and Stuart Russell.

The open letter urges AI labs and independent experts to work together to develop and implement a set of shared safety protocols for the design and development of advanced AI. These protocols would be rigorously audited and overseen by independent outside experts, with the aim of ensuring that systems adhering to them are safe beyond a reasonable doubt. The signatories stress that the proposed pause is not a general halt to AI development, but a temporary step back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities.

Alongside safety protocols, the letter urges the creation of stronger governance systems. These should include new regulatory authorities dedicated to overseeing and tracking AI, as well as provenance and watermarking systems to help distinguish authentic content from synthetic content and to track model leaks. The experts also recommend increased public funding for technical AI safety research and liability for providers when AI causes harm.

Although the open letter is unlikely to achieve all of its goals, it reflects a broader unease about AI technologies and adds to growing calls for more stringent regulation. The letter’s authors contend that society has previously paused the development of other technologies with potentially disastrous consequences, and that AI should be no different.
