Elon Musk, tech leaders call for pause in AI race to prevent risk to ‘humanity’

March 29 (UPI) — Hundreds of tech leaders and researchers are urging artificial intelligence labs to immediately pause the training of AI systems with human-competitive intelligence, which they say "can pose profound risks to society and humanity."

The open letter to AI labs was signed Wednesday by Elon Musk, Apple co-founder Steve Wozniak and politician Andrew Yang, along with more than 1,300 other big-name tech experts.

The letter blasts AI labs for failing to plan for and manage these systems with adequate care, and calls for a pause of "at least 6 months" on the training of "AI systems more powerful than GPT-4."

"Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control," the letter, published by the nonprofit Future of Life Institute, warned.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter said.

The letter also calls on governments to step in and issue a moratorium if AI labs do not pause immediately, and to create independent regulators to make sure all future systems are safe.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

The letter from tech experts comes two weeks after OpenAI announced GPT-4, the next generation of the AI technology behind its chatbot tool ChatGPT, which Microsoft has integrated into its products. OpenAI claims GPT-4 can pass a simulated bar exam with a score in the top 10% of test takers.

“Contemporary AI systems are now becoming human-competitive at general tasks,” tech leaders warned.

"We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" the letter asks.

OpenAI has posed similar questions about regulating AI systems.

“At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models,” OpenAI said in a recent statement, to which Wednesday’s letter responded:

“We agree. That point is now.”
