OpenAI, Google, Microsoft Join Biden Administration ‘AI Safety’ Consortium

Drew Angerer/Getty Images

The Biden administration has launched a consortium of over 200 major tech companies, academic institutions, and research groups to collaborate on developing ethical guidelines and safety standards for AI technology. Biden’s “AI Safety Institute Consortium” states that its goal is to “mitigate AI risks and harness its potential.”

Mashable reports that the Biden administration has officially tapped major tech players and other AI stakeholders to address safety and trust in AI development. On Thursday, the U.S. Department of Commerce announced the creation of the AI Safety Institute Consortium (AISIC).

Sam Altman, chief executive officer of OpenAI Inc. Photographer: David Paul Morris/Bloomberg via Getty Images

Sundar Pichai, CEO of Google and Alphabet, attends a press event to announce Google as the new official partner of the Women’s National Team at Google Berlin. Photo: Christoph Soeder/dpa (Photo by Christoph Soeder/picture alliance via Getty Images)

The consortium, housed under the Commerce Department’s National Institute of Standards and Technology (NIST), will follow President Biden’s AI executive order mandates. This includes “developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content,” said Secretary of Commerce Gina Raimondo.

Over 200 participants have joined AISIC so far. This includes AI leaders like ChatGPT developer OpenAI, Google, Microsoft, Apple, Amazon, Mark Zuckerberg’s Meta, and NVIDIA. Academics from MIT, Stanford, Cornell, and others are also participating. Additionally, industry researchers and think tanks like the Center for AI Safety, IEEE, and the Responsible AI Institute are involved.

The consortium stems from Biden’s sweeping executive order seeking to regulate AI development. “The government has a role in setting standards and tools to mitigate AI risks and harness its potential,” said Raimondo.

While the EU has worked on AI regulations, this marks a major U.S. government effort to formally oversee AI. “I understand this is a first step by the administration to work with industry on some of the hard problems. But it is an important first step and I think this is an area where industry and government collaboration is critical,” said Microsoft President Brad Smith.

Overall, AISIC represents unprecedented cooperation between the U.S. government and private sector to ensure AI safety. “Only through such open and honest collaboration can we manage AI in a way that builds trust and protects U.S. values,” said Secretary Raimondo. The initiative provides hope for developing ethical AI that protects privacy and national security.

Read more at Mashable here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
