Mark Zuckerberg’s Facebook, Instagram to Start Flagging AI-Generated Images as Elections Ramp Up

Mark Zuckerberg’s Meta will soon begin detecting and labeling AI-generated images on Facebook, Instagram, and Threads, and will require users to disclose when they post realistic AI-generated video or audio, part of the company’s effort to curb potential misinformation ahead of upcoming elections.

The Verge reports that the rise of AI-generated media has prompted Zuckerberg to roll out new labeling for AI content across his apps. In a blog post, Meta announced that, in an effort to get ahead of potential misuse, the company will apply labels and watermarks to AI imagery on Facebook, Instagram, and Threads. For more deceptive uses, Meta plans to penalize users who fail to disclose realistic AI video or audio.

Meta President Nick Clegg announced the moves Tuesday as election seasons ramp up globally. “For those who are worried about video, audio content being designed to materially deceive the public on a matter of political importance in the run-up to the election, we’re going to be pretty vigilant,” Clegg said. “Do I think that there is a possibility that something may happen where, however quickly it’s detected or quickly labeled, nonetheless we’re somehow accused of having dropped the ball? Yeah, I think that is possible, if not likely.”

Meta joins other companies implementing content provenance measures similar to Adobe’s Content Credentials metadata system; Google also recently expanded its SynthID watermarking feature to cover audio. Meta will soon require users to clearly disclose realistic AI video or audio, and violators face penalties ranging from “warnings through to removal,” per Clegg. While highlighting examples of viral AI-generated political posts, Clegg said he believes widespread deception is unlikely. “I just don’t think that’s the way that it’s going to play out,” he said.
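Though Meta has not published its detection code, provenance schemes like Content Credentials and the IPTC standard work by embedding machine-readable markers in a file’s metadata. The Python sketch below illustrates the general idea under stated assumptions: it scans an image file’s raw bytes for the IPTC “trainedAlgorithmicMedia” digital-source-type value and the “c2pa” label of a Content Credentials manifest. The file names are hypothetical, and this is a naive illustration rather than Meta’s pipeline; metadata can be stripped by re-encoding, which is why real systems pair such checks with invisible watermarks.

```python
# Naive sketch (not Meta's pipeline): scan an uploaded image's raw bytes
# for two public provenance markers that cooperating AI generators embed.
from pathlib import Path

AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC digital-source-type for AI media
    b"c2pa",                     # C2PA / Content Credentials manifest label
]

def has_ai_provenance_marker(image_path: str) -> bool:
    """Return True if any known provenance marker appears in the file."""
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

if __name__ == "__main__":
    for path in ["upload1.jpg", "upload2.png"]:  # hypothetical uploads
        try:
            verdict = ("label as AI-generated"
                       if has_ai_provenance_marker(path)
                       else "no marker found")
        except FileNotFoundError:
            verdict = "file not found"
        print(f"{path}: {verdict}")
```

The limitation is plain from the code: metadata-based detection only catches media whose generator cooperated by embedding a marker, which is why disclosure requirements and invisible watermarking matter alongside it.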

Internally, Meta has started testing large language models trained on its content policies to help identify violations. Clegg called the approach an efficient triage mechanism for human moderators. “It appears to be a highly effective and rather precise way of ensuring that what is escalated to our human reviewers really is the kind of edge cases for which you want human judgment,” Clegg said.
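Meta has not described these models in technical detail, but the triage pattern Clegg outlines is straightforward: score each post against the policies, resolve clear-cut cases automatically, and escalate only ambiguous ones to humans. Below is a hedged, self-contained Python sketch of that routing logic; the policy names, keywords, and thresholds are invented for illustration, and the keyword scorer stands in for a real call to a policy-tuned language model.

```python
# Hedged sketch of LLM-assisted moderation triage: clear cases are
# auto-resolved, ambiguous "edge cases" go to human reviewers.
from dataclasses import dataclass

POLICIES = {
    # Hypothetical policy names and trigger keywords, illustration only.
    "voter_interference": ["fake ballot", "polls are closed"],
    "manipulated_media": ["deepfake", "ai-generated audio"],
}

@dataclass
class Verdict:
    policy: str
    score: float  # stand-in for the model's violation probability

def policy_score(post_text: str, policy: str) -> float:
    """Toy keyword scorer standing in for a policy-tuned model call."""
    text = post_text.lower()
    hits = sum(kw in text for kw in POLICIES[policy])
    return min(1.0, 0.5 * hits)

def triage(post_text: str,
           auto_remove: float = 0.9, auto_allow: float = 0.1) -> str:
    """Route a post: remove, allow, or escalate to a human reviewer."""
    verdicts = [Verdict(p, policy_score(post_text, p)) for p in POLICIES]
    worst = max(verdicts, key=lambda v: v.score)
    if worst.score >= auto_remove:
        return f"remove: {worst.policy}"
    if worst.score <= auto_allow:
        return "allow"
    return f"escalate to human review: {worst.policy} ({worst.score:.2f})"

if __name__ == "__main__":
    print(triage("Reminder that the polls are closed a day early"))
    print(triage("Here is a photo of my cat"))
```

Tightening or loosening the two thresholds trades automation against the volume of edge cases sent to human reviewers, which is the efficiency Clegg is pointing to.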

Alongside labeling, Meta continues to grapple with its role in governing new generative technologies. Though wary of potential harms, Clegg expressed some optimism about navigating this latest AI frontier. Read the full blog post from Meta here.

Read more at the Verge here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
