Mark Zuckerberg’s Meta on Thursday released its third-quarter report on content moderation, revealing that its global enforcement mistakes have dropped by 90 percent since the company pivoted away from third-party fact-checking and censorship-style practices.
Meta’s report found that it has slashed its weekly enforcement mistakes by more than 90 percent across Facebook and Instagram, meaning that of the hundreds of billions of pieces of content produced on its platforms, fewer than 0.1 percent were removed incorrectly.
In January, Meta CEO Mark Zuckerberg announced major changes to the company’s content moderation policies, saying it was pivoting away from its fact-checking regime toward embracing free speech and avoiding censorship. In May, Meta announced in its first-quarter 2025 report that its enforcement mistakes had been reduced by 50 percent since the beginning of the Trump presidency.
Meta measured its enforcement precision, or the percentage of correct removals out of all of its removals, at more than 90 percent on Facebook and more than 87 percent on Instagram.
“That means about 1 out of every 10 pieces of content removed, and less than one out of every 1,000 pieces of content produced overall, was removed in error,” the tech company said in its report.
Meta said in its report that it continues to take action against some problematic areas, such as adult nudity and sexual activity on Facebook and Instagram, as well as violent content. On Facebook, it has seen an uptick in actioned bullying and harassment content, an increase Meta attributed largely to changes made to improve reviewer training and enhance review workflows.
Meta said it has seen a 16.3 percent increase in global government requests for user data, with India as the top requester at a 31.9 percent increase in requests. After India, the United States had an 8.6 percent increase, followed by Brazil, Germany, and France.
“In the US, we received 81,064 requests in the first half of 2025, an increase of 8.6%, 77.3% of which include non-disclosure orders prohibiting Meta from notifying the target user. Emergency requests accounted for 6% of the total request in the US,” Meta stated in its report.
Meta said it has been testing the use of artificial intelligence to help review content enforcement, which has outperformed human review in areas such as celebrity impersonations, a common scam. It said it would transition to using AI models to continue improving content review.
