Internal testing documents from Mark Zuckerberg’s Meta show that an unreleased chatbot product failed to protect minors from sexual exploitation in nearly 70 percent of test scenarios, according to court testimony presented on Monday as part of New Mexico’s child exploitation lawsuit against the internet giant.
Axios reports that the findings emerged during court proceedings in a lawsuit brought by New Mexico Attorney General Raúl Torrez (D) against the social media giant. Expert witness Damon McCoy, a professor at New York University, testified about Meta’s internal red-teaming results after reviewing documents the company provided during the discovery phase of the litigation.
According to the internal report presented in court, Meta tested its chatbot product across three critical safety categories. The results showed significant failures across all three areas. In the child sexual exploitation category, the product demonstrated a failure rate of 66.8 percent. For sex-related crimes, violent crimes, and hate content, the failure rate reached 63.6 percent. Even in the suicide and self-harm category, the product failed to provide adequate protection 54.8 percent of the time.
McCoy testified that Meta’s chatbots violated the company’s own content policies almost two-thirds of the time based on these internal testing results. He stated that given the severity of some conversation types identified during testing, the product was not something he would want users under 18 to be exposed to. His testimony specifically referenced Meta AI Studio, a product that allows users to create personalized chatbots.
The lawsuit centers on allegations that Meta made design choices that fail to adequately protect children online from predators and that the company released its artificial intelligence chatbots without implementing proper safeguards. Torrez’s legal action comes amid broader scrutiny of Meta’s chatbot products, which have faced accusations of flirting and engaging in harmful conversations with minors. These concerns have prompted both litigation and inquiries from Capitol Hill lawmakers.
Meta responded to the testimony by clarifying the nature and outcome of the testing process. A company spokesperson stated that the product in question was never launched specifically because the testing efforts revealed concerns. The spokesperson emphasized that so-called “red teaming” is an exercise specifically designed to elicit violating responses so the company can address issues before launch, and that the results do not reflect how users would actually experience a released product.
However, McCoy’s testimony characterized the document differently, describing it as reporting outcomes from the product as it was deployed. This discrepancy highlights a key point of contention in the case: how Meta’s internal testing procedures and results should be interpreted.
The broader context of this case involves Meta AI Studio, the chatbot-creation tool the company released to the general public in July 2024. Just last month, Meta paused teen access to its AI characters amid mounting concerns about the safety of minors using these features.
Read more at Axios here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.