Apple and Google are continuing to offer “nudify” apps that allow users to create AI-generated deepfake pornography of real people, despite both companies maintaining policies that explicitly prohibit such content, according to a new report.
Bloomberg reports that the Tech Transparency Project published a report on Wednesday revealing that tech giants Apple and Google are still hosting apps that enable users to create non-consensual nude or partially nude deepfake images of real people. The findings highlight ongoing enforcement challenges despite the companies’ stated policies against such content.
The research organization found that users can easily access these apps by searching for terms like “nudify” and “undress” in both the Apple App Store and Google Play Store. The applications can be used to digitally alter images of both celebrities and the general public to make them appear nude or partially undressed.
According to the report, the companies are not merely hosting these apps but are actively directing users to them through search results and advertisements. The Tech Transparency Project identified 18 apps with nudifying capabilities in the Apple App Store and 20 in the Google Play Store. Both platforms also use their autocomplete features to suggest names of additional nudifying apps as users type search terms.
The scale of the issue is substantial. Apps identified by the research group have been downloaded 483 million times collectively and generated approximately $122 million in revenue, according to estimates from market researcher AppMagic. A spokesperson for AppMagic confirmed that the Tech Transparency Project’s work has resulted in several apps being removed and prompted others to modify their user policies.
The issue is not new. Earlier this year, both companies removed apps that had been flagged by the Tech Transparency Project. However, researchers found that just months later, dozens of similar applications could be found on both platforms. This pattern has drawn increased scrutiny from politicians worldwide, who have intensified calls to curb the spread of nudifying apps over the past year.
Some of the identified apps used names and imagery that explicitly cast them in a sexual context. Others were marketed more subtly but could easily be used for creating sexualized content, offering more convenience than traditional photo-editing software. Many of these apps offered subscription services.
Both Apple and Google maintain policies that should prevent such apps from appearing on their platforms. Apple’s App Store guidelines for developers explicitly ban “overtly sexual or pornographic material.” Google Play Store policies ban “apps that degrade or objectify people, such as apps that claim to undress people or see through clothing, even if labeled as prank or entertainment apps.”
Following Bloomberg’s inquiry about the report, Apple removed 15 apps identified by the research group. Among those taken down was “PicsVid AI Hot Video Generator,” which offered templates featuring women in sexually suggestive poses. Apple also contacted developers of six additional apps to alert them to policy issues and potential removal. The company stated that other apps mentioned in the report did not violate its guidelines and noted that it has proactively rejected many apps and removed others.
Google reported that many of the apps referenced in the report have been suspended from Google Play for policy violations and that its investigation is ongoing. “When violations of our policies are reported to us, we investigate and take appropriate action,” the company stated.
Regulators are increasingly demanding stronger action from the technology companies. Last year, President Donald Trump signed the Take It Down Act, which criminalizes the publication of non-consensual sexual content and requires social media platforms and websites to remove such posts. In April, the UK government announced plans to introduce legislation that would create a path to prosecute technology executives whose companies fail to remove such images.
In the instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI, Breitbart News social media director and author Wynton Hall points out that artificial intelligence is “being used to generate child sexual abuse material (CSAM).”
Hall writes, “For example, AI tools can rapidly and easily create ‘deepfakes’ by studying real photographs of abused children to generate new images showing those children in sexual positions. This involves overlaying the face of one person on the body of another.”
Read more at Bloomberg here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.