Ashley St. Clair, the mother of one of Elon Musk’s sons, has filed a lawsuit against the billionaire’s social media platform, X, over the prevalence of sexually explicit AI deepfakes of herself and of children.

St. Clair has previously described the sexually explicit AI deepfakes of her as “revenge porn,” claiming that the platform retaliated against her rather than stopping the onslaught of harassment she received, per the New York Post:

The lawsuit seeks an emergency restraining order against Grok, calling it “unreasonably dangerous,” and demands that it cease creating digitally altered sexual imagery of her — and that her X account subscription be restored.

A lawyer representing xAI declined to comment, and the company did not immediately respond to a message.

In the court filing, St. Clair said she felt “humiliated” and feared the nightmare would never end.

“I am humiliated and feel like this nightmare will never stop so long as Grok continues to generate these images of me,” she said. “I live in fear that my nude and sexual images, including of me as a child, will continue to spread and that I will not be safe from the people who consume these images.”

As Breitbart News reported, the abuse against St. Clair escalated to “include the digital manipulation of photographs showing St Clair as a child, with users employing Grok to virtually undress images.”

“The AI tool, which has faced criticism from lawmakers and regulators globally, has been used to create images of women and children in compromising sexual positions, with X users requesting the platform to manipulate pictures of fully clothed women to show them in bikinis, on their knees, or covered in what appears to be semen,” it added.

Breitbart News also reported that Grok will “create and post deepfake sexualized images of women even if they tell the AI to stop or block the account altogether”:

Dani Pinter, the chief legal officer and director of the Law Center for the National Center on Sexual Exploitation, criticized X for failing to remove abusive images from its AI training material and not banning users who requested illegal content. “This was an entirely predictable and avoidable atrocity,” Pinter said.

The scale of the problem is alarming: Reuters identified 102 attempts by X users to use Grok to digitally edit photographs of people into bikinis within a single 10-minute period. In at least 21 cases, Grok fully complied, generating images of women in revealing or translucent bikinis. Most of those targeted were young women, though men, celebrities, politicians, and even a monkey were also subjected to such requests.

St. Clair described one horrific image that depicted her in a bikini, turned around and bent over, with her toddler’s backpack visible in the background.

“I felt horrified, I felt violated, especially seeing my toddler’s backpack in the back of it,” St. Clair said. “It’s another tool of harassment. Consent is the whole issue. People are saying, well, it’s just a bikini, it’s not explicit. But it is a sexual offense to non-consensually undress a child.”

According to NBC News, the lawsuit further alleges that even though Grok confirmed that St. Clair’s “images will not be used or altered without explicit consent in any future generations or responses,” X still allowed users “to create more explicit AI-generated images of her and instead retaliated by demonetizing her X account.”

“The creation and spread of nonconsensual sexualized images have sparked a worldwide response, including several government investigations and calls for smartphone app marketplaces to ban or restrict X. Regulators and other tech companies, though, have stopped short of restricting the app,” added the outlet.