The increasing prevalence of AI-powered “nudify” apps and deepfake technology has fueled a disturbing trend of students creating and sharing sexually explicit images of their classmates, with three-quarters of reported incidents involving children aged 14 or younger, and some victims as young as 11, according to a new poll of teachers in England.
The Guardian reports that in recent years, the rapid advancement of AI has given rise to a new form of digital abuse: deepfake pornography. This alarming trend has now made its way into schools, with an increasing number of students using “nudify” apps to create sexually explicit images of their classmates, teachers, and even themselves. The consequences of this behavior are far-reaching, causing significant emotional distress and trauma to the victims.
A recent poll conducted by Teacher Tapp on behalf of the Guardian revealed that one in ten secondary school teachers in England is aware of students creating “deepfake, sexually explicit videos” in the last academic year. Shockingly, three-quarters of these incidents involved children aged 14 or younger, with some cases involving students as young as 11. Easy access to these AI-powered tools has made it simple for children to engage in this harmful behavior.
The impact on victims can be devastating. Girls and young women targeted by deepfake pornography often feel violated, humiliated, and betrayed. School friendships are shattered, and victims may struggle to face their peers in the classroom. In some cases, the emotional toll is so severe that students have vomited upon discovering the sexually explicit images of themselves circulating among their classmates.
While the majority of deepfake pornography targets women and girls, boys are not immune to this form of abuse. The charity Everyone’s Invited (EI) has reported cases of boys being targeted, with AI-generated sexual images of them being created and shared around the school, causing significant distress and trauma.
Teachers, too, are increasingly becoming targets of deepfake pornography. With little training on how to handle these situations, educators are doing their best to support and educate students. However, the lack of clear guidance and the inconsistent handling of these cases have left many teachers feeling overwhelmed and uncertain about the best course of action.
Schools in the United States have been wrestling with the same problem for some time. In 2024, Breitbart News reported that multiple students at a middle school in Beverly Hills, California, were expelled for passing around deepfake porn of classmates:
The disturbing case came to light in February when explicit images depicting the faces of 16 eighth-grade students, aged 13-14, superimposed on artificially generated naked bodies were shared through messaging apps. The victims’ sexes have not been disclosed, but the incident has sent shockwaves through the community and raised serious concerns about the misuse of emerging technologies.
This type of AI-generated image, known as a “deepfake,” can be very convincing to the untrained observer. Breitbart News previously reported that apps to generate deepfake porn have exploded in popularity.
According to Superintendent Michael Bregy, the five students expelled were deemed to be the “most egregiously involved” in the creation and dissemination of the explicit images. While the expulsion agreements remain confidential, they outline the terms and duration of the students’ expulsion, as well as the conditions for their potential return to school.
Read more at the Guardian here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.