Survey reveals that 1 in 10 minors use AI to create nude images of classmates and share them online.
Aug. 28, 2024, 4:47 a.m.
Read time: 3 minutes.
A recent survey has revealed a disturbing trend among minors: one in ten children is using AI technology to create non-consensual nude images of their classmates.
The findings, released by Thorn, a non-profit organization dedicated to protecting children from sexual exploitation, highlight the increasing misuse of AI tools among young people, particularly within school settings.
A Rising Concern
The survey, conducted online between November 3 and December 1, 2023, included 1,040 minors aged 9 to 17. These participants, from diverse backgrounds, were questioned about their experiences with child sexual abuse material (CSAM) and other harmful online activities. The results paint a concerning picture of how AI technologies, particularly “nudify” apps, are being misused by children to create fake nude images of their peers.
These findings have raised serious concerns among parents, educators, and child protection advocates, underscoring the ease with which minors can exploit these AI tools for harmful purposes.
The survey also revealed a disturbing trend: one in seven minors admitted to distributing self-generated CSAM. This highlights a growing issue of risky online behavior among young people. While some of these actions might seem like teenage mistakes, the severe consequences for the victims cannot be ignored.
Study Under Fire
Thorn, the organization behind the survey, has been the subject of controversy. The non-profit has faced scrutiny for its past work in developing tools for law enforcement, which some privacy advocates have criticized. Additionally, the organization's founder, Ashton Kutcher, stepped down last year after receiving criticism for supporting a convicted rapist.
Despite facing criticism, Thorn continues to collaborate with major tech giants like Google, Meta, and Microsoft, aiming to combat the spread of AI-generated child sexual abuse material (AIG-CSAM). However, the persistent presence of harmful AI-generated content on these platforms raises concerns about the effectiveness of these partnerships.
Against AI-Driven Harm
The survey's findings serve as a stark warning about the growing dangers of generative AI technologies in the hands of young people. Recent incidents, such as investigations in Washington State and Florida where students used AI to create inappropriate images of their teachers and classmates, highlight the real-world consequences of this digital abuse.
As the report concludes, the need for proactive measures to address these risks is clear. While technology plays a significant role in facilitating these harmful behaviors, the underlying issue lies in the behaviors themselves. The survey calls for open discussions about the dangers of “deepfake nudes” and the establishment of clear boundaries regarding acceptable behavior in schools and communities, irrespective of the tools being used.
The survey emphasizes the importance of educating both young people and adults about the potential harms of AI misuse, stressing that the consequences for victims are severe and far-reaching. The findings urge society to take decisive action to curb these dangerous trends before they escalate further.