Investigation Launched into Alleged Creation and Dissemination of Inappropriate Photos at Fairfax High School

Los Angeles school officials are investigating allegations that inappropriate photos were created and shared within the Fairfax High School community. The incident is the latest in a series of alleged misuses of technology by students reported by the Los Angeles Unified School District.

Last week, administrators at Laguna Beach High School initiated an investigation after a student allegedly utilized artificial intelligence to create and circulate “inappropriate images” of classmates. Similarly, in January, five eighth-graders from Beverly Hills were expelled for their involvement in the production and sharing of fabricated nude pictures of their peers. These students had superimposed the faces of their classmates onto AI-generated nude bodies, which were then shared through messaging apps.

It remains unclear whether AI was used in the Fairfax High incident; the district's statement did not say. The Los Angeles Unified School District emphasized that the allegations are taken seriously and do not reflect the values of its community, and said appropriate disciplinary action will be taken if warranted.

Preliminary findings suggest the images were created and shared on a third-party messaging app not affiliated with the Los Angeles Unified School District. The district pointed to its "digital citizenship" lessons, offered across all grade levels, which cover the ethical use of technology, including AI, along with privacy and online safety.

Local law enforcement has been involved in previous, similar investigations. However, the district did not disclose whether the Los Angeles police or school police have been engaged in its inquiry, or whether any disciplinary measures have been taken.

The misuse of deepfake technology, which can superimpose real people's faces onto computer-generated nude bodies, has become a growing concern; such fake images can be produced with nothing more than a cellphone. Recent incidents include a 16-year-old high school student from Calabasas who reported that a former friend used AI to generate and circulate pornographic images of her, and AI-generated sexually explicit images of Taylor Swift that spread on social media in January.

In California, sharing a nude photo of a classmate without consent could lead to prosecution under state laws related to child pornography and disorderly conduct, but those laws may not apply to AI-generated deepfakes. Several federal bills have been proposed to address the gap, including one that would criminalize the production and sharing of AI-generated sexually explicit material without consent and another that would allow victims to sue. California lawmakers have also proposed extending the state's prohibitions on revenge porn and child pornography to computer-generated images.

School districts are actively working to understand and address the implications of this technology. The Orange County Department of Education has taken the lead in organizing monthly meetings with districts to discuss AI and its integration into the education system.

Author: CrimeDoor