A disturbing incident involving the circulation of AI-generated deepfake nude photos among students has been reported in Almendralejo, Spain. According to the Spanish newspaper El País, a 14-year-old girl was approached on Instagram by a boy who demanded money; when she refused, he sent her a deepfake nude photo of herself. The incident has sparked outrage among parents and prompted an investigation by the local Juvenile Prosecutor's Office. So far, 20 victims and a group of suspected culprits behind the deepfakes have been identified.
The availability and accessibility of deepfake technology have raised concerns about its potential misuse. The FBI has previously warned that deepfakes are being used for extortion, with reports of altered explicit content involving both minors and non-consenting adults. In response to the growing prevalence of AI-generated child sexual abuse material (CSAM), attorneys general from every US state have urged Congress to take action on the issue.
Experts have observed a rise in the dissemination of AI-generated CSAM, which complicates efforts to identify victims. Work to regulate and address the problem is ongoing, but the proliferation of open-source generative AI and the difficulty of reaching consensus on solutions may slow progress.
The case in Almendralejo is a stark reminder of the harm that misuse of AI technology can cause. It underscores the need for continued vigilance and proactive measures to protect individuals, particularly minors, from deepfake manipulation.