At least 25 arrests were made during a global operation against child sexual abuse images generated by artificial intelligence (AI), the European Union's law enforcement agency Europol has said.
The suspects were part of a criminal group whose members distributed entirely AI-generated images of minors, according to the agency.
The operation is one of the first involving such child sexual abuse material (CSAM), Europol says. The absence of national legislation against these crimes made it "exceptionally challenging for investigators", it added.
The arrests were made simultaneously on Wednesday 26 February during Operation Cumberland, led by Danish police, a press release said.
Authorities in at least 18 other countries have been involved, and the operation is continuing, with more arrests expected in the coming weeks, Europol said.
In addition to the arrests, 272 suspects have so far been identified, 33 house searches conducted and 173 electronic devices seized, according to the agency.
It also said the main suspect was a Danish national who was arrested in November 2024.
The press release says he "ran an online platform where he distributed the AI-generated material he produced".
After making a "symbolic online payment", users from around the world were able to obtain a password allowing them to "access the platform and watch the abuse of children".
The agency said online child sexual exploitation was one of the top priorities for European Union law enforcement agencies, which are dealing with "an ever-growing volume of illegal content".
Europol added that even in cases where the content was fully artificial and no real victim was depicted, as with Operation Cumberland, "AI-generated CSAM still contributes to the objectification and sexualisation of children".
Europol's executive director, Catherine de Bolle, said: "These artificially generated images are so easily created that they can be produced by individuals with criminal intent, even without substantial technical knowledge."
She warned that law enforcement would have to develop "new investigative methods and tools" to meet emerging challenges.
The Internet Watch Foundation (IWF) warns that more AI-generated child sexual abuse images are being produced and are becoming more prevalent on the open web.
In research last year, the charity found that over a one-month period, 3,512 AI-generated child sexual abuse and exploitation images were discovered on one dark web site. Compared with the same month the previous year, the number of images in the most serious category (Category A) had risen by 10%.
Experts say AI-generated child sexual abuse material can often look incredibly realistic, making it difficult to tell the real from the fake.