IWF Demands Action On Pedophiles Using Generative AI

The Internet Watch Foundation is warning that generative AI threatens to enable the "generation of images at scale" that will "overwhelm those working to fight online child sexual abuse."

The body is alarmed at the rapid progress of generative AI over the course of the past year and predicts that AI-generated child sexual abuse material (CSAM) will only become more graphic over time.

The most disturbing of crimes

The Internet Watch Foundation says that generative AI holds great potential to better our lives, but warns that the technology can easily be turned to malevolent purposes.

Susie Hargreaves, chief executive of the IWF, told the Guardian on Wednesday that the organization's "worst nightmares have come true."

“Earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers. We have now passed that point,” said Hargreaves.

She added, “Chillingly, we are seeing criminals deliberately training their AI on real victims’ images who have already suffered abuse. Children who have been raped in the past are now being incorporated into new scenarios because someone, somewhere, wants to see it.”

The scale of the problem is already creating major difficulties for those fighting CSAM.

In one month, 20,254 AI-generated images were posted to a single dark web CSAM forum. Of these, IWF analysts identified 2,562 as criminal pseudo-photographs created with the aid of generative AI.

More than half of these showed children under the age of 10, while 564 were classified as category A images – the most serious form of child abuse imagery.

Celebrity CSAM photos

Generative AI is creating new categories of CSAM. IWF findings show that celebrities are being de-aged and transformed into children with AI tools.

The de-aged celebrities are then placed into abusive scenarios for the gratification of online pedophiles.

The children of celebrities are also targeted, with AI tools used to "nudify" the youngsters for users on darknet forums.

The IWF says these images are increasingly difficult to tell apart from real CSAM.

“The most convincing AI CSAM is visually indistinguishable from real CSAM, even for trained IWF analysts. Text-to-image technology will only get better and pose more challenges for the IWF and law enforcement agencies,” reads the report.

Government recommendations

The IWF, which is based in the U.K., is asking tech companies around the world to ensure that CSAM is explicitly prohibited by their terms of use agreements. It also wants better training for law enforcement agencies so officers can more readily identify these types of images.

The IWF is also asking the British government to make AI CSAM a major topic of discussion at the upcoming AI Summit at Bletchley Park next month.

The U.K. is hoping to attract key players from the business world as well as world leaders to the event. Thus far, Italian Premier Giorgia Meloni is the only G7 leader confirmed to attend.
