Pedophile Rings Are Using AI to Create Child Pornography, UK Group Warns - Decrypt

A UK-based internet watchdog group is sounding the alarm over a surge in AI-generated child sexual abuse material (CSAM) circulating online, according to a report by The Guardian.

The Internet Watch Foundation (IWF) said pedophile rings are discussing and trading tips on creating illegal images of children using open-source AI models that can be downloaded and run locally on personal computers, rather than in the cloud, where widespread controls and detection tools can intervene.

Founded in 1996, the Internet Watch Foundation is a non-profit organization dedicated to monitoring the internet for sexual abuse content, specifically content that targets children.

“There’s a technical community within the offender space, particularly dark web forums, where they are discussing this technology,” IWF Chief Technology Officer Dan Sexton told The Guardian. “They are sharing imagery, they’re sharing [AI] models. They’re sharing guides and tips.”

The proliferation of fake CSAM would complicate current enforcement practices.

“Our worry is that, if AI imagery of child sexual abuse becomes indistinguishable from real imagery, there is a danger that IWF analysts could waste precious time attempting to identify and help law enforcement protect children that do not exist,” Sexton said in a previous IWF report.

The use of generative AI platforms by cybercriminals to create fake content and deepfakes of all kinds is a growing concern for law enforcement and policymakers. Deepfakes are AI-generated videos, images, or audio that fabricate people, places, and events.

For some in the U.S., the issue is also top of mind. In July, Louisiana Governor John Bel Edwards signed bill SB 175 into law, which subjects anyone convicted of creating, distributing, or possessing unlawful deepfake images depicting minors to a mandatory five to 20 years in prison, a fine of up to $10,000, or both.

With concerns that AI-generated deepfakes could make their way into the 2024 U.S. Presidential Election, lawmakers are drafting bills to stop the practice before it can take off.

On Tuesday, U.S. Senators Amy Klobuchar (D-MN), Josh Hawley (R-MO), Chris Coons (D-DE), and Susan Collins (R-ME) introduced the Protect Elections from Deceptive AI Act, aimed at stopping the use of AI to create deceptive campaign material.

During a U.S. Senate hearing on AI, Microsoft President Brad Smith suggested using Know Your Customer policies similar to those used in the banking sector to identify criminals using AI platforms for nefarious purposes.

“We’ve been advocates for those,” Smith said. “So that if there is abuse of systems, the company that is offering the [AI] service knows who is doing it, and is in a better position to stop it from happening.”

The IWF has not yet responded to Decrypt’s request for comment.
