YouTubers asked to disclose AI-generated content – or else

YouTube is slapping a bunch of rules on AI-generated videos in the hope of curbing the spread of faked footage masquerading as legit; deepfakes that make people appear to say or do things they never did; and tracks that rip off artists’ copyrighted work.

This red tape will be rolled out over the coming months and apply to material uploaded by users, we’re told.

Specifically, the Google-owned vid-sharing giant will require content creators to disclose if their videos contain believable synthetic footage of made-up events, including AI-made depictions, or deepfakes that put words in people’s mouths. In those cases, a label will be added to a video’s description declaring the content was altered or digitally generated, and a more prominent note will be added to the video player itself if the content is particularly sensitive. Breaking the rules will lead to content being torn down and accounts punished.

Here’s YouTube’s wording:

We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.

Faked footage that could mislead viewers about important topics – such as elections, conflicts and violence, public health issues, or popular figures – must be flagged in particular. “Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties,” YouTube product veeps Jennifer Flannery O’Connor and Emily Moxley warned today.

The video site’s standard content moderation rules also still apply when it comes to AI-generated media. A label disclosing fabricated content will not shield a video from deletion if it violates YT’s community guidelines. Content that is particularly vulgar, contains harmful imagery, or conveys dangerous and illicit information will still be removed even if it’s fake.

The biz also disclosed it has tens of thousands of human moderators globally who police the platform with the help of AI-powered classifiers; the Google operation claimed generative AI – like the kind that makes deepfakes – has helped train and improve these classifiers. Presumably these neural networks will get better at spotting undisclosed faked material, and flag it up to moderators.

“YouTube has always used a combination of people and machine learning technologies to enforce our Community Guidelines, with more than 20,000 reviewers across Google operating around the world,” the ‘tube execs said. “In our systems, AI classifiers help detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines.”

People will be able to request that videos be deleted if they depict AI-generated images or audio mimicking their likenesses, for safety and/or privacy reasons. Content moderators will decide whether to take down the material, depending on whether the person submitting the complaint can be “uniquely identified.” Deepfakes portraying public figures for satirical or parody purposes, however, will probably be marked as benign. 

AI-generated music that rips off an artist’s voice and sound, meanwhile, may be removed. YouTube will roll out a feature that will apparently make it easier for creators and record labels that have partnered with the Alphabet biz to flag videos that may infringe on copyrighted music.

“In determining whether to grant a removal request, we’ll consider factors such as whether content is the subject of news reporting, analysis or critique of the synthetic vocals,” Flannery O’Connor and Moxley said.

Generative AI is improving rapidly. The sophistication and number of tools available to create fake, realistic-looking content means that orgs like YouTube will likely be inundated with AI-made media. At the moment, YouTube believes the best way to balance creativity and content moderation is to encourage YouTubers to be transparent about whether their videos have been digitally altered.

YouTube said it will introduce the aforementioned features to support its new policies over the coming months and in 2024. ®