YouTube is taking steps to help viewers better understand whether the content they’re watching was created, wholly or in part, by generative AI.

“Generative AI is transforming the ways creators express themselves — from storyboarding ideas to experimenting with tools that enhance the creative process,” YouTube said in a message shared on Monday. “But viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic.”

Part of the effort to bolster trust on the popular streaming platform includes the launch of a new tool that requires creators to inform viewers when realistic-looking content (defined by YouTube as “content a viewer could easily mistake for a real person, place, or event”) is made with altered or synthetic media, including generative AI.

YouTube offered up examples of content that creators should mark as having been altered:

  • Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video
  • Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than in reality
  • Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town

If a creator marks the content in such a way, the disclosure will show as a label. YouTube said that for most videos, the label will appear in the video’s expanded description, but for content involving more sensitive topics — such as health, news, elections, or finance — it will appear on the video itself to increase its prominence.

YouTube added that it’s not requiring creators to disclose content “that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance.”

Additionally, the Google-owned company said that creators aren’t required to highlight every instance where generative AI has been used in the broader production process. For example, there’s no need to disclose when the technology has been used to create scripts, content ideas, or automatic captions.

The new labels will begin to roll out across all YouTube platforms in the coming weeks.

And a warning to YouTube creators who try to skip the disclosure requirement — YouTube said that in the future it will “look at enforcement measures for creators who consistently choose not to disclose this information.” It added that it may even slap a label on content in cases where a creator has failed to do so, “especially if the altered or synthetic content has the potential to confuse or mislead people.”
