YouTube has introduced a new rule requiring creators to disclose when realistic content was made with AI. The initiative is part of YouTube's effort to ensure transparency around altered or synthetic media, including generative AI, that viewers could mistake for genuine footage.
The goal of these disclosures is to keep viewers from being misled by artificially created videos that appear authentic. This is especially important as advances in generative AI blur the line between real and fake content, heightening concerns about the risks posed by AI and deepfakes, particularly during significant events like the upcoming U.S. presidential election.
The announcement builds on a commitment YouTube made in November, when it said it would roll out this update as part of a broader set of new AI policies.
YouTube clarifies that the new policy does not cover obviously unrealistic or animated content, such as fantasy scenarios with unicorns. It also excludes content using generative AI for tasks like script generation or automatic captioning.
Instead, the focus is on content that uses the likeness of real people or events. Creators must disclose digital alterations such as swapping one person's face for another's or synthetically generating a voice, as well as manipulating footage of real events or places to create realistic but fictional scenarios.
YouTube plans to display disclosure labels in the expanded description section for most videos. However, for sensitive topics like health or news, a more prominent label will appear directly on the video.
These labels will roll out across all YouTube formats in the coming weeks, starting with the mobile app and extending to desktop and TV. YouTube will also take enforcement action against creators who consistently fail to apply the required labels, and may add labels itself when necessary to prevent viewer confusion.