Meta’s oversight board has urged the tech company to update its rules regarding porn deepfakes to keep pace with the rapidly evolving artificial intelligence landscape.
The independent board, which serves as a top court for Meta’s content moderation decisions, made this recommendation after reviewing two cases involving deepfake images of prominent women in India and the United States.
In one case, a deepfake shared on Instagram was left up despite a complaint, while in the other, the faked image was removed from Facebook. Both decisions were appealed to the board, which determined that the deepfakes violated a Meta rule against “derogatory sexualized photoshop.” The board suggested that this rule should be made clearer to users.
Meta currently defines “derogatory sexualized photoshop” as manipulated images that are sexualized in ways likely to be unwanted by those depicted. However, the oversight board argued that referring to “photoshop” in this context is too narrow, given the availability of generative AI technology that can create images or videos with simple text prompts.
To address this issue, the board recommended that Meta clearly state that it does not allow AI-created or manipulated non-consensual sexual content. While Meta has agreed to abide by the board’s decisions in specific content moderation cases, it treats policy suggestions as recommendations that it may adopt at its discretion.
In April, the Board took on two new cases regarding Meta’s moderation of AI-generated explicit images of women.
Today, we’re publishing our decision, recommending that Meta make it easier for users to report non-consensual sexualized images.
🔗: https://t.co/FsdE1fbXM1 pic.twitter.com/YlnUhRsr9Y
— Oversight Board (@OversightBoard) July 25, 2024