Meta says its current approach to labeling AI-generated content is too narrow, and it will soon apply a “Made with AI” label to a wider range of video, audio and image content. Starting in May, it will add the label when it detects industry-standard AI image indicators or when users disclose that they are uploading AI-generated content. The company may also apply the label to posts flagged by fact-checkers, though content identified as false or manipulated will likely be ranked lower in feeds.
The company announced the change following an Oversight Board ruling on a video that was deceptively edited to suggest President Joe Biden was inappropriately touching his granddaughter. The Oversight Board upheld Meta’s decision to leave the video up on Facebook, since it did not violate the platform’s manipulated media policy. However, given the number of elections taking place in 2024, the board recommended that Meta “reconsider this policy quickly.”
Meta says it agrees with the board’s “recommendation that providing transparency and additional context is now the better way to address manipulated media and avoid the risk of unnecessarily restricting freedom of speech, so we’ll keep this content on our platforms so we can add labels and context.” The company added that in July it will stop removing content solely for violating its manipulated video policy. “This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media,” Monika Bickert, Meta’s vice president of content policy, wrote in a blog post.
Meta already applies an “Imagined with AI” label to photorealistic images that users create with the Meta AI tool. The updated policy goes beyond the Oversight Board’s labeling recommendations, Meta says. “If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” Bickert wrote.
While the company believes that transparency, and allowing appropriately labeled AI-generated video, images and audio to remain on its platforms, is the best way forward, it will still remove material that breaks its rules. “We will remove content, regardless of whether it is created by AI or a person, if it violates our policies against voter interference, bullying and harassment, violence and incitement, or any other policy in our Community Standards,” Bickert noted.
The Oversight Board told Engadget in a statement that it was pleased Meta took its recommendations on board. It added that it would review the company’s implementation of them in a transparency report down the line.
“While it is always important to find ways to preserve freedom of expression while protecting against demonstrable offline harm, it is especially critical to do so in the context of such an important year for elections,” the board said. “As such, we are pleased that Meta will begin labeling a wider range of video, audio and image content as ‘Made with AI’ when they detect AI image indicators or when people indicate they have uploaded AI content. This will provide people with greater context and transparency for more types of manipulated media, while also removing posts which violate Meta’s rules in other ways.”