Facebook owner Meta announced major changes on Friday to its rules on digitally created and altered media, ahead of U.S. elections that will test its ability to police misleading content generated by new artificial intelligence technologies.
The social media behemoth will begin applying “Made with AI” labels in May to AI-generated videos, images and audio posted on its platforms, Vice President of Content Policy Monika Bickert said in a blog post.
This expands a policy that previously covered only a narrow slice of doctored content. Bickert said Meta would also apply separate, more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether the content was created using AI or other tools.
The new approach will shift the company’s treatment of manipulated content from one focused on removing a limited set of posts toward one that keeps the content up while providing viewers with information about how it was made.
Meta previously announced a scheme to detect images made with other companies’ generative AI tools by reading invisible markers built into the files, but did not give a start date at the time.
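One published example of such a marker is the IPTC metadata standard’s “DigitalSourceType” property, which AI tools can embed in an image’s XMP packet with the value “trainedAlgorithmicMedia.” The sketch below, which assumes the Pillow library and a hypothetical `looks_ai_generated` helper, shows how that kind of tag could be read from a file; Meta’s actual detection pipeline has not been made public.

```python
# Illustrative only: checks an image's XMP metadata for the IPTC
# "DigitalSourceType" value that marks AI-generated media. The helper
# name and approach are assumptions, not Meta's actual implementation.
from PIL import Image  # pip install Pillow

# IPTC controlled-vocabulary value for media created by a trained algorithm
AI_MARKER = "trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's XMP metadata carries the AI-media marker."""
    with Image.open(path) as img:
        # Pillow exposes the raw XMP packet under "xmp" for JPEGs and
        # "XML:com.adobe.xmp" for PNGs, when one is present.
        xmp = img.info.get("xmp") or img.info.get("XML:com.adobe.xmp") or b""
        if isinstance(xmp, bytes):
            xmp = xmp.decode("utf-8", errors="ignore")
        return AI_MARKER in xmp

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))
```

Note that metadata of this kind is easy to strip, which is one reason Meta has also pointed to invisible watermarks built into the image pixels themselves.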
A company spokesperson told Reuters the new labeling approach would apply to content posted on Meta’s Facebook, Instagram and Threads services. Its other services, including WhatsApp and Quest virtual reality headsets, are covered by different rules.
Meta will begin applying the more prominent “high-risk” labels immediately, the spokesperson said.
The changes come months before a U.S. presidential election in November that tech researchers warn may be transformed by new generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.
In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of U.S. President Joe Biden posted on Facebook last year, which altered real footage to wrongly suggest he had behaved inappropriately.
The footage was permitted to stay up, as Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.
The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually did.