The Oversight Board is pushing Meta to change how it moderates the Arabic term “shaheed,” which has been removed from the company’s platforms more frequently than any other word or phrase. Meta asked the group for help crafting new rules last year, after internal efforts to update them had stalled.
The board points out that while “martyr” is a common translation of the Arabic word “shaheed,” the word can have “multiple meanings” and the two are not always synonymous. Meta’s current rules, however, treat the word only as “martyr,” which the company says implies praise. As a result, Meta removes the term whenever it is used in reference to people the company has classified as “dangerous individuals.”
However, this policy ignores the “linguistic complexity” of the word, which is “often used, even with reference to dangerous individuals, in reporting and neutral commentary, academic discussion, human rights debates and even more passive ways,” the Oversight Board says in its opinion. “There is strong reason to believe the multiple meanings of ‘shaheed’ result in the removal of a substantial amount of material not intended as praise of terrorists or their violent actions.”
In its recommendations to Meta, the Oversight Board says the company should end its “blanket ban” on using the word to refer to “dangerous individuals,” removing posts only when they contain other clear “signals of violence” or break other policies. The board also wants Meta to better explain how it uses automated systems to enforce these rules.
If Meta adopts the Oversight Board’s recommendations, the change could have a significant impact on Arabic-speaking users across its apps. The board notes that the word is so common it likely “accounts for more content removals under the Community Standards than any other single word or phrase.”
“Meta has been operating under the assumption that censorship can and will improve safety, but the evidence suggests that censorship can marginalize whole populations while not improving safety at all,” the board’s co-chair (and former Danish prime minister) Helle Thorning-Schmidt said in a statement. “The Board is especially concerned that Meta’s approach impacts journalism and civic discourse because media organizations and commentators might shy away from reporting on designated entities to avoid content removals.”
This is hardly the first time Meta has been criticized for moderation policies that disproportionately impact Arabic-speaking users. A 2022 report commissioned by the company found that Meta’s moderators were less accurate when assessing Palestinian Arabic, resulting in “false strikes” on users’ accounts. The company apologized last year after Instagram’s automated translations began inserting the word “terrorist” into the profiles of some Palestinian users.
The opinion is also yet another example of how long it can take for Meta’s Oversight Board to influence the social network’s policies. The company first asked the board to weigh in on the rules more than a year ago (the Oversight Board said it “paused” publication of its opinion after the October 7 attacks in Israel to ensure its recommendations “held up” to the “extreme stress” of the conflict in Gaza). Meta will now have two months to respond to the recommendations, though actual changes to the company’s policies and practices could take several more weeks or months to implement.
“We want people to be able to use our platforms to share their views, and have a set of policies to help them do so safely,” a Meta spokesperson said in a statement. “We aim to apply these policies fairly but doing so at scale brings global challenges, which is why in February 2023 we sought the Oversight Board’s guidance on how we treat the word ‘shaheed’ when referring to designated individuals or organizations. We will review the Board’s feedback and respond within 60 days.”