slug,title,url,date,year,month,topics,category,outcome,Geographic Region,summary,text_length,word_count,body bun-0w49p93l,Breast Cancer Awareness Content,https://www.oversightboard.com/decision/bun-0w49p93l/,"May 15, 2025",2025,,"Freedom of expression,Health,Sex and gender equality",Adult nudity and sexual activity,Overturned,"Canada,United Kingdom,United States",Fifteen users appealed Meta’s decisions to remove their Facebook and Instagram posts featuring different types of breast cancer awareness content.,11259,1624,"Multiple Case Decision May 15, 2025 Fifteen users appealed Meta’s decisions to remove their Facebook and Instagram posts featuring different types of breast cancer awareness content. Overturned FB-HG46TXVV Platform Facebook Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location Canada,United Kingdom,United States Date Published on May 15, 2025 Overturned IG-8I65L4BR Platform Instagram Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location Portugal Date Published on May 15, 2025 Overturned IG-AF358TQZ Platform Instagram Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location Belgium Date Published on May 15, 2025 Overturned IG-WB5PWTQX Platform Instagram Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location France Date Published on May 15, 2025 Overturned FB-2ZYYARG4 Platform Facebook Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location United Kingdom Date Published on May 15, 2025 Overturned IG-MGZ9LVHM Platform Instagram Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location Italy Date Published on May 15, 2025 Overturned FB-PC8EOREZ Platform Facebook Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location United Kingdom Date Published on May 15, 2025 Overturned IG-MYN4NFW5 Platform Instagram Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location Germany Date Published on May 15, 2025 Overturned IG-01K6V6SO Platform Instagram Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location Japan Date Published on May 15, 2025 Overturned IG-1L14W06S Platform Instagram Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location Czech Republic,United States Date Published on May 15, 2025 Overturned FB-G25CZT99 Platform Facebook Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location United States Date Published on May 15, 2025 Overturned FB-VNVO9UVT Platform Facebook Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location United Kingdom Date Published on May 15, 2025 Overturned IG-5YYDVVT6 Platform Instagram Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location United States Date Published on May 15, 2025 Overturned IG-VKLZXQU0 Platform Instagram Topic Freedom of expression,Health,Sex and gender equality Standard Adult nudity and sexual activity Location Canada,United States Date Published on May 15, 2025 Overturned FB-0LPPBEEG Platform Facebook Topic Freedom of expression,Health,Sex and 
gender equality Standard Adult nudity and sexual activity Location Guyana Date Published on May 15, 2025 Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary Fifteen users appealed Meta’s decisions to remove their Facebook and Instagram posts featuring different types of breast cancer awareness content. After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and restored all 15 posts. About the Cases In October and November 2024, 15 users from various countries (Belgium, Canada, Czech Republic, France, Germany, Guyana, Italy, Japan, Portugal, United Kingdom and United States) posted content raising awareness around breast cancer in the context of “Pink October,” or Breast Cancer Awareness Month, an international campaign to raise awareness of the disease. The posts comprise three types of content: cartoon images describing breast cancer symptoms; images of mastectomy scarring and nipple tattoos; and real photographs sharing people’s experiences of the disease. The users who appealed Meta’s removal decisions to the Board argued that their intention when posting was to share true stories about breast cancer, and relevant information about symptoms for prevention purposes and aesthetic services that help cancer survivors heal. Under Meta’s Adult Nudity and Sexual Activity Community Standard, the company restricts “the display of nudity or sexual activity because some people in [Meta’s community] may be sensitive to this type of content, particularly due to cultural background or age.” This policy applies to “real photographs and videos of nudity and sexual activity, AI- or computer-generated images of nudity and sexual activity, and digital imagery, regardless of whether it looks ‘photorealistic.’” The company considers “uncovered female nipples” as “nudity” under this policy. Meta allows, however, visible female nipples when shared in “mastectomy,” “medical” or “health” contexts, which covers content seeking to “inform, discuss, or educate people about health-related issues ... or disease” or that “relates directly to the treatment of a disease,” like a “medical examination by oneself or a medical practitioner.” After the Board brought these cases to Meta’s attention, the company determined that none of the 15 pieces of content violated the Adult Nudity and Sexual Activity policy and that its original decisions to remove the posts were incorrect. Meta considered that the breast cancer symptoms cases did not violate the policy because they depict female nipples in cartoon images that describe breast cancer symptoms. Meta considered that the mastectomy scarring and nipple tattoo cases also did not violate the policy because they depict visible female nipples in a mastectomy context, which is allowed under the Adult Nudity and Sexual Activity Community Standard. Finally, Meta considered that the people’s experiences cases did not violate the policy because they depict female nipples in real photographs that were shared in a medical, educational, and/or awareness-raising context. The company then restored all 15 pieces of content to Facebook and Instagram. 
Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Cases These cases demonstrate that, despite Meta’s efforts to further improve accuracy in the enforcement of its exceptions to the Adult Nudity and Sexual Activity Community Standard, errors continue to hinder users’ ability to raise awareness about breast cancer. In one of the Board’s earliest decisions, the Breast Cancer Symptoms and Nudity decision , the Board emphasized that the removal of such content impacts not only users’ freedom of expression but also their right to health, given that access to health-related information is an important part of the right to health. The impact of similar errors was also addressed in the Board’s Education Posts About Ovulation , Breast Self-Exam and Testicular Cancer Self-Check Infographics summary decisions. The Board has issued recommendations aimed at reducing Meta's enforcement errors specifically concerning exceptions to its Adult Nudity and Sexual Activity Community Standard. For example, the Board recommended that Meta “implement an internal audit procedure to continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes” ( Breast Cancer Symptoms and Nudity , recommendation no. 5). The Board considers that Meta did not address this recommendation, given that the company bundled it with recommendation no. 1, mentioned below, without engaging with the internal audit procedure outlined by the Board. The Board also recommended that Meta “improve the automated detection of images with text-overlay to ensure that posts raising awareness of breast cancer symptoms are not wrongly flagged for review” ( Breast Cancer Symptoms and Nudity , recommendation no. 1). The Board considers this recommendation implemented as demonstrated through published information. Meta reported enhancements to text-overlay and the development of a new health content classifier with better capabilities to identify breast cancer context. The company reported that these enhancements have been in place since July 2021. To show the impact of the recommendation, Meta reported that its implementation has contributed to an additional 1,000 pieces of content being sent for human review that would have previously been removed between March 21 and April 18, 2023, alone. Even with the improvements reported by Meta, enforcement errors may still occur in at-scale content moderation. The Board encourages Meta to continue to improve its ability to accurately enforce its policies. The Board additionally recommended that Meta “ensure users can appeal decisions taken by automated systems to human review when their content is found to have violated Facebook’s Community Standard on Adult Nudity and Sexual Activity” ( Breast Cancer Symptoms and Nudity , recommendation no. 4). 
Meta declined this recommendation because, according to the company, “the majority of appeals are reviewed by content reviewers.” “If users appeal a decision [Meta] make[s] to remove nudity, the appeal will be reviewed by a content reviewer, except in cases where [the company has] capacity constraints.” The fact that 15 pieces of content that clearly fall within the exceptions to the Adult Nudity and Sexual Activity policy were deemed violating by Meta until the cases were brought to the company’s attention shows that enforcement accuracy may improve if Meta implements this recommendation. The Board encourages Meta to continue to improve its ability to accurately detect content that falls within exceptions to the Adult Nudity and Sexual Activity policy. While Meta’s commitment to the aforementioned recommendation no. 1 has been reportedly preventing enforcement errors, a commitment to recommendations nos. 4 and 5 seems necessary in light of the multiple enforcement errors identified by the Board. Implementing these recommendations would further strengthen the company’s ability to reverse errors. Decision The Board overturns Meta’s original decisions to remove the 15 pieces of content. The Board acknowledges Meta’s correction of its initial errors once the Board brought the cases to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-1d7m83y5,Proud Boys News Article,https://www.oversightboard.com/decision/bun-1d7m83y5/,"February 27, 2024",2024,,"Freedom of expression,Journalism,News events",Dangerous individuals and organizations,Overturned,United States,"The Board reviewed two Facebook posts, removed by Meta, which linked to a news report about the criminal sentencing of Proud Boys members. After the Board brought these two appeals to Meta’s attention, the company reversed its original decisions and restored both posts.",7172,1090,"Multiple Case Decision February 27, 2024 The Board reviewed two Facebook posts, removed by Meta, which linked to a news report about the criminal sentencing of Proud Boys members. After the Board brought these two appeals to Meta’s attention, the company reversed its original decisions and restored both posts. Overturned FB-2JHTL3QD Platform Facebook Topic Freedom of expression,Journalism,News events Standard Dangerous individuals and organizations Location United States Date Published on February 27, 2024 Overturned FB-ZHVJLX60 Platform Facebook Topic Freedom of expression,Journalism,News events Standard Dangerous individuals and organizations Location United States Date Published on February 27, 2024 This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not consider public comments and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary The Board reviewed two Facebook posts, removed by Meta, which linked to a news report about the criminal sentencing of Proud Boys members. After the Board brought these two appeals to Meta’s attention, the company reversed its original decisions and restored both posts. 
Case Description and Background In September 2023, two Facebook users posted a link to a news article about the conviction and sentencing of a member of the Proud Boys who participated in the January 6, 2021, attack on the U.S. Capitol. The article includes a picture of a group of men, each wearing a T-shirt displaying the text ""proud boys"" and the group's logo. Neither user added a comment or caption when sharing the link. The Proud Boys is a far-right group founded in 2016 that quickly became known for violence and extremism, including playing a significant role in the January 6, 2021, attack on the U.S. Capitol, for which many group members have been prosecuted. Meta originally removed both posts from Facebook, citing its Dangerous Organizations and Individuals policy, under which the company prohibits representation of and certain speech about individuals and organizations that Meta designates as dangerous, as well as unclear references to them. However, the policy recognizes that “users may share content that includes references to designated dangerous organizations and individuals to report on, condemn or neutrally discuss them or their activities.” In their appeals to the Board, both users argued that their content did not violate Meta’s Community Standards. The user in the first case claimed that the news article was reposted to inform people about the conviction of the Proud Boys leader and stated that if the content had been reviewed by a human, rather than a bot, the reviewer would have concluded that it did not violate Meta's Community Standards. In the second case, the user stated that the purpose of the post was to inform people that justice had been done with regard to an act of terrorism. They also emphasized the importance of human moderation in such instances, since Meta's automated systems made an incorrect decision, likely influenced by the words used in the article rather than by its context. After the Board brought these two cases to Meta’s attention, the company determined that the posts did not violate its policies. Although the posts refer to the Proud Boys, a designated organization, they simply report on the group. Meta concluded that its initial removals were incorrect, as the posts fall within the exception that permits users “to report on, condemn or neutrally discuss” dangerous organizations and individuals. Meta restored both pieces of content to the platform. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook and Instagram users. Case Significance These cases illustrate the challenges associated with enforcement of exceptions for news reporting, as set out in Meta's Dangerous Organizations and Individuals Community Standard. This kind of error directly impacts users' ability to share, on Meta's platforms, external links to news reporting about a designated group or organization, even when the reporting is neutral and has public value. 
The Board has previously issued several recommendations regarding Meta's Dangerous Organizations and Individuals policy and news reporting. These include a recommendation to “add criteria and illustrative examples to Meta’s DOI policy to increase understanding of exceptions, specifically around neutral discussion and news reporting,” which Meta has implemented as demonstrated through published information (Shared Al Jazeera Post, recommendation no. 1). The Board has also urged Meta to “assess the accuracy of reviewers enforcing the reporting allowance under the DOI policy to identify systemic issues causing enforcement errors” (Mention of the Taliban in News Reporting, recommendation no. 5). Furthermore, the Board has recommended that Meta “should conduct a review of the HIPO ranker [high-impact false positive override system] to examine if it can more effectively prioritize potential errors in the enforcement of allowances to the Dangerous Organizations and Individuals policy, including news reporting content, where the likelihood of false-positive removals that impacts freedom of expression appears to be high” (Mention of the Taliban in News Reporting, recommendation no. 6). Meta reported implementing the last two recommendations without publishing further information, so this implementation cannot be verified. Although Meta reports that it has implemented all of these recommendations, the Board remains concerned that these two cases underscore the need for more effective measures in line with them. The Board emphasizes that full adoption of these recommendations, alongside Meta publishing information to demonstrate they have been successfully implemented, could reduce the number of incorrect removals of news reports under Meta’s Dangerous Organizations and Individuals policy. Decision The Board overturns Meta’s original decisions to remove these two pieces of content. The Board acknowledges Meta’s correction of its initial errors once the Board brought these cases to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-1ynnk264,Gender Identity Debate Videos,https://www.oversightboard.com/decision/bun-1ynnk264/,"April 23, 2025",2025,,"Freedom of expression,LGBT",Remove the term “transgenderism” from the Hateful Conduct policy and corresponding implementation guidance.,Upheld,"Canada,United States","In two posts that include videos in which a transgender woman is confronted for using a women’s bathroom and a transgender athlete wins a track race, the majority of the Board has upheld Meta’s decisions to leave up the content.",57071,8885,"Multiple Case Decision April 23, 2025 In two posts that include videos in which a transgender woman is confronted for using a women’s bathroom and a transgender athlete wins a track race, the majority of the Board has upheld Meta’s decisions to leave up the content. Upheld FB-XHPXAN6Z Platform Facebook Topic Freedom of expression,LGBT Location Canada,United States Date Published on April 23, 2025 Upheld IG-84HSR2FP Platform Instagram Topic Freedom of expression,LGBT Location United States Date Published on April 23, 2025 Gender Identity Debate Videos In two posts that include videos in which a transgender woman is confronted for using a women’s bathroom and a transgender athlete wins a track race, the majority of the Board has upheld Meta’s decisions to leave up the content. 
The Board notes that public debate on policies around transgender peoples’ rights and inclusion is permitted, with offensive viewpoints protected under international human rights law on freedom of expression. In these cases, the majority of the Board found there was not enough of a link between restricting these posts and preventing harm to transgender people, with neither creating a likely or imminent risk of incitement to violence. Nor did the posts represent bullying or harassment. Transgender women and girls’ access to women’s bathrooms and participation in sports are the subjects of ongoing public debate that involves various human rights concerns. It is appropriate that a high threshold be required to suppress such speech. Beyond the content in these cases, the Board has made recommendations to address how Meta’s January 7, 2025, revisions to the renamed Hateful Conduct Policy may adversely impact LGBTQIA+ people, including minors. Additional Note: Meta’s January revisions did not change the outcome in these cases, though the Board took the rules at the time of posting and the updates into account during deliberation. On the broader policy and enforcement changes hastily announced by Meta in January, the Board is concerned that Meta has not publicly shared what, if any, prior human rights due diligence it performed, in line with its commitments under the UN Guiding Principles on Business and Human Rights. It is vital Meta ensures adverse impacts on human rights globally are identified and prevented. About the Cases The first case involves a Facebook video in which an identifiable transgender woman is confronted for using the women’s bathroom at a university in the United States. The woman who films the encounter asks the transgender woman why she is using the women’s bathroom, also stating she is concerned for her safety. The post’s caption describes the transgender woman as a “male student who thinks he’s a girl,” and asks why “this” is tolerated. This post has been viewed more than 43,000 times. Nine users reported the content, but Meta found no violations. One of those users then appealed to the Board. In the second case, a video shared on Instagram shows a transgender girl winning a track race, with some spectators disapproving of the result. The caption names the athlete, who is a minor (under 18), refers to her as a “boy who thinks he’s a girl” and uses male pronouns. This content, which has been viewed about 140,000 times, was reported by one user but Meta decided there was no violation. The user appealed to the Board. Key Findings The full Board has found neither post violates the updated Hateful Conduct policy. Considering the policy prior to Meta’s January 7 changes, the majority did not find a violation under this version either because neither post contained a “direct attack” against people based on their gender identity, which is a protected characteristic. A minority, on the other hand, has found that both posts would have violated the policy’s pre-January 7 version. For the majority of the Board, neither post would have broken the rule against “statements denying existence,” under the previous version of the policy. This rule was deleted in Meta’s January update. Nor do the posts represent a “call for exclusion” because there are no calls for the transgender woman to leave the bathroom or for the transgender athlete to be ejected, disqualified from competition or otherwise left out. 
Prior to January 7, there were exceptions under Meta’s internal guidance (not available publicly) to specifically allow calls for gender-based exclusion from sporting activities or specific sports, as well as from bathrooms. Since January 7, these exceptions are now made clear publicly in the Hateful Conduct rules, making these rules more transparent and accessible. A minority of the Board disagrees, finding that both posts violated the pre-January 7 Hate Speech policy, including on “calls for exclusion” based on gender identity and the (now deleted) rule on “statements denying existence.” The overall intent of these posts would have been clear: as direct and violating attacks that call for exclusion of transgender women and girls from access to bathrooms, participation in sports and inclusion in society, solely based on denying their gender identity. On Bullying and Harassment, the Board finds by consensus no violation for the bathroom post since the adult transgender woman would have had to self-report the content for it to be assessed under the rules prohibiting “claims about gender identity” and “calls for … exclusion.” This type of self-reporting is not required for minors (aged between 13 and 18) unless they are considered by Meta to be a “voluntary public figure.” The majority of the Board agrees with Meta that the transgender athlete, who is a minor, is a voluntary public figure who has engaged with their fame, although for different reasons. For these Board Members, the athlete voluntarily chose to compete in a state-level athletics championship, in front of large crowds and attracting media attention, having already been the focus of such attention for earlier athletic participation. Therefore, additional protections under Tier 3 of the policy, including the rule that does not permit “claims about gender identity,” do not apply, and the majority finds no violation in the athletics post. A minority disagrees, finding that the transgender athlete should not be treated as a voluntary public figure. Such public figure status should not be applied to a child because they have chosen to participate in an athletics competition that created media attention driven by their gender identity, which is not within their control. This should not equate to voluntarily engaging with celebrity. Therefore, this post violates the rule against “claims about gender identity,” as well as “calls for exclusion” under the Bullying and Harassment policy and should have been removed. The Board is concerned about the self-reporting requirement under the Bullying and Harassment policy and its impact on victims of targeted abuse, making related recommendations. For a minority of Board Members, the more troubling aspect is that both these posts meet the threshold of imminent risk of “discrimination, hostility or violence” against transgender people, under international human rights law, which requires that this content be removed. The videos were posted against a backdrop of worsening violence and discrimination against LGBTQIA+ people, including in the United States. They deliberately attack and misgender specific transgender individuals as well as transgender people as a group, and in one case, involve the safety of a child. Finally, the Board is concerned that Meta has incorporated the term “transgenderism” into its revised Hateful Conduct policy. For rules to be legitimate, Meta must frame them neutrally. 
The Oversight Board’s Decision The Oversight Board upholds Meta’s decisions to leave up the content in both cases. The Board also recommends that Meta: *Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background These cases concern two posts containing videos shared on Facebook and Instagram in the United States in 2024. The first case involves a video with a caption, shared on Facebook. A woman films an encounter in which she confronts an identifiable transgender woman for using the women’s bathroom at a university. The caption refers to the transgender woman as a “male student who thinks he’s a girl,” while asking why “this” is tolerated. In the video, the woman asks the transgender woman why she is using the women’s bathroom, challenges her on her gender and states that she “pay[s] a lot of money to be safe in the bathroom.” The transgender woman responds that she is a “trans girl” and that safety in the bathroom is important to her too. The post has been viewed about 43,000 times. Nine users reported the post for hate speech and bullying and harassment, but Meta found the content was not violating. One of those users appealed to the Board. In the second case, a video shared on Instagram shows a transgender girl winning a girls’ state-level track championship race, with some spectators disapproving of the result. The caption identifies the teenage athlete by name, referring to her as a “boy who thinks he’s a girl,” as well as using male pronouns. The post has been viewed about 140,000 times. One user reported the content for hate speech and bullying and harassment, but Meta determined the content was not violating. The user appealed Meta’s decision to the Board. The Board’s review of these cases comes at a time of significant public debate in certain parts of the world about the rights of transgender women and girls. In the United States, these debates intensified during the 2024 Presidential Election. The new U.S. administration is enacting policy changes directly affecting the rights of transgender people. Those who support broader freedom of expression for debate around these issues do not necessarily support the policy changes being enacted, many of which are also adversely impacting freedom of expression and access to information. On January 7, 2025, Meta announced revisions to its Hate Speech policy, renaming it the Hateful Conduct policy . These changes, to the extent relevant to these cases, will be described in Section 3 and analyzed in Section 5. The Board notes content is accessible on Meta’s platforms on a continuing basis, and updated policies are applied to all content present on the platform, regardless of when it was posted. The Board therefore assesses the application of policies as they were at the time of posting and, where applicable, as since revised (see also the approach in Holocaust Denial ). 2. User Submissions The user who appealed the content (bathroom post) in the first case to the Board explained that Meta is allowing what is, in their view, a transphobic post to stay on its platform. The user who appealed the athletics post in the second case said it attacks and harasses the athlete who is a minor and violates Meta’s Community Standards. Neither of the users who appealed to the Board appear in either post under review. The users who shared both posts were notified of the Board’s review and invited to submit a statement, but none were received. 3. Meta’s Content Policies and Submissions I. 
Meta’s Content Policies Hateful Conduct (previously named Hate Speech) Community Standard According to the Hateful Conduct policy rationale, Meta doesn’t allow hateful conduct (previously hate speech) on its platforms because the company “believe[s] that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are.” Meta defines “hateful conduct” in the same way it previously defined “hate speech” as “direct attacks against people” on the basis of protected characteristics, including sex and gender identity. It does not generally prohibit attacks against “concepts or institutions.” Following Meta’s January 7, 2025, update, the policy rationale states that Meta’s policies are designed to “allow room” for various types of speech, including for people to use “sex- or gender-exclusive language” when discussing “access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement or teaching roles, and health or support groups.” It recognizes that people “call for exclusion or use insulting language in the context of discussing political or religious topics, such as ... transgender rights, immigration or homosexuality.” In the same update to the Hateful Conduct policy, Meta removed various Tier 1 prohibitions (the violations considered most severe), including the rule against “statements denying existence, including but not limited to claims that protected characteristic(s) do not or should not exist, or that there is no such thing as a protected characteristic.” Under Tier 2 of the Hateful Conduct policy, Meta continues to prohibit “calls or support for exclusion or segregation or statements of intent to exclude or segregate” on the basis of protected characteristics, including sex or gender identity, unless otherwise specified. Meta prohibits “social exclusion,” defined as “denying access to spaces (physical and online) and social services, except for sex or gender-based exclusion from spaces commonly limited by sex or gender, such as restrooms, sports and sports leagues, health and support groups, and specific schools.” Prior to the January 7 update, this exemption was narrower, specifying only “gender-based exclusion in health and positive support groups.” At the time the posts were first reviewed, Meta’s internal guidance to reviewers specified that calls for exclusion from sporting activities or specific sports were permitted. However, calls for exclusion from bathrooms were permitted only on escalation. When content is escalated, it is sent to additional teams within Meta for policy and safety review. Meta’s January 7 changes have made both of these previously unpublished exceptions public and turned the bathroom exception from escalation-only to the default at-scale meaning that all human reviewers are instructed to leave content up, without requiring escalation to an internal team at Meta. 
The updated Hateful Conduct policy also now exempts from its prohibition on “insults” (described under the previous policy as “generalizations that state inferiority”) any “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’” Bullying and Harassment Community Standard The rationale for the Bullying and Harassment policy states that “bullying and harassment happen in many places and come in many different forms from making threats and releasing personally identifiable information to sending threatening messages and making unwanted malicious contact.” The Bullying and Harassment Community Standard is split into four tiers, with Tier 1 providing “universal protections for everyone,” and Tiers 2 - 4 providing additional protections, limited according to the status of the targeted person. Meta distinguishes between public figures and private individuals “to allow discussion, which often includes critical commentary of people who are featured in news or who have a large public audience.” For private individuals, the company removes “content that’s meant to degrade or shame.” In certain instances, self-reporting is required because it helps the company understand whether the person targeted actually feels bullied or harassed. The policy rationale also states that Meta recognizes “bullying and harassment can have more of an emotional impact on minors, which is why the policies provide heightened protection for anyone under the age of 18, regardless of user status.” Tier 3 of the policy prohibits “claims about ... gender identity” and “calls for ... exclusion.” Private adults who are targeted by such claims must report the violating content themselves for it to be removed. Self-reporting is not required for private minors and minors who are considered involuntary public figures. A minor who is a voluntary public figure and all adult public figures are not protected under Tier 3 of the Bullying and Harassment policy, even if they self-report. The policy rationale defines public figures, among others, as “people with over one million fans or followers on social media and people who receive substantial news coverage,” as well as government officials and candidates for office. Meta’s internal guidelines define “involuntary public figures” as: “Individuals who technically qualify as public figures but have not engaged with their fame. II. Meta’s Submissions Meta kept both posts on Facebook and Instagram, finding neither post violated its Hateful Conduct (previously named Hate Speech) or Bullying and Harassment policies. It confirmed that this outcome was not impacted by its January 7 policy changes. The Board asked questions on the scope and application of these policies and Meta responded to all of them. Bathroom Post Meta determined the bathroom post in the first case did not violate the Hateful Conduct policy. First, it did not constitute a “call for exclusion” under the Hate Speech policy because it was ambiguous whether it was questioning the transgender woman’s presence in the specific bathroom or the broader policy of allowing transgender women in women’s bathrooms. Meta noted that “removing indirect, implicit, or ambiguous attacks would interfere with people’s ability to discuss concepts or ideas on its platforms,” in this case the concept of transgender women using women’s bathrooms. 
Meta explained that following the January 7 update, it now considers calls for exclusion from bathrooms on the basis of sex or gender to be permissible. In its view, this update to the public-facing language improved transparency and simplified enforcement of this rule. Second, the post did not violate the (now deleted, and no longer applicable) Tier 1 rule on denying the existence of a protected characteristic group. Meta does not consider the post describing the depicted transgender woman as male (i.e., misgendering) to deny the existence of transgender people. Meta stated that it did not equate a statement denying that a person belongs to a protected characteristic group with denying the existence of that group. Meta also concluded the bathroom post did not violate the Bullying and Harassment policy because the transgender woman targeted in the post did not report the content herself. Meta clarified that the prohibition on “claims about gender identity” prohibits misgendering, and had the targeted person self-reported, it would have been found violating. However, even if the user had self-reported, Meta would have found the rule against “calls for exclusion” not violated, as there was no explicit call for exclusion. In response to the Board’s questions, Meta stated that it has considered alternatives to the self-reporting requirement, but they present risks of overenforcement. Meta explained it would be difficult to define the appropriate level of relationship between a targeted person and a third-party reporting on their behalf. It added it would be challenging to validate the accuracy of the information provided. In response to the Board’s questions, Meta explained that the company does not remove content solely because it contains footage of an identifiable person without consent in a private setting, as an additional violating element is required. This is because, “while private settings present different risks from public ones, many non-private activities and speech occur in private settings."" Athletics Post Meta concluded the athletics post in the second case did not violate the Hate Speech (now Hateful Conduct) policy. First, Meta found there was no prohibited call for exclusion. For Meta, the way the post draws attention to the spectators’ disapproval of the transgender girl’s victory may be directed at the “concept” of allowing transgender girls and women to compete in sporting events consistent with their gender identity. Meta explained the updated Hateful Conduct policy now publicly clarifies that social exclusion does not include “sex or gender-based exclusion from spaces commonly limited by sex or gender, such as … sports and sports leagues,” which was previously enforced through an exception in the internal guidance to reviewers. Second, for the same reasons as the bathroom post, Meta found this post did not violate the (now deleted) Tier 1 rule on denying the existence of a protected characteristic group. Meta also concluded that this post did not violate the Bullying and Harassment Community Standard. Meta found it did not contain a “call for exclusion” and that although the athlete was a minor (aged between 13 and 18), she was a “voluntary public figure” because she had engaged with her celebrity. She was therefore not protected from the Tier 3 prohibition on “claims about gender identity” (which prohibits misgendering an individual). 
Had she not been classified as a voluntary public figure, the content would have violated the rule on “claims about gender identity.” In that instance, as she is a minor, she would not have had to self-report the content for a violation to be found. In Meta’s analysis, the company considered the targeted minor a “public figure,” given the significant news coverage about her as an athlete, and that she “may have capacity to influence or communicate to large groups of individuals.” Meta explained that the company allows “more discussion and debate around public figures in part because – as here – these conversations are often part of social and political debates and the subject of news reporting.” Meta said that “athletes who enter competitions and generate news coverage, for reasons positive or negative, automatically become public figures when they appear in a specified number of news articles.” Meta also clarified that minors under the age of 13 cannot qualify as public figures. The transgender athlete in this case, who was not under 13 but is a minor, was a “voluntary public figure” because she had, in Meta’s view, “to some extent,” engaged with her fame, “speaking publicly about” her transition to a school newspaper in 2023. Through the distinction between minors who are “voluntary” or “involuntary” public figures, Meta “seeks to balance the safety of minors with their right to agency, expression, and dignity through, for example, choosing to engage with their celebrity, including the notoriety that may come with it.” The company explained “this approach respects the rights of minors by allowing the public to discuss minors who have voluntarily engaged with their fame while restricting potentially harmful negative attention directed toward[s] minors who have become famous because they are victims of crime or abuse.” Meta added that, even if either post had violated its content policies, they would still have been kept up under the newsworthiness allowance, upon escalated review. This is because both posts relate to topics of considerable political debate in the United States, and the facts underpinning the post about the transgender athlete who is a minor were subject to significant news coverage. 4. Public Comments The Oversight Board received 658 public comments that met the terms for submission. Of these comments, 53 were submitted from Asia Pacific and Oceania, 174 from Europe, eight from Latin America and the Caribbean, one from Sub-Saharan Africa and 422 from the United States and Canada. Because the public comments period closed before January 7, 2025, none of the comments address the policy changes Meta made on that date. To read public comments submitted with consent to publish, click here. The submissions covered the following themes: the immutability of biological traits; research into harms of misgendering or exclusion of transgender people; risks of under- and overenforcement of content involving transgender people; the self-reporting requirement and the status of the involuntary public figure, who is a minor, under Meta’s Bullying and Harassment policy; and the impact on women’s rights of transgender women and girls’ participation in sports and use of women’s bathrooms. 5. Oversight Board Analysis The Board selected these cases to assess whether Meta’s approach to moderating discussions about gender identity respects the human rights, including freedom of expression, of all people. 
The Board analyzed Meta’s decisions in these cases against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of these cases for Meta’s broader approach to content governance. 5.1 Compliance With Meta’s Content Policies I. Content Rules Hateful Conduct (previously named Hate Speech) policy Following the January 7 policy changes, the Board finds neither post violates Meta’s Hateful Conduct policy. A violation consists of two elements: (i.) a “direct attack” in the form of prohibitions listed under the “Do not post” section of the policy; (ii.) that targets a person or group on the basis of a listed protected characteristic. For both posts, the absence of a “direct attack” under the revised rules means there is no violation. The Board notes that “gender identity” remains a protected characteristic under Meta’s Hateful Conduct policy. Prior to the January 7 policy changes, the Board assessed both posts against two prohibitions (i.e., “direct attacks”) within the Hate Speech policy: (i.) statements denying the existence of transgender people or identities; (ii.) calls for social exclusion of transgender people. The majority of the Board found that neither post violated Meta’s rule (now deleted and no longer enforced) on “statements denying existence.” For this rule to have been violated, the content would have needed to include a more categorical statement that: transgender people or transgender identities do not exist; that no one is transgender; or, that anyone who identifies as transgender is not. Both posts refer to the biological sex of the individuals in the videos to say they “think” they are female. While this may show disregard for these individuals’ gender identities and may be rude or offensive to many, it does not amount, even by inference, to a statement that transgender people or identities do not exist. One might infer from the posts a rejection of the idea that gender identity, rather than biological sex, should determine who can participate in women's and girls’ sports or access women’s bathrooms. The expression of this opinion, however controversial, did not violate this rule in the Hate Speech policy. A minority of the Board found that both posts violated Meta’s previous rule on “statements denying existence.” For a minority, the assertions in both video captions that the depicted people are males “who think they are females,” without explanation or qualification, categorically reject the possibility that transgender women and girls are or can be anything other than male. The language and tone, while implicit, seek to characterize all transgender identities as a delusion, rather than as an identity. For this minority, finding a violation would be consistent with Board precedent recognizing how indirect narratives or “ malign creativity ” in statements can constitute hate speech (see Holocaust Denial and Post in Polish Targeting Trans People ). The Board notes that Meta’s prohibition on calls for social exclusion is retained in the January 7 policy update, but in addition to allowing gender-based exclusion from “health and support groups,” the policy now allows exclusion based on sex or gender from “spaces commonly limited by sex or gender, such as restrooms, sports and sport leagues.” The policy rationale was also updated to recognize that Meta seeks to permit sex- or gender-exclusive language on these issues. 
For the majority of the Board, neither post constituted a call for social exclusion under the Hate Speech policy prior to these changes. In the bathroom post, there is no call for the transgender woman to leave the facility, be involuntarily removed, or be excluded in future. Rather the person recording asks the transgender woman, “Do you think that’s OK?” While the conversation may have been unwelcome and rude, it does not meet the plain definition of a “call for exclusion.” In the athletics post, there is no call for the transgender athlete to be ejected, disqualified from competition or otherwise left out. The post depicts her participation and victory, implicitly elevating a question as to whether it is fair. Debating the validity of various approaches to transgender athletic participation or questioning the eligibility of a single athlete does not amount to a call for social exclusion in violation of Meta’s policy. The majority of the Board notes that prior to January 7, Meta’s internal guidance to reviewers included instructions to allow calls for gender-based exclusion from sporting activities or specific sports, and for decisions made by Meta’s internal policy teams, to allow calls for gender-based exclusion from bathrooms. Making Meta’s rules more transparent and accessible, as the January 7 amendments do in this area, is generally welcome. For a minority, both posts, understood in context (see the minority’s human rights analysis in Section 5.2), constituted prohibited “calls for exclusion” based on gender identity. That context, taken together with the statements denying the existence of transgender identity by characterizing it as a delusion, makes the overarching intent of these posts as a direct and violating attack clear: the exclusion of transgender women and girls from access to bathrooms, participation in sports and inclusion in society, solely based on denying their gender identity. Finding a violation of this rule was consistent with Meta’s Hate Speech policy rationale, which previously stated that hate speech was not permitted because “it creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” For the minority, the January 7 policy changes are not in line with Meta’s human rights responsibilities, which require the removal of both posts (see Section 5.2). Bullying and Harassment Policy The Bullying and Harassment Community Standard was not revised on January 7. In the first case, the bathroom post, the Board finds by consensus that since the transgender woman is an adult and not a public figure, she would have had to self-report the content for Tier 3 of the Bullying and Harassment policy to be assessed, including the rules on “claims about gender identity” and “calls for exclusion.” As the transgender woman in the video did not report the content herself an analysis of Tier 3 of the policy is not necessary. While the Board acknowledges that self-reporting may assist Meta in ascertaining if a targeted person feels bullied or harassed, the Board is concerned about the practical challenges and additional burden on users to report harassing content under the Bullying and Harassment policy. Public comments submitted to the Board (see PC-30418, and PC-30167) and various reports highlight the shortcomings of the self-reporting requirement and its impact on victims of targeted abuse. 
Moreover, the changes that Meta announced on January 7, which were explicitly designed to reduce automated detection of “less severe policy violations,” could increase this burden. In this regard, Meta should continue to explore how to reduce this burden on targets of bullying and harassment, for example by allowing trusted representatives to report with their agreement and on their behalf. Relatedly, when Meta requires users to self-report under certain policy lines, these reports should be effectively prioritized for review to ensure accurate enforcement of these policies. As the Board previously explained in the Post in Polish Targeting Trans People decision, Meta’s automated systems monitor and deduplicate multiple reports on the same piece of content to “ensure consistency in reviewer decisions and enforcement actions.” In the Board’s understanding, this may result in omissions of self-reports where there are multiple user reports. The Board, therefore, recommends that Meta should ensure that self-reports from users are prioritized for review, guaranteeing that any technological solutions implemented account for potential adverse impacts on at-risk groups (see Section 7). While the minority acknowledges that Meta’s rules require private adults to self-report bullying and harassment violations, these Board members are concerned about Meta’s general analysis of the setting of the bathroom post. Confronting a transgender woman in a bathroom is an invasive act that should be considered a form of ""harassment."" This was not a “non-private activity,” but an invasion of a person’s privacy. In relation to the athletics post in the second case, the Board notes that Tier 3 of the Bullying and Harassment policy does not protect people between the ages of 13 and 18, who are public figures, and who have “engaged with their fame.” According to Meta, this engagement distinguishes voluntary public figure status from involuntary status. The Board agrees that Meta was wrong to categorize the minor transgender athlete as having “engaged with” her fame (and therefore as a voluntary public figure) solely on the basis that she participated in an interview with a school newspaper a year before the athletics competition shown in the video took place. This was not a sufficient basis for Meta to demonstrate agency on the part of the child for voluntarily becoming a public figure. The majority finds that the depicted athlete qualifies as a voluntary public figure who is a minor by virtue of her choice to compete in a state-level athletics championship. Such state-level competitions garner wide attention, take place in front of large spectator crowds and are often covered by the media to generate attention. The choice to perform in a high-profile sporting event, particularly after already being the focus of media reporting for her earlier athletic participation, is a voluntary decision by the transgender athlete. For the majority, Meta properly recognizes “minor voluntary public figures” on the basis that they are exercising agency, expression and dignity through their choice to shape a public identity. With older children participating in high-level sporting competitions, active in the entertainment industry, influential on social media and occupying other prominent public roles, such recognition of personal agency and expressive rights is appropriate. A minority finds that the transgender athlete should not be considered a voluntary public figure. 
At most, she should be treated as an involuntary public figure and be afforded all the protections of the Bullying and Harassment policy, including Tier 3. These Board Members disagree with basing a “public figure” status, especially of a child, solely on an arbitrary number of online media references to them. Such media coverage does not, in itself, turn a child into a public figure, nor should it be the basis for a reduction in the protections she receives. Endorsing this approach is inconsistent with the Sharing Private Residential Information policy advisory opinion and is especially concerning when applied to a minor. A child’s choice to participate in a state-level athletics competition should not be equated to voluntarily engaging with their apparent celebrity, especially when media coverage has been driven by the minor's gender identity, which is not within their control. While the athlete participated in this event knowing she may attract attention, that is not the same as having agency and the freedom of expression to engage with the media attention that followed. There is no indication that the minor sought to engage with this apparent fame or actively participated in the media attention she received. Under the Tier 3 Bullying and Harassment rules, a minority finds that the athletics post violates the prohibition on “claims about gender identity.” These Board Members agree with Meta that “claims about gender identity” include misgendering. This post directly states that the transgender athlete is a “boy who thinks he’s a girl” and uses male pronouns. In the minority’s view, these are claims about gender identity targeting an identifiable child to harass and bully them, and as such violate the policy. For a minority, the post also violates the Tier 3 Bullying and Harassment prohibition on calls for exclusion for the same reasons it violated the similar prohibition on calls for exclusion under the previous Hate Speech policy. The transgender athlete is clearly identifiable and named in the post. For the majority, as the athlete was a voluntary public figure, Tier 3 of the Bullying and Harassment policy does not apply, and analysis of potential violations is therefore not necessary. 5.2 Compliance With Meta’s Human Rights Responsibilities The majority of the Board finds that keeping both posts on the platforms was consistent with Meta’s human rights commitments. A minority of the Board disagrees, finding that Meta has a responsibility to remove both posts. Freedom of Expression (Article 19 ICCPR) Article 19 of the International Covenant on Civil and Political Rights (ICCPR) provides for broad protection of expression, including views about politics, public affairs and human rights ( General Comment No. 34 , paras. 11-12). The UN Human Rights Committee has highlighted that the value of expression is particularly high when discussing political issues (General Comment No. 34, paras. 11, 13; see also para. 17 of the 2019 report of the UN Special Rapporteur on freedom of expression, A/74/486 ). When restrictions on expression are imposed by a state they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights , which Meta itself has committed to in its Corporate Human Rights Policy. 
The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” ( A/74/486 , para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” ( Ibid. ). The UN Special Rapporteur on freedom of expression has stated that, when applied to private actors’ governance of online speech, rules should be clear and specific ( A/HRC/38/35 , para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance regarding their enforcement. The Board finds that in relation to the updated Hateful Conduct rules as applied in these cases, the legality standard is satisfied, as those rules are clear and accessible. II. Legitimate Aim Any restriction on expression should pursue one of the legitimate aims of the ICCPR, which include protecting the “rights of others.” In several decisions, the Board has found that Meta’s Hate Speech (renamed Hateful Conduct) policy aims to protect the rights of others (see Knin Cartoon .) The Hateful Conduct policy rationale still states that Meta believes “that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are.” The Hate Speech policy previously noted that the company prohibited hate speech because “it creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” The Board has previously found that the Bullying and Harassment Community Standard also aims to protect the rights of others, noting that “users’ freedom of expression may be undermined if they are forced off the platform due to bullying and harassment,” and that “the policy also seeks to deter behavior that can cause significant emotional distress and psychological harm, implicating users’ right to health,” (see Pro-Navalny Protests in Russia ). In respect of children, respecting the best interests of the child (Article 3 UNCRC) is additionally important (see Iranian Make-up Video for a Child Marriage ). III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The Board notes that public debate on policy issues around the rights of transgender people and inclusion must be permissible. 
The Board agrees that international human rights principles on freedom of expression protect offensive viewpoints (see UN Special Rapporteur on freedom of expression, report A/74/486 , at para. 24). To justify a restriction on speech, a direct and immediate connection between the speech limited and the threat should be demonstrated in a specific and individualized fashion (General Comment No. 34, op. cit., at para. 35). For these cases, Board Members disagree on the nature and degree of harm the two posts posed, and therefore what limitations were necessary and proportionate. The majority of the Board finds that neither post creates a likely or imminent risk of incitement to violence, so there is an insufficient causal connection between restricting these posts and preventing harm to transgender people. This also means that there is no affirmative responsibility for Meta to prohibit these posts (e.g., under Article 20, para. 2 of the ICCPR). For the majority, issues of transgender women and girls’ access to women’s bathrooms and participation in sports are subjects of ongoing public debates (see PC-30308) that implicate a range of human rights concerns. As The Future of Free Speech organization argues , an “overly restrictive application of Meta’s policies can create a chilling effect” on individuals that “may refrain from participating in discussions on gender identity for fear of their views being labeled as hate speech or harassment,” and “marginalize voices that seek to challenge or critique prevailing norms around gender, which is essential for a vibrant democratic society.” It is therefore appropriate that a high threshold be demonstrated to justify any restriction, to avoid precluding public discourse and impairing understanding of these issues. The majority acknowledges that amid the intensity of these debates, these posts have the potential to be deeply offensive and even hurtful. However, the UN Special Rapporteur has stated: “Expression that may be offensive or characterized by prejudice and that may raise serious concerns of intolerance may often not meet a threshold of severity to merit any kind of restriction. There is a range of expression of hatred, ugly as it is, that does not involve incitement or direct threat, such as declarations of prejudice against protected groups. Such sentiments would not be subject to prohibition under the International Covenant on Civil and Political Rights ... and other restrictions or adverse actions would require an analysis of the conditions provided under article 19 (3) of the Covenant,” (report A/74/486 , at para. 24). For the majority, it follows that suppression of speech voicing viewpoints that are hateful or discriminatory, but below the incitement threshold, as in these cases, would not make any alleged underlying prejudice disappear. Rather, people with those ideas are driven to other platforms, often with like-minded people rather than a broader range of individuals, an outcome that may exacerbate intolerance, instead of enabling a more transparent and public discourse about sensitive issues. That the posts are not respectful is not grounds for suppressing speech. 
The Board has often used the Rabat Plan of Action’s six-factor test to assess whether content qualifies as incitement under the terms of the ICCPR and to establish a high threshold for restrictions: the social and political context; the status of the speaker; intent to incite audience against a group; the content and form of the expression; the extent of its dissemination; and, the likelihood of harm, including imminence. For the majority, the following combined factors demonstrate that Meta has no positive responsibility to remove the two posts: The majority notes that the Rabat Plan also calls for positive initiatives that do not infringe on freedom of speech to promote tolerance and inclusion, including encouraging counter-speech, such as the forceful condemnation of offensive or degrading speech. Education, information, dialogue and storytelling to foster dialogue can help drive forward these debates in a constructive way that avoids denigration and discrimination, and social media companies can play their part. There may also be less intrusive means available to Meta to address concerns around intolerance short of content removal such as removal of posts from recommendations or limits on interactions or shares. In relation to Bullying and Harassment, the majority note that Meta’s policies in this area pursue different objectives to the Hateful Conduct policy and are focused on reducing harms to targeted individuals. However, the Bullying and Harassment prohibitions are potentially very broad in their application and could sweep up speech that is self-referential, satirical, or culturally specific. Meta mitigates the risk of over-enforcement by requiring self-reporting for some violations and exempting public figures from protection against lower severity violations. While the self-reporting tools are limited, they are an appropriate mechanism for ensuring a targeted individual actually feels attacked before action on that content is taken. As noted in Section 5.1, the Board has doubts about the criteria Meta applied in designating the teen in the second case as a “voluntary public figure.” However, as applied to this post, the athlete would have understood that her participation in this level of competition would attract attention because of her transgender identity. It is, for the majority, consistent with the Convention on the Rights of the Child to consider an older teen’s autonomy and evolving capacity to take decisions. As such, the majority finds the athlete could reasonably expect to receive critical commentary about their biological sex. Waiving protections under Tier 3 of the Bullying and Harassment policy recognizes that agency, as well as the public interest in the speech at issue, and does not violate the principle of upholding the best interests of the child. Some Board Members who support the majority position note that Meta’s human rights responsibilities provide the company with a degree of discretion to take a stance on social issues. For these members, the Board’s prior relevant cases around hateful content (see Depiction of Zwarte Piet and South Africa Slurs ) mean it would be within Meta’s discretion to take a more restrictive stance against the misgendering of transgender people or other use of gender- or sex-exclusive language. In doing so, they should provide clear and accessible policies to this effect, provided they are enforced consistently and fairly. However, Meta’s human rights responsibilities do not require it to adopt this position. 
Here, Meta has chosen to provide limited protections for individuals against misgendering in the Bullying and Harassment policy. It has taken steps to prevent overreach by requiring self-reporting, and by creating the public figure criteria to allow discussion of individuals in the news. For this reason, these Board Members also uphold Meta’s decisions not to remove either post. For the minority, Meta’s decisions to leave up both posts contradict its human rights responsibilities. The minority notes that rules to address the harms of hate speech and bullying and harassment are consistent with freedom of expression because they are essential to ensure that vulnerable minorities can express themselves, including their gender identities. Meta seeks to provide an expressive space to LGBTQIA+ people to maximize diversity and pluralism (see UN Independent Expert on Sexual Orientation and Gender Identity, report A/HRC/56/49, July 2024, at paras. 7 and 66). Meta has a specific and additional responsibility to remove from its platforms any advocacy of hatred against LGBTQIA+ people that constitutes incitement to discrimination, hostility or violence (Article 20, para. 2, ICCPR; report A/74/486, at para. 9). However, for the minority, Meta’s Hateful Conduct policy exists to limit the use of language that contributes to an environment that makes discrimination and violence more acceptable and therefore sets a different threshold in terms of intent and causation. In this way, this policy is distinct from Meta’s Violence and Incitement policy. Even so, in these two cases, a minority finds that the incitement to discrimination threshold was met, as demonstrated under the Rabat Plan of Action: For the minority, taking all of these factors into consideration, both posts clearly contribute to an imminent risk of further “discrimination, hostility, or violence,” and no measure short of removal would adequately prevent harm on this basis in either case. The minority stresses that the purpose of the Bullying and Harassment policy is to ensure the safety of individuals, including children, from violence and physical harm, and to safeguard their psychological health, to prevent isolation, self-harm and suicide, so they can express themselves free of that intimidation. Meta’s human rights responsibilities are heightened in respect of children. One in three internet users globally is under 18. The Committee on the Rights of the Child has recognized bullying as a form of violence against children (CRC, General Comment No. 25 on children’s rights in relation to the digital environment, at para. 81). For a minority, Meta’s threshold for classifying children as “voluntary public figures” is too low, with implications beyond LGBTQIA+ youth. When influential and popular accounts engage in anti-LGBTQIA+ bullying and harassment, they knowingly signal to their hundreds of thousands of followers to engage in online abuse. A minority is concerned that Meta does not consider the power imbalance between the accounts leading the harassment and the targeted individuals. This can cause severe near-term harms that are especially acute for LGBTQIA+ youth, and, as discussed in the analysis above, makes the removal of both posts necessary and proportionate. 
According to Meta, in situations where a child’s gender identity is weaponized in public debates for political purposes, and this is reported on by the media, the child becomes, by virtue of that attention, a voluntary public figure who can be subject to Tier 3 attacks in the same way as an elected official. This circular cruelty is not in the best interests of the child (CRC Article 3), and in the view of a minority, Meta should have a higher threshold to apply public figure status to minors and require more robust evidence to demonstrate that they have engaged with their fame. Otherwise, a child in this situation has only two options: to stop pursuing their passions or face harassment by their bullies. Non-Discrimination The Board observes that gender identity is a protected characteristic recognized under international human rights law, and this is reflected in Meta’s listing of protected characteristics in the Hateful Conduct policy. The Board is concerned Meta has incorporated the term “transgenderism” into this policy. This term suggests that being transgender is a question of ideology, rather than an identity. For its rules to have legitimacy, Meta must seek to frame its content policies neutrally, in ways that respect human rights principles of equality and non-discrimination. This could be achieved, for example, by stating “discourse about gender identity and sexual orientation” in place of “discourse about transgenderism and homosexuality.” Human Rights Due Diligence The UN Guiding Principles on Business and Human Rights, Guiding Principles 13, 17(c) and 18, require Meta to engage in ongoing human rights due diligence for significant policy and enforcement changes, which the company would ordinarily do through its Policy Product Forum, including engagement with impacted stakeholders. The Board is concerned that Meta’s January 7, 2025, policy and enforcement changes were announced hastily, in a departure from regular procedure, with no public information shared as to what, if any, prior human rights due diligence it performed. Now that these changes are being rolled out globally, it is important that Meta ensures adverse impacts of these changes on human rights are identified, mitigated and prevented, and publicly reported. This should include a focus on how groups may be differently impacted, including women and LGBTQIA+ people. In relation to enforcement changes, due diligence should be mindful of the possibilities of both overenforcement (Call for Women’s Protest in Cuba, Reclaiming Arabic Words) and underenforcement (Holocaust Denial, Homophobic Violence in West Africa, Post in Polish Targeting Trans People). 6. The Oversight Board’s Decision The Oversight Board upholds Meta’s decision to leave up the content in both cases. 7. Recommendations Content Policy 1. As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact the rights of LGBTQIA+ people, including minors, especially where these populations are at heightened risk. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity. 
The Board will consider this recommendation implemented when Meta provides the Board with robust data and analysis on the effectiveness of its prevention or mitigation measures on the cadence outlined above and when Meta reports on this publicly. 2. To ensure Meta’s content policies are framed neutrally and in line with international human rights standards, Meta should remove the term “transgenderism” from the Hateful Conduct policy and corresponding implementation guidance. The Board will consider this recommendation implemented when the term no longer appears in Meta’s content policies or implementation guidance. Enforcement 3. To reduce the reporting burden on targets of Bullying and Harassment, Meta should allow users to designate connected accounts, which are able to flag potential Bullying and Harassment violations requiring self-reporting on their behalf. The Board will consider this recommendation implemented when Meta makes these features available and easily accessible to all users via their account settings. 4. To ensure there are fewer enforcement errors on Bullying and Harassment violations requiring self-reporting, Meta should ensure the one report representing multiple reports on the same content is chosen based on the highest likelihood of a match between the reporter and the content’s target. In doing this Meta should guarantee that any technological solutions account for potential adverse impacts on at-risk groups. The Board will consider this recommendation implemented when Meta provides sufficient data to validate the efficacy of improvements in the enforcement of self-reports of Bullying and Harassment violations as a result of this change. *Procedural Note: Return to Case Decisions and Policy Advisory Opinions" bun-2fozdyiz,Statements Targeting Indigenous Australians,https://www.oversightboard.com/decision/bun-2fozdyiz/,"August 1, 2024",2024,,"Discrimination,Marginalized communities,Race and ethnicity",Hateful conduct,Overturned,Australia,"A user appealed Meta’s decisions to leave up two Facebook posts, both shared by another user, which respond to news articles with commentary targeting the Indigenous population of Australia.",5782,866,"Multiple Case Decision August 1, 2024 A user appealed Meta’s decisions to leave up two Facebook posts, both shared by another user, which respond to news articles with commentary targeting the Indigenous population of Australia. Overturned FB-CRZUPEP1 Platform Facebook Topic Discrimination,Marginalized communities,Race and ethnicity Standard Hateful conduct Location Australia Date Published on August 1, 2024 Overturned FB-XJP78ARB Platform Facebook Topic Discrimination,Marginalized communities,Race and ethnicity Standard Hateful conduct Location Australia Date Published on August 1, 2024 Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decisions to leave up two Facebook posts, both shared by a single user, which respond to news articles with commentary targeting the Indigenous population of Australia. 
After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and removed both posts. About the Cases Between December 2023 and January 2024, an Australian user shared two Facebook posts about Indigenous Australians. The first post contains a link to an article detailing an Indigenous land council’s effort to buy land in a park in one of Sydney’s suburbs. The post’s caption calls on Indigenous people to “bugger off to the desert where they actually belong.” The second post shares an article about a car chase in northeastern Australia. The caption of the post calls for “Aboriginal ratbags” to serve prison time along with receiving “100 strokes of the cane.” Meta’s Hate Speech policy prohibits statements that support or advocate for the segregation or exclusion of people on the basis of race and ethnicity. Meta specifically prohibits content that explicitly calls for “expelling certain groups” and content that supports “denying access to spaces (physical and online).” The policy also bans “targeted cursing” and “generalizations that state inferiority,” including “mental characteristics” directed at a person or group of people based on their protected characteristic(s). After the Board brought this case to Meta’s attention, the company determined that both pieces of content violated its Hate Speech policy and the original decisions to leave both pieces of content up were incorrect. The company then removed the content from Facebook. Meta explained to the Board that the post calls for exclusion of Indigenous Australians from the parkland, and that the phrase “bugger off” in reference to them is an example of targeted cursing against members of a protected group. Furthermore, Meta acknowledged that the term “ratbag” is derogatory, with meanings that include “stupid person” in Australian English, therefore violating Meta’s Hate Speech policy prohibition on statements referring to members of a protected characteristic group as mentally inferior. Board Authority and Scope The Board has authority to review Meta's decisions following appeals from the users who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Cases The Board has repeatedly emphasized the particular importance of addressing hate speech targeted at groups that have historically been and continue to be discriminated against ( South Africa Slurs and Post in Polish Targeting Trans People decisions). The Board has also raised serious concerns that Meta’s enforcement practices may disproportionately impact First Nations peoples. 
In the Wampum Belt decision, the Board noted that while mistakes are inevitable, “the types of mistakes and the people or communities who bear the burden of those mistakes reflect design choices that must constantly be assessed and examined.” In that case, the Board emphasized the importance of Meta monitoring the accuracy of its hate speech enforcement not only generally but with particular sensitivity to enforcement errors for “subcategories of content where incorrect decisions have a particularly pronounced impact on human rights.” The Board explained that it was therefore “incumbent on Meta to demonstrate that it has undertaken human rights due diligence to ensure its systems are operating fairly and are not exacerbating historical and ongoing oppression.” On calls for exclusion, the Board recommended that Meta “should rewrite Meta’s value of 'Safety' to reflect that online speech may pose risk to the physical security of persons and the right to life, in addition to the risks of intimidation, exclusion and silencing,” (Alleged Crimes in Raya Kobo, recommendation no. 1). Implementation of this recommendation has been demonstrated through published information. Decision The Board overturns Meta’s original decisions to leave up the content. The Board acknowledges Meta’s correction of its initial errors once the Board brought the cases to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-3bczqsyp,Reports of Israeli Rape Victims,https://www.oversightboard.com/decision/bun-3bczqsyp/,"April 4, 2024",2024,,"Sex and gender equality,Violence,War and conflict",Dangerous individuals and organizations,Overturned,"Israel,Palestinian Territories","A user appealed Meta’s decisions to remove two Facebook posts that describe sexual violence carried out by Hamas militants during the October 7, 2023, terrorist attacks on Israel.",7626,1166,"Multiple Case Decision April 4, 2024 A user appealed Meta’s decisions to remove two Facebook posts that describe sexual violence carried out by Hamas militants during the October 7, 2023, terrorist attacks on Israel. After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and restored the posts. Overturned FB-YCJP0Q9D Platform Facebook Topic Sex and gender equality,Violence,War and conflict Standard Dangerous individuals and organizations Location Israel,Palestinian Territories Date Published on April 4, 2024 Overturned FB-JCO2RJI1 Platform Facebook Topic Sex and gender equality,Violence,War and conflict Standard Dangerous individuals and organizations Location Israel,United States,Palestinian Territories Date Published on April 4, 2024 This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Case Summary A user appealed Meta’s decisions to remove two Facebook posts that describe sexual violence carried out by Hamas militants during the October 7, 2023, terrorist attacks on Israel. After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and restored the posts. 
Case Description and Background In October 2023, a Facebook user uploaded two separate posts, one after the other, containing identical content featuring a video of a woman describing the rape of Israeli women committed by Hamas during the terrorist attacks on October 7. The caption contains a “trigger warning” and the speaker in the video warns users about the graphic content. The video goes on to show footage of two different women being kidnapped by Hamas, with one clip involving a woman severely injured lying face down in a truck and another an injured woman being dragged from the back of a vehicle. These images were widely shared in the aftermath of the attack. The first post was shared about 4,000 times and the second post had less than 50 shares. Both posts were initially removed by Meta for violating the Dangerous Organizations and Individuals Community Standard . Under this policy, the company prohibits third-party imagery depicting the moment of a designated terror attack on identifiable victims under any circumstances, even if shared to condemn or raise awareness of the attack. Additionally, under Meta’s Violence and Incitement Community Standard, the company removes “content that depicts kidnappings or abductions if it is clear that the content is not being shared by a victim or their family as a plea for help, or shared for informational, condemnation or awareness-raising purposes.” At the outset of the Hamas attack on October 7, Meta began strictly enforcing its Dangerous Organizations and Individuals policy on videos showing moments from individual attacks on visible victims. Meta explained this approach in its Newsroom post on October 13, saying it had done so “in order to prioritize the safety of those kidnapped by Hamas.” The two pieces of content in these two cases were therefore removed for violating Meta’s Dangerous Organizations and Individuals policy. Following this decision, many news outlets began broadcasting related footage and users also started posting similar content to raise awareness and condemn the attacks. As a result, on or around October 20, Meta updated its policies to allow users to share this footage only within the context of raising awareness or to condemn the atrocities, and applied a warning screen to inform users that the footage may be disturbing. Meta published this change to its policy in a December 5 update to its original Newsroom post from October 13 (see Hostages Kidnapped From Israel for additional information and background). Meta initially removed both pieces of content from Facebook in these two cases. The user appealed Meta’s decisions to the Board. After the Board brought these cases to Meta’s attention, the company determined the posts no longer violated its policies, given the updated allowance, and restored them both. Board Authority and Scope The Board has authority to review Meta’s decisions following appeals from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook and Instagram users. 
Case Significance These two cases highlight challenges in Meta’s ability to enforce content in a high-risk conflict situation that is constantly and rapidly evolving. As the Board found in its expedited decision, Hostages Kidnapped from Israel, Meta’s initial policy prohibiting content depicting hostages protected the dignity of those hostages and aimed to ensure they were not exposed to public curiosity. However, the Board also found that, in exceptional circumstances, when a compelling public interest or the vital interest of hostages requires it, temporary and limited exceptions can be justified. Given the context, restoring this type of content to the platform with a “mark as disturbing” warning screen is consistent with Meta’s content policies, values and human rights responsibilities. This would also be consistent with international humanitarian law and the practice of preserving documentation of alleged violations for future accountability, as well as increasing public awareness. In that case, the Board also noted that Meta took too long to roll out the application of this exception to all users and that the company’s rapidly changing approach to content moderation during the conflict has been accompanied by an ongoing lack of transparency. Previously, the Board has issued recommendations that are relevant to this case. The Board recommended that Meta announce exceptions to its Community Standards, noting “their duration and notice of their expiration, in order to give people who use its platforms notice of policy changes allowing certain expression,” ( Iran Protest Slogan , recommendation no. 5). Meta has partially implemented this recommendation as demonstrated through published information. The Board has also previously recommended that Meta preserve evidence of potential war crimes, crimes against humanity and grave violations of human rights in the interest of future accountability ( Sudan Graphic Video , recommendation no. 1 and Armenian Prisoners of War Video , recommendation no. 1). Meta has agreed to implement this recommendation and the work is still in progress. The Board emphasizes the need for Meta to act on these recommendations to ensure that content regarding human rights is enforced accurately on its platforms. Decision The Board overturns Meta’s original decisions to remove the two pieces of content. The Board acknowledges Meta’s corrections of its initial errors once the Board brought the two cases to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-6aqh31t6,Posts Supporting UK Riots,https://www.oversightboard.com/decision/bun-6aqh31t6/,"April 23, 2025",2025,,"Misinformation,Religion,Violence","Revise criteria for initiating the Crisis Policy Protocol, including identifying core criteria that, when met, are sufficient for the immediate activation of the protocol.",Overturned,United Kingdom,"In reviewing three different posts shared during the UK riots of summer 2024, the Board has overturned Meta’s original decisions to leave them up on Facebook. Each created the risk of likely and imminent harm during a period of contagious anger and growing violence.",43826,6764,"Multiple Case Decision April 23, 2025 In reviewing three different posts shared during the UK riots of summer 2024, the Board has overturned Meta’s original decisions to leave them up on Facebook. Each created the risk of likely and imminent harm during a period of contagious anger and growing violence. 
Overturned FB-9IQK53AU Platform Facebook Topic Misinformation,Religion,Violence Location United Kingdom Date Published on April 23, 2025 Overturned FB-EAJQEE0E Platform Facebook Topic Misinformation,Religion,Violence Location United Kingdom Date Published on April 23, 2025 Overturned FB-93YJ4A6J Platform Facebook Topic Misinformation,Religion,Violence Location United Kingdom Date Published on April 23, 2025 Posts Supporting UK Riots In reviewing three different posts shared during the UK riots of summer 2024, the Board has overturned Meta’s original decisions to leave them up on Facebook. Each created the risk of likely and imminent harm. They should have been taken down. The content was posted during a period of contagious anger and growing violence, fueled by misinformation and disinformation on social media. Anti-Muslim and anti-immigrant sentiment spilled onto the streets. Meta activated the Crisis Policy Protocol (CPP) in response to the riots and subsequently identified the UK as a High-Risk Location on August 6. These actions were too late. By this time, all three pieces of content had been posted. The Board is concerned about Meta being too slow to deploy crisis measures, noting this should have happened promptly to interrupt the amplification of harmful content. Additional Note: Meta’s January 7, 2025, revisions to the renamed Hateful Conduct policy did not change the outcome in these cases, though the Board took the rules at the time of posting and the updates into account during deliberation. On the broader policy and enforcement changes hastily announced by Meta in January, the Board is concerned that Meta has not publicly shared what, if any, prior human rights due diligence it performed in line with its commitments under the UN Guiding Principles on Business and Human Rights. It is vital Meta ensures any adverse impacts on human rights globally are identified and prevented. About the Cases In the first case, a text-only post shared at the start of the riots called for mosques to be smashed and buildings where “migrants,” “terrorists” and “scum” live to be set on fire. This post had more than 1,000 views. The second and third cases both involve reposts of likely AI-generated images. One is of a giant man in a Union Jack T-shirt chasing smaller Muslim men in a menacing way. Text over the image gives a time and place to gather for one of the protests and includes the “EnoughIsEnough” hashtag, while the accompanying caption says: “Here we go again.” This post had fewer than 1,000 views. The other image is of four Muslim men running in front of the Houses of Parliament after a crying blond-haired toddler. One of the men waves a knife while a plane flies overhead towards Big Ben. This image includes the logo of an influential social media account known for anti-immigrant commentary in Europe, including misinformation and disinformation. This had more than 1,000 views. All three were reported by other Facebook users for either hate speech or violence. Meta kept all three up following reviews by its automated systems only. After the users appealed to the Board and these cases were selected, the content was reviewed by humans, with Meta removing the text-only post in the first case. The company confirmed the original decisions to keep up the two likely AI-generated images. Between July 30 and August 7, 2024, violent riots broke out in the UK after three girls were murdered in the town of Southport. 
Shortly after this knife attack, misinformation and disinformation spread on social media falsely suggesting the perpetrator was a Muslim and an asylum seeker. Key Findings The Board has found that the text-based post and giant man image both violate the Violence and Incitement policy, which does not allow threats of high-severity violence against a target, or threats of violence against individuals or groups based on protected characteristics and immigration status. The text-based post contains a general threat and incitement of violence against people and property, as well as identifying targets based on religion and immigration status. The giant man image is a clear call for people to gather and carry out acts of discriminatory violence at a particular time and place. Meta’s conclusion that this image – an aggressive man chasing fleeing Muslim men, combined with a time and place and the “EnoughIsEnough” hashtag – contains no target or threat strains credibility. This content was shared on August 4, well into the week-long riots. By this time, there was more than enough context to warrant removal. The AI image of four Muslim men pursuing a crying, blond-haired toddler broke the rule under the Hateful Conduct (previously named Hate Speech) policy against attacking people based on their protected characteristics, including by making an allegation of serious criminality. Meta interpreted this post as a qualified statement in visual form, referring to the specific “Muslim man or men who were incorrectly accused of stabbing the children in Southport.” Before January 7, Meta’s internal guidance stated that qualified statements that avoid generalizing all members of a group as criminals were allowed. The Board disagrees with Meta’s application of the rule in this case, noting the image does not represent a qualified statement as it does not depict the Southport stabbing in any form. It is set in London (not Southport), with four men (not one) running after a male toddler (not three young girls), and a plane flying towards Big Ben, the latter evoking 9/11 imagery and portraying Muslims as a threat to Britain. When reviewing these cases, the Board noted issues of clarity around both the Violence and Incitement and Hateful Conduct policies, caused by discrepancies between public-facing language and internal guidelines. The Board also has strong concerns about Meta’s ability to accurately moderate hateful and violent imagery. Given that Meta’s experts failed to identify violations in both of the likely AI-generated images, this indicates that current guidance to reviewers is too formulaic, ignores how visual imagery works and is outdated. Finally, the Board notes that Meta had third-party fact-checkers reviewing certain pieces of content containing the false name of the Southport perpetrator during the riots, labelling them as “false” and reducing their visibility. With Meta replacing its third-party fact-checking system in the United States, the Board recommends the company examine the experience of other platforms using Community Notes and research their effectiveness. The Oversight Board’s Decision The Board overturns Meta’s original decisions to leave up the three posts. The Board also recommends Meta: *Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background The Oversight Board has reviewed three cases involving content posted by different users on Facebook during riots in the UK between July 30 and August 7, 2024. 
The riots followed a knife attack at a dance workshop in Southport on July 29 in which three young girls were killed and ten others injured. Axel Rudakubana, a British 17-year-old, was immediately arrested and later convicted of the attack. Yet, misinformation and disinformation about his identity, including a false name, rapidly circulated online after the attack, wrongly asserting that he was a Muslim and an asylum seeker who had recently arrived in Britain by boat. One such post was shared more than six million times. Notwithstanding a police statement at noon on July 30 disputing the online rumors, anti-immigration and anti-Muslim protests took place across 28 cities and towns, with many turning into riots. They mobilized thousands of people, including anti-Muslim and anti-immigration groups. Refugee centers and hotels housing immigrants were among many buildings attacked or set on fire, alongside looting and other disorder. The violence led to many people, including more than 100 police officers, being injured. On August 1, a judicial order lifted the Southport attacker’s anonymity as a minor to quell the disorder, but it was not immediately successful. The first post under the Board’s review was shared two days after the killings. It supported the ongoing riots, calling for mosques to be smashed and buildings where “migrants,” “terrorists” and “scum” are living to be set on fire. The post acknowledged the riots had damaged private property and injured police officers, but argued this violence was necessary for the authorities to listen and put a stop to “all the scum coming into Britain.” The post asked those who disagreed with the riots to think about the murder of the “little girls,” stating they would not be “the last victims” if the public did not do something. The post had more than 1,000 views and fewer than 50 comments. The second post was shared six days after the attack and is a reshare of another post. It contains what looks like an AI-generated image of a giant, angry and aggressive white man wearing a Union Jack (the UK flag) T-shirt menacingly chasing several smaller, fleeing Muslim men. The image is accompanied by the caption: “Here we go again.” A text overlay provides a time and place to gather for a protest in the city of Newcastle on August 10 and includes the hashtag “EnoughIsEnough.” This content has had fewer than 1,000 views. The third post, shared two days after the attack, is a repost of another likely AI-generated image. In it, four bearded Muslim men wearing white kurtas (tunics) are running in front of the Houses of Parliament in London, pursuing a crying blond-haired toddler in a Union Jack T-shirt. One of the men carries a knife. A plane flies towards Big Ben, seemingly a reference to the 9/11 terror attacks in 2001 in New York. The caption includes the words “Wake up” and the logo of an influential social media account known for anti-immigrant commentary in Europe, including misinformation and disinformation. This piece of content has had more than 1,000 views and fewer than 50 comments. Facebook users reported all three posts for violating either the Hate Speech (renamed Hateful Conduct) or Violence and Incitement policies. Meta’s automated tools assessed all three posts as non-violating, and they were kept up. When the users appealed to Meta, the company’s automated systems confirmed the decisions to leave up the content. The Board’s selection of these cases was the first time any of the three posts had been reviewed by humans. 
Following this, Meta reversed its decision on the text-only post, removing it for violating the Violence and Incitement policy, but confirmed its original decisions on the other two posts. On January 7, 2025, Meta announced revisions to its Hate Speech policy, renaming it the Hateful Conduct policy . These changes, to the extent relevant to these cases, will be described in Section 3 and analyzed in Section 5. The Board notes content is accessible on Meta’s platforms on a continuing basis, and updated policies are applied to all content present on the platform, regardless of when it was posted. The Board therefore assesses the application of policies as they were at the time of posting and, where applicable, as since revised (see also the approach in Holocaust Denial ). 2. User Submissions None of the users who posted the content in these cases responded to invitations to submit a statement to the Board. The users who reported the posts provided statements to the Board claiming the posts were clearly encouraging people to attend racist protests, inciting violence against immigrants and Muslims, or encouraging far-right supporters to continue rioting. One of the users said they were an immigrant and felt threatened by the post they were appealing about. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Violence and Incitement Meta’s Violence and Incitement policy rationale provides that the company removes “language that incites or facilitates violence and credible threats to public or personal safety,” including “violent speech targeting a person or group of people on the basis of their protected characteristic(s) or immigration status.” It also explains that Meta considers “language and context in order to distinguish casual or awareness-raising statements from content that constitutes a credible threat to public or personal safety.” The policy states that everyone is protected from “threats of violence that could lead to death (or other forms of high-severity violence)” and from “threats of violence that could lead to serious injury (mid-severity violence).” Meta’s internal guidance to moderators mentions that this protection also extends to attacks on places that could lead to death or serious injury of a person. It includes calls to burn down or attack a place. The policy does not require moderators to confirm that people are inside the building. The policy defines threats of violence as “statements or visuals representing an intention, aspiration, or call for violence against a target, and threats can be expressed in various types of statements such as statements of intent, calls for action, advocacy, expressions of hope, aspirational statements and conditional statements.” Hateful Conduct ( previously named Hate Speech) Meta defines “hateful conduct” in the same way that it previously defined “hate speech,” as “direct attacks against people” on the basis of protected characteristics, including race, ethnicity, religious affiliation and national origin. The policy continues to protect “refugees, migrants, immigrants and asylum seekers” under Tier 1 of the policy, which Meta considers to be the most severe attacks. However, they are not protected from attacks under Tier 2, in order to allow “commentary and criticism of immigration policies.” According to the policy rationale, this is because people sometimes “call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing ... 
immigration.” Meta explicitly states that its “policies are designed to allow room for these types of speech.” Tier 1 of the policy prohibits direct attacks that target people based on a protected characteristic or immigration status with “allegations of serious immorality and criminality,” providing violent criminals (“terrorists,” “murderers”) as examples. Before January 7, Meta’s internal guidance to reviewers allowed “qualified behavioral statements,” distinguishing these from prohibited generalizations, including unqualified behavioral statements, alleging serious criminality. Qualified behavioral statements describe actions that individuals or groups have taken or their participation in events while mentioning their protected characteristic or immigration status. Prohibited generalizations attribute inherent traits to all or most members of an entire group (such as saying they are “killers” or they “kill”). Since January 7, Meta’s guidance to reviewers no longer prohibits behavioral statements, including against an entire protected characteristic group or based on immigration status. This means that saying a protected characteristic group “kill” would be non-violating as a behavioral statement. II. Meta’s Submissions Text-Only Post Meta reversed its original decision on this case, removing it for violating the Violence and Incitement policy. It did so because the calls for people to riot, “smash mosques,” and “do damage to buildings” where “migrants” and “terrorists” are living, are “statements advocating violence against a place that could result in death or serious injury.” Giant Man Post Meta found this post did not violate the Violence and Incitement policy. While it contains a call for people to attend a specific gathering, according to Meta it does not contain a threat of violence against people or property. Meta emphasized that its policy, informed by its value of “voice,” seeks to protect political speech around protests. Therefore, even with ongoing widespread disorder, a post would need to contain a threat or clear target to be violating. Four Muslim Men Post Meta found this post did not violate the Hateful Conduct (formerly Hate Speech) policy. While generalizations, such as attacking all or most Muslims as violent criminals, would be violating, “referring to specific Muslim people as violent criminals” would not. Meta interpreted the image as referring to a specific “Muslim man or men who were incorrectly accused of stabbing the children in Southport,” given the false information circulating at the time. Crisis Measures In response to the Board’s questions, Meta explained it activated the Crisis Policy Protocol (CPP) in August and designated the entire UK as a Temporary High-Risk Location (THRL) from August 6–20, once the CPP was activated. THRL is a mechanism that enables Meta to implement additional safety measures, such as additional content restrictions or proactive monitoring to prevent incitement to violence in locations identified to be high-risk due to real-world events. During that time, Meta removed any calls to bring weapons to any location within the UK or to forcibly enter high-risk locations. The company did not set up an Integrity Product Operations Center (IPOC), which Meta describes as a “measure that brings together different teams, subject matter experts and capabilities from across the company (...) 
to respond in real time to potential problems or trends.” Third-Party Fact-Checking Meta relied on third-party fact-checkers to review content during the riots and rate its accuracy. For “several pieces of content ... containing the false name of the Southport perpetrator” and rated as “false,” Meta kept the content on the platform but attached labels. It also removed the content from recommendations while demoting it in the feed of users that follow the account. Meta says it reduced such content’s visibility “within hours of it appearing on the platform.” Meta also established an internal working group of people from its policy, operations and law enforcement outreach teams to monitor and respond to the situation. The Board asked Meta 13 questions about specific crisis-related measures deployed during the UK riots, including the role of third-party fact-checkers, details about the capabilities of its Hate Speech classifiers, how the context of the riots informed Meta’s analysis of the content, whether any of the posts was demoted and the risks to free expression and access to information from overenforcement. Meta responded to all these questions. 4. Public Comments The Oversight Board received nine public comments that met the terms for submission . Five of the comments were submitted from Europe, three from the United States and Canada and one from the Middle East and North Africa. Because the public comments period closed before January 7, 2025, none of the comments address the policy changes Meta announced on that date. To read public comments submitted with consent to publish, click here . The submissions covered the following themes: social media’s role in the 2024 UK riots, including in spreading misinformation and organizing and coordinating riots; the links between online anti-immigrant and anti-Muslim speech and violence; the use of imagery in hate speech and dehumanization; risks to freedom of expression from overenforcement; and, moderation measures short of removal. 5. Oversight Board Analysis The Board selected these cases to examine how Meta ensures freedom of expression in discussions around immigration, while also respecting the human rights of immigrants and religious minorities in the context of a crisis. This case falls within the Board’s strategic priorities of Crisis and Conflict Situations and Hate Speech Against Marginalized Groups. The Board analyzed Meta’s decisions in these cases against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of these cases for Meta’s broader approach to content governance. 5.1 Compliance With Meta’s Content Policies I. Content Rules Text-Only Post The Board finds this post violates Meta’s Violence and Incitement policy that prohibits credible threats of high-severity violence against a target and threats of violence against individuals or groups based on religion as a protected characteristic and immigration status. While people may often post violent or threatening language online as hyperbole or in non-serious and joking ways, language and context distinguish casual statements from credible threats to public or personal safety. This post explicitly encourages people to riot, “smash mosques” and “do damage to buildings” where “migrants” and “terrorists” are living. 
This makes it a clear violation of Meta’s Violence and Incitement policy in two ways: first, through its general threat and incitement of high-severity violence against people and property; and second, by identifying targets based on people’s religion and immigration status. There is no way to interpret this post as a casual or non-serious statement. It was published on July 31 while violence was spreading across the UK, a day after a group threw bricks and petrol bombs at a mosque, and set a police car on fire, injuring eight officers. In the weeks following, similar violence ensued across the country. Giant Man Post The Board finds this post violates Meta’s Violence and Incitement policy prohibiting threats of violence against individuals or groups based on religion as a protected characteristic and immigration status. The Board notes that there were no written words in this post directly and expressly calling for people to engage in violence. However, this content demonstrates how imagery combined with less direct written references to violence can also be an unambiguous form of incitement. The text overlay to the image specified the date, time and location for people to gather, at a specific monument in Newcastle on August 10. It was posted after several days of violent riots across the country in which Muslims and immigrants were targets and people already had, among other things, attacked a hotel housing asylum seekers, torched a library and a community center, and pelted police officers with bottles and cans. The caption “Here we go again” is, when combined with the imagery of a giant white man aggressively pursuing smaller brown men in Islamic dress, a clear call for people to continue those ongoing acts of discriminatory violence and intimidation at a specified time and place. While the statement “Enough Is Enough” could be, alone and divorced from its context, a non-violent political statement about immigration, it had been used as a hashtag to organize prior riots and connect people for that purpose. The Board finds that the combined elements of this post make the content policy violation clear. Meta’s conclusion that the image contains no target or threat strains credulity and raises questions about why it took so long for the company to activate the Crisis Policy Protocol. By the time this post was shared, there was more than enough context about how information on the riots was spreading online to ensure the violating, inciting elements in this post could have been identified, if content like it had been prioritized for human review and appropriate interpretative guidance provided. Four Muslim Men Post The Board finds the content in the third case violates Meta’s Hateful Conduct prohibition on allegations of serious criminality against a protected characteristic group. The January 7 policy changes did not change this assessment. In this case, the visual of Muslim men pursuing a crying blond-haired toddler, alongside the terrorist imagery, generalizes that Muslims are violent criminals and terrorists, and a threat to British people and children specifically. The image is a very clear example of a dehumanizing trope seeking to harness anti-immigrant sentiment by mobilizing anti-Muslim stereotypes. Through its elements, the post generalizes Muslims as a collective national threat, portraying them as menacing and falsely attributing criminality and violence to them as a group defined by their religion. 
By visually linking Muslims to one of the most infamous terrorist events in modern history, the image falsely suggests that all Muslims are terrorists and a danger to Britain. The Board disagrees with Meta’s assessment that the image was a “qualified statement,” i.e., that the depiction of a knife-wielding Muslim referred to the rumored perpetrator of the Southport attack, rather than Muslims more broadly. For the Board, while this content was posted in the context of the public disorder following the Southport stabbings and seeks to exploit the heightened emotions around them, it does not visually represent those events. At the time the image was posted, the Southport attacker was known to be a lone person and not a Muslim, the victims were three young girls and not a male toddler, and the attacks had no association with London or the 9/11 terrorist attacks. Inferring that the depiction of four Muslim men could be a reference to that lone attacker is incorrect. Moreover, even if the content depicted a lone Muslim, it would be strange logic to rely on disinformation largely fueled by anti-Muslim prejudice as a basis for permitting hate speech. II. Enforcement Action The two cases involving image-based violations of Meta’s Violence and Incitement and Hateful Conduct policies raise concerns about how Meta moderates harmful content when it is based on imagery, rather than text. The Board has previously raised similar concerns in Post in Polish Targeting Trans People, Planet of the Apes Racism, Hateful Memes Video Montage, Media Conspiracy Cartoon and Knin Cartoon. This concern is only heightened in these cases, as they demonstrate how the barriers to creating persuasive visual hate speech and incitement to violence are falling drastically with the development of new AI tools. While the fact that an image is automatically generated does not change whether it is violating, new AI tools could significantly increase the prevalence of this content. This requires Meta to ensure its automated tools are better trained to detect violations in imagery and to prioritize such content for human review until automated review is more reliable. The Board is concerned about the delay in Meta activating its Crisis Policy Protocol, a mechanism the company created in response to previous Board recommendations. The company took almost a full week to designate the UK as a Temporary High-Risk Location. As part of this measure, Meta instituted temporary prohibitions on calls to bring weapons to or forcibly enter specific locations. The Board believes that activation of the Crisis Policy Protocol would have been more effective if deployed promptly, in the critical hours and days following the attack, when false information about the attacker spread rapidly online and social media was used to organize and coordinate violence fueled by anti-immigrant, racist and anti-Muslim sentiment. Additional interventions could have facilitated quicker and more accurate proactive moderation of content linked to the riots, interrupting amplification of harmful content and potentially reducing the risk of further harm. Operational tools could have been deployed to identify and review potentially violating content, proactively scan the platforms for specific keywords or hashtags and assign specialized regional teams. These teams could have provided additional context and guidance to at-scale reviewers moderating hate speech and incitement, including in visual forms.
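Purely by way of illustration, the sketch below shows one way the kind of keyword- and hashtag-based flagging described above could route potentially violating posts into a prioritized queue for human review during a crisis. It is not Meta’s implementation: the terms, weights, threshold and queue structure are assumptions made for this example only.

```python
# Hypothetical sketch of crisis-time keyword/hashtag flagging for prioritized
# human review. Terms, weights and the threshold are illustrative assumptions,
# not a real list and not Meta's systems.
import heapq

CRISIS_TERMS = {"riot": 2.0, "smash mosques": 3.0, "#enoughisenough": 1.5}

def crisis_score(text: str) -> float:
    """Sum the weights of crisis terms found in a post's text."""
    lowered = text.lower()
    return sum(weight for term, weight in CRISIS_TERMS.items() if term in lowered)

def enqueue_for_review(queue: list, post_id: str, text: str, threshold: float = 1.5) -> bool:
    """Route posts meeting the threshold into a priority queue that specialized
    regional teams and at-scale reviewers would work through first."""
    score = crisis_score(text)
    if score >= threshold:
        # Negative score so the highest-scoring post is reviewed first.
        heapq.heappush(queue, (-score, post_id, text))
        return True
    return False

review_queue: list = []
enqueue_for_review(review_queue, "post-1", "Here we go again, meet at the monument #EnoughIsEnough")
enqueue_for_review(review_queue, "post-2", "Praying for everyone affected in Southport")
# review_queue now holds only the first post, ranked by score for human review.
```

In a real deployment the term list would presumably be curated and updated by the regional specialists referenced above as a crisis evolves, with flagged posts feeding the kind of context-informed guidance discussed later in this decision.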
The Board emphasizes that decisions to activate crisis-related measures must be made as quickly as possible. To achieve this, the company should identify core criteria that, when met in predefined combinations or individually, will trigger the immediate activation of the Crisis Policy Protocol. Additionally, this assessment should be repeated throughout the crisis to ensure that the measures in place are appropriate, effective and calibrated to the evolving risks. 5.2 Compliance With Meta’s Human Rights Responsibilities The Board finds that the removal of all three posts, as required by a proper interpretation of Meta’s content policies, is also consistent with Meta’s human rights responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19 of the International Covenant on Civil and Political Rights (ICCPR) provides for broad protection of expression, including views about politics, public affairs and human rights (General Comment No. 34, paras. 11-12). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights (UNGPs), which Meta itself has committed to in its Corporate Human Rights Policy. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. Under Principle 13 of the UNGPs, companies should “avoid causing or contributing to adverse human rights impacts through their own activities” and “prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services.” As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” (A/74/486, para. 41). At the same time, when company rules differ from international standards, companies should give a reasoned explanation of the policy difference in advance (ibid., para. 48). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (ibid.). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance regarding their enforcement.
The Board finds it is not clear to users that Meta’s Violence and Incitement policy prohibits threats against places as well as against people, noting that in the context of the UK riots many places were targeted because of their association with Muslims, asylum seekers and immigrants. The Board finds that the Hateful Conduct prohibition on allegations about “[v]iolent criminals (including but not limited to: terrorists, murderers)” is sufficiently clear as applied to the four Muslim men post. However, Meta’s attempt to distinguish prohibited generalizations about an entire group’s inherent qualities from permissible behavioral statements that may not apply to an entire group (i.e., referring to a group as “terrorists” or “murderers” versus saying they “murder”) causes significant confusion. Both can be dehumanizing generalizations, depending on the context, and the distinction in enforcement may create perceptions of arbitrariness. II. Legitimate Aim Any restriction on freedom of expression should pursue one of the legitimate aims of the ICCPR, which include the “rights of others” and the “protection of public order” (Article 19, para. 3, ICCPR). The Board has previously held that Meta’s Violence and Incitement policy pursues the legitimate aim of protecting public order and the rights of others, including in particular the right to life (see Iranian Woman Confronted on Street and Tigray Communication Affairs Bureau). The Board has also previously held that Meta’s Hate Speech (renamed Hateful Conduct) policy aims to protect the right to equality and non-discrimination, a legitimate aim that is recognized by international human rights standards (see, e.g., Knin Cartoon and Myanmar Bot). This continues to be the legitimate aim of the Hateful Conduct policy. III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The Special Rapporteur on freedom of expression has also noted that on social media, “the scale and complexity of addressing hateful expression presents long-term challenges,” (A/HRC/38/35, para. 28). However, according to the Special Rapporteur, companies should “demonstrate the necessity and proportionality of any content actions (such as removals or account suspensions).” Companies are required “to assess the same kind of questions about protecting their users’ right to freedom of expression” (ibid., para. 41). The value of expression is particularly high when discussing matters of public concern, and the right to free expression is paramount in the assessment of political discourse and commentary on public affairs. People have the right to seek, receive and impart ideas and opinions of all kinds, including those that may be controversial or deeply offensive (General Comment 34, para. 11). In the Politician’s Comments on Demographic Changes decision, the Board found that, while controversial, the opinion on immigration expressed there did not include direct dehumanizing or hateful language towards vulnerable groups, or a call for violence. When those elements are present, however, removal of content may be merited (see also the Criticism of EU Migration Policies and Immigrants decision).
The Board finds that all three posts should have been removed under Meta’s policies, and their removal is necessary and proportionate considering the six factors outlined in the Rabat Plan of Action (The Rabat Plan of Action, OHCHR, A/HRC/22/17/Add.4, 2013). Those factors are: the social and political context; the status of the speaker; the intent to incite people to act against a target group; the content and form of the speech; the extent of dissemination; and the likelihood and imminence of harm. Enforcement It is a concern that, even after the Board selected these cases, Meta maintained that the two posts containing AI-generated imagery were non-violating. It seems moderators (and even Meta’s policy teams) are given a checklist that is interpreted too formulaically, requiring particular individual elements to be present before a violation can be found. This appears to be done in pursuit of consistent enforcement. But this guidance, mainly written with text-based posts in mind, ignores how visual imagery works, resulting in inconsistencies in enforcement. This indicates a particular challenge for Meta when it comes to its rules on content alleging inherent criminality against a protected characteristic group, as these cases demonstrate. The current guidance to reviewers appears especially outdated given how much social media content is now predominantly image- and video-based. While consistency can be an important measure of the quality of Meta’s moderation, this should not be at the expense of accurately accounting for context, particularly in visual portrayals of hate speech and incitement. During a rapidly unfolding crisis, like the UK riots, the real threat to life and property makes that cost too high. Accuracy requires considering context and using judgment. As discussed above, it is particularly important that Meta’s Crisis Policy Protocol is activated swiftly and that reviewers are given context-specific guidance to ensure Meta’s policies are accurately enforced. The Board notes that in contexts like the UK riots, unverified and false information left unchallenged and uncorrected can be especially dangerous. Analysis by Professor Marc Owen Jones (specializing in misinformation and disinformation) in an X thread on July 30 explained that there were at least 27 million impressions for posts on X stating or speculating that the attacker was Muslim, a migrant, a refugee or a foreigner. He also noted that there were more than 13 million impressions for posts denouncing such speculation. Meta’s policies on misinformation are important in this context, in particular its rule on removing “misinformation or unverifiable rumors that expert partners have determined are likely to directly contribute to a risk of imminent violence or physical harm to people,” (see the Alleged Crimes in Raya Kobo decision). For misinformation that does not risk imminent violence or physical harm, measures less intrusive than removal may be necessary, for example providing additional information to correct falsehoods. Meta informed the Board that its third-party fact-checkers reviewed “several pieces of content” containing “the false name of the Southport perpetrator” soon after it began to spread, categorizing them as “false.” Fact-checkers should have been able to do this once UK authorities released statements about the false name on July 30.
These posts were then covered with the fact-check label, their visibility reduced “within hours of appearing on the platform,” and users were directed to a fact-checker’s article correcting the false claim. For more on Meta’s approach to fact-checking, see the Removal of COVID-19 Misinformation policy advisory opinion. The Board does not know what percentage of false content posted during the UK riots was reviewed by fact-checkers. The Board recalls its concerns that the number of fact-checkers Meta relies on is limited and that too often a significant volume of content queued for review by fact-checkers is never assessed. As Meta explores the rollout of its Community Notes program – with which it intends to replace third-party fact-checking, starting in the U.S. – it should examine the experience of platforms that used similar tools to respond to misinformation during the UK riots, as well as broader research into the effectiveness of Community Notes. For example, research by the Center for Countering Digital Hate (CCDH) into posts on X from five high-profile accounts that pushed false information during the UK riots found that these accounts amassed over 430 million views. According to its analysis, of the 1,060 posts shared by these accounts between July 29 and August 5, only one had a Community Note. Human Rights Due Diligence Principles 13, 17(c) and 18 of the UNGPs require Meta to engage in ongoing human rights due diligence for significant policy and enforcement changes, which the company would ordinarily do through its Policy Product Forum, including engagement with impacted stakeholders. The Board is concerned that Meta’s January 7, 2025, policy and enforcement changes were announced hastily, in a departure from regular procedure, with no public information shared as to what, if any, prior human rights due diligence it performed. Now that these changes are being rolled out globally, it is important that Meta ensures adverse impacts of these changes on human rights are identified, mitigated and prevented, and publicly reported. This should include a focus on how different groups may be differently impacted, including immigrants, refugees and asylum seekers. In relation to enforcement changes, due diligence should be mindful of the possibilities of both overenforcement (Call for Women’s Protest in Cuba, Reclaiming Arabic Words) and underenforcement (Holocaust Denial, Homophobic Violence in West Africa, Post in Polish Targeting Trans People). The Board notes the relevance of the first recommendation in the Criticism of EU Migration Policies and Immigrants cases to addressing these concerns. 6. The Oversight Board’s Decision The Oversight Board overturns Meta’s original decisions to leave up all three pieces of content, requiring the second and third posts to be removed. 7. Recommendations Content Policy 1. To improve the clarity of its Violence and Incitement Community Standard, Meta should specify that all high-severity threats of violence against places are prohibited, as well as against people. The Board will consider this recommendation implemented when Meta updates the Violence and Incitement Community Standard. Enforcement 2. To improve the clarity of its Hateful Conduct Community Standard, Meta should develop clear and robust criteria for what constitutes allegations of serious criminality, based on protected characteristics, in visual form.
These criteria should align with and adapt existing standards for text-based hateful conduct, ensuring consistent application across both text and imagery. The Board will consider this recommendation implemented when the internal implementation standards reflect the proposed change. 3. To ensure Meta responds effectively and consistently to crises, the company should revise the criteria it has established to initiate the Crisis Policy Protocol. In addition to the current approach, in which the company has a list of conditions that may or may not result in protocol activation, the company should identify core criteria that, when met, are sufficient for the immediate activation of the protocol. The Board will consider this recommendation implemented when Meta briefs the Board on its new approach for activation of the Crisis Policy Protocol and discloses the procedures in its Transparency Center. 4. To ensure accurate enforcement of its Violence and Incitement and Hateful Conduct policies in future crises, Meta’s Crisis Policy Protocol should ensure potential policy violations that could lead to likely and imminent violence are flagged for in-house human reviewers. These reviewers should provide time-bound, context-informed guidance for at-scale reviewers, including for image-based violations. The Board will consider this implemented when Meta shares documentation on this new Crisis Policy Protocol lever, outlining (1) how potential violations are flagged for in-house review; (2) how context-informed guidance is cascaded down; and (3) how that guidance is implemented by at-scale reviewers. 5. As the company rolls out Community Notes, it should undertake continuous assessments of the effectiveness of Community Notes as compared to third-party fact-checking. These assessments should focus on the speed, accuracy and volume of notes or labels being affixed in situations where the rapid dissemination of false information creates risks to public safety. The Board will consider this recommendation implemented when Meta updates the Board every six months until implementation is completed and shares the results of this evaluation publicly." bun-7e941o1n,Explicit AI Images of Female Public Figures,https://www.oversightboard.com/decision/bun-7e941o1n/,"July 25, 2024",2024,,,Bullying and harassment,Overturned,India,"In two cases of explicit AI images that resemble female public figures from India and the United States, the Oversight Board finds that both posts should have been removed from Meta’s platforms.",48680,7649,"Multiple Case Decision July 25, 2024 In two cases of explicit AI images that resemble female public figures from India and the United States, the Oversight Board finds that both posts should have been removed from Meta’s platforms. Overturned IG-JPEE85LL Platform Instagram Standard Bullying and harassment Location India Date Published on July 25, 2024 Upheld FB-4JTP0JQN Platform Facebook Standard Bullying and harassment Location Greece,United Kingdom,United States Date Published on July 25, 2024 Explicit AI Images of Female Public Figures Decision PDF Explicit AI Images of Female Public Figures Public Comments Appendix In two cases of explicit AI images that resemble female public figures from India and the United States, the Oversight Board finds that both posts should have been removed from Meta’s platforms.
Deepfake intimate images disproportionately affect women and girls – undermining their rights to privacy and protection from mental and physical harm. Restrictions on this content are legitimate to protect individuals from the creation and dissemination of sexual images made without their consent. Given the severity of harms, removing the content is the only effective way to protect the people impacted. Labeling manipulated content is not appropriate in this instance because the harms stem from the sharing and viewing of these images – and not solely from misleading people about their authenticity. The Board’s recommendations seek to make Meta’s rules on this type of content more intuitive and to make it easier for users to report non-consensual sexualized images. About the Cases These two cases involve AI-generated images of nude women, one resembling an Indian public figure, the other an American public figure. In the first case, an Instagram account that shared only AI-generated or manipulated images of Indian women posted a picture of the back of a nude woman with her face visible, as part of a set of images. This set also featured a similar picture of the woman in beachwear, most likely the source material for the explicit AI manipulation. The second case also involves an explicit AI-generated image resembling a female public figure, this time from the United States. In this image, posted to a Facebook group for AI creations, the nude woman is being groped. The famous figure she resembles is named in the caption. In the first case (Indian public figure), a user reported the content to Meta for pornography but as the report was not reviewed within 48 hours, it was automatically closed. The user then appealed to Meta, but this was also automatically closed. Finally, the user appealed to the Board. As a result of the Board selecting this case, Meta determined that its original decision to leave the content on Instagram was in error and the company removed the post for violating the Bullying and Harassment Community Standard. Later, after the Board began its deliberations, Meta disabled the account that posted the content and added the explicit image to a Media Matching Service (MMS) bank. In the second case (American public figure), the explicit image had already been added to an MMS bank for violating Meta’s Bullying and Harassment policy and so was automatically removed. These banks automatically find and remove images that already have been identified by human reviewers as breaking Meta’s rules. The user who posted the AI-generated image appealed but this was automatically closed. The user then appealed to the Board to have their post restored. Deepfake intimate images comprise synthetic media digitally manipulated to depict real people in a sexualized way. It is becoming easier to create, with fewer pictures required to generate a realistic image. One report points to a 550% increase in online deepfake videos since 2019, the vast majority of which are sexualized depictions of real individuals and target women. Key Findings The Board finds that both images violated Meta’s rule that prohibits “derogatory sexualized photoshop” under the Bullying and Harassment policy. It is clear the images have been edited to show the faces of real public figures with a different (real or fictional) nude body, while contextual clues, including hashtags and where the content was posted, also indicate they are AI-generated. 
In the second case (American public figure), there is an additional violation of the Adult Nudity and Sexual Activity policy as the explicit image shows the woman having her breast squeezed. Removing both posts was in line with Meta’s human rights responsibilities. The Board believes that people using Meta’s platforms should be able to understand the rules. While the term “derogatory sexualized photoshop” should have been clear enough to the two users posting in these cases, it is not sufficiently clear more generally to users. When the Board asked Meta about the meaning, the company said the term refers to “manipulated images that are sexualized in ways that are likely to be unwanted by the target and thus perceived as derogatory.” The Board notes that a different term such as “non-consensual” would be a clearer description to explain the idea of unwanted sexualized manipulations of images. Additionally, the Board finds that “photoshop” is too narrow to cover the array of media manipulation techniques available today, especially generative AI. Meta needs to specify in this rule that the prohibition on this content covers this broader range of editing techniques. To ensure the rules prohibiting non-consensual sexualized images are more intuitive, the Board finds they should be part of the Adult Sexual Exploitation Community Standard, rather than Bullying and Harassment. In both these cases, users would have been unlikely to perceive them as an issue of Bullying and Harassment. External research shows that users post such content for many reasons besides harassment and trolling, including a desire to build an audience, monetize pages or direct users to other sites, including pornographic ones. Therefore, Meta’s rules on these images would be clearer if the focus was on the lack of consent and the harms from such content proliferating – rather than the impact of direct attacks, which is what is implied by enforcing under Bullying and Harassment. The Adult Sexual Exploitation policy would be a more logical place for these rules. This policy already prohibits non-consensual intimate images, which is a similar issue as both are examples of image-based sexual abuse. Then, Meta could also consider renaming the policy to “Non-Consensual Sexual Content.” The Board notes the image resembling an Indian public figure was not added to an MMS bank by Meta until the Board asked why. Meta responded by saying that it relied on media reports to add the image resembling the American public figure to the bank, but there were no such media signals in the first case. This is worrying because many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance. One of the existing signals of lack of consent under the Adult Sexual Exploitation policy is media reports of leaks of non-consensual intimate images. This can be useful when posts involve public figures but is not helpful for private individuals. Therefore, Meta should not be over-reliant on this signal. The Board also suggests that context indicating the nude or sexualized aspects of the content are AI-generated, photoshopped or otherwise manipulated be considered as a signal of non-consent. Finally, the Board is concerned about the auto-closing of appeals for image-based sexual abuse. Even waiting 48 hours for a review can be harmful given the damage caused. 
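Meta’s fuller explanation later in this decision (see the Access to Remedy discussion in section 5.2) is that a report is eligible for auto-closure when its technology does not detect a high likelihood of a violation and the report is not reviewed within 48 hours, with child sexual abuse material always exempt. Purely as an illustration of that rule’s shape, and not a description of Meta’s actual systems, it could be sketched as follows; the function, field names and numeric threshold are assumptions.

```python
# Illustrative sketch of the auto-close rule described in this decision:
# a report may be auto-closed if automation does not detect a high likelihood
# of violation and no human review happens within 48 hours, with child sexual
# abuse material always exempt. Names and threshold are assumptions.
from datetime import datetime, timedelta, timezone
from typing import Optional

AUTO_CLOSE_WINDOW = timedelta(hours=48)
HIGH_LIKELIHOOD = 0.8                      # assumed classifier threshold
ALWAYS_EXEMPT = {"child_sexual_abuse_material"}

def eligible_for_auto_close(
    reported_at: datetime,
    violation_type: str,
    predicted_violation_score: float,
    human_reviewed: bool,
    now: Optional[datetime] = None,
) -> bool:
    """Return True if a report would drop out of the queue unreviewed."""
    now = now or datetime.now(timezone.utc)
    if violation_type in ALWAYS_EXEMPT:
        return False                       # never auto-closed
    if human_reviewed:
        return False                       # already handled by a person
    if predicted_violation_score >= HIGH_LIKELIHOOD:
        return False                       # kept open for review
    return now - reported_at >= AUTO_CLOSE_WINDOW

# Example: a deepfake intimate image report filed three days ago, scored low by
# automation and never reviewed, would be auto-closed under this rule.
filed = datetime.now(timezone.utc) - timedelta(days=3)
print(eligible_for_auto_close(filed, "adult_sexual_exploitation", 0.2, False))  # True
```

Under a rule of this shape, the exemption the Board contemplates would simply mean adding further violation types, such as image-based sexual abuse, to the exempt set.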
The Board does not yet have sufficient information on Meta’s use of auto-closing generally but considers this an issue that could have a significant human rights impact, requiring risk assessment and mitigation. The Oversight Board’s Decision In the first case (Indian public figure), the Board overturns Meta’s original decision to leave up the post. In the second case (American public figure), the Board upholds Meta’s decision to take down the post. The Board recommends that Meta: *Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background The Oversight Board has reviewed two cases together, one posted on Facebook, the other on Instagram, by different users. The first case involves an AI-manipulated image of a nude woman shown from the back with her face visible. Posted on Instagram, the image resembles a female public figure from India and was part of a set of images featuring a similar picture of the woman, in beachwear, which was likely the source material for the AI manipulation. The account that posted this content describes itself as only sharing AI-generated images of Indian women, and the caption includes hashtags indicating the image was created using AI. A user reported the content to Meta for pornography. This report was automatically closed because it was not reviewed within 48 hours. The user then appealed Meta’s decision to leave up the content, but this was also automatically closed and so the content remained up. The user then appealed to the Board. As a result of the Board selecting this case, Meta determined that its decision to leave the content up was in error and it removed the post for violating the Bullying and Harassment Community Standard. Later, after the Board selected the case, Meta disabled the account that posted the content and added the content to a Media Matching Service (MMS) bank. The second case concerns an image posted to a Facebook group for AI creations. It shows an AI-generated image of a nude woman being groped on the breast. The image was created with AI so as to resemble an American public figure, who is named in the caption. In this second case, the image was removed for violating Meta’s Bullying and Harassment policy. A different user had already posted an identical image, which led to it being escalated to Meta’s policy or subject matter experts who decided the content violated the Bullying and Harassment policy, specifically for “derogatory sexualized photoshop or drawings,” and removed it. That image was then added to an MMS bank. These banks automatically find and remove images that have already been identified as violating. The AI-generated image in the second case was automatically removed because it had been added to an MMS bank. The user who posted the content appealed the removal but the report was automatically closed. They then appealed to the Board to have their content restored. The Board noted the following context in reaching its decision on these cases. Deepfake intimate images are synthetic media that have been digitally manipulated to depict real people in a sexualized manner. What is perceived as pornography may differ across countries and cultures. A public comment submitted to the Board from Witness, an international human rights NGO, gives the example of a Bangladeshi deepfake of a female politician in a bikini, which could be particularly harmful because of the cultural context, though it might not be actionable in another cultural setting (see PC-27095). 
Deepfake intimate imagery is becoming easier to create using AI tools, with fewer pictures required to generate a realistic image. Women in International Security (WIIS) explains: “This means that practically everyone who has taken a selfie or posted a picture of themselves online runs the hypothetical risk of having a deepfake created in their image.” The Guardian reported that the AI firm Deeptrace analyzed 15,000 deepfake videos it found online in September 2019, noting that 96% were pornographic and 99% of those mapped faces from female celebrities onto porn performers. There has reportedly been a 550% increase in the number of online deepfake videos since 2019, with sexualized depictions of real individuals making up 98% of all deepfake videos online and women comprising 99% of the targeted individuals (Home Security Heroes 2023 report). The top 10 dedicated deepfake intimate imagery websites collectively received more than 34 million monthly visits. Image-based sexual abuse has been shown to have a significant impact on victims. The UK’s 2019 Adult Online Hate, Harassment and Abuse report quotes a range of studies on image-based sexual abuse (including deepfake intimate imagery) that have examined the experiences of victims. These studies found that victims may struggle with feelings of shame, helplessness, embarrassment, self-blame, anger, guilt, paranoia, isolation, humiliation and powerlessness; along with feeling a loss of integrity, dignity, security, self-esteem, self-respect and self-worth. Researchers of online sexual abuse suggest the harms of deepfake sexual imagery may be as severe as those associated with real non-consensual sexual images. Deepfake intimate imagery is a global issue. There have been reports of female politicians being targeted in Bangladesh, Pakistan, Italy, the United States, Northern Ireland and Ukraine. Journalists, human rights defenders and celebrities are also routinely targeted. However, anyone can be a victim of deepfake intimate imagery. There have been recent incidents in the United States and Spain of children and young teenagers being targeted with deepfake intimate imagery. Experts consulted by the Board noted that this content can be particularly damaging in socially conservative communities. For instance, an 18-year-old woman was reportedly shot dead by her father and uncle in Pakistan’s remote Kohistan region after a digitally altered photograph of her with a man went viral. Both India and the United States have considered laws and announced further plans to regulate deepfakes. A public comment from Rakesh Maheshwari, a former senior government official in cyber law, explains how India’s current laws on social media could be applied to the content in the first case (see PC-27029). However, the Board received many public comments emphasizing how important it is that social media platforms be the first line of defense because legal regimes may not move quickly enough to stop this content from proliferating. A public comment from the Indian NGO Breakthrough Trust also explains that in India, “women often face secondary victimisation” when accessing police or court services by being asked why they put pictures of themselves on the internet in the first place – even when the images were deepfaked (see PC-27044). Meta has been active in developing technologies to address a related issue, non-consensual intimate image sharing (NCII).
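Later sections of this decision describe how Meta’s Media Matching Service banks convert violating images into hashes that are matched against newly posted media. The snippet below is a deliberately simplified illustration of that general idea, using an exact SHA-256 digest for clarity; production media-matching systems reportedly rely on perceptual hashing so that resized or re-encoded copies still match, and nothing here reflects Meta’s actual code or APIs.

```python
# Simplified illustration of "banking" a violating image and matching new
# uploads against the bank. Uses an exact SHA-256 digest for clarity; real
# systems use perceptual hashes so near-duplicates also match. A sketch only.
import hashlib

class MediaMatchBank:
    def __init__(self, policy: str):
        self.policy = policy               # e.g. "Bullying and Harassment"
        self._hashes: set[str] = set()

    @staticmethod
    def _digest(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    def bank(self, image_bytes: bytes) -> None:
        """Add an image a human reviewer has confirmed as violating."""
        self._hashes.add(self._digest(image_bytes))

    def matches(self, image_bytes: bytes) -> bool:
        """Check whether a newly uploaded image is already banked."""
        return self._digest(image_bytes) in self._hashes

bank = MediaMatchBank("Bullying and Harassment")
bank.bank(b"<bytes of an escalated, human-reviewed violating image>")
print(bank.matches(b"<bytes of an escalated, human-reviewed violating image>"))  # True
print(bank.matches(b"<bytes of an unrelated upload>"))                           # False
```

A limitation the Board returns to below follows directly from this design: a bank can only catch images it has already seen, so a newly generated AI image produces a hash, or perceptual fingerprint, that no bank yet contains.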
Independent experts consulted by the Board praised Meta’s efforts in finding and removing NCII as industry-leading and called their image-matching technologies valuable. While NCII is different from deepfake intimate imagery in that NCII involves real images whereas deepfakes involve digitally created or altered images, both are examples of image-based sexual abuse. 2. User Submissions The user who reported the content in the first case said they had seen AI-generated explicit images of celebrities on Instagram and were concerned about this being available on a platform that teenagers were permitted to use. The content creator did not provide a user statement to the Board. The user who shared the post in the second case stated in their appeal that their intention wasn’t to bully, harass or degrade anyone but to entertain people. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Bullying and Harassment Community Standard The Bullying and Harassment Community Standard states under Tier 1 (Universal protections for everyone) that everyone (including public figures) is protected from “Derogatory sexualized photoshop or drawings.” Additional internal guidance provided to content moderators defines “derogatory sexualized photoshop or drawings” as content that has been manipulated or edited to sexualize it in ways that are likely to be unwanted by the target and thus perceived as derogatory – in one example, combining a real person’s head with a nude or nearly nude body. Adult Nudity and Sexual Activity policy This policy prohibits, among other depictions of nudity and sexual activity, “fully nude close-ups of buttocks” as well as “squeezing female breasts.” Squeezing female breasts is “defined as a grabbing motion with curved fingers that shows both marks and clear shape change of the breasts. We allow squeezing in breastfeeding contexts.” Adult Sexual Exploitation policy This policy prohibits: “Sharing, threatening, stating an intent to share, offering or asking for non-consensual intimate imagery that fulfils all of the three following conditions: II. Meta’s Submissions Meta assessed both posts under its Bullying and Harassment and Adult Nudity and Sexual Activity policies. The company found both violated the Bullying and Harassment policy, but only the post in the second case (American public figure) violated the Adult Nudity and Sexual Activity Community Standard. Bullying and Harassment Community Standard The Bullying and Harassment policy protects both public and private figures from “[d]erogatory sexualized photoshop or drawings,” as this type of content “prevents people from feeling safe and respected on Facebook, Instagram and Threads.” In response to the Board’s question about how the company identifies “photoshopped” or AI-generated content, Meta explained that the assessment is made on a “case-by-case basis” and relies on several signals including “context clues as well as signals from credible sources, such as articles from third party fact-checkers, credible media sources, assessments from onboarded Trusted Partners, and other non-partisan organizations or government partners.” Meta determined that the image in the first case violated this policy because it was created using AI to resemble a public figure from India and was manipulated or edited to make the figure appear nearly nude in a “sexually suggestive pose.” The company also considered the user’s handle and hashtags, both of which clearly indicate the image is AI generated. 
Meta determined that the image in the second case also violated this policy. The company considered the following factors in determining that this image was AI generated: the face appears to be combined with a nearly nude body and the “coloring, texture, and clarity of the image suggested the video [image] was AI-generated”; there was external reporting through the media on the proliferation of such generated images; and the content was posted in a group dedicated to sharing images created using artificial intelligence. Adult Nudity and Sexual Activity Community Standard Meta informed the Board that the company determined the image in the second case (American public figure) also violated the Adult Nudity and Sexual Activity policy. Because the content in the second case shows someone “grabbing” the AI-generated image of the female public figure, it violated the prohibition on imagery showing the squeezing of female breasts. According to the company, the image in the first case (Indian public figure) does not violate the policy because although fully nude close-ups of buttocks are prohibited, this image is not a close-up as defined by the company. The company also explained that the decision to remove both posts struck the right balance between its values of safety, privacy, dignity and voice because “Meta assessed the creative value of the content in this case bundle as minimal.” Looking at the hashtags and captions on both posts, the company concluded the “intent was sexual rather than artistic.” The company also concluded that the “safety concern with removing this content outweighed any expressive value of the speech.” Citing stakeholder input from an earlier policy forum on “Attacks Against Public Figures,” Meta highlighted concerns about abuse and harassment that public figures face online, and argued that this leads to self-censorship and the silencing of those who witness the harassment. In May 2024, Meta updated its Adult Nudity and Sexual Activity Community Standard, clarifying that the policy applies to all “photorealistic imagery,” and that “[w]here it is unclear if an image or video is photorealistic,” they “presume that it is.” The Board understands this to mean that realistic AI-generated images of real people, celebrities or otherwise, will be removed under the Adult Nudity and Sexual Activity Community Standard when they contain nudity or sexual activity and do not qualify for a narrow range of exceptions. The company cites prevention of “the sharing of non-consensual or underage content” as the rationale for removing photorealistic sexual imagery. The Board welcomes Meta’s clarification of the Adult Nudity and Sexual Activity Community Standard, and supports the company’s efforts to enforce against realistic-looking fictional images and videos the same way in which it would real ones. The outcomes for both pieces of content in this case would remain the same under the updated policies: both would still be removed based on the prohibition on derogatory sexualized photoshop under the Bullying and Harassment Community Standard, and the post featuring the American public figure would violate the Adult Nudity and Sexual Activity Community Standard for showing the squeezing of breasts in a context that is not permitted. While these changes represent a welcome clarification, they are not sufficient to deal with the proliferation of AI-generated non-consensual intimate imagery. 
The Board reaffirms the importance of a dedicated policy line against AI generated or manipulated non-consensual sexualized content that exists as part of Meta’s Adult Sexual Exploitation Community Standard. Media Matching Service Banks According to Meta, its Media Matching Service (MMS) banks identify and act on media, in this case images, posted on its platforms. Once content is identified for banking, it is converted into a string of data or “hash.” The hash is then associated with a particular bank. Meta’s MMS banks are created to align with specific Community Standard policies, and are not designed around specific behaviors or types of content such as derogatory sexualized photoshopped content. These banks can automatically identify and remove images that have already been identified by human reviewers as violating the company’s rules. The image in the second case (American public figure) was removed because an identical image had already been escalated to human review and added to an MMS bank. The image in the first case (Indian public figure) was initially not added to an MMS bank. Meta stated that: “Not all instances of content found to be violating for derogatory sexualized photoshopping are added to a MMS bank. Requests to bank content must generally be approved, on escalation, by our internal teams. This is because MMS banking is a powerful enforcement tool that may carry over-enforcement risks.” Meta only changed its decision to bank the image in the first case after the Board submitted a question asking why this had not been done. The Board asked 13 questions about MMS Banks, the auto-closing of appeals, the prohibition on derogatory sexualized photoshopping in the Bullying and Harassment policy, and other policies that might be relevant to this case. Meta responded to them all. 4. Public Comments The Oversight Board received 88 public comments that met the terms for submission. Of these, 14 were from Asia Pacific and Oceania, 24 from Central and South Asia, 15 from Europe, five from Latin America and Caribbean and 30 from the United States and Canada. To read public comments submitted with consent to publish, click here . The submissions covered the following themes: the prevalence and cultural implications of deepfakes in India, the impact of deepfake intimate imagery on women generally and female public figures, why auto-closing appeals related to image-based sexual abuse is problematic, and the necessity of combining human and automated systems of review to detect and remove deepfake intimate imagery. 5. Oversight Board Analysis The Board analyzed Meta’s decision in this case against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of this case for Meta’s broader approach to content governance. 5.1 Compliance With Meta’s Content Policies I. Content Rules It is clear to the Board that both images violated Meta’s prohibition on “derogatory sexualized photoshop” under the Bullying and Harassment policy. Both have been edited to show the head or face of a real person with a real or fictional body that is nude or nearly nude. Both cases also contain contextual clues that the content has been AI-generated. The post in the first case (Indian public figure) includes a list of hashtags indicating that it is AI-generated and was posted by an account dedicated to posting these images. The post in the second case (American public figure) was posted on a Facebook group for AI imagery. 
The Board agrees with Meta that only the post in the second case, however, violated the Adult Nudity and Sexual Activity policy, as it depicts the woman having her breast squeezed. The other image does not violate this policy in its current form because it is not a close-up shot of nude buttocks as defined by the company. The Board notes that this means under the current policy, similar images that do not contain obvious contextual clues indicating that they are AI-generated would not be removed. The impact of this on victims is discussed in section 5.2 below. 5.2 Compliance With Meta’s Human Rights Responsibilities In both the first case (Indian public figure) and the second case (American public figure), the Board finds that removal of the content from Meta’s platforms is consistent with Meta’s human rights responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including expression that may be considered “deeply offensive” (General Comment 34, para. 11, see also para. 17 of the 2019 report of the UN Special Rapporteur on freedom of expression, A/74/486 ). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR, General Comment 34, para. 34). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights, which Meta itself has committed to in its Corporate Human Rights Policy. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” ( A/74/486 , para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” ( Ibid ). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules and content reviewers should have clear guidance regarding their enforcement. The Board finds that although in this context the term “derogatory sexualized photoshop” should have been clear to the users posting these pieces of content, it is not generally sufficiently clear to users. 
In response to the Board’s question on the meaning of this term, Meta stated that: “‘Derogatory sexualized photoshop or drawings’ refers to manipulated images that are sexualized in ways that are likely to be unwanted by the target and thus perceived as derogatory (for example, combining a real person’s head with a nude or nearly nude body).” The Board notes that a term such as “non-consensual” would be a clearer descriptor than “derogatory” to convey the idea of unwanted sexualized manipulations to images. Moreover, the Board finds the term “photoshop” in the prohibition on “derogatory sexualized photoshop” is dated and too narrow to cover the array of media manipulation techniques available to users, particularly those powered by generative AI. While the term “photoshop” no longer necessarily implies the use of a particular editing software, it still commonly refers to the manual editing of images using digital tools. By contrast, much of the non-consensual sexualized imagery spread online today is created using generative AI models that either automatically edit existing images or create entirely new ones. Meta should ensure that its prohibition on derogatory sexualized content covers this broader array of editing techniques, in a way that is clear to both users and the company’s moderators. The Board also finds that the policy lines prohibiting these images would be more appropriately located in the Adult Sexual Exploitation Community Standard rather than the Bullying and Harassment Community Standard. The rules need to be intuitive to make it easy for users to understand what is prohibited and why. This is particularly important in cases where the content would be compliant with the rules if the image had been consensually made and shared, such as in the first case (Indian public figure). If a user looks at the images in both these cases, they are unlikely to see them as an issue of Bullying and Harassment. A public comment from the RATI Foundation for Social Change (an Indian NGO that assists victims of online and offline sexual violence), stated that although one of the ways it assists victims is by helping to get AI-generated sexual images removed from Meta’s platforms, it had never heard of the prohibition on “derogatory sexualized photoshop” and had never reported an AI explicit image under Bullying and Harassment. Instead, it reported such images under other policies such as Adult Nudity and Sexual Activity, Child Exploitation and Adult Sexual Exploitation (see PC-27032). Including this prohibition in the Bullying and Harassment Community Standard presumes that users are posting these images to harass people. However, this may not accurately reflect why a given user has posted an AI-generated explicit image. This is confusing for all users, from the people posting this content to the people reporting it. External research commissioned by the Board shows that users post deepfake intimate imagery for a number of reasons that may not involve an express intent to bully or harass. While harassment and trolling are two of them, users are often also motivated by a desire to build an audience on the platform, monetize their page or direct users to off-platform sites, such as pornography sites and services, or clickbait websites. A study by Powell et. 
al. from 2020 also found that perpetrators of image-based sexual abuse often report motivations of it being “fun” or to “flirt,” as well as to “trade the images.” The policy lines prohibiting these images would be clearer if the focus were on the lack of consent and the harms of proliferation of such content, rather than the impact of direct attacks implied by a Bullying and Harassment designation. The Adult Sexual Exploitation policy would therefore be a clearer and more logical place to include these prohibitions. This policy focuses on images shared with a lack of consent and contains the prohibition on non-consensual intimate image sharing (NCII), which is clearly a very similar issue. Meta should also consider renaming this policy to something more detailed and clearer to users, such as “Non-Consensual Sexual Content.” II. Legitimate Aim Any restriction on freedom of expression should also pursue one or more of the legitimate aims listed in the ICCPR, which include protecting the rights of others. The Human Rights Committee has interpreted the term “rights” to include human rights as recognized in the ICCPR and more generally in international human rights law (General Comment 34, para. 28). Meta’s decision to prohibit deepfake intimate imagery on the platform seeks to protect the rights to physical and mental health, as this content is extremely harmful to victims (Article 12 ICESCR); freedom from discrimination, as there is overwhelming evidence showing that this content disproportionately affects women and girls (Article 2 ICCPR and ICESCR); and the right to privacy, as it affects the ability of people to maintain a private life and authorize how images of themselves are created and released (Article 17 ICCPR). The Board concludes that platform restrictions on deepfake intimate imagery are designed to protect individuals from the creation and dissemination of sexual images made without their consent – and the resulting harms of such images to victims and their rights. This represents a legitimate aim for restricting this content. III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The severity of harms in this content: The Board finds that prohibition and removal are necessary and proportionate measures to protect the rights of people impacted by this content. The harms caused by this content are severe for the people depicted in it. Their rights to privacy and the protection from mental and physical harm are undermined by the non-consensual use of their image to create other sexualized images. Given the severity of these harms, removal of the content is the only effective means available to protect victims – there are no less intrusive measures that would be sufficient. In the Altered Video of President Biden decision, the Board recommended labeling of manipulated content as a means to prevent users from being misled about the authenticity of content. Labeling would not, however, be sufficient to address the harm here, as it stems from the sharing and viewing of the image itself, not solely from people being misled as to its authenticity.
As the vast majority of the people depicted in these images are women or girls, this type of content also has discriminatory impacts and is a highly gendered harm (see PC-27045). The application of MMS banks to this content: The Board also considered Meta’s use of MMS banks. The image in the second case (American public figure) had already been added to the MMS bank, but the image in the first (Indian public figure) was not added until after the Board asked Meta why this was the case. Meta stated in its response that it relied on media reports to indicate that the second case’s image was circulating on social media and that “banking was necessary in order to address the broader issue of the proliferation of this content.” It went on to say that there were no media signals in the first case. The Board highlights that there may be many victims of non-consensual deepfake intimate imagery whose images are shared numerous times on platforms. However, they do not have a public profile and are forced to either accept the proliferation of their non-consensual depictions or search and report every instance, which would be very resource-intensive and traumatizing. A public comment from Witness urges Meta to “avoid placing the burden of reporting on victims, including repeated reporting of the same content,” (see PC-27095). The Board reiterates this concern, especially given the impacts on victims in regions or communities with limited media literacy. Meta has stated that one of its signals of lack of consent under the Adult Sexual Exploitation policy is media reports of leaks of NCII images. While this can be useful information for content concerning public figures, it is important that Meta not be over-reliant on this signal as it is not a helpful signal for content concerning private individuals, who will likely not be subject to media reporting. Meta also needs to rely on signals that help identify non-consensual depictions of private individuals. In determining the proportionality of measures applied by Meta, the Board also discussed the act of sanctioning users – in particular, whether everyone (not just the first user) who shared these images should be given a strike. A public comment from RATI Foundation for Social Change stated that, in their experience of assisting victims of deepfake intimate imagery, “many of these videos are posted in collaboration with another account. However, when the post is actioned only one of the accounts is penalized. The other account which is an alt account of the offender survives and it resumes posting.” It also stated that it saw many copies of the same video, which seems to indicate that MMS banks would be useful in addressing this content (see PC-27032). Of course, as the Digital Rights Foundation notes, MMS banks are “restricted by the database of known images” and will always have a more limited utility against AI-generated images as new ones can be so easily created (see PC-27072). MMS banks, therefore, can only be one tool in Meta’s arsenal to combat deepfake intimate imagery. While applying strikes in every instance could make enforcement more effective, it could also lead to users being penalized in situations where this is not justified, such as sharing images they do not know are AI-generated or otherwise non-consensual. The Board acknowledges this tension. Meta shared with the Board that the MMS bank in this case was not configured to apply strikes due to the risk of over-enforcement. 
However, in some circumstances, this has changed and users are now able to appeal these decisions. Given this change in circumstances, the Board prompts Meta to reconsider whether applying strikes may be justified. The artificial distinction between non-consensual intimate image sharing and deepfake intimate imagery: Finally, the Board considered whether non-consensual intimate image sharing (NCII) and deepfake intimate imagery should be treated separately within Meta’s policies. When asked about the possibility of moving the prohibition on derogatory sexualized photoshopping (which, as discussed in the Legality section above, would be better described with a more accurate term) to the Adult Sexual Exploitation Policy, Meta told the Board that the two content categories are very different because the rules on NCII enforcement require a signal of a lack of consent (such as a vengeful statement or media reports of a leak), whereas the rules on derogatory sexualized photoshopping do not. However, this is an enforcement choice that could theoretically be remedied by considering context indicating that the nude or sexualized aspects of the content are AI-generated, photoshopped or otherwise manipulated to be a signal of non-consent, and specifying that such content need not be “non-commercial or produced in a private setting” to violate the policy. There is already a significant overlap between the policies that may not be clear to users. Meta stated in its response to the Board’s questions that, at the time of enforcing the content in these cases, its definition of intimate imagery for the Adult Sexual Exploitation policy was internally defined as (i) screenshots of private sexual conversations and (ii) imagery of one or more people in a private setting, including manipulated imagery that contain nudity, near nudity, or people engaged in sexual activity. Creating a presumption that AI-generated sexual images are non-consensual may occasionally lead to an image being removed that was consensually made. The Board is deeply concerned about the over-enforcement of allowable nudity and near-nudity, as demonstrated by the Breast Cancer Symptoms and Nudity case, Gender Identity and Nudity cases, and Breast Self-Exam and Testicular Cancer Self-Check Infographics summary decisions. However, in the case of sexualized deepfakes, this presumption has already been underlying Meta’s enforcement of derogatory sexualized photoshopping, as the company presumes that all sexualization covered by this policy and created through AI or photoshopping is unwanted. It is inevitable that not all AI-generated content will be caught by this new policy line (just as it is not caught now), but by combining the two categories of non-consensual content, Meta can leverage its successes at combatting NCII and use aspects of its approach to assessing consent to reduce deepfake intimate imagery on its platforms. The Board also explored whether, in order to provide better protection for those whose rights are impacted by this type of content, Meta should alter its approach to the prohibitions on NCII and derogatory sexualized photoshop to start with a presumption that such imagery is non-consensual, instead of the current approach of presuming imagery is consensual and requiring signals of non-consent to remove them. 
After assessing the feasibility and impact of this proposed approach, however, the Board concluded that such an approach risked significantly over-enforcing against non-violating content and would not currently be operationally feasible in the context of automated tools that Meta relies on for enforcement. Access to Remedy The Board is concerned by the appeals that were auto-closed in the first case. Both the original report and the appeal against Meta’s decision to keep the content on the platform were auto-closed. Meta informed the Board that “content reported for any violation type (with the exception of Child Sexual Abuse Material) is eligible for auto-close automation if our technology does not detect a high likelihood of a violation and it is not reviewed within 48 hours.” Users may be unaware of the auto-closing process and the fact that when they submit content for appeal, it may never actually be reviewed. Meanwhile, as in the first case, victims and others seeking to remove deepfake intimate imagery may report the content but are denied any actual review. When they then appeal that decision, they can find themselves in the same position, with the same auto-closing process happening again. Many of the public comments received in this case criticized the use of the auto-closing of appeals for image-based sexual abuse. The damage caused by these images is so severe that even waiting 48 hours for a review can be harmful. The American Sunlight Project, which gave the example of deepfake intimate imagery targeting female politicians during elections, states, “such content could receive hundreds of thousands of views, be reported on in national press, and sink the public perception of a political candidate, putting her on uneven footing when compared with her opponents. In some countries, including India, where this case took place, it could even endanger her life,” (see PC-27058). A related point was made by the Centre for Protecting Women Online, which cautioned that the harm of these images in an election will be particularly severe “in contexts where digital literacy in the general population is low and where the influence of messages and images posted on social media is highly likely to influence voters as the almost exclusive source of news and information,” (PC-27088). Of course, regardless of the public or private status of victims, a delay in removing these images severely undermines their privacy and can be catastrophic. The Board considered whether content that causes such severe harms to its victims (through both deepfake and NCII) should be exempt from the auto-closing process. The Board acknowledges the challenges of at-scale content moderation, and the need to rely on automated processes to manage content flagged for review, but it is concerned about the severity of harm that may result from its use in policy areas such as this one. The Board does not have sufficient information on the use of auto-closing across all Meta’s policies to make a recommendation on the use of auto-closing within Meta’s broader enforcement systems, but considers it an issue that may have significant human rights impacts that require careful risk assessment and mitigation. 6. The Oversight Board’s Decision The Oversight Board overturns Meta’s original decision to leave up the content in the first case (Indian public figure), requiring the post to be removed, and upholds Meta’s decision to take down the content in the second case (American public figure). 7. Recommendations Content Policy 1. 
To increase certainty for users and combine its policies on non-consensual content, Meta should move the prohibition on “derogatory sexualized photoshop” into the Adult Sexual Exploitation Community Standard. The Board will consider this recommendation implemented when this section is removed from the Bullying and Harassment policy and is included in the publicly available Adult Sexual Exploitation policy. 2. To increase certainty for users, Meta should change the word “derogatory” in the prohibition on “derogatory sexualized photoshop” to “non-consensual.” The Board will consider this recommendation implemented when the word “non-consensual” replaces the word “derogatory” in the prohibition on derogatory sexualized content in the publicly available Community Standards. 3. To increase certainty for users and ensure that its policies address a wide range of media editing and generation techniques, Meta should replace the word “photoshop” in the prohibition on “derogatory sexualized photoshop” with a more generalized term for manipulated media. The Board will consider this recommendation implemented when the word “photoshop” is removed from the prohibition on “derogatory sexualized” content and replaced with a more generalized term, such as “manipulated media.” 4. To harmonize its policies on non-consensual content and help ensure violating content is removed, Meta should add a new signal for lack of consent in the Adult Sexual Exploitation Policy: context that content is AI-generated or manipulated. For content with this specific context, the policy should also specify that it need not be “non-commercial or produced in a private setting” to be violating. The Board will consider this recommendation implemented when both the public-facing and private internal guidelines are updated to reflect this change. Procedural Note: The Oversight Board’s decisions are made by panels of five Members and approved by a majority vote of the full Board. Board decisions do not necessarily represent the views of all Members. Under its Charter, the Oversight Board may review appeals from users whose content Meta removed, appeals from users who reported content that Meta left up, and decisions that Meta refers to it (Charter Article 2, Section 1). The Board has binding authority to uphold or overturn Meta’s content decisions (Charter Article 3, Section 5; Charter Article 4). The Board may issue non-binding recommendations that Meta is required to respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, a digital investigations group providing risk advisory and threat intelligence services to mitigate online harms, also provided research. Return to Case Decisions and Policy Advisory Opinions" bun-7zoqzby0,Goebbels Quote,https://www.oversightboard.com/decision/bun-7zoqzby0/,"December 18, 2023",2023,December,"Freedom of expression,Misinformation",Dangerous individuals and organizations,Overturned,"Canada,United Kingdom,United States","In this summary decision, the Board considers four posts together. 
Four separate users appealed Meta’s decisions to remove posts that contain a quote attributed to Joseph Goebbels.",7341,1076,"Multiple Case Decision December 18, 2023 In this summary decision, the Board considers four posts together. Four separate users appealed Meta’s decisions to remove posts that contain a quote attributed to Joseph Goebbels. Overturned FB-EZ2SSLB1 Platform Facebook Topic Freedom of expression,Misinformation Standard Dangerous individuals and organizations Location Canada,United Kingdom,United States Date Published on December 18, 2023 Overturned FB-GI0MEB85 Platform Facebook Topic Freedom of expression,Misinformation Standard Dangerous individuals and organizations Location Germany,United States Date Published on December 18, 2023 Overturned FB-2X73FNY9 Platform Facebook Topic Freedom of expression,Misinformation Standard Dangerous individuals and organizations Location Australia Date Published on December 18, 2023 Overturned FB-PFP42GAJ Platform Facebook Topic Freedom of expression,Misinformation Standard Dangerous individuals and organizations Location United States Date Published on December 18, 2023 This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not involve a public comments process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary In this summary decision, the Board considers four posts together. Four separate users appealed Meta’s decisions to remove posts that contain a quote attributed to Joseph Goebbels, the Nazis’ propaganda chief. Each post shared the quote to criticize the spread of false information in the present day. After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and restored each of the posts. Case Description and Background The four posts contain a variation of the same quote attributed to Joseph Goebbels, which states: “A lie told once remains a lie, but a lie told a thousand times becomes the truth.” Each user added a caption to accompany the quote. The captions contain the users’ opinions on perceived historical parallels between Nazi Germany and present-day political discourse, as well as threats to free expression posed by the normalization of false information. Meta originally removed the four posts from Facebook, citing its Dangerous Organizations and Individuals policy, under which the company removes content that “praises,” “substantively supports” or “represents” individuals and organizations it designates as dangerous, including the Nazi party. The policy allows content that discusses a dangerous organization or individual in a neutral way or that condemns its actions. The four users appealed the removal of their content to the Board. In their appeals, they each stated that they included the quote not to endorse Joseph Goebbels or the Nazi party, but to criticize the negative effect of false information on their political systems. They also highlighted the relevance of historical lessons to the dangers of propaganda. 
After the Board brought these cases to Meta’s attention, the company determined that the content did not violate Meta’s Dangerous Organizations and Individuals policy and the removals of the four posts were incorrect. The company then restored the content to Facebook. Meta stated that the content did not contain any support for the Nazi party, but rather includes descriptions of “the Nazi regime’s campaign to normalize falsehoods in order to highlight the importance of ethics and epistemic standards for free speech.” Board Authority and Scope The Board has authority to review Meta's decisions following appeals from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance These cases highlight Meta’s failure to distinguish between supportive references to organizations it designates as dangerous, which are prohibited, and neutral or condemning references that the company allows. The Board has previously issued multiple recommendations on Meta’s Dangerous Organizations and Individuals policy. Continued errors in applying the exceptions of this Community Standard appear to significantly limit important free expression by users, making this a crucial area for further improvement by the company. In a previous decision, the Board recommended that “Meta should assess the accuracy of reviewers enforcing the reporting allowance under the Dangerous Organizations and Individuals policy in order to identify systemic issues causing enforcement errors,” ( Mention of the Taliban in News Reporting decision, recommendation no. 5). The Board further urged Meta to “conduct a review of the high impact false positive override (HIPO) ranker to examine if it can more effectively prioritize potential errors in the enforcement of allowances to the Dangerous Organizations and Individuals policy,” ( Mention of the Taliban in News Reporting decision, recommendation no. 6). This ranker system prioritizes content decisions for additional review, which Meta uses to identify cases in which it has acted incorrectly, for example, by wrongly removing content. Meta is still assessing the feasibility of this recommendation. And the Board asked Meta to “enhance the capacity allocated to HIPO review across languages to ensure more content decisions that may be enforcement errors receive additional human review,” ( Mention of the Taliban in News Reporting decision, recommendation no. 7). Meta has reported that this recommendation is work the company already does, without publishing information to demonstrate this. In addition, the Board recommended that Meta “explain and provide examples of the application of key terms used in the Dangerous Organizations and Individuals policy, including the meanings of ‘praise,’ ‘support’ and ‘representations,’” and said those public explanations “should align with the definitions used in Facebook's Internal Implementation Standards,” ( Nazi Quote decision, recommendation no. 2). Meta implemented this recommendation. 
As these cases illustrate, the use of an analogy to a notoriously dangerous figure for the purposes of criticism of a current person or practice is a common and entirely legitimate form of political discourse. These four cases illustrate the need for more effective measures along the lines of the Board’s recommendations. Decision The Board overturns Meta’s original decisions to remove the content. The Board acknowledges Meta’s correction of its initial errors once the Board brought these cases to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-86tj0rk5,Posts That Include “From the River to the Sea”,https://www.oversightboard.com/decision/bun-86tj0rk5/,"September 4, 2024",2024,,"Freedom of expression,Protests,War and conflict",Violent and graphic content,Upheld,"Canada,Israel,Palestinian Territories","In reviewing three cases involving different pieces of Facebook content containing the phrase “From the River to the Sea,” the Board finds they did not break Meta’s rules on Hate Speech, Violence and Incitement or Dangerous Organizations and Individuals.",77226,12105,"Multiple Case Decision September 4, 2024 In reviewing three cases involving different pieces of Facebook content containing the phrase “From the River to the Sea,” the Board finds they did not break Meta’s rules on Hate Speech, Violence and Incitement or Dangerous Organizations and Individuals. Upheld FB-TDOKI4L8 Platform Facebook Topic Freedom of expression,Protests,War and conflict Location Canada,Israel,Palestinian Territories Date Published on September 4, 2024 Upheld FB-0H634H19 Platform Facebook Topic Freedom of expression,Protests,War and conflict Location Israel,Palestinian Territories Date Published on September 4, 2024 Upheld FB-OMEHM1ZR Platform Facebook Topic Freedom of expression,Protests,War and conflict Standard Violent and graphic content Location Israel,United Kingdom,Palestinian Territories Date Published on September 4, 2024 Posts That Include ""From the River to the Sea"" Decision PDF Hebrew Decision Translation This decision is available in Arabic and Hebrew. Click here for decision in Hebrew. For Arabic, navigate to top right of the page and click the globe for the language menu. In reviewing three cases involving different pieces of Facebook content containing the phrase “From the River to the Sea,” the Board finds they did not break Meta’s rules on Hate Speech, Violence and Incitement or Dangerous Organizations and Individuals. Specifically, the three pieces of content contain contextual signs of solidarity with Palestinians – but no language calling for violence or exclusion. They also do not glorify or even refer to Hamas, an organization designated as dangerous by Meta. In upholding Meta’s decisions to keep up the content, the majority of the Board notes the phrase has multiple meanings and is used by people in various ways and with different intentions. A minority, however, believes that because the phrase appears in the 2017 Hamas charter and given the October 7 attacks, its use in a post should be presumed to constitute glorification of a designated entity, unless there are clear signals to the contrary. These three cases highlight tensions between Meta’s value of voice and the need to protect freedom of expression, particularly political speech during conflict, and Meta’s values of safety and dignity to protect people against intimidation, exclusion and violence. 
The current and ongoing conflict that followed the Hamas terrorist attack in October 2023 and Israel’s subsequent military operations has led to protests globally and accusations against both sides for violating international law. The surge in antisemitism and Islamophobia is equally relevant, not only to these cases but also to the general use of “From the River to the Sea” on Meta’s platforms. These cases have again underscored the importance of data access to effectively assess Meta’s content moderation during conflicts, as well as the need for a method to track the amount of content attacking people based on a protected characteristic. The Board’s recommendations urge Meta to ensure its new Content Library is an effective replacement for CrowdTangle and to fully implement a recommendation from the BSR Human Rights Due Diligence Report of Meta’s Impacts in Israel and Palestine. About the Cases In the first case, a Facebook user commented on a video posted by a different user. The video’s caption encourages others to “speak up” and includes hashtags such as “#ceasefire” and “#freepalestine.” The user’s comment includes the phrase “FromTheRiverToTheSea” in hashtag form, additional hashtags such as “#DefundIsrael” and heart emojis in the colors of the Palestinian flag. Viewed about 3,000 times, the comment was reported by four users but these reports were automatically closed because Meta’s automated systems did not prioritize them for human review. The Facebook user in the second case posted what is likely to be a generated image of floating watermelon slices that form the words of the phrase, alongside “Palestine will be free.” Viewed about 8 million times, this post was reported by 937 users. Some of these reports were assessed by human moderators who found the post did not break Meta’s rules. For the third case, an administrator of a Facebook page reshared a post by a Canadian community organization, in which the founding members declared support for the Palestinian people, condemned their “senseless slaughter” and “Zionist Israeli occupiers.” With fewer than 1,000 views, this post was reported by one user but the report was automatically closed. In all three cases, users then appealed to Meta to remove the content but the appeals were closed without human review following an assessment by one of the company’s automated tools. After Meta upheld its decisions to keep the content on Facebook, the users appealed to the Board. Unprecedented terrorist attacks by Hamas on Israel in October 2023, which killed 1,200 people and involved 240 hostages being taken, have been followed by a large-scale military response by Israel in Gaza, killing over 39,000 people (as of July 2024). Both sides have since been accused of violating international law and committing war crimes and crimes against humanity. This has generated worldwide debate, much of which has taken place on social media, including Facebook, Instagram and Threads. Key Findings The Board finds there is no indication that the comment or the two posts broke Meta’s Hate Speech rules because they do not attack Jewish or Israeli people with calls for violence or exclusion, nor do they attack a concept or institution associated with a protected characteristic that could lead to imminent violence. Instead, the three pieces of content contain contextual signals of solidarity with Palestinians, in the hashtags, visual representation or statements of support. 
On other policies, they do not break the Violence and Incitement rules nor do they violate Meta’s Dangerous Organizations and Individuals policy as they do not contain threats of violence or other physical harm, nor do they glorify Hamas or its actions. In coming to its decision, the majority of the Board notes that the phrase “From the River to the Sea” has multiple meanings. While it can be understood by some as encouraging and legitimizing antisemitism and the violent elimination of Israel and its people, it is also often used as a political call for solidarity, equal rights and self-determination of the Palestinian people, and to end the war in Gaza. Given this fact, and as these cases show, the standalone phrase cannot be understood as a call to violence against a group based on their protected characteristics, as advocating for the exclusion of a particular group, or of supporting a designated entity – Hamas. The phrase’s use by this terrorist group with explicit violent eliminationist intent and actions, does not make the phrase inherently hateful or violent – considering the variety of people using the phrase in different ways. It is vital that factors such as context and identification of specific risks are assessed to analyze content posted on Meta’s platforms as a whole. Though removing content could have aligned with Meta’s human rights responsibilities if the phrase had been accompanied by statements or signals calling for exclusion or violence, or legitimizing hate, such removal would not be based on the phrase itself, but rather on other violating elements, in the view of the majority of the Board. Because the phrase does not have a single meaning, a blanket ban on content that includes the phrase, a default rule towards removal of such content, or even using it as a signal to trigger enforcement or review, would hinder protected political speech in unacceptable ways. In contrast, a minority of the Board finds that Meta should adopt a default rule presuming the phrase constitutes glorification of a designated organization, unless there are clear signals the user does not endorse Hamas or the October 7 attacks. One piece of research commissioned by the Board for these cases relied on the CrowdTangle data analysis tool. Access to platform data is essential for the Board and other external stakeholders to assess the necessity and proportionality of Meta’s content moderation decisions during armed conflicts. This is why the Board is concerned with Meta’s decision to shut down the tool while there are questions over the newer Meta Content Library as an adequate replacement. Finally, the Board recognizes that even with research tools, there is limited ability to effectively assess the extent of the surge in antisemitic, Islamophobic, and racist and hateful content on Meta’s platforms. The Board urges Meta to fully implement a recommendation previously issued by the BSR Human Rights Due Diligence report to address this. The Oversight Board’s Decision The Oversight Board upholds Meta’s decisions to leave up the content in all three cases. The Board recommends that Meta: * Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background The Oversight Board reviewed three cases together involving content posted on Facebook by different users in November 2023, following the Hamas terrorist attacks of October 7 and after Israel had started a military campaign in Gaza in response. 
The three pieces of content, all in English, each contain the phrase “From the River to the Sea.” In the first case, a Facebook user commented on another user’s video. The video has a caption encouraging others to “speak up” and several hashtags including “#ceasefire” and “#freepalestine.” The comment contains the phrase “FromTheRiverToTheSea” in hashtag form, as well as additional hashtags including “#DefundIsrael” and heart emojis in the colors of the Palestinian flag. The user who created the content is not a public figure and they have fewer than 500 friends and no followers. The comment had about 3,000 views and was reported seven times by four users. The reports were closed after Meta’s automated systems did not prioritize them for human review within 48 hours. One of the users who reported the content then appealed to Meta. In the second case, a Facebook user posted what appears to be a generated image of floating watermelon slices that form the words “From the River to the Sea,” along with “Palestine will be free.” The user who created the content is not a public figure and they have fewer than 500 friends and no followers. The post had about 8 million views and was reported 951 times by 937 users. The first report was closed, again because Meta’s automated systems did not prioritize it for human review within 48 hours. Some of the other reports were reviewed and assessed by human moderators who decided the content was non-violating. Several users who reported the content then appealed to Meta. In the third case, the administrator of a Facebook page reshared a post from the page of a community organization in Canada. The post is a statement from the organization’s “founding members” who declare support for “the Palestinian people,” condemn their “senseless slaughter” by the “Zionist State of Israel” and “Zionist Israeli occupiers,” and express their solidarity with “Palestinian Muslims, Palestinian Christians and anti-Zionist Palestinian Jews.” The post ends with the phrase “From The River To The Sea.” This post had fewer than 1,000 views and was reported by one user. The report was automatically closed. The user who reported the content then appealed to Meta. All the appeals Meta received regarding the three pieces of content were closed without human review, based on an assessment by automated tools. Meta upheld its decisions to keep the three pieces of content on the platform. The users who reported the content then appealed to the Board to have the content taken down. After the Board selected and announced these cases, the user who posted the content in the third case deleted the post from Facebook. The Board notes the following context in reaching its decision. On October 7, 2023, Hamas, a designated Tier 1 organization under Meta’s Dangerous Organizations and Individuals Community Standard, led unprecedented terrorist attacks on Israel from Gaza that killed an estimated 1,200 people and resulted in roughly 240 people being taken hostage, mostly Jewish and several Muslim Israeli citizens, as well as dual citizens and foreign nationals ( Ministry of Foreign Affairs , Government of Israel). More than 115 of those hostages continue to be held in captivity as of July 2024. The attacks included the burning and destruction of hundreds of homes and led to the immediate and ongoing displacement of about 120,000 people. Israel immediately undertook a large-scale military campaign in Gaza in response to the attacks. 
Israel’s military action, which is ongoing, has killed over 39,000 people ( The UN Office for the Coordination of Humanitarian Affairs , drawing on data from the Ministry of Health in Gaza). Reports indicate that, as of July 2024, 52% of the fatalities are estimated to be women and children. The military campaign has caused extensive destruction of civilian infrastructure and the repeated displacement of 1.9 million people, the overwhelming majority of Gaza’s population, who are now facing an acute humanitarian crisis. As of April 2024 , at least 224 humanitarian personnel have been killed in Gaza, “more than three times as many humanitarian aid workers killed in any single conflict recorded in a single year.” Meta immediately designated the events of October 7 a terrorist attack under its Dangerous Organizations and Individuals policy. Under its Community Standards, this means that Meta would remove any content on its platforms that “glorifies, supports or represents” the October 7 attacks or its perpetrators. During the ongoing conflict, both sides have been accused of violating international law. Israel is facing proceedings for alleged violations of its obligations under the Convention on the Prevention and Punishment of the Crime of Genocide at the International Court of Justice . Moreover, Hamas and Israeli officials have each been named by the prosecutor of the International Criminal Court in applications for arrest warrants based on charges of war crimes and crimes against humanity alleged to have been committed by each party. Hamas officials are accused of bearing criminal responsibility for extermination; murder; taking hostages; rape and other acts of sexual violence; torture; other inhumane acts; cruel treatment; and outrages upon personal dignity in the context of captivity, on Israel and the Palestinian Territories (in the Gaza strip) from at least 7 October 2023. According to the prosecutor, these “were part of a widespread and systematic attack against the civilian population of Israel by Hamas and other armed groups pursuant to organisational policies,” some of which “continue to this day.” Israeli officials are accused of bearing criminal responsibility for starvation of civilians as a method of warfare; willfully causing great suffering, or serious injury to body or health; willful killing or murder; intentionally directing attacks against a civilian population; extermination and/or murder, including in the context of deaths caused by starvation; persecution; and other inhumane acts, on the Palestinian Territories (in the Gaza strip) from at least 8 October 2023. According to the prosecutor, these “were committed as part of a widespread and systematic attack against the Palestinian civilian population pursuant to State policy,” which “continue to this day.” Furthermore, in a July 19, 2024 Advisory Opinion , issued in response to a request by the UN General Assembly, the International Court of Justice concluded that “the State of Israel’s continued presence in the Occupied Palestinian Territory is unlawful” and stated the obligations for Israel, other States and international organizations, including the United Nations, on the basis of this finding. 
The Court’s analysis does not consider “conduct by Israel in the Gaza Strip in response to [the] attack carried out on 7 October 2023.” The UN Independent International Commission of Inquiry on the Occupied Palestinian Territory, including East Jerusalem, and in Israel, established by the UN Human Rights Council, concluded, in a May 2024 report, that members of Hamas “deliberately killed, injured, mistreated, took hostages and committed sexual and gender-based violence against civilians, including Israeli citizens and foreign nationals, as well as members of the Israeli Security Forces (ISF).” According to the Commission, “these actions constitute war crimes,” as well as “violations and abuses of international humanitarian law and international human rights law.” The Commission also concluded that “Israel has committed war crimes, crimes against humanity and violations of [international humanitarian law] and [international human rights law].” It further stated that “Israel has used starvation as a method of war,” “weaponized the withholding of life-sustaining necessities, including humanitarian assistance,” and “perpetrated sexual and gender-based crimes against Palestinians.” With regards to reports and ISF allegations indicating that the military wing of Hamas and other non-State armed groups in Gaza operated from within civilian areas, the Commission “reiterates that all parties to the conflict, including ISF and the military wings of Hamas and other non-State armed groups, must adhere to [international humanitarian law] and avoid increasing risk to civilians by using civilian objects for military purposes.” Additionally, the UN Special Representative of the Secretary General on Sexual Violence in Conflict concluded that clear and convincing information was found that “sexual violence, including rape [and] sexualized torture” was committed against the hostages in the context of the October 7 attacks, and called for “a fully-fledged investigation.” The terrorist attacks and military operations that have led to the death of tens of thousands of people and the dislocation of over two million people, mostly in Gaza, but also in Israel and the occupied West Bank, have generated intense worldwide interest, debate and scrutiny. Much of this has taken place on social media platforms, including Facebook, Instagram and Threads. According to reporting and research commissioned by the Board, the use of the phrase “From the River to the Sea” surged across social media and in pro-Palestinian protests and demonstrations following the October 7 attacks and Israel’s military operations. The phrase refers to the area between the Jordan River and the Mediterranean Sea, which today covers the entirety of the State of Israel and the Israeli-occupied Palestinian Territories. The phrase predates the October 7 attacks and has a long history as part of the Palestinian protest movement , starting during the partition plan of 1948 adopted by the UN General Assembly. The phrase is tied to Palestinians’ aspirations for self-determination and equal rights (see public comments: Access Now PC-29291; SMEX PC-29396; PC-29211; PC-28564; Jewish Voices for Peace PC-29437). However, the phrase has also been linked in its more recent use to Hamas. The original 1988 Hamas charter called for the destruction of Israel and “seems to encourage the killing of Jews wherever they are found,” (PC-28895). 
The 2017 Hamas charter included the adoption of the phrase “From the River to the Sea,” which is used by individuals and groups calling for violent opposition to or the destruction of Israel, and the forced removal of Jewish people from Palestine, with variations such as “from the River to the Sea, Palestine will be Arab,” (see public comments: ADL PC-29259; American Jewish Committee PC-29479; NGO Monitor PC-28905; PC-29526; Jewish Home Education Network PC-28694). Another variation of the phrase also appeared in the 1977 platform of Israel’s ruling Likud Party: “Between the Sea and the Jordan there will only be Israeli sovereignty.” The phrase does not have a single meaning. It has been adopted by various groups and individuals and its significance depends on the speaker, the listener and the context. For some, it is an antisemitic charge denying Jewish people the right to life, self-determination and to stay in their own state, established in 1948, including through forced removal of Jewish people from Israel. As a rallying cry, enshrined in Hamas’s charter, it has been used by the head of the Hamas political bureau Ghazi Hamad, anti-Israel voices, and supporters of terrorist organizations that seek Israel’s destruction through violent means. It is also a call for a Palestinian state encompassing the entire territory, which would mean the dismantling of the Jewish state. When heard by members of the Jewish and pro-Israel community, it may evoke fear and be understood by them as a legitimation or defense of the unprecedented scale of killings, abductions, slaughter and atrocities committed during the October 7 attacks, when Jewish people witnessed an attempted enactment of the aim to annihilate them. The fact that the Jewish population accounts for about 0.2% of the world population (15.7 million people worldwide), half of whom are Israeli Jews (about 0.1% of the world population), enhances this sentiment and a sense of risk and intimidation felt by many Jewish people (see public comments: ADL PC-29259; CAMERA PC-29218; Campaign Against Antisemitism PC-29361; World Jewish Congress PC-29480; American Jewish Committee PC-29479). On the other hand, the estimated number of Palestinians worldwide at the end of 2023 was about 14.6 million people, half of whom live inside Israel or in territories under Israeli occupation. This is partly why many understand the phrase as a call for the equal rights and self-determination of the Palestinian people. At times it is used to indicate support for one or more specific political aims: a single bi-national state on all the territory, a two-state solution for both groups, the right of return for Palestinian refugees, or an end to the Israeli military occupation of Palestinian territories seized in the 1967 war, among other aims. In other contexts, the phrase is a simple affirmation of a place, a people and a history without any concrete political objectives or tactics (see public comments: Access Now PC-29291; SMEX PC-29396; PC-29211; PC-28564; Hearing Palestine Initiative at the University of Toronto PC-28564). After the October 7 Hamas terrorist attacks and the Israeli military campaign in Gaza, it has also been used alongside calls for a ceasefire (see public comments: Jewish Voice for Peace PC-29437; Access Now PC-29291; also Article 19 briefing). 
For some Palestinians and the pro-Palestinian community, the use of a variation of the phrase in the Likud 1977 platform, together with recent statements by Benjamin Netanyahu, the party leader, and members of his administration opposing the creation of a Palestinian state, indicates opposition both to a two-state solution and to equal rights for Palestinians, as well as a call for the expulsion of Palestinians from Gaza and/or the West Bank (see public comments: Access Now PC-29291; Digital Rights Foundation PC-29256). The Board commissioned external experts to analyze the phrase on Meta’s platforms. The experts’ analysis relied on CrowdTangle, a data analysis tool owned and operated by Meta. CrowdTangle tracks public content from the largest public pages, groups and accounts across all countries and languages, but does not include all content on Meta’s platforms or information about content that was removed by the company. Therefore, instances of the use of the phrase accompanied by violating content (e.g., a direct attack or calls for violence targeting Jewish people and/or Israelis on the basis of a protected characteristic or content supporting a terrorist organization) would be unlikely to be found, because they would probably have been taken down by Meta. In the six months before the October 7 attacks, experts noted more uses of the phrase in Arabic than in English, on Facebook (1,600 versus 1,400, respectively). In the six months that followed October 7, up to March 23, 2024, the use of the phrase in English rose significantly compared with Arabic (82,082 versus 2,880, respectively). According to those experts, the most significant increases in the use of the phrase on Facebook during this period occurred in January and March. On Instagram, the phrase in English has been used significantly more than in Arabic before and after October 7. A big increase was observed in November 2023, at the same time as the Israel Defense Forces’ (IDF) strike on Al-Shifa Hospital, and the growing humanitarian crisis in Gaza. Additionally, the uses of the phrase found by the experts on the platform came as part of posts that either sought to raise awareness about the impact of the war on Palestinians, called for a ceasefire and/or celebrated Palestinian rights to self-determination and equality. Though there were hashtags that became increasingly vocal against the Israeli military, no posts that explicitly called for the death of Jewish people or supported Hamas’s actions on October 7 were identified. The absence of such posts may be the result of such content being removed by Meta. The phrase has been used as part of anti-war and pro-Palestinian protests across the world, including during the US college campus protests of April to May 2024. As of June 6, 2024, more than 3,000 people had been arrested or detained at demonstrations on campuses in the United States for alleged violations of rules governing campus assemblies. In the majority of such cases, the charges were subsequently dropped. In other countries, there are instances in which officials have sought to ban or cancel protests or to prosecute protesters due to the use of the phrase (for example, in Vienna, Austria). The Czech city of Prague sought to prohibit a demonstration in November 2023 because of the intended use of the phrase but a municipal court overturned the decision, allowing the demonstration to go ahead. 
In the United Kingdom, the former Home Secretary encouraged police to interpret the use of the phrase as a violation of law, but the Metropolitan Police declined to adopt a blanket ban. In Germany, the Ministry of the Interior designated the phrase a slogan associated with Hamas . The administrative court in the city of Munster , North Rhine-Westphalia, held that the phrase alone could not be interpreted as incitement because it has multiple meanings. However, the Higher Administrative Court of another state in Germany determined that even though the phrase could have multiple meanings, the court could not set aside a prohibition on its use in an assembly through a preliminary decision, given the order issued by the Ministry of the Interior. In the United States, Resolution 883 , which was approved by 377 votes against 44 at the House of Representatives in April 2024, condemns the phrase as “an antisemitic call to arms with the goal of the eradication of the State of Israel, which is located between the Jordan River and the Mediterranean Sea.” The resolution also emphasizes that “Hamas, the Palestinian Islamic Jihad, Hezbollah, and other terrorist organizations and their sympathizers have used and continue to use this slogan as a rallying cry for action to destroy Israel and exterminate the Jewish people.” Since October 7, the United Nations , government agencies and advocacy groups have warned about an increase in both antisemitism and Islamophobia. In the United States, for example, in the three months following October 7, the Anti-Defamation League (ADL) tracked a 361% increase in reported antisemitic incidents – physical assaults, vandalism, verbal or written harassment and rallies that included “antisemitic rhetoric, expressions of support for terrorism against the state of Israel and/or anti-Zionism.” If not accounting for this final category of “rallies that included antisemitic rhetoric, expressions of support for terrorism against the state of Israel and/or anti-Zionism,” which was added by the ADL after October 7, the United States still saw a 176% increase in cases of antisemitism. According to the Council on American-Islamic Relations , during the same three-month period, reports of anti-Muslim and anti-Palestinian discrimination and hate (e.g., employment discrimination, hate crime and incidents, and education discrimination, among other categories outlined in its report , p. 13-15) rose by about 180% in the United States. Comparative data released by the UK’s Metropolitan police on antisemitic and Islamophobic hate crimes in October 2022 versus October 2023 showed an increase in both (antisemitic from 39 to 547 and Islamophobic from 75 to 214, respectively). Some Board Members also consider the fact that Jews are 0.5% and Muslims are 6.5% of the UK population , and that Jews are 0.2% and Muslims 25.8% of the world population , as important context in evaluating these numbers. Countries across Europe have warned of rising hate crimes, hate speech and threats to civil liberties targeting Jewish and Muslim communities. Murder and other forms of very severe violence targeting Palestinians, and attempted murder , rape and other forms of very severe violence targeting Jewish people, have been reported since October 7, 2023. 2. User Submissions The Facebook users who reported the content and subsequently appealed to the Board claimed the phrase was either breaking Meta’s rules on Hate Speech , Violence and Incitement or Dangerous Organizations and Individuals . 
The user who reported the content in the first case stated that the phrase violates Meta’s policies prohibiting content that promotes violence or supports terrorism. The users who reported the content in the second and third cases stated that the phrase constitutes hate speech, is antisemitic, and a call for genocide and to abolish the state of Israel. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Meta analyzed the phrase and the content in the three cases under three policies. Hate Speech According to the policy rationale, the Hate Speech Community Standard prohibits “direct attacks against people – rather than concepts or institutions – on the basis of ... protected characteristics: [including] race, ethnicity, national origin [and] religious affiliation.” The company defines “attacks as dehumanizing speech; statements of inferiority, expressions of contempt or disgust; cursing; and calls for exclusion or segregation.” Tier 1 of the policy prohibits targeting of a person or a group of people on the basis of their protected characteristic using “statements in the form of calls for action or statements of intent to inflict, aspirational or conditional statements about, or statements advocating or supporting harm” with “calls for death without a perpetrator or method” and “calls for accidents or other physical harms caused either by no perpetrator or by a deity.” Under Tier 2 of the policy, Meta prohibits targeting a person or a group of people on the basis of their protected characteristics with “exclusion or segregation in the form of calls for action, statement of intent, aspirational or conditional statements, or statements advocating or supporting” explicit, political, economic or social exclusion. Finally, under the section marked “require additional information and/or context to enforce,” the company prohibits “content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic.” Violence and Incitement The Violence and Incitement policy prohibits threatening anyone with “violence that could lead to death (or other forms of high-severity violence).” Additional protections are provided to “persons or groups based on their protected characteristics ... from threats of low-severity violence.” Prior to December 6, 2023, the prohibition against calls for violence was contained in the Hate Speech policy. According to Meta, the decision to move this policy line to the Violence and Incitement policy was part of a reorganization of the Community Standards and did not affect the way in which this policy line is enforced. Under the section marked “require additional information and/or context to enforce,” the company prohibits “coded statements where the method of violence is not clearly articulated, but the threat is veiled or implicit, as shown by the combination of both a threat signal and a contextual signal.” Dangerous Organizations and Individuals After the users who reported the content to Meta appealed to the Board, Meta updated the Dangerous Organizations and Individuals Community Standard (on December 29, 2023). Hamas is a designated entity under Tier 1 of the policy. 
Prior to the December 29, 2023 update, the policy prohibited “praise” of Tier 1 entities, defined as “speaking positively about” or “legitimizing the cause of a designated entity by making claims that their hateful, violent, or criminal conduct is legally, morally, or otherwise justified or acceptable” or “aligning oneself ideologically with a designated entity or event.” The current policy (as of July 2024) prohibits “glorification” of a Tier 1 entity, including “legitimizing or defending the violent or hateful acts of a designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.” II. Meta’s Submissions Meta explained that the standalone phrase “From the River to the Sea” does not violate the company’s Hate Speech, Violence and Incitement, or Dangerous Organizations and Individuals policies. Therefore, it only removes content that contains the phrase if another portion of the content independently violates its Community Standards. Meta explained that the company has not undertaken a complete development process to collect the views of global stakeholders and experts regarding the phrase, but it did review use of the phrase after the October 7 attacks and Israel’s military response. The company stated it is aware the phrase has a long history. It explained that while some stakeholders view the phrase as antisemitic or a threat to the State of Israel, other stakeholders use the phrase in support of Palestinian people and believe that describing it as antisemitic is “either inaccurate or rooted in Islamophobia.” Because of these differing views, Meta “cannot conclude, without additional context, that the users in the content in question are using the phrase as a call to violence against a group based on their protected characteristics.” Nor could they conclude, “without additional context, that ... the speaker is advocating for the exclusion of a particular group.” In assessing the phrase under the Dangerous Organizations and Individuals policy, the company determined that “the phrase is not linked exclusively to Hamas. While Hamas uses the phrase in its 2017 charter, the phrase also predates the group and has always been used by people who are not affiliated with Hamas and who do not support its terrorist ideology.” As for the content under review, Meta determined that “none of the three pieces of content in this case bundle suggests support for Hamas or glorifies the organization. Absent this additional context, Meta assesses that this content does not violate our Community Standards.” In response to the Board’s questions about the research and analysis Meta had undertaken in reaching its conclusions, Meta said that its Policy team reviewed how the phrase was being used on its platforms and assessed it against the Community Standards. The company also conducted some analysis to determine whether to block hashtags containing the phrase. According to Meta, the company will remove a hashtag if it is inherently violating and block a hashtag when a high prevalence of content associated with a hashtag is violating. To make this assessment, Meta’s operations team reviewed content containing hashtags of the phrase and found that only a handful of pieces of content violated Meta’s policies and did so for reasons other than the phrase. The Board asked Meta whether the company had received government requests to remove content with the phrase and what action the company took in response. 
Meta informed the Board that the company received a number of requests from government bodies in Germany to restrict access to content in the country under local law. In response, Meta restricted access to the content in Germany. 4. Public Comments The Oversight Board received 2,412 public comments that met the terms for submission : 60% came from the United States and Canada, 17% from Middle East and North Africa, 12% from Europe, 6% from Asia Pacific and Oceania, and 5% from other regions. To read public comments submitted with consent to publish, click here . The submissions covered the following themes: the use of the phrase by Hamas and its meaning as an antisemitic call for violence or exclusion; the phrase as protected political speech during an ongoing humanitarian crisis; historical roots of the phrase and evolution of its use, including as a call for Palestinians’ rights to equality and self-determination; the need to assess the phrase contextually to determine its meaning and whether it can be associated with calls for violence; and concerns over the use of automation to moderate content related to the conflict and its negative impact on human rights defenders and journalists. 5. Oversight Board Analysis These three cases highlight the tension between Meta’s value of protecting voice and the heightened need to protect freedom of expression, particularly political speech in times of conflict, and Meta’s values of safety and dignity to protect people against intimidation, exclusion, violence and real-world harm. This is especially important during violent conflict with an impact on people’s safety, not only in the war zone but worldwide. It is imperative Meta take effective action to ensure its platforms are not used to incite acts of violence. The company’s response to this threat must also be guided by respect for all human rights, including freedom of expression. This is particularly relevant to the current and ongoing iteration of a conflict that followed Hamas’s terrorist attack in October 2023 and Israel’s subsequent military operations, resulting in political protests around the world and accusations made against both sides for violating international law. The surge in antisemitism and Islamophobia is also relevant to the assessment of not only these cases but also general use of the phrase “From the River to the Sea” on Meta’s platforms, given its different meanings, usages and understanding. The Board notes that, while Meta determined that “the slogan, standing alone, does not violate the Community Standards,” the company “has not conducted research on the prevalence and use of the phrase,” aside from the work that the company’s teams did to understand the use of the phrase in hashtags, as explained in Section 3. Meta did not provide data on content containing the phrase that was taken down due to other violations of its policies. Nonetheless, many public comments received by the Board highlight nuances in the use of this phrase. The Board believes that by giving researchers more access to platform data and investing additional resources in the development of internal research, Meta would enable a better understanding of correlations between online behavior and offline harm. The Board analyzed Meta’s decisions in these cases against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of these cases for Meta’s broader approach to content governance. 5.1 Compliance With Meta’s Content Policies I. 
Content Rules The Board finds that the three pieces of content (one comment and two posts) do not violate Meta’s policies. Because the phrase “From the River to the Sea” can have a wide variety of meanings and interpretations, the Board looked at the content as a whole in these three posts to determine whether a policy was violated. There is no indication that any of the three pieces of content under review violate Meta’s Hate Speech policy by attacking Jewish or Israeli people with calls for violence or exclusion, or constitute an attack on a concept or an institution associated with a protected characteristic that could lead to imminent violence. While the phrase has been used by some to attack Jewish or Israeli people, these three pieces of content express or contain contextual signals of solidarity with Palestinians, and there is no language or signal calling for violence or exclusion. The comment in the first case is on a video that encourages others to “speak up” and includes a “#ceasefire” hashtag, while the user’s comment contains “#PalestineWillBeFree” and “#DefundIsrael” hashtags, as well as heart emojis in the colors of the Palestinian flag. The post in the second case is a visual representation, seemingly a generated image of floating watermelon slices (watermelon is a symbol of Palestinian solidarity, with the same colors as the Palestinian flag) that form the words of the phrase along with “Palestine will be free,” with no additional caption or visual signals. The third post expressly states it is in solidarity with Palestinian families fighting to survive and expresses support for Palestinians of all faiths. None of the three pieces of content violates the Violence and Incitement policy as they do not contain threats of violence or other physical harm. As the Board has explained in earlier decisions, Meta requires that a post contain a “threat” and a “target” to violate this policy. The Board finds no indications of a threat in these cases. The Violence and Incitement policy also prohibits “coded statements where the method of violence is not clearly articulated, but the threat is veiled or implicit.” That policy line requires a “threat signal” as well as a “contextual signal” to be enforced. Meta identifies, among its “contextual signals” for enforcement, “local context or expertise confirm[ing] that the statement in question could lead to imminent violence.” Though the Board acknowledges there are instances and settings in which content including the phrase can be used to call for violence, there is no indication the three pieces of content under review could lead to imminent violence. Additionally, the comment and two posts do not glorify Hamas, a designated organization under Meta’s Dangerous Organizations and Individuals policy, or its actions. None refer to Hamas or use any reference to glorify the organization or its actions. While several public comments have argued for interpreting any use of the phrase as support for Hamas, the majority of the Board rejects this approach and finds that the three pieces of content do not violate Meta’s policy, given that the phrase, which was in existence before the establishment of Hamas, does not have a single meaning (see Section 1). Additionally, none of the content refers to the designated entity or attempts to justify the attacks of October 7.
Finally, the Board notes again that the phrase “From the River to the Sea” has multiple meanings, and has been adopted by various groups and individuals, each with different interpretations and intentions. While it can be used by some to encourage and legitimize antisemitism and the violent elimination of Israel and its people, it is also used as a political call for solidarity, equal rights and self-determination of the Palestinian people, and to end the war in Gaza (see Section 2). Given these uses, and as these three cases show, the phrase alone cannot be understood, regardless of context, as a call to violence against a group based on their protected characteristics, advocating for the exclusion of a particular group, or supporting a designated entity or its actions. The use of a phrase by a particular extremist terrorist group with explicit, violent, eliminationist intent and actions does not make the phrase inherently hateful or violent, taking into consideration the variety of actors who use the phrase in different ways. Similarly, the Human Rights Committee, in General Comment 37, addressed the threshold for prohibiting expression based on symbols and emblems that may have multiple meanings and interpretations, stating: “Generally, the use of flags, uniforms, signs and banners is to be regarded as legitimate form of expression that should not be restricted, even if such symbols are reminders of a painful past. In exceptional cases, where such symbols are directly and predominantly associated with incitement to discrimination, hostility, or violence, appropriate restrictions should apply,” ( CCPR/C/GC/37 , para. 51). A minority of the Board believes that, while these three pieces of content do not violate Meta’s policies, the phrase “From the River to the Sea” should be presumed to constitute glorification of Hamas, a designated organization, and be removed unless it is clear the content using the phrase does not endorse Hamas and its aims. For these Board Members, after October 7, the context changed significantly and any ambiguous use of the phrase should be presumed to refer to and endorse Hamas and its actions. The minority agrees that in these three cases, there are clear signals the content does not glorify Hamas or October 7. The reasoning of the minority is provided in greater detail in the human rights analysis section (see Section 5.2). 5.2 Compliance With Meta’s Human Rights Responsibilities The Board finds that Meta’s decisions to keep the three pieces of content up on Facebook were consistent with the company’s human rights responsibilities. The Board understands that the content at issue in the third case is no longer on Facebook as the user who posted it deleted it from the platform. Freedom of Expression (Article 19 International Covenant on Civil and Political Rights ) Article 19 of the International Covenant on Civil and Political Rights (ICCPR) provides for broad protection of the right to freedom of expression, including “freedom to seek, receive and impart information and ideas of all kinds,” including “political discourse” and commentary on “public affairs,” ( General Comment No. 34 , para. 11). 
The Human Rights Committee has said that the scope of this right “embraces even expression that may be regarded as deeply offensive, although such expression may be restricted in accordance with the provisions of article 19, paragraph 3 and article 20” to protect the rights or reputations of others or to prohibit incitement to discrimination, hostility or violence (General Comment No. 34, para. 11). The broad protection provided to expression of political ideas extends to assemblies with a political message (ICCPR, Article 21; General Comment No. 37 , paras 32 and 49). “Given that peaceful assemblies often have expressive functions, and that political speech enjoys particular protection as a form of expression, it follows that assemblies with a political message should enjoy a heightened level of accommodation and protection,” (General Comment No. 37, para 32.) Protests can be conducted online and offline, whether jointly or exclusively. Article 21 extends to protect associated activities that take place online (paras. 6 and 34). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights, which Meta itself has committed to in its Corporate Human Rights Policy. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. The Board agrees with the UN Special Rapporteur on freedom of expression that although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” ( A/74/486 , para. 41). As mentioned in Section 1, public comments reflect different views on how international human rights standards on limiting expression should be applied to the moderation of content containing the phrase “From the River to the Sea.” Several public comments argued that Meta’s human rights responsibilities require such content to be removed (see ADL PC-29259), given that the phrase can be identified with extreme calls to eliminate Jewish people. Others argued that it should be removed in contexts in which its spread is likely to give rise to harmful consequences for Jewish people or communities – when, for instance, there are elements suggesting that the speaker identifies with Hamas, or when the phrase is used in conjunction with other cues that connote threats of violence towards Israelis and/or Jewish people, such as “by any means necessary” or “go back to Poland,” (see ACJ PC-29479 and Professor Shany, Hersch Lauterpacht Chair in Public International Law at Hebrew University, former member and Chair of the UN Human Rights Committee PC-28895). Various public comments argued that nothing in the phrase inherently constitutes a call to violence or the exclusion of any group, nor is it linked exclusively to a statement expressing support for Hamas; rather it is primarily rooted in a Palestinian expression for liberation, freedom and equality (see SMEX PC-29396). 
Some public comments argued that claiming the phrase, in and of itself, carries a genocidal intent relies not on the historical record but rather on racism and Islamophobia (see Hearing Palestine Initiative PC-28564). Other public comments highlighted Meta’s responsibility to provide heightened protection to political speech, restricting content using the phrase only in specific contexts when the speaker is inciting violence, discrimination or hostility (see Human Rights Watch PC-29394). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” ( Ibid. ). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules and content reviewers should have clear guidance regarding their enforcement. The Board finds that as applied to the three pieces of content in these cases, Meta’s policies are sufficiently clear to users. II. Legitimate Aim Any restriction on freedom of expression should also pursue one or more of the legitimate aims listed in the ICCPR, which includes protecting the rights of others. The Human Rights Committee has interpreted the term “rights” to include human rights as recognized in the ICCPR and more generally in international human rights law ( General Comment 34, at para. 28). In several decisions, the Board has recognized that Meta’s Hate Speech policy pursues the legitimate aim of protecting the rights of others. Meta states that it does not allow hate speech because it “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” It protects the right to life (Article 6, para. 1, ICCPR) as well as the rights to equality and non-discrimination, including based on race, ethnicity and national origin (Article 2, para. 1, ICCPR; Article 2, ICERD). Conversely, the Board has repeatedly noted that it is not a legitimate aim to restrict expression for the sole purpose of protecting individuals from offense (see Depiction of Zwarte Piet , citing UN Special Rapporteur on freedom of expression, report A/74/486, para. 24), as the value that international human rights law places on uninhibited expression is high (General Comment No. 34, para. 38). The Violence and Incitement policy aims to “prevent potential offline violence” by removing content that includes “violent speech targeting a person or a group of people on the basis of their protected characteristics” and poses “a genuine risk of physical harm or direct threats to public safety.” As previously concluded in the Alleged Crimes in Raya Kobo decision, this policy serves the legitimate aim of protecting the rights of others, such as the right to life (Article 6, ICCPR). 
Meta’s Dangerous Organizations and Individuals policy aims to “prevent and disrupt real-world harm.” In several decisions, the Board has found that this policy pursues the legitimate aim of protecting the rights of others, such as the right to life (ICCPR, Article 6) and the right to non-discrimination and equality (ICCPR, Articles 2 and 26), because it covers organizations that promote hate, violence and discrimination as well as designated violent events motivated by hate (see Sudan’s Rapid Support Forces Video Captive and Greek 2023 Elections Campaign decisions). III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The Board further stresses that Meta has a responsibility to identify, prevent, mitigate and account for adverse human rights impacts, to perform ongoing human rights due diligence to assess the impacts of the company’s activities (UNGPs, Principle 17) and acknowledge that the risk of human rights harms – including the increased danger for vulnerable minorities facing hostility and incitement – is heightened during conflicts (UNGPs, Principle 7, A/75/212, para. 13). The Board has repeatedly highlighted the need to develop a principled and transparent framework for content moderation of hate speech during crises and in conflict settings (see “Two Buttons” Meme , Haitian Police Station Video and Tigray Communication Affairs Bureau decisions). While it is imperative that Meta seeks to prevent its platforms from being used to intimidate, exclude or attack people on the basis of their protected characteristics, or incite acts of terrorist violence – legitimate aims of its content moderation policies – Meta’s human rights responsibilities require any limitations on expression to be necessary and proportionate, not least to respect the voice of people in communities impacted by violence. It is precisely during a rapidly evolving conflict that large social media companies must devote the resources necessary to ensure that freedom of expression is not needlessly curtailed, devoting attention to regions where risks of harm are especially grave. The Board also notes that giving researchers access to platform data and investing additional resources in the development of internal research would allow Meta to better understand correlations between online behavior and offline harm. This would place the company in a better position to fulfill its responsibility to protect human rights under the UNGPs. The majority of the Board finds that leaving the content up in these three cases is consistent with the principle of necessity, emphasizing the importance of assessing such content in its particular context. While it cannot be denied that in some cases the use of “From the River to the Sea” is intended as and is used along with a call for violence or exclusion, or endorsement of Hamas and its violent acts, given the variety of ways the phrase is used, especially as part of protected political speech, it alone cannot be understood, regardless of context, as a call for violence, intimidation or exclusion. 
As part of its analysis, the Board drew upon the six factors (context of the statement, speaker’s position or status, intent to incite, content and form of expression, extent of its dissemination, and likelihood of harm) from the Rabat Plan of Action to evaluate the capacity of both the content in these cases and the standalone phrase “From the River to the Sea” to create a serious risk of inciting discrimination, violence or other lawless actions. The Rabat factors were developed to assess when advocacy of national, racial or religious hatred constitutes incitement to harmful acts, and the Board has used it in this way previously ( Knin Cartoon decision). Context The content in these three cases, as well as the broader adoption of the phrase, are in response to an ongoing conflict with significant regional and global consequences. All three pieces of content were posted soon after the October 7 attacks and as Israel’s ground offensive in Gaza was underway. There are indications in the three pieces of content that the users are responding to or calling attention to the suffering of the Palestinian people and/or condemning the actions of the Israeli military. The significant impact of Israel’s military actions in Gaza, as well as doubts about its legitimacy, have been part of public debate and discussion as well as legal processes before the International Court of Justice and the International Criminal Court. Individuals and groups across the world have sought to influence that discussion, locally and globally. At this time, in a joint statement , the UN Special Rapporteurs in the field of cultural rights, on the right to education, the rights to freedom of peaceful assembly and association, and on the protection and promotion of freedom of opinion and expression, stated that “calls for an end to violence and attacks on Gaza, or for a humanitarian ceasefire, or criticism of Israeli government’s policies and actions, have in too many contexts been misleadingly equated with support for terrorism or antisemitism. This stifles free expression, including artistic expression, and creates an atmosphere of fear to participate in public life.” More generally, the phrase and the way it is interpreted is heavily influenced by the evolving nature of the conflict and the broader context in the region, as well as globally. That context includes the increase in the use of the phrase both in support, approval or endorsement of Hamas and their violent acts, and its use in support of the Palestinian struggle for self-determination and equal rights, and alongside calls for ceasefire. The context also includes an immense surge in dangerous, dehumanizing and discriminatory rhetoric targeting Arabs, Israelis, Jews, Muslims and Palestinians. As expressed in public comments, there have been instances of individuals using the phrase in combination with antisemitic calls, threats of violence or expressions of support for Hamas, or justifying the October 7 attacks, when accompanied by statements that call for violence or exclusion, like “by all means necessary” or “go back to Poland,” (see public comment, PC-28895), or alongside other signals of violence, “such as the image of a paraglider which recalls perpetrators of the October 7 attacks,” (see AJC PC-29354). The majority of the Board observes that the removal of violating content could be consistent with Meta’s Community Standards and human rights responsibilities in instances where the context indicates the call is one for violence or exclusion. 
However, such removal would not be predicated on the phrase in and of itself but rather on contextual clues or other elements present in a post that contains the phrase. Given the different meanings and uses of the phrase, assessment of context and identification of specific risks that can derive from content posted on Meta’s platforms, analyzed as a whole, are vital. Nonetheless, because the phrase does not have a single meaning, a blanket ban on content that includes the phrase, a default rule towards removal of such content, or even using it as a signal to trigger enforcement or review, would hinder protected political speech in unacceptable ways. As stated by various public comments, given the highly contextual nature of its meaning and usage, and the well-documented problems automation has in conducting the analysis required to understand context, reliance on automated tools to moderate content using this phrase would “inevitably lead to over-censorship of content on matters of public concern in an ongoing armed conflict,” (see public comments: Human Rights Watch PC-29394; Integrity Institute PC-29544; also Article 19 briefing). This is particularly relevant in the context of the Israel-Gaza conflict, in which, as the Board previously stated in the Hostages Kidnapped From Israel and Al-Shifa Hospital decisions, Meta put in place several temporary measures, including a reduction in the confidence thresholds used to identify and remove content, which increased the automated removal of content that received lower confidence scores for violating Meta’s policies. In other words, Meta used its automated tools more aggressively to remove content that might violate its policies. The Board also notes the prominence of the phrase in pro-Palestinian protests both online and offline across the world. The Board is aware of examples of protesters advocating for violence or praising Hamas; however, according to the Human Rights Committee, under international human rights law, there is a presumption in favor of considering assemblies to be peaceful (General Comment No. 37, CCPR/C/GC/37, paras. 15-17) and violations by some participants do not impact the rights of others. In his report, the UN Special Rapporteur on the rights to freedom of peaceful assembly and of association highlights the importance of the safe and effective exercise of these rights as ensuring “checks and balances,” and as a way of overcoming “entrenched inequalities” so endemic to conflict situations. Exercising the rights of assembly and association is “often the only available option for those who live in post-conflict and fragile contexts to raise their voices; and they are an important avenue for women, victims, youth and marginalized groups, who are otherwise often excluded from these processes to voice their grievances and concerns [and] … bring local grievances to the attention of peacemakers and the international community, which, if they are addressed, can help to resolve the root causes of conflict and prevent furthering or resurging of conflicts,” (A/78/246, paras. 2-4). Identity of the Speaker There is no indication that either the users who posted the content in these three cases, or the pages on which the posts were shared for the second and third cases, are associated with or show support for designated organizations, such as Hamas, or discrimination and exclusion.
In his public comment submission, Professor Yuval Shany, for example, identifies, “whether or not the speaker using the phrase identifies himself/herself with Hamas or supports violent act undertaken by Hamas,” (PC-28895) as a relevant indicator under the Rabat analysis. Intent, Content and Form of Expression As explained in more detail in Section 5.1, though the phrase “From the River to the Sea” can have a wide variety of uses, the Board finds the three pieces of content under review do not show intent to incite discrimination or violence, advocate for the exclusion of a particular group, or support designated entities or their actions. The phrase, akin to a slogan, spread very quickly and formed the basis for users to react to the October 7 terrorist attacks and Israel’s military operations in Gaza, with different meanings and intentions. As noted above, according to research commissioned by the Board, there has been a significant surge in the use of the phrase on Meta’s platforms after October 7, with the most significant increases in January and March 2024 on Facebook and in November 2023 on Instagram. The latter took place at the same time as the IDF strike on Al-Shifa Hospital and the growing humanitarian crisis in Gaza. Experts noted that Meta seems to be removing content that includes the phrase when it is accompanied by explicit signals of violence and/or discrimination. The commissioned research relied on CrowdTangle, which does not include all content on Meta’s platforms or content that has been removed by Meta. The research indicates that, for content that was left on Meta’s platforms, the phrase is generally used in posts raising awareness about the impact of the war on Palestinians, calling for a ceasefire or advocating for rights of Palestinians. Nonetheless, as previously mentioned, Meta did not provide data on content containing the phrase that was taken down due to other violations of its policies, nor has it conducted full on-platform data research “on the prevalence and use of the phrase.” The Board acknowledges that the phrase has been and continues to be used in some settings to call for exclusion or violence and may be used in that way on Meta's platforms. However, the Board would need more data to assess the nature and prevalence of content that was removed from Meta’s platforms. Likelihood and Imminence and Reach Analyzed as a whole, the Board finds that none of the content in these three specific cases presents a likelihood or risk of imminent violence or discrimination. As stated above, due to its multiple meanings and the varied intentions in its usage, the majority of the Board finds that the phrase itself cannot be inherently understood, in all cases or by default, and regardless of context, as harmful, violent or discriminatory. While the Board recognizes the phrase “From the River to the Sea” can be used along with threatening language against a Jewish or Israeli person or group, or along with more general threats of violence , or celebration of October 7, (see public comment, AJC PC-29354), and it is imperative that Meta prevent these uses in its platforms, its human rights responsibilities require that in its response to these threats, the company respects all human rights, including the voice of people in communities impacted by violence. 
The Board finds there is also a significant risk of removing content with the phrase when the content seeks to raise awareness about the suffering of people in Gaza and the dehumanization of Palestinians during an ongoing military campaign. As noted in a public comment, Meta’s platforms are among the most important tools for Palestinians to document the events occurring on the ground, to seek support from the international community to hold the Israeli military and government accountable, and to demand a stop to the violence (see public comment, Hearing Palestine Initiative PC-28564). Meta’s platforms are also vital vehicles for raising global awareness and mobilization in response to rising antisemitism and Islamophobia. The platforms are used to build solidarity, extend support to targeted individuals and groups, raise awareness of bigotry, counter disinformation and provide education. It is essential that these key functions of social media can be carried out in an environment in which people feel safe and respected. Enforcement of Meta’s content policies, and continued examination of the evolution of hateful language and of the relationship between social media and offline harm, are equally essential. The reach of the first and third pieces of content was low, whereas the post in the second case had about 8 million views. However, the reach of the content is not a factor indicating that removal is necessary when the risk of harm is unclear. A minority of the Board, however, finds that the context after the October 7 attacks significantly changes the analysis pursuant to the six Rabat factors, and the meaning of the phrase must be determined with this context in mind. While the history and different uses of the phrase are relevant, its role as a statement of a violent program of a designated organization, one on multiple countries’ terrorism lists, means that the connotations of the phrase and the risks of its use have changed (context). For these Board Members, after October 7, the historical ambiguity consideration no longer applies, and to disregard this new reality is unreasonable and ignores that the phrase can serve as coded endorsement of a designated entity and a hateful ideology that presents a risk of harm. This minority of the Board finds that Meta should adopt a default rule presuming the phrase constitutes glorification of a designated organization unless there are clear signals that the user does not endorse Hamas or the October 7 attacks. Meta should then provide guidance to its content moderators on signals of non-violating uses of the phrase to be exempted from this default rule. For this minority, adopting this approach would allow Meta to respect the freedom of expression of users who seek to show solidarity with Palestinians and to call for specific political aims, including the equal rights of all people in Israel and the Palestinian Territories, while considering the current risk of violence related to the use of the expression in different local environments. For the reasons stated above, the majority of the Board disagrees with this approach, given that the phrase, which was in existence before the establishment of Hamas, does not have a single meaning, intent or understanding.
Furthermore, they emphasize the advice provided by the UN Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, who has warned of the risks of delegitimizing civil society by “loosely characterizing them as ‘terrorist’ [which increases] the vulnerability of all civil society actors, contributing to the perception that they are legitimate targets of abuse by State and non-State actors,” (A/HRC/40/52, para. 54). See, for example, the content in summary decision Dehumanizing Comments About People in Gaza. Another minority of the Board feels strongly that drawing attention to the invocation of a phrase that has been adopted by a terrorist group must not be treated as tantamount to characterizing the individuals posting such content as terrorists themselves. This minority believes that in adjudicating online content, the provenance and meaning of phrases must be subject to analysis and interpretation; such parsing must not be conflated with efforts to delegitimize civil society actors. Finally, the Board acknowledges that Meta has designed a set of policies to address the risks from discriminatory content online. The evidence of harm produced by the cumulative, widespread and high-speed circulation of antisemitic and other harmful content on Meta’s platforms, as discussed in the Holocaust Denial decision, requires that Meta have adequate enforcement tools and measures to moderate such content without unduly curtailing political expression on issues of public interest, in line with its human rights responsibilities. Additionally, if adequately enforced, Meta’s policies provide significant guardrails to advance the goals of preventing violence and other harms resulting from terrorists’ and their supporters’ use of Meta’s platforms. In this regard, in response to the Board’s recommendation no. 5 in the Mention of the Taliban in News Reporting case, Meta said it would develop new tools that would allow it to “gather more granular details about our enforcement of the [Dangerous Organizations and Individuals] news reporting policy allowance.” As the Board has previously recommended, this should also be extended to enforcement of the Hate Speech policy (Holocaust Denial decision, recommendation no. 1), as well as the Violence and Incitement policy (United States Posts Discussing Abortion decision, recommendation no. 1). Data Access The Board and external stakeholders will be in a better position to assess the necessity and proportionality of Meta’s content moderation decisions during ongoing armed conflicts, should Meta continue to provide the Board and independent researchers with access to platform data. In March 2024, Meta announced it would be shutting down CrowdTangle on August 14, 2024. The company explained it would instead focus its resources on “new research tools, Meta Content Library & Content Library API.” While the Board commends Meta for developing new research tools and working to provide greater functionality, the Board is concerned by the company’s decision to shut down CrowdTangle before these new tools can effectively replace it. According to an open letter sent by several organizations to Meta urging the company not to discontinue CrowdTangle “during a key election year,” there are significant concerns about the adequacy of the Meta Content Library to provide sufficient data access for independent monitoring.
The European Commission has opened formal proceedings under the Digital Services Act against Facebook and Instagram for the decision to shut down its “real-time public insights tool CrowdTangle without an adequate replacement.” The Board echoes concerns raised by these organizations, individuals and the European Commission about Meta’s decision to discontinue CrowdTangle during a key election year without an adequate replacement. The Board does note that even with CrowdTangle, there are limits to the Board’s and the public’s abilities to effectively assess the extent of the surge in antisemitic, anti-Muslim, or racist and other hateful content on Meta’s platforms, and where and when that surge may be most prominent. Meta’s transparency reporting is not granular enough to evaluate the extent and nature of hateful content on its platforms. One of the recommendations (no. 16) issued by BSR in its Human Rights Due Diligence report, which was commissioned in response to the Board’s earlier recommendation in the Shared Al Jazeera Post decision, was for the company to develop a mechanism to track the prevalence of content attacking people on the basis of specific protected characteristics (for example, antisemitic, Islamophobic or homophobic content). In September 2023, one year after the BSR report was issued, Meta reported it was still assessing the feasibility of this recommendation. The Board urges Meta to fully implement this recommendation as soon as possible. 6. The Oversight Board’s Decision The Oversight Board upholds Meta’s decisions to leave up the content in all three cases. 7. Recommendations Transparency 1. Meta should ensure that qualified researchers, civil society organizations and journalists, who previously had access to CrowdTangle, are onboarded to the company’s new Content Library within three weeks of submitting their application. The Board will consider this implemented when Meta provides the Board with a complete list of researchers and organizations that previously had access to CrowdTangle, and the turnaround time it took to onboard them to the Meta Content Library, at least 75% of which should be three weeks or less. 2. Meta should ensure the Meta Content Library is a suitable replacement for CrowdTangle, providing equal or greater functionality and data access. The Board will consider this implemented when a survey of a representative sample of onboarded researchers, civil society organizations and journalists shows that at least 75% believe they are able to reasonably continue, reproduce or conduct new research of public interest using the Meta Content Library. This survey should be carried out longitudinally if necessary, and the results of its first iteration should be shared with the Board no later than Q1 2025. 3. Meta should implement recommendation no. 16 from the BSR Human Rights Due Diligence of Meta’s Impacts in Israel and Palestine report to develop a mechanism to track the prevalence of content attacking people on the basis of specific protected characteristics (for example, antisemitic, Islamophobic and homophobic content). The Board will consider this recommendation implemented when Meta publishes the results of its first assessment of these metrics and issues a public commitment on how the company will continue to monitor and leverage those results.
*Procedural Note: Return to Case Decisions and Policy Advisory Opinions" bun-8s1h6eu5,Educational posts about ovulation,https://www.oversightboard.com/decision/bun-8s1h6eu5/,"November 16, 2023",2023,,"Health,Sex and gender equality",Adult nudity and sexual activity,Overturned,"United States,Pakistan","In this summary decision, the Board is considering two educational posts about ovulation together. The Board believes that Meta’s original decisions to remove each post make it more difficult for people to access a highly stigmatized area of health information for women. After the Board brought these two appeals to Meta’s attention, the company reversed its earlier decisions and restored both posts.",7307,1127,"Multiple Case Decision November 16, 2023 In this summary decision, the Board is considering two educational posts about ovulation together. The Board believes that Meta’s original decisions to remove each post make it more difficult for people to access a highly stigmatized area of health information for women. After the Board brought these two appeals to Meta’s attention, the company reversed its earlier decisions and restored both posts. Overturned FB-YZ2ZBZWN Platform Facebook Topic Health,Sex and gender equality Standard Adult nudity and sexual activity Location United States,Pakistan Date Published on November 16, 2023 Overturned IG-F5NPUOXQ Platform Instagram Topic Health,Sex and gender equality Standard Adult nudity and sexual activity Location Argentina Date Published on November 16, 2023 This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas of potential improvement in its policy enforcement. Case summary In this summary decision, the Board is considering two educational posts about ovulation together. The Board believes that Meta’s original decisions to remove each post make it more difficult for people to access a highly stigmatized area of health information for women. After the Board brought these two appeals to Meta’s attention, the company reversed its earlier decisions and restored both posts. Case description and background For the first case, on March 15, 2023, a Facebook user based in the United States commented on a post in a Facebook group. The comment was written in English and included a photo of four different types of cervical mucus and corresponding fertility levels, with a description of each overlaid on the photo. The comment was in response to someone else’s post, which asked about PCOS (Polycystic Ovary Syndrome), fertility issues, and vaginal discharge. The content had no views, no shares, and had been reported once by Meta’s automated systems. The group states that its purpose is to help provide women in Pakistan who suffer from “invisible conditions” related to reproductive health such as “endometriosis, adenomyosis, PCOS and other menstrual issues” with a safe space to discuss the challenges they face and to support one another. For the second case, on March 7, 2023, an Instagram user posted a video depicting someone’s hand over a sink with vaginal discharge on the person’s fingers.
The caption underneath the video is written in Spanish and the headline reads, ""Ovulation - How to Recognize It?"" The rest of the caption describes in detail how cervical mucus becomes clearer during ovulation, and at what point in the menstrual cycle someone can expect to be ovulating. It also describes other physiological changes one can expect when experiencing ovulation such as an increased libido and body temperature, and difficulty sleeping. The description for the user’s account says that it is dedicated to vaginal/vulvar health and period/menstruation education. The content had more than 25,000 views, no shares, and had been reported once by Meta’s automated systems. For both cases, Meta initially removed each of the two pieces of content under its Adult Nudity and Sexual Activity policy, which prohibits “imagery of sexual activity” except “in cases of medical or health context.” However, Meta acknowledged that both pieces of content fall within its allowance for sharing imagery with the presence of by-products of sexual activity (which may include vaginal secretions) in a medical or health context and restored them back to each platform. After the Board brought these two cases to Meta’s attention, the company determined that neither piece of content violated the Adult Nudity and Sexual Activity Community Standard and the removals were incorrect. The company then restored both pieces of content to Facebook and Instagram respectively. Board authority and scope The Board has authority to review Meta's decisions following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Case significance These cases highlight the difficulties of enforcing allowances for medical and health content set out in Meta’s Adult Nudity and Sexual Activity guidelines. As the user in the first case wrote in their appeal to the Board, understanding the appearance and texture of cervical mucus helps women track their cycles for ovulation and fertility. They also note that not all women have the means or resources to learn this information from a healthcare physician, or to purchase ovulation kits or have bloodwork done to track ovulation. Meta’s initial decision to remove this content makes it more difficult for people to access what is already a highly stigmatized area of health information for women. Previously, the Board has issued recommendations related to both the Adult Nudity and Sexual Activity policy for the purposes of educating and raising awareness of medical and health information, as well as to improve the enforcement of allowances set out in the company’s Community Standards. Specifically, the Board has urged Meta to improve the automated detection of images with text-overlay to ensure that posts raising awareness of breast cancer symptoms were not wrongly flagged for review (“ Breast cancer symptoms and nudity ,” recommendation no. 1) and to ensure that appeals based on policy exceptions are prioritized for human review (“‘ Two buttons meme ’,” recommendation no. 5). 
While Meta is currently assessing the feasibility of the second recommendation, Meta has completed work on the first recommendation. Meta deployed a new image-based health content classifier and enhanced an existing text-overlay classifier to further improve Instagram’s techniques for identifying breast cancer context content. Over 30 days in 2023, these enhancements contributed to an additional 3,500 pieces of content being sent for human review that would have previously been automatically removed. The full implementation of both recommendations will help reduce the error rate of content that is wrongly removed when it is posted under an allowance in the Community Standard, such as for raising awareness or educating users about various aspects of women’s reproductive health. Decision The Board overturns Meta’s original decisions to remove the two pieces of content. The Board acknowledges Meta’s correction of its initial errors once the Board brought these cases to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-9h1nn18u,Negative Stereotypes of African Americans,https://www.oversightboard.com/decision/bun-9h1nn18u/,"April 18, 2024",2024,,"Discrimination,Race and ethnicity",Hateful conduct,Overturned,United States,"The Board reviewed three Facebook posts containing racist material, which Meta left up. After the Board brought these appeals to Meta’s attention, the company reversed its original decisions and removed the posts.",6844,1049,"Multiple Case Decision April 18, 2024 The Board reviewed three Facebook posts containing racist material, which Meta left up. After the Board brought these appeals to Meta’s attention, the company reversed its original decisions and removed the posts. Overturned FB-LHBURU6Z Platform Facebook Topic Discrimination,Race and ethnicity Standard Hateful conduct Location United States Date Published on April 18, 2024 Overturned FB-1HX5SN1H Platform Facebook Topic Discrimination,Race and ethnicity Standard Hateful conduct Location United Kingdom,United States Date Published on April 18, 2024 Overturned FB-ZD01WKKW Platform Facebook Topic Discrimination,Race and ethnicity Standard Hateful conduct Location United States Date Published on April 18, 2024 This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary The Board reviewed three Facebook posts containing racist material, which Meta left up. Each post included a caricature or manipulated image of African Americans that highlights offensive stereotypes, including absent fathers, being on welfare and looters at a store. These cases highlight errors in Meta’s enforcement of its Hate Speech and Bullying and Harassment policies. After the Board brought these appeals to Meta’s attention, the company reversed its original decisions and removed the posts. About the Cases In late 2023, the Board received three separate appeals regarding three different images posted on Facebook, all containing negative material about African Americans. 
In the first post that was viewed about 500,000 times, a user posted a computer-generated image of a store on fire with Black people shown as cartoon characters, wearing hooded sweatshirts, carrying merchandise and running out of the store. The name of the store, Target, has been changed to “Loot” in the image and in the accompanying caption, the user describes the image as the next Pixar movie. The second post features a computer-generated image that also imitates a movie poster, with a Black woman who has exaggerated physical features shown holding a shopping cart full of Cheetos. The title of the movie is “EBT,” which is the name of a system for receiving social welfare benefits in the United States. At the top of the poster, in place of the names of actors, are the names Trayvon Martin and George Floyd, both African American victims of violence, one shot by an armed vigilante in 2012 and one killed at the hands of the police in 2020. Their deaths helped spark protests about racial disparities in the U.S. justice system. The third post, which was viewed about 14 million times, features a meme claiming that “Adobe has developed software that can detect photoshop in an image.” Underneath the claim, there is an image of a woman with colorful markings over her entire face (typically used to show heat detection) to imply that parts of the image have been altered. This is contrasted against an image of a Black family having a meal in which the father and food on the table have the same colorful markings, implying these two elements were added through editing. The post reinforces the widespread negative stereotype about the lack of a father figure in Black families in the United States, which stems from a complex history of systemic racism and economic inequality. Meta initially left all three posts on Facebook, despite appeals from users. In their appeals to the Board, those same users argued that the content depicted harmful racial stereotypes of African Americans. After the Board brought these cases to Meta’s attention, the company determined that each post violated the Hate Speech Community Standard, which bans direct attacks against people on the basis of protected characteristics, including race and ethnicity. The policy also specifically prohibits “targeting a person or a group” with “dehumanizing speech” in the form of “comparisons to criminals including thieves,” “mocking the concept, events or victims of hate crimes” and “generalizations that state inferiority in ... moral characteristics.” Additionally, Meta determined the second post that includes the names of Trayvon Martin and George Floyd also violated the Bullying and Harassment Community Standard under which the company removes “celebration or mocking of death or medical condition” of anyone. The company explained that “the image includes the name of two deceased individuals, Trayvon Martin and George Floyd ... The content trivializes their deaths by implying they will star in a fictitious animated movie.” The company therefore removed all three posts. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user who reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). 
The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook and Instagram users. Significance of Cases These cases highlight three instances in which Meta failed to effectively enforce its policies against Hate Speech and Bullying and Harassment, by leaving up violating posts despite user complaints. Two of the posts received a high number of views. Such moderation errors of under-enforcement can negatively impact people of protected-characteristic groups and contribute to an environment of discrimination. The Board has made compelling Meta to address Hate Speech Against Marginalized Groups a strategic priority. In 2022, the Board issued a recommendation that Meta should “clarify the Hate Speech Community Standard and guidance provided to reviewers, explaining that even implicit references to protected groups are prohibited by the policy when the reference could be reasonably understood,” ( Knin Cartoon , recommendation no. 1), which Meta reported partial implementation on. Decision The Board overturns Meta’s original decisions to leave up the three posts. The Board acknowledges Meta’s correction of its initial errors once the Board brought these cases to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-e1ycxi7e,Posts Displaying South Africa’s Apartheid-Era Flag,https://www.oversightboard.com/decision/bun-e1ycxi7e/,"April 23, 2025",2025,,"Discrimination,Elections","In both cases, users who reported the content to Meta then appealed to the Board.",Upheld,South Africa,"Following a review of two Facebook posts containing images of South Africa’s 1928-1994 flag, the majority of the Board has upheld Meta’s decisions to keep them up. They do not clearly advocate for exclusion or segregation, nor do they call for people to engage in violence or discrimination.",47400,7342,"Multiple Case Decision April 23, 2025 Following a review of two Facebook posts containing images of South Africa’s 1928-1994 flag, the majority of the Board has upheld Meta’s decisions to keep them up. They do not clearly advocate for exclusion or segregation, nor do they call for people to engage in violence or discrimination. Upheld FB-Y6N3YJK9 Platform Facebook Topic Discrimination,Elections Location South Africa Date Published on April 23, 2025 Upheld FB-VFL889X3 Platform Facebook Topic Discrimination,Elections Location South Africa Date Published on April 23, 2025 Posts Displaying South Africa's Apartheid-Era Flag Following a review of two Facebook posts containing images of South Africa’s 1928-1994 flag, the majority of the Board has upheld Meta’s decisions to keep them up. Board Members acknowledge the long-term consequences and legacy of apartheid on South Africa. However, these two posts do not clearly advocate for exclusion or segregation, nor can they be understood as a call for people to engage in violence or discrimination. The deliberation in these cases also resulted in recommendations to improve conflicting language in the Dangerous Organizations and Individuals policy. Additional Note: Meta’s January 7, 2025, revisions did not change the outcome in these cases, though the Board took the rules at the time of posting and the updates into account during deliberation. 
On the broader policy and enforcement changes hastily announced by Meta in January, the Board is concerned that Meta has not publicly shared what, if any, prior human rights due diligence it performed in line with its commitments under the UN Guiding Principles on Business and Human Rights. It is vital that Meta ensure adverse impacts on human rights globally are identified and prevented. About the Cases Shared ahead of South Africa’s general elections in May 2024, the first Facebook post shows a photo of a white male soldier holding the country’s old flag, in use during the apartheid era. A caption urges others to share the post if they “served under this flag.” This post was viewed more than 500,000 times. After three users reported the post, Meta decided the content did not break its rules. The second post, also on Facebook, comprises stock photos from the apartheid era, including the former flag, white children standing next to a Black man on an ice cream bicycle, a public whites-only beach and a toy gun. The post’s caption says these were the good old days, asks others to “read between the lines,” and includes winking face and “OK” emojis. Viewed more than two million times, the content was reported by 184 users, mostly for hate speech. Meta’s human reviewers decided the post did not violate the Community Standards. In both cases, users who reported the content to Meta then appealed to the Board. Key Findings The majority of the Board has found that neither post violates the Hateful Conduct policy, while a minority finds both are violating. The policy does not allow “direct attacks” in the form of “calls or support for exclusion or segregation” based on a protected characteristic. Neither post advocates for bringing back apartheid or any other form of racial exclusion, according to the majority. While the soldier post uses the flag in a positive context, it does not advocate racial exclusion or segregation. For the photo grid post, the images combined with the emojis and the message to “read between the lines” indicate a racist message, but they do not rise to the level needed to violate this policy. A minority disagrees, pointing out the flag is an unambiguous and direct symbol of apartheid, which, when shared with positive or neutral references, can be understood as support for racial segregation. For example, there is no doubt that the photo grid post, with its images of segregated life, messages and the “OK” emoji – understood by white supremacists globally as covert hate speech – supports racial exclusion. The Board has found unanimously that both posts violate the Dangerous Organizations and Individuals policy, although the majority and a minority disagree over why. The company removes content that glorifies, supports or represents hateful ideologies, including white supremacy and separatism, as well as “unclear references” to these ideologies. The Board agrees the 1928-1994 flag cannot be decoupled from apartheid, a form of white separatist ideology. For the majority, both posts represent unclear references to white separatism, while for the minority, they explicitly glorify this ideology. It is not necessary and proportionate to remove the content, according to the majority, because the likelihood of imminent discrimination or violence from these posts is low. Banning such speech does not make intolerant ideas disappear, and other content moderation tools that are less intrusive than removal could have been applied.
A minority disagrees, noting that removal is necessary to ensure respect for equality and non-discrimination for non-white South Africans. They also point out the chilling effect that the accumulation of such hatred on Meta’s platforms has on the freedom of expression of those targeted. All Board Members recognize conflicting language around “references” to hateful ideologies under the Dangerous Organizations and Individuals Community Standard. During the deliberation, there were questions over why Meta does not list apartheid as a standalone designation. Some Board Members asked why Meta’s list centers on those ideologies that may present risks in Global Minority regions but remains silent on comparable hateful ideologies in the Global Majority. The Oversight Board’s Decision The Oversight Board upholds Meta’s decisions to leave up both posts. The Board also makes recommendations to Meta, set out in Section 7 below. * Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background These cases involve two Facebook posts shared in the run-up to South Africa’s general election in May 2024. The first post shows a photo of a white male soldier holding South Africa’s pre-1994 flag, which was the country’s flag under apartheid. The English caption urges users to share the content if they “served under this flag.” The post was viewed around 600,000 times and shared around 5,000 times. Three users reported the content to Meta for hate speech and violence. As Meta’s human reviewers found the content to be non-violating, it was kept up. One of the users who reported the content then appealed to the Board. The second post is a photo grid containing stock images taken during the apartheid era, including: the country’s former flag; an adult Black man on an ice cream bicycle with three white children standing next to him in a seemingly whites-only neighborhood; a public whites-only beach with a theme park; a South African board game; a packet of white candy cigarettes; and a silver toy gun. The caption states these were the “good old days” and asks the audience to “read between the lines,” followed by winking face and “OK” emojis. It was viewed around two million times and shared around 1,000 times. Within a week of posting, 184 users reported the content, mostly for hate speech. Some of the reports were assessed by human reviewers, who determined the content did not violate the Community Standards. The remaining reports were processed through a combination of automated systems and prior human review decisions. As with the soldier post, Meta found this content to be non-violating and kept it up on the platform. One of the users who reported the content then appealed to the Board. On January 7, 2025, Meta announced revisions to its Hate Speech policy, renaming it the Hateful Conduct policy. These changes, to the extent relevant to these cases, will be described in Section 3 and analyzed in Section 5. The Board notes content is accessible on Meta’s platforms on a continuing basis, and updated policies are applied to all content present on the platform, regardless of when it was posted. The Board therefore assesses the application of policies as they were at the time of posting, and, where applicable, as since revised (see also the approach in Holocaust Denial).
The Board notes the following context in reaching its decision: From 1948 to 1994, South Africa was under a state-sanctioned apartheid regime involving the racial segregation of white and non-white South Africans, although discriminatory laws had existed in the country before apartheid was formally adopted. During this time, South Africa was represented by an orange, white and blue flag. In 1994, following the end of apartheid, South Africa adopted the six-color flag that it uses today. Despite the end of apartheid, socioeconomic inequality continues to afflict the non-white population of the country in particular, contributing to racial tensions in politics and public discourse. In 2018, the Nelson Mandela Foundation took legal action in South Africa seeking to ban the “gratuitous display” of the apartheid-era flag following its use in protests the previous year. The action alleged that it amounted to “hate speech, unfair discrimination and harassment,” and that it celebrated the system’s atrocities. In 2019, South Africa’s Equality Court held that the flag’s gratuitous display amounted to hate speech and racial discrimination that can be prosecuted under domestic law. The court ruling clarified that displaying the flag is not illegal if used for artistic, academic, journalistic or other public interest purposes. The Supreme Court of Appeal (SCA) upheld this decision in April 2023. On May 29, 2024, South Africa held elections for the National Assembly. The African National Congress (ANC), the political party led by Nelson Mandela after the end of apartheid, lost its parliamentary majority. However, incumbent party leader Cyril Ramaphosa retained his presidency by forming a coalition government with opposition parties. 2. User Submissions The authors of the posts were notified of the Board’s review and provided with an opportunity to submit a statement. No response was received. In their statement to the Board, the user who reported the soldier post stated that South Africa’s former flag is comparable to the German Nazi flag. They said “brazenly displaying” it incites violence because the country is still reeling from the impact of apartheid as a crime against humanity. The user also stated that sharing such images during an election period can encourage racial hatred and endanger lives. Similarly, the user who reported the photo grid post explained that the use of the flag is illegal and taken as a whole, it suggests apartheid was a “better time” for South Africans. They emphasized how the former flag represents oppression and is “derogatory” and “painful” for the majority of South Africans. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Hateful Conduct (previously named Hate Speech) Community Standard Meta’s Hateful Conduct policy states that “people use their voice and connect more freely when they don’t feel attacked on the basis of who they are.” Meta defines “hateful conduct” in the same way that it previously defined “hate speech,” as “direct attacks against people” based on protected characteristics, which include race, ethnicity and national origin. As a result of the Board’s recommendation to clarify its approach in the Knin Cartoon case, Meta states in the introduction to its Community Standards that the company may remove content that uses “ambiguous or implicit language” when additional context allows it to reasonably understand that the content goes against the Community Standards. 
Tier 2 of the Hateful Conduct policy prohibits as a form of direct attack “calls or support for exclusion or segregation or statements of intent to exclude or segregate,” whether in written or visual form. Meta prohibits the following types of calls for or support for exclusion: (i) general exclusion, which means calling for general exclusion or segregation, such as “No X allowed!”; (ii) political exclusion, which means denying the right to political participation or arguing for incarceration or denial of political rights; (iii) economic exclusion, which generally means denying access to economic entitlements and limiting participation in the labor market; and (iv) social exclusion, which means things like denying access to physical and online spaces and social services. Prior to January 7, the prohibition on “general exclusion” was called “explicit exclusion.” Dangerous Organizations and Individuals Meta’s Dangerous Organizations and Individuals policy seeks to “prevent and disrupt real-world harm.” Under the policy rationale, Meta states that it removes content that glorifies, supports or represents “hateful ideologies.” Meta explains it designates prohibited ideologies, which the policy lists as “including Nazism, white supremacy, white nationalism [and] white separatism” because they are “inherently tied to violence” and attempt “to organize people around calls for violence or exclusion of others based on their protected characteristics.” Directly alongside this listing, the company states it removes explicit glorification, support and representation of these ideologies (emphasis added). Meta twice states it also removes “unclear references” to hateful ideologies, once in the policy rationale and again under the description of Tier 1 organizations. Meta explains in the policy rationale that it requires users to “clearly indicate their intent” when creating or sharing such content. If a user’s intent is “ambiguous or unclear,” Meta defaults to removing content. II. Meta’s Submissions Meta left both posts on Facebook, finding no violations of its policies. Meta confirmed that its analysis of the content was not affected by the January 7 policy changes. Meta stated that the posts did not violate the Hateful Conduct policy, as there were no calls for exclusion of a protected group under Tier 2, nor any other prohibited direct attack. None of the statements in the posts mentioned a protected group, nor did the posts advocate for a particular action. According to Meta, for the policy to be operable at-scale, there must be a “direct” and explicit attack, not an implicit attack. Neither post had a direct attack. Meta’s internal enforcement guidance to reviewers contains an illustrative list of emojis that are violating if used in a context that allows a reviewer to confirm intent to directly attack a person or group on the basis of a protected characteristic. Photos, captions, text overlay on photos and the content of videos can help indicate what an emoji means. The list is global and does not contain the “OK” emoji. Meta decided the posts did not violate the Dangerous Organizations and Individuals policy. Meta noted that the flag shown in the posts was used in South Africa between 1928 and 1994, including the apartheid era and the years preceding it. The company acknowledged that since the end of apartheid, this flag has sometimes been used in historical commemoration but is most often used as a symbol of Afrikaner heritage and apartheid. 
However, it also recognized that the flag represents other meanings, including South Africans’ connections to different aspects of that period such as personal experiences, military service and other aspects of citizenship. Regarding Meta’s prohibition on explicit glorification, support or representation of hateful ideologies, the company noted in its guidance to reviewers that only Nazism, white supremacy, white nationalism and white separatism are named as hateful ideologies. Meta did, however, explain to the Board that it removes “praise of segregation policies” like those implemented during apartheid in South Africa as white separatism. In response to Board requests for examples, Meta said it would remove a statement like “apartheid was wonderful” in most instances, but this is not an example provided to reviewers in the enforcement guidance. Examples of policy violations provided to reviewers include, among others, “white supremacy is the right thing” and “yes, I am a white nationalist.” Meta considered that the soldier post’s statement, “Share if you served under this flag,” did not glorify or support a designated hateful ideology. Likewise, the photo grid post’s caption describing the apartheid era as the “good old days” and asking users to “read between the lines” [wink emoji, “OK” emoji], combined with the apartheid flag and historical images of that era do not, by themselves, glorify or support a hateful ideology. While Meta acknowledges the “OK” emoji is in some contexts associated with the white power movement, Meta’s view is it predominantly means “okay,” including in South Africa. Meta concluded its use here is not meant to glorify or support a hateful ideology. As part of its integrity efforts for the May 2024 South African elections, Meta ran anti-hate speech and misinformation campaigns on its platforms and local radio in the election lead-up. These campaigns were designed to educate people about identifying and reporting hate speech and misinformation online. The Board asked questions on the renamed Hateful Conduct and Dangerous Organizations and Individuals policies and their enforcement, which symbols and ideologies could violate these policies and Meta’s electoral integrity efforts in South Africa. Meta responded to all questions. 4. Public Comments The Oversight Board received 299 public comments that met the terms for submission . Of those, 271 were submitted from sub-Saharan Africa, 10 from Europe, four from Central and South Asia, five from the US and Canada, seven from Middle East and North Africa, and two from Asia-Pacific and Oceania. Because the public comments period closed before January 7, 2025, none of the comments address the policy changes Meta made on that date. To read public comments submitted with consent to publish, click here . The submissions covered the following themes: what the apartheid-era flag meant in South African history and politics; the impact of displaying it on non-whites and efforts to build a multi-cultural South Africa, and whether it should be allowed on Meta’s platforms; and, coded uses of online symbols and recommended approaches to moderating visual images that may constitute implicit attacks against protected groups. 5. Oversight Board Analysis The Board selected these cases to address Meta’s respect for freedom of expression and other human rights in the context of an election, and how it treats imagery associated with South Africa’s recent history of apartheid. 
These cases fall within the Board’s strategic priorities of Elections and Civic Space and Hate Speech Against Marginalized Groups. The Board analyzed Meta’s decisions in these cases against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of these cases for Meta’s broader approach to content governance. 5.1 Compliance With Meta’s Content Policies Hateful Conduct (formerly Hate Speech) Community Standard The Board notes that Meta’s prohibition on “calls or support for exclusion or segregation” based on a protected characteristic is open to at least two interpretations, neither of which is impacted by the January 7 policy changes. The majority of the Board, noting Meta’s paramount value of voice, favors a narrow reading of the rule requiring advocacy for exclusion or segregation. A minority, noting Meta’s value of dignity, applies a broader reading, interpreting the prohibition to also encompass support for exclusion or segregation more generally. The majority of the Board finds that neither post violates this prohibition. While the posts appear to display nostalgia for the apartheid era, they do not advocate reinstituting apartheid or any other form of racial exclusion. Considering the soldier post, the majority recognizes that many people see the 1928–1994 flag as a symbol of apartheid. However, the flag itself, combined with a statement about military service, does not advocate exclusion or segregation. Additional elements would need to be present in the post to make it violating. While the flag is invoked positively in this post, that context is specific to military service and there is no sufficiently clear statement or reference that apartheid or similar policies should be reinstituted. Notwithstanding how divisive or insensitive sharing this flag may be to many in present-day South Africa, it would be incorrect to presume, without more evidence, that this post advocates racial exclusion or segregation that would violate this policy. The majority similarly finds that the photo grid post, with the image of the 1928–1994 flag alongside photographs of apartheid-era South Africa and the caption, does not advocate segregation or exclusion. They feasibly evoke general nostalgia for the period they depict. The majority acknowledges that the phrases “the good old days” and “read between the lines,” and the winking face and “OK” emojis are all, in combination with the photographs, indicators of a racist message that change how the images alone would be perceived. Nevertheless, Meta’s Hateful Conduct policy does not prohibit the expression of all racially insensitive or even racist viewpoints. The post, taken as a whole, does not rise to the level of advocacy for the reinstitution of apartheid or other forms of racial segregation or exclusion and is therefore permitted. For a minority, the 1928–1994 flag is an unambiguous and direct symbol of apartheid. When shared with a positive or neutral reference (rather than with condemnation), it is contextually understood in South Africa as support for racial segregation and exclusion and therefore is violating. For this minority, an innocuous display of the flag is not possible and can only be interpreted as support for the racial exclusion of the apartheid-era (also see public comments, including from the South African Human Rights Commission and Nelson Mandela Foundation, noting the 2023 SCA decision , PC-30759; PC-30771 ; PC-30768; PC-30772; PC-30774). 
The apartheid-era flag has also been co-opted by white nationalist movements in other parts of the world (PC-30769). For these reasons, the minority finds that both posts constitute support for racial exclusion. The soldier post, encouraging others to reshare the flag, can only be understood as support for the segregationist policy the flag represents. Considering the photo grid post as a whole, the images and caption make the post’s support for racial exclusion and segregation clear. As the post includes the flag without condemning or awareness-raising context, it violates on this basis alone. In addition, the other photographs appear to be stock images of aspects of life that were segregated; they do not tell a personal story of nostalgia, as the caption also makes clear. The use of the white power “OK” emoji in the caption is significant. It is understood by white supremacists globally as covert hate speech and a dog whistle, literally spelling out the letters “W” (for white) with three fingers and a “P” (for power) with the connecting thumb and index finger (see PC-30768). Its inclusion here was not in isolation. When accompanied with images of apartheid, a reference to “the good old days” and an invitation to users to “read between the lines,” together with a wink emoji, even a person not accustomed to white supremacist symbology is left in no doubt that this post supports racial exclusion and is therefore violating. For the minority, in reaching this conclusion, it is important to understand how the use of racist language and symbols online has adapted to evade content moderation, and how more subtle (but nevertheless direct) expressions of support for exclusion can be used to connect like-minded people. As here, coded or indirect hate speech can be alarmingly non-ambiguous, even when it leaves literal statements of intended meaning unsaid. Dangerous Organizations and Individuals Community Standard Through questions the Board asked Meta, the Board understands that the company’s designation of white separatism as a hateful ideology includes South African apartheid. However, internal guidance to Meta’s reviewers could make this more explicit by providing broader examples of violations. As addressed in Section 5.2 (legality) below, Meta’s rules on designated ideologies are vague. The Board unanimously finds that both posts violate the Dangerous Organizations and Individuals Community Standard, but for different reasons. For the majority, both posts meet Meta’s definition of unclear references to white separatism, which the policy prohibits. For a minority, both posts rise to the level of glorification of white separatism. The Board notes that the 1928–1994 South African flag cannot be decoupled from apartheid as a form of white separatist ideology. It was the national flag during two decades of legalized racial discrimination preceding apartheid and from when apartheid was instituted in 1948. For the majority, the soldier post, which encourages others to reshare if they served in the military under the flag, does not explicitly glorify apartheid as a form of white supremacy in its express and positive reference to military service. Similarly, the photo grid post does not indicate whether it is alluding to personal experiences during the apartheid-era or glorifying it. As noted above, however, there are several indicators of a racist message in this post, most notably the use of the “OK” emoji alongside the flag. 
For the majority, the positive but indirect indicators in both posts constitute a violating “unclear reference” to white separatism but are not sufficiently explicit to amount to “glorification.” For a minority of the Board, both posts meet the threshold for explicit glorification of white separatist ideology for the same reasons they constitute support for racial exclusion or segregation under the Hateful Conduct policy. In the soldier post, positive reference to the apartheid-era flag as an inherent symbol of white separatism, including in the context of military service, constitutes glorification of that ideology, even without apartheid policies being specifically mentioned. For the photo grid post, the combination of the white power symbol (“OK” emoji), the flag and the phrase “the good old days,” also explicitly glorifies this ideology. For these Board Members, users reporting both posts and the comments the posts attracted confirm that the content’s glorification of apartheid was well understood by audiences. Many reactions to the posts demonstrate how white separatists’ crude communications can creatively evade content moderation. They also show how networked hateful actors can exploit the design of Meta’s platforms to spread their message, identify new members and expand their numbers. 5.2 Compliance With Meta’s Human Rights Responsibilities The majority of the Board finds that keeping both posts up on the platform was consistent with Meta’s human rights responsibilities. A minority disagrees, finding that removal would be consistent with these responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19 of the International Covenant on Civil and Political Rights (ICCPR) provides for broad protection of expression, including views about politics, public affairs and human rights ( General Comment No. 34 , paras. 11-12). The UN Human Rights Committee has highlighted that the value of expression is particularly high when discussing political issues (General Comment No. 34, paras. 11, 13). It has emphasized that freedom of expression is essential for the conduct of public affairs and the effective exercise of the right to vote (General Comment No. 34, para. 20; also see General Comment No. 25 , paras. 12 and 25). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights, which Meta itself has committed to in its Corporate Human Rights Policy . The Board does this in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” ( A/74/486 , para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). 
Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance … to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” ( Ibid. ). When applied to private actors’ governance of online speech, rules should be clear and specific ( A/HRC/38/35 , para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance regarding their enforcement. These cases highlight two problems of clarity regarding Meta’s Hateful Conduct prohibitions. First, Meta provided conflicting responses to the Board on whether “direct attacks” include implicit statements or not (similar concerns were raised in the Board’s Knin Cartoon case). Additionally, whether the rule on “calls or support for exclusion or segregation” is limited to advocacy for exclusion or encompasses any broader support of exclusion or segregation is also unclear. This is compounded by a lack of global examples of violations provided to reviewers, with none encompassing apartheid. The Dangerous Organizations and Individuals Community Standard also presents conflicting language on Meta’s approach to hateful ideologies. In some parts, it specifies that unclear references to hateful ideologies are prohibited, while in others it implies that only “explicit glorification, support or representation” is prohibited. The internal guidance provided to reviewers states that “[r]eferences, [g]lorification, [s]upport, or [r]epresentation” are all prohibited. The list of prohibitions under the “we remove” section of the policy does not refer to the rule on hateful ideologies at all, creating further confusion. While the Board finds “white separatism” should implicitly include apartheid as implemented in South Africa, Meta’s internal guidance to reviewers does not make this explicit nor include sufficient examples relevant to the South African context. The examples of violating content provided to the Board by Meta in response to questions (e.g., “apartheid was wonderful,” “white supremacy is the right thing”) do not reflect the realities of how racial supremacist messaging is often framed. At the same time, the Board notes that while apartheid as implemented in South Africa is inherently intertwined with white separatism and white supremacy, the concept of apartheid in international law applies to the intentional dominion of any racial group over another to systematically oppress them ( Rome Statute of the International Criminal Court , Article 7(2)(h); Apartheid Convention , Article 2). This raises questions as to why apartheid is not listed as a standalone designation. As Meta’s policies are global, several Board Members also questioned why Meta’s listing is centered around ideologies that may present risks in Global Minority regions while remaining silent on many comparable hateful ideologies in Global Majority regions. II. Legitimate Aim Any restriction on freedom of expression should pursue one or more of the legitimate aims listed in the ICCPR, which includes protecting the rights of others (Article 19, para. 3, ICCPR ). The Board has previously recognized that the Hate Speech Community Standard pursues the legitimate aim of protecting the rights of others. Those rights include the rights to equality and non-discrimination (Article 2, para. 1, ICCPR; Article 2 and 5 ICERD ). 
This is true also of the revised Hateful Conduct policy. Similarly, the Board considers that the Dangerous Organizations and Individuals policy, seeking to “prevent and disrupt real-world harm,” pursues the legitimate aim of protecting the rights of others, such as the right to life (ICCPR, Article 6) and the right to non-discrimination and equality (ICCPR, Articles 2 and 26). III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The majority of Board Members finds that keeping up both posts is in line with Meta’s human rights responsibilities, and removal would not be necessary and proportionate. These Board Members acknowledge that apartheid’s legacy persists and has had long-term consequences felt across South Africa today. At the same time, international human rights law affords heightened protection to freedom of expression that relates to political participation, including in the context of elections (General Comment No. 25, paras. 12 and 25). In the Politician’s Comments on Demographic Changes case, the Board affirmed that expressions of controversial opinion are protected by international human rights standards. Both these posts constitute protected expression. Even if considered to be “deeply offensive,” this does not convert them to incitement to likely and imminent discrimination (see General Comment No. 34, (2011), para. 11; see also para. 17 of the 2019 report of the UN Special Rapporteur on freedom of expression, A/74/486 ). The majority emphasizes that the UN Special Rapporteur on freedom of expression has made it clear that speech bans can be justified when there is imminent and likely concrete harm. But when the harms are not likely or imminent, other measures can be deployed (see A/74/486, paras. 13, 54). Similarly, the UN Human Rights Committee has stated: “Generally, the use of flags, uniforms, signs and banners is to be regarded as a legitimate form of expression that should not be restricted, even if such symbols are reminders of a painful past. In exceptional cases, where such symbols are directly and predominantly associated with incitement to discrimination, hostility or violence, appropriate restrictions should apply,” (General Comment No. 37 on the right of peaceful assembly, CCPR/C/GC/37 , para. 51). For the majority, there are concerns about the excessive breadth of the Dangerous Organizations and Individuals Community Standard’s prohibition on “unclear references.” As Meta looks to reduce mistakes in its content moderation, as announced on January 7, these Board Members encourage an examination of how accurate and precise the enforcement of the “unclear references” rule is, as well as the compatibility of removals with Meta’s human rights responsibilities. The Board has often used the Rabat Plan of Action’s six-factor test to assess if incitement to violence or discrimination is likely and imminent. The majority finds this was not met by either post. For the majority, the likelihood of imminent discrimination or violence posed by the content is low for a variety of reasons. As noted above, the historical context of apartheid in South Africa and its continuing legacy is important to the interpretation of these posts. 
At the same time, the country’s relatively stable representative democracy since the end of apartheid and its robust legal framework for protecting human rights are also relevant, particularly as the country was holding elections at the time these posts were shared. Experts consulted by the Board noted that white supremacist rhetoric was not a major issue during the May 2024 elections. They said the period leading up to those elections was not characterized by interracial violence or calls for violence from the white minority against other racial or ethnic groups. Neither post is from a high-profile or influential speaker, reducing the risk that either post would persuade anyone to engage in imminent acts of discrimination or violence. Neither post includes calls for action. The posts do not contain a clear intent to advocate for future acts of discrimination or violence, nor would they be understood as a call to people to engage in such acts. Given these various factors, the majority determines that it was neither likely nor imminent that violence or discrimination would result from these posts. Banning highly offensive speech that does not incite imminent and likely harm does not make intolerant ideas disappear. Rather, people with those ideas are driven to other platforms, often with like-minded people rather than a broader range of individuals. This may exacerbate intolerance instead of enabling a more transparent, public discourse about the issues. The majority believes a variety of other content moderation tools short of removals could have served as a less intrusive means to achieve legitimate aims in these cases. The majority acknowledges the potential negative emotional ramifications of content in these cases as well as Meta’s legitimate aim of seeking to prevent discrimination. As the Board stated in one of its very first opinions (Claimed Covid Cure), the company should first seek to achieve legitimate aims by deploying measures that do not infringe on speech. If that is not possible, the company should select the least intrusive tool for achieving the legitimate aim. Then, it should monitor that the selected tool is effective. Meta should use this framework in publicly justifying its rules and enforcement actions. Indeed, the UN Special Rapporteur on freedom of expression has noted (A/74/486, para. 51): “Companies have tools to deal with content in human rights-compliant ways, in some respects a broader range of tools than that enjoyed by States.” The Board urges Meta to transparently explore expanding its enforcement toolkit and introduce intermediate measures in enforcing its Hateful Conduct Community Standard, instead of defaulting to a binary choice of keep up or take down. In the Myanmar Bot case, the Board found that “heightened responsibilities should not lead to default removal, as the stakes are high in both leaving up harmful content and removing content that poses little or no risk of harm.” The Board urges Meta to examine how removing content can be an extreme measure that adversely impacts freedom of expression online. It also urges the company to consider other tools, such as the removal of content from recommendations or reduced distribution in users’ feeds, in appropriate circumstances. A minority of Board Members finds that removing both posts would be a necessary and proportionate limit on freedom of expression to ensure respect for the right to equality as well as freedom from discrimination for non-white South Africans.
The minority is guided by the Rabat Plan factors to assess the risks posed by potential hate speech, including the harms these posts contributed to (op. cit.). In particular, a minority notes the public comments from the Nelson Mandela Foundation and the South African Human Rights Commission, among others. These confirm the various ways in which expression on Meta’s platforms, supporting, justifying or otherwise glorifying segregation, contributes to the persistence of discrimination following apartheid (PC-30759; PC-30771; PC-30768; PC-30772; PC-30774). Comments beneath each post, largely in Afrikaans and revealing a sense of white supremacy rooted in colonialism, confirm for this minority that the speakers’ intent to advocate hatred in an environment of severe discrimination succeeded. The minority notes that in the Depiction of Zwarte Piet case, the majority of the Board upheld the removal of a post based on its effects on the self-esteem and mental health of Black people, even when those effects may not have been directly intended by the speaker. This case is relevant beyond South Africa. Experts the Board consulted noted that symbols of apartheid, including the 1928–1994 flag, have been co-opted by white nationalist movements in other parts of the world too. One example is Dylann Roof, who gunned down nine members of a Black church congregation in the United States in 2015. A photo of Roof included on his social media shows him wearing a jacket with a patch of the apartheid-era flag (PC-30769). A minority, moreover, reiterates that Meta, as a private actor, may remove hate speech that falls short of the threshold of incitement to imminent discrimination or violence when this meets the ICCPR Article 19(3) requirements of necessity and proportionality (report A/HRC/38/35, para. 28). In the South Africa Slurs case, the Board upheld Meta’s removal of a racial slur, relying heavily on the particularities of the South African context. For a minority in this case, the removal of both posts is necessary not only to prevent discrimination but also to ensure that the accumulation of hatred on the platform does not have a chilling effect on the freedom of expression of people repeatedly targeted by hate speech (see also Depiction of Zwarte Piet, Communal Violence in Indian State of Odisha, Armenians in Azerbaijan and Knin Cartoon). For the minority, the consequences on users’ human rights from content moderation (specifically, the removal of speech and feature limits or suspensions for recurring violations) are significantly different from those of enforcing laws on hate speech (such as fines or imprisonment). For these reasons, a minority finds that removing both posts in accordance with the Hateful Conduct rule on exclusion, as well as the Dangerous Organizations and Individuals prohibition on “glorification,” would be necessary and proportionate. A minority notes that accurately scaled enforcement of the Dangerous Organizations and Individuals exception on social and political discourse should ensure this set of rules is not overenforced. Human Rights Due Diligence Principles 13, 17(c) and 18 of the UNGPs require Meta to engage in ongoing human rights due diligence for significant policy and enforcement changes, which the company would ordinarily do through its Policy Product Forum, including engagement with impacted stakeholders.
The Board is concerned that Meta’s January 7, 2025, policy and enforcement changes were announced hastily, in a departure from regular procedure, with no public information shared as to what, if any, prior human rights due diligence it performed. Now that these changes are being rolled out globally, it is important that Meta ensures adverse impacts of these changes on human rights are identified, mitigated and prevented, and publicly reported. This should include a focus on how communities may be differently impacted, including in Global Majority regions. In relation to enforcement changes, due diligence should be mindful of the possibilities of both overenforcement (Call for Women’s Protest in Cuba, Reclaiming Arabic Words) and underenforcement (Holocaust Denial, Homophobic Violence in West Africa, Post in Polish Targeting Trans People). The Board notes that many of these changes are being rolled out worldwide, including in Global Majority countries like South Africa and others with a recent history of crimes against humanity, not limited to apartheid. It is especially important that Meta ensures adverse impacts of these changes on human rights in such regions are identified, mitigated, prevented and accounted for publicly as soon as possible, including through robust engagement with local stakeholders. The Board notes that in 2018, Meta cited the failure to remove hate speech from Facebook in crisis situations like Myanmar as motivation for increasing reliance on automated enforcement. In many parts of the world, users are less likely to engage with Meta’s in-app reporting tools for a variety of reasons, making user reports an unreliable signal of where the worst harms could be occurring. It is therefore crucial that Meta fully considers how any changes to automated detection of potentially violating content may have uneven effects globally, for both under- and overenforcement, especially in countries experiencing current or recent crises, war or atrocity crimes. 6. The Oversight Board’s Decision The Oversight Board upholds Meta’s decisions to leave up both pieces of content. 7. Recommendations Content Policy 1. As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7 updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact populations in Global Majority regions. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity. The Board will consider this recommendation implemented when Meta provides the Board with robust data and analysis on the effectiveness of its prevention or mitigation measures on the cadence outlined above, and when Meta reports on this publicly. 2. To improve the clarity of its Dangerous Organizations and Individuals Community Standard, Meta should adopt a single, clear and comprehensive explanation of how its prohibitions and exceptions under this Community Standard apply to designated hateful ideologies. The Board will consider this recommendation implemented when Meta adopts a single, clear and comprehensive explanation of its rule and exceptions related to designated hateful ideologies (under “we remove”). 3.
To improve the clarity of its Dangerous Organizations and Individuals Community Standard, Meta should list apartheid as a standalone designated hateful ideology in the rules. The Board will consider this recommendation implemented when Meta adds apartheid to its list of designated hateful ideologies. Enforcement 4. To improve clarity for reviewers of its Dangerous Organizations and Individuals Community Standard, Meta should provide reviewers with more global examples of prohibited glorification, support and representation of hateful ideologies, including examples that do not directly name the listed ideology. The Board will consider this recommendation implemented when Meta provides the Board with updated internal guidance containing more global examples, including ones that do not directly name the listed ideology. *Procedural Note: Return to Case Decisions and Policy Advisory Opinions" bun-fjipx1xo,Criminal Allegations Based on Nationality,https://www.oversightboard.com/decision/bun-fjipx1xo/,"September 25, 2024",2024,,"Freedom of expression,War and conflict",Hateful conduct,Overturned,"Russia,United States","The Board has reviewed three cases together, all containing criminal allegations made against people based on nationality. In overturning one of Meta’s decisions to remove a Facebook post, the Board has considered how these cases raise the broader issue of how to distinguish content that criticizes state actions and policies from attacks against people based on their nationality.",47173,7166,"Multiple Case Decision September 25, 2024 The Board has reviewed three cases together, all containing criminal allegations made against people based on nationality. In overturning one of Meta’s decisions to remove a Facebook post, the Board has considered how these cases raise the broader issue of how to distinguish content that criticizes state actions and policies from attacks against people based on their nationality. Overturned FB-25DJFZ74 Platform Facebook Topic Freedom of expression,War and conflict Standard Hateful conduct Location Russia,United States Date Published on September 25, 2024 Upheld IG-GNKFXL0Q Platform Instagram Topic Freedom of expression,War and conflict Standard Hateful conduct Location India,Pakistan Date Published on September 25, 2024 Upheld TH-ZP4W1QA6 Platform Threads Topic Freedom of expression,War and conflict Standard Hateful conduct Location Israel Date Published on September 25, 2024 Criminal Allegations Based on Nationality The Board has reviewed three cases together, all containing criminal allegations made against people based on nationality. In overturning one of Meta’s decisions to remove a Facebook post, the Board has considered how these cases raise the broader issue of how to distinguish content that criticizes state actions and policies from attacks against people based on their nationality. In making recommendations to amend Meta’s Hate Speech policy and address enforcement challenges, the Board has opted for a nuanced approach that works for moderation at-scale, with guardrails to prevent negative consequences. As part of the relevant Hate Speech rule, Meta should develop an exception for narrower subcategories that use objective signals to determine whether the target of such content is a state or its policies, or a group of people.
About the Cases In the first case, a Facebook post described Russians and Americans as “criminals,” with the user calling the latter more “honorable” because they admit their crimes in comparison with Russians who “want to benefit from the crimes” of Americans. This post was sent for human review by Meta’s automated systems, but the report was automatically closed, so the content remained on Facebook. Three months later, when Meta selected this case to be referred to the Board, Meta’s policy subject matter experts decided the post did violate the Hate Speech Community Standard and removed it. Although the user appealed, Meta decided the content removal was correct following further human review. For the second case, a user replied to a comment made on a Threads post. The post was a video about the Israel-Gaza conflict and included a comment saying, “genocide of terror tunnels?” The user’s reply stated: “Genocide … all Israelis are criminals.” This content was sent to human review by Meta’s automated systems and then removed for violating the Hate Speech rules. The third case concerns a user’s comment on an Instagram post in which they described “all Indians” as “rapists.” The original Instagram post shows a video in which a woman is surrounded by men who appear to be looking at her. Meta removed the comment under its Hate Speech rules. All three cases were referred to the Board by Meta. The challenges of handling criminal allegations directed at people based on nationality are particularly relevant during crises and conflict, when they “may be interpreted as attacking a nation’s policies, its government or its military rather than its people,” according to the company. Key Findings The Board finds that Meta was incorrect to remove the Facebook post in the first case, which mentions Russians and Americans, because there are signals indicating the content is targeting countries rather than citizens. Meta does not allow “dehumanizing speech in the form of targeting a person or group of persons” based on nationality by comparing them to “criminals,” under its Hate Speech rules. However, this post’s references to crimes committed by Russians and Americans are most likely targeting the respective states or their policies, a conclusion confirmed by an expert report commissioned by the Board. In the second and third cases, the majority of the Board agrees with Meta that the content did break the rules by targeting persons based on nationality, with the references to “all Israelis” and “all Indians” indicating people are being targeted. There are no contextual clues that either Israeli state actions or Indian government policies respectively were being criticized in the content. Therefore, the content should have been removed in both cases. However, a minority of the Board disagrees, noting that content removal in these cases was not the least intrusive means available to Meta to address the potential harms. These Board Members note that Meta failed to satisfy the principles of necessity and proportionality in removing the content. On the broader issue of policy changes, the Board believes a nuanced and scalable approach is required, to protect relevant political speech without increasing the risk of harm against targeted groups. First, Meta should find specific and objective signals that would reduce both wrongful takedowns and harmful content being left up. 
Without providing an exhaustive list of signals, the Board determines that Meta should allow criminal allegations when directed at a specific group likely to serve as a proxy for the state, such as police, military, army, soldiers, government and other state officials. Another objective signal would relate to the nature of the crime being alleged, such as atrocity crimes or grave human rights violations, which can be more typically associated with states. This would mean that posts in which certain types of crime are linked to nationality would be treated as political speech criticizing state actions and remain on the platform. Additionally, Meta could consider linguistic signals that could distinguish between political statements and attacks against people based on nationality. While such distinctions will vary across languages, making the context of posts even more critical, the Board suggests the presence or absence of the definite article could be such a signal. For example, words such as “all” (“all Americans commit crimes”) or “the” (“the Americans commit crimes”) could indicate the user is making a generalization about an entire group of people, rather than their nation state. Having a more nuanced policy approach will present enforcement challenges, as Meta has pointed out and the Board acknowledges. The Board notes that Meta could create lists of actors and crimes very likely to reference state policies or actors. One such list could include police, military, army, soldiers, government and other state officials. For photos and videos, reviewers could look for visual clues in content, such as people wearing military uniform. When such a clue is combined with a generalization about criminality, this could indicate the user is referring to state actions or actors, rather than comparing people to criminals. The Board urges Meta to seek enforcement measures aimed at user education and empowerment when limiting freedom of expression. In response to one of the Board’s previous recommendations, Meta has already committed to sending notifications to users of potential Community Standard violations. The Board considers this implementation an important step towards user education and empowerment on Meta’s platforms. The Oversight Board’s Decision The Oversight Board overturns Meta’s decision to take down the content in the first case, requiring the post to be restored. For the second and third cases, the Board upholds Meta’s decisions to take down the content. The Board also makes recommendations to Meta, set out later in this decision. * Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background These cases concern three content decisions made by Meta, one each on Facebook, Threads and Instagram. Meta referred the three cases to the Board. The first case involves a Facebook post in Arabic from December 2023, which states that both Russians and Americans are “criminals.” The content also states that “Americans are more honorable” because they “admit their crimes” while Russians “want to benefit from the crimes” of Americans. After one of Meta’s automatic classification tools (a hostile speech classifier) identified the content as potentially violating, the post was sent for human review. However, the report was automatically closed, so the content was not reviewed and remained on Facebook. In March 2024, when Meta selected this content for referral to the Board, the company’s policy subject matter experts determined the post violated the Hate Speech Community Standard.
It was then removed from Facebook. The user who posted the content appealed this decision to Meta. Following another stage of human review, the company decided content removal in this case was correct. The second case is about a user’s reply in English to a comment made on a Threads post from January 2024. The post was a video discussing the Israel-Gaza conflict, with a comment noting “genocide of terror tunnels” with a question mark. The reply said “genocide” and stated that “all Israelis are criminals.” Meta’s automatic classification tools (a hostile speech classifier) identified the content as potentially violating. Following human review, Meta removed the reply to the comment for violating its Hate Speech Community Standard . Meta’s policy subject matter experts also then determined the original decision to remove the content was correct, after the company identified this case as one to refer to the Board. The third case concerns a user’s comment in English on an Instagram post from March 2024, stating “as a Pakistani” that “all Indians are rapists.” The comment was in response to a video of a woman surrounded by a group of men who appear to be looking at her. Meta removed the comment after one of its automatic classification tools (a hostile speech classifier) identified the comment as potentially violating the Hate Speech Community Standard . After Meta selected this content to refer to the Board, the company’s policy subject matter experts determined the original decision to remove the content was correct. In none of the three cases did the users appeal Meta’s decisions to the Board, but Meta referred all three. According to expert reports commissioned by the Board, “accusations of criminal behavior against nations, state entities and individuals are prevalent on Meta’s platforms and in the general public discourse.” Negative attitudes towards Russia on social media have increased since the Russian invasion of Ukraine in February 2022. According to experts, Russian citizens are often accused on social media of supporting their authorities’ policies, including Russia’s aggression towards Ukraine. Russian citizens, however, are less often accused of being “criminals” – a word used more frequently in reference to Russia’s political leadership and the soldiers of the Russian army. As per linguistic experts consulted by the Board, the Arabic translation of “Americans” and “Russians” in the first case could be used to express resentment towards American and Russian policies, governments and politics respectively, rather than against the people themselves. Experts also report that mentions of Israel and Israelis in relation to genocide have spiked on Meta’s platforms since the beginning of the country’s military operations in Gaza, which followed the Hamas terrorist attack on Israel in October 2023. The discourse in relation to accusations of genocidal actions has intensified, especially after the January 26, 2024, order of the International Court of Justice (ICJ), in which the ICJ ordered provisional measures against Israel under the Convention on the Prevention and Punishment of the Crime of Genocide in the South Africa v. Israel case. Since its adoption, this order has been a subject of both criticism and endorsement . 
Experts also argue that accusations against the Israeli government “often become the basis for antisemitic hate speech and incitement” given that all Jewish people, regardless of citizenship, are often “ associated with Israel in public opinion.” Finally, experts also explained that generalizations about Indians related to rape are rare on social media. While the characterization of “Indians as rapists” has occasionally surfaced in the context of alleged sexual violence by Indian security forces in conflict areas, this rarely refers to “all Indians.” Most scholarly , journalistic and human rights related documentation about these incidents clearly calls out abuses by the army and does not refer to a larger set of the population. 2. User Submissions The authors of the posts were notified of the Board’s review and provided with an opportunity to submit a statement. None of the users submitted a statement. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Meta’s Hate Speech policy rationale defines hate speech as a direct attack against people – rather than concepts or institutions – on the basis of protected characteristics, including national origin, race and ethnicity. Meta does not allow hate speech on its platform because it “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” Tier 1 of the Hate Speech policy prohibits “dehumanizing speech or imagery in the form of comparisons, generalizations or unqualified behavioral statements (in written or visual form)” about “criminals.” Meta’s internal guidelines to content reviewers on how to enforce the policy define generalizations as “assertions about people’s inherent qualities.” Additionally, Meta’s internal guidelines define “qualified” and “unqualified” behavioral statements and provide examples. Under these guidelines, “qualified statements” do not violate the policy, while “unqualified statements” are violating and removed. The company allows people to post content containing qualified behavioral statements that can include specific historical, criminal or conflict events. According to Meta, unqualified behavioral statements “explicitly attribute a behavior to all or a majority of people defined by a protected characteristic.” II. Meta’s Submissions Meta removed all three posts for “targeting people with criminal allegations based on nationality,” as they contained generalizations about a group’s inherent qualities, as opposed to their actions. Meta noted that the statements are not explicitly limited to those involved in the alleged criminal activity, and do not contain further context to indicate the statements are tied to a particular conflict or criminal event. When Meta referred these cases to the Board, it stated that they present a challenge on how to handle criminal allegations directed at people based on their nationality under the Hate Speech policy. 
Meta told the Board that while the company believes the policy “strikes the right balance between voice and safety in most circumstances,” there are situations, particularly in times of crisis and conflict, “where criminal allegations directed toward people of a given nationality may be interpreted as attacking a nation’s policies, its government, or its military rather than its people.” While these cases do not constitute a request for a policy advisory opinion, Meta presented for the Board’s consideration alternative policy approaches to assess whether and how the company should amend its current approach of removing criminal allegations against people based on nationality, while allowing criticism of states for alleged criminal activities. In response to the Board’s questions, Meta stated that the company did not conduct new stakeholder outreach to develop the policy alternatives for these cases but instead considered extensive stakeholder input received as part of other policy development processes. It became clear to Meta that attacks characterizing members of nation states as “war criminals” could be leading to over-enforcement, and limiting legitimate political speech, since there tends to be a link between this type of attack and actions taken by states. Under the first alternative, Meta envisaged introducing an escalation-only framework to distinguish between attacks based on national origin as opposed to attacks targeting a concept. This would require identifying factors to help with this determination such as whether a particular country is involved in a war or crisis, or whether the content references the country or its military in addition to its people. In other words, if the automated systems identify the post as likely violating, it would be taken down unless, following an escalation to Meta’s subject matter experts, the latter conclude otherwise. Meta added that if this type of framework is adopted, the company would likely use this framework as a backdrop to the existing concepts versus people escalation-only policy under the Hate Speech policy. This means that Meta “would not allow content, even if it determined the content was in fact targeting a nation rather than people, if it would otherwise be removed, under the concepts versus people framework.” Under the existing concepts versus people escalation-only policy, Meta takes down “content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination.” Meta noted that this new framework would enable the company to consider more contextual cues, but it would likely be applied rarely and only on-escalation. In addition, “as escalation-only policies are only applied to content escalated to Meta’s specialized teams, they may be perceived as inequitable to those who lack access to these teams and whose content is reviewed at-scale.” Under the second alternative, Meta presented a range of sub-options to address the risk of over-enforcement at-scale. Unlike the first alternative, this would not require additional context for content to be considered for assessment and would apply at-scale. The sub-options include: (a) Allowing all criminal comparisons on the basis of nationality. Meta noted that this option would result in under-enforcement by leaving up some criminal comparisons that attack people based on their nationality with no clear connection to political speech. 
(b) Allowing all criminal comparisons to specific subsets of nationalities. Meta stated that a specific exception could be considered for subsets of nationalities likely to represent government or national policy (e.g., “Russian soldiers,” “American police” or “Polish government officials”), based on the assumption that these subsets are more likely to be a proxy for the government or national policy. (c) Distinguishing between different types of criminal allegations. Meta noted that references to some types of crimes may be more frequently tied to states or institutions or appear to be more political than others. The Board asked Meta questions on operational feasibility and trade-offs involved in the proposed alternative policy measures, and the interplay between existing policies and the proposed policy measures. Meta responded to all questions. 4. Public Comments The Oversight Board received 14 public comments that met the terms for submission. Of these, seven were from the United States and Canada, six from Europe and one from Asia Pacific and Oceania. To read public comments submitted with consent to publish, click here. The submissions covered the following themes: the implications of allegations of criminality against a whole nation in times of conflict, Meta’s Hate Speech Community Standard and Meta’s human rights responsibilities in conflict situations. 5. Oversight Board Analysis The Board accepted these referral cases to consider how Meta should moderate allegations of criminality based on nationality, particularly how the company should distinguish between attacks against persons based on nationality and references to state actions and actors during conflicts and crises. These cases fall within the Board’s strategic priorities of Crisis and Conflict Situations and Hate Speech Against Marginalized Groups. The Board examined Meta’s decisions in these cases by analyzing Meta’s content policies, values and human rights responsibilities. The Board also assessed how Meta should distinguish between speech that attributes criminality to individuals as members of a nationality and speech that attributes criminality to states. That distinction is adequate as a matter of principle, but its implementation is challenging, especially at-scale. 5.1 Compliance With Meta’s Content Policies I. Content Rules The Board finds that the pieces of content in the second and third cases violate Meta’s Hate Speech policy. The Board believes, however, that Meta’s decision to remove the content in the first case was incorrect, given there are signals indicating that the Facebook post is targeting countries and not their citizens. After reviewing Meta’s Hate Speech policy, the Board recommends that the company reduce reliance on broad default rules and instead develop narrower subcategories that use objective signals to minimize false positives and false negatives on a scalable level. For example, the company should allow criminal allegations against specific groups that are likely to serve as proxies for states, governments and/or their policies, such as police, military, army, soldiers, government and other state officials. The company should also allow comparisons that mention crimes more typically associated with state actors and dangerous organizations, as defined by Meta’s Dangerous Organizations and Individuals policy, particularly atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court.
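As a purely illustrative aid, and not a description of Meta’s actual tooling, the following minimal sketch shows how objective signals of this kind (proxy-actor terms, state-associated crimes, universal quantifiers such as “all”) might be combined in an at-scale check. The term lists, function name and examples are assumptions introduced only for this sketch.

```python
# Hypothetical illustration only: a simplified signal check for nationality-based
# criminal allegations, using the kinds of objective signals discussed above.
# The term lists and matching logic are assumptions for the sketch, not Meta's rules.

import re

# Groups likely to serve as proxies for states, governments or their policies.
STATE_PROXY_TERMS = {"police", "military", "army", "soldiers", "government", "officials"}

# Crimes more typically associated with state actors (e.g., Rome Statute atrocity crimes).
STATE_ASSOCIATED_CRIMES = {"war crimes", "genocide", "crimes against humanity", "apartheid"}

# Universal quantifiers that signal a generalization about an entire group of people.
UNIVERSAL_QUANTIFIERS = {"all", "every"}


def likely_state_criticism(text: str) -> bool:
    """Return True if the post carries signals that it targets a state or its
    policies rather than people based on nationality; False if it reads as a
    generalization about a protected group."""
    lowered = text.lower()
    words = set(re.findall(r"[a-z']+", lowered))

    # Signal 1: the allegation is directed at a state-proxy group (e.g., "soldiers").
    targets_proxy = bool(words & STATE_PROXY_TERMS)

    # Signal 2: the alleged crime is one typically attributed to state actors.
    names_state_crime = any(crime in lowered for crime in STATE_ASSOCIATED_CRIMES)

    # Signal 3: a universal quantifier ("all ...") points toward a generalization
    # about people, which weighs against treating the post as state criticism.
    generalizes_people = bool(words & UNIVERSAL_QUANTIFIERS)

    return (targets_proxy or names_state_crime) and not generalizes_people


# Illustrative inputs (not the actual posts in these cases):
print(likely_state_criticism("The Russian army is committing war crimes"))  # True
print(likely_state_criticism("All Israelis are criminals"))                 # False
```

In practice, signals of this kind would more plausibly feed into classifier training and reviewer guidance than operate as a standalone filter, given the linguistic and contextual variation discussed below.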
Individual Cases The Board finds that the post in the first case did not violate the prohibition against “dehumanizing speech in the form of targeting a person or a group of persons” based on nationality “with comparisons to, generalizations or unqualified behavioral statements about ... criminals,” under Meta’s Hate Speech Community Standard. An expert report commissioned by the Board indicated that the references to crimes committed by “Russians” and “Americans” are most plausibly read as targeting the respective states or their policies, not people from those countries. Moreover, the post compares Russians with Americans. Given the role both Russia and the United States play in international relations and politics, the comparison indicates that the user was referring to the respective countries, rather than the people. The Board concludes that the post in the first case is targeting states or their policies and, therefore, does not contain dehumanizing speech against persons based on nationality in the form of a generalization about criminals – and should be restored. The Board agrees with Meta that the content in the second and third cases did violate Meta’s Hate Speech Community Standard, as these posts do contain generalizations about “criminals,” which target persons based on nationality. The references to “all Israelis” and “all Indians” most plausibly target Israelis and Indians, not the respective nations or governments. Additionally, neither post contains sufficient context to conclude it is referring to a particular act or criminal event. Although the content in the second case was posted in response to another Threads user’s post containing a video discussing the Israel-Gaza conflict, the word “all” in reference to Israelis is a strong indication that the people as a whole are being targeted and not just the government. Moreover, while the content also includes a reference to “genocide,” there are no contextual signals unambiguously indicating that the user intended to refer to Israel’s state actions or policies, rather than to target Israelis based on their nationality. Similarly, no such context is present in the third case: the fact that the user is commenting on an Instagram video in which men look at a female figure indicates the user is likely to be targeting people. The men in the video have no apparent connection to the Indian government. Additionally, there is no indication the user was criticizing the Indian government’s policies or actions on rape. In the absence of unambiguous references serving as criticism of states, the Board concludes that the removal of content in the second and third cases was justified under Meta’s Hate Speech policy. Broader Issues Turning to the broader issues raised by the three cases, the Board acknowledges the challenges in distinguishing content criticizing state actions and policies from attacks against people based on nationality, especially during crises and conflicts. Thus, the Board believes that Meta should implement nuanced policy changes that result in relevant political speech being protected and left on Meta’s platforms, without increasing the risk of harm against targeted groups. In the Board’s understanding, this requires a scalable approach with guardrails to prevent adverse far-reaching consequences. The Board recommends that Meta find specific, objectively ascertainable signals that reduce false positives and false negatives in important subgroups of cases.
For example – and without purporting to provide an exhaustive list of such signals – the Board is of the view that nationality-based allegations of criminality should generally be allowed when they are directed at specific groups that are likely to serve as proxies for states, such as soldiers, army, military, police, government or other state officials. Another objective signal relates to the nature of the crime alleged in the challenged post. Some crimes, particularly atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court, are typically associated with state actors and dangerous organizations, while other crimes are almost exclusively committed by private individuals. Accordingly, posts that attribute the former type of crime to a nationality, when followed by references to state actions or policies, should be treated as political speech criticizing state action, and left on the platform, while the latter should generally be removed. Additionally, certain linguistic signals could serve a similar function of distinguishing between political statements and hate speech. While recognizing that inferences from such signals may vary from language to language, the Board suggests that the presence or absence of a definite article is likely to have significance. To say that “the Americans” commit crimes is not the same as saying that “Americans” commit crimes, as the use of the definite article may signal a reference to a particular group or location. Similarly, words like “all” are strong signals that the speaker is making generalizations about an entire group of people rather than their nation state. These distinctions may vary across languages, making contextual interpretations even more critical. At the same time, the Board considers that developing a framework that would only be available to Meta’s policy experts (an “escalation-only” rule), rather than to at-scale content reviewers, is an inadequate solution. In the Sudan’s Rapid Support Forces Video of Captive case, the Board learned that Meta’s human reviewers carrying out moderation at-scale “are not instructed or empowered to identify content that violates the company’s escalation-only” rules. Similarly in this case, Meta informed the Board that “escalation-only” rules can only be enforced if content is brought to the attention of Meta’s escalation-only teams, for example, through Trusted Partners or significant press coverage, or inquiries from content moderators about concerning trends, specialized teams in the region or internal experts such as Meta’s Human Rights Team or Civil Rights Legal Team. While the Board acknowledges this escalation-only framework would allow for expert analysis of the overarching context of a conflict situation, cues around the user’s intent and any links to state institutions, the Board considers that this approach would not result in distinguishing between permissible and impermissible posts in most cases, given that it would not be applied at-scale. Similarly, the Board finds that another of Meta’s alternatives, to allow all criminal comparisons based on nationality, is not a sufficiently nuanced approach and would result in the risk of under-enforcement of harmful content, which may be especially exacerbated in times of crisis. The Board considers this option overbroad as it may protect content targeting people, rather than states, their actions or policies.
II. Enforcement Action Meta has informed the Board about potential enforcement challenges associated with some of the more nuanced policy alternatives it provided to the Board, including potential difficulties with classifier training to enforce narrow exceptions and the increased complexity for human reviewers moderating at-scale. The company noted that under the current Hate Speech policy, all protected characteristic groups are treated equally, which makes it easier for human reviewers to apply the policy, and this also facilitates classifier training. In the Violence Against Women case, Meta informed the Board that “it can be difficult for at-scale content reviewers to distinguish between qualified and unqualified behavioral statements without taking a careful reading of context into account.” In the Call for Women’s Protest in Cuba case, Meta told the Board that because it is challenging to determine intent at-scale, its internal guidelines instruct reviewers to remove behavioral statements about protected characteristic groups by default when the user has not made it clear whether the statement is qualified or unqualified. While the Board acknowledges the enforcement challenges around nuanced policies, it finds that Meta could consider creating lists of actors and crimes that are very likely to reference state policies or actors, rather than people. For example, the list could include references to police, military, army, soldiers, government and other state officials. When it comes to photo and video content, Meta may instruct its human reviewers to consider visual cues in the content. For instance, content that features people wearing military attire coupled with generalizations about criminality may indicate the user’s intent to reference state actions or actors, rather than to generalize or compare people to criminals. The Board also notes that some crimes can more typically be committed by or attributed to state actors and dangerous organizations, and therefore could signal that the user’s intent is to criticize actions or policies of state actors or dangerous organizations. When enforcing against such content at-scale, Meta may consider focusing on atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court. In view of the enforcement challenges involved in minimizing false positives and false negatives at-scale, the Board recommends that Meta publicly share the results of the internal audits it conducts to assess the accuracy of human review and performance of automated systems in the enforcement of its Hate Speech policy. The company should provide the results in a way that allows these assessments to be compared across languages and/or regions. This recommendation is in line with the Board’s recommendation no. 5 from the Breast Cancer Symptoms and Nudity decision and recommendation no. 6 from the Referring to Designated Dangerous Individuals as “Shaheed” policy advisory opinion. Considering the complexities and nuances of the proposed policies, the Board underlines the importance of providing sufficient and detailed guidance to human reviewers to ensure consistent enforcement, in line with recommendation no. 1 below. 5.2 Compliance With Meta’s Human Rights Responsibilities The Board finds that Meta’s decision to remove the content in the first case was not consistent with the company’s human rights responsibilities.
The majority of the Board considers that removing the content in the second and third cases was in line with Meta’s human rights commitments, while a minority disagrees. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including about politics, public affairs and human rights, with expression about social or political concerns receiving heightened protection (General Comment No. 34, paras. 11-12). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights (UNGPs), which Meta itself has committed to in its Corporate Human Rights Policy. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” (A/74/486, para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” (Ibid.). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules and content reviewers should have clear guidance regarding their enforcement. The Board finds that, as applied to these three cases, Meta’s policy prohibiting dehumanizing speech against persons based on nationality in the form of comparisons to, generalizations or unqualified behavioral statements about “criminals” meets the legality test. While all three posts contain generalizations involving criminal allegations, the Board considers that the first case contains sufficient context to conclude that the user was referring to state actions or policies, and that this content should be restored. However, the content in the second and third cases targets people based on nationality, violating Meta’s Hate Speech policy. Further, the Board highlights that any new rules should be clear and accessible to users as part of Meta making changes to the policy. Thus, the Board urges Meta to update the language of the Hate Speech policy to reflect changes that will result from this decision and the policy recommendations that are adopted.
In the Violence Against Women, Knin Cartoon and Call for Women’s Protest in Cuba decisions, the Board found that content reviewers should have sufficient room and resources to take contextual cues into account in order to accurately enforce Meta’s policies. Therefore, to ensure consistent and effective enforcement, Meta should provide clear guidance about the new rules to its human reviewers, in line with recommendation no. 1 below. II. Legitimate Aim Any restriction on freedom of expression should also pursue at least one of the legitimate aims listed in the ICCPR, which includes protecting the “rights of others.” “The term ‘rights’ includes human rights as recognized in the Covenant and more generally in international human rights law,” (General Comment No. 34, para. 28). In line with its previous decisions, the Board finds that Meta’s Hate Speech policy, which aims to protect people’s right to equality and non-discrimination, pursues a legitimate aim that is recognized by international human rights law standards (see, for example, our Knin Cartoon decision). III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The UNGPs state that businesses should perform ongoing human rights due diligence to assess the impacts of their activities (UNGP 17) and acknowledge that the risk of human rights harms is heightened in conflict-affected contexts (UNGP 7). The UN Working Group on the issue of human rights and transnational corporations and other business enterprises noted that businesses’ diligence responsibilities should reflect the greater complexity and risk of harm in some scenarios ( A/75/212 , paras. 41-49). In the Myanmar Bot case, the Board found that “[Meta’s] heightened responsibilities should not lead to default removal, as the stakes are high in both leaving up harmful content and removing content that poses little or no risk of harm.” The Board further noted that “while Facebook’s concern about hate speech in Myanmar was well founded, it also must take particular care to not remove political criticism and expression, in that case supporting democratic governance.” While criticism of state policies, politics and actions, especially in crisis and conflict situations, is of heightened importance, attacks on persons based on nationality may be particularly harmful in the same context. Criminal allegations against people based on nationality may result in offline violence that targets people and contributes to the escalation of tensions between countries in a conflict setting. The majority of the Board finds that Meta’s decision to remove the content in the first case did not comply with the principles of necessity and proportionality, while the removals in the second and third cases were necessary and proportionate. The majority considers that in the absence of contextual cues to conclude that the users in the second and third cases were criticizing the Israeli and the Indian governments respectively, both content removals were justified. However, the majority concludes that such context is present in the first case, thereby making the removal in that case neither necessary nor proportionate, and requiring the post to be restored. 
The Board reiterates that context is key for assessing necessity and proportionality (see our Pro-Navalny Protests in Russia decision). The Board acknowledges the importance and challenges around identifying contextual cues within the content itself and taking into account the external context and “environment for freedom of expression” surrounding posted content, (see also our Call for Women’s Protest in Cuba decision). Regarding the content in the second case, the majority of the Board notes the reports that since October 7, the United Nations , government agencies and advocacy groups have warned about an increase in antisemitism and Islamophobia. The Anti-Defamation League, for example, reported that antisemitic incidents in the United States increased by 361% following the October 7 attacks. Countries across Europe have warned of rising hate crimes, hate speech and threats to civil liberties targeting Jewish and Muslim communities. When analyzing the challenges of enforcing Meta’s policies at-scale, the Board has previously emphasized that dehumanizing discourse that consists of implicit or explicit discriminatory speech may contribute to atrocities (see Knin Cartoon decision). In interpreting the Hate Speech Community Standard, the Board has also noted that even when specific pieces of content, seen in isolation, do not appear to directly incite violence or discrimination, during times of heightened ethnic tension and violence the volume of such content is likely to exacerbate the situation. At least in those circumstances, a social media company like Meta is entitled to take steps beyond those available to governments to make sure its platform is not used to foster and encourage hatred that leads to violence. In the absence of unambiguous references signaling criticism of the state, one of its institutions or policies, the majority of the Board concludes that the content in the second case constituted dehumanizing speech against all Israelis based on nationality. In the context of reports of increasing numbers of antisemitic incidents, including attacks on Jewish people and Israelis on the basis of their identity, such content is likely to contribute to imminent offline harm. Similarly, the majority of the Board takes note of the ongoing tensions between India and Pakistan , and the reports on instances of communal violence between Hindus and Muslims in India (see Communal Violence in the Indian State of Odisha decision). Therefore, the majority considers that the removal of the content in the third case was necessary and proportionate because it targeted Indians, rather than criticized the Indian government, contributing to an environment of hostility and violence. A minority of the Board disagrees with removal of the second and third posts. Global freedom of expression principles (as enshrined in ICCPR Article 19) require that limits on speech, including hate speech bans, meet necessity and proportionality principles, which entails an assessment of whether near term harm is likely and imminent from the posts. This minority is not convinced that content removal is the least intrusive means available to Meta to address potential harms in these cases as a broad array of digital tools are available for consideration (e.g., preventing the sharing of posts, demotions, labels, time-limited blocking, etc.). Meta’s failure to demonstrate otherwise does not satisfy the principle of necessity and proportionality. 
The Special Rapporteur has stated “just as States should evaluate whether a limitation on speech is the least restrictive approach, so too should companies carry out this kind of evaluation. And, in carrying out the evaluation, companies should bear the burden of publicly demonstrating necessity and proportionality,” (A/74/486, para 51) [emphasis added]. For the minority, Meta has failed to publicly demonstrate why removals are the least intrusive means and the majority has not made a persuasive case that the necessity and proportionality principle is satisfied in the second and third cases. While the majority of the Board upholds removal of the two violating posts in the second and third cases, it underlines the importance of seeking user education and user empowerment measures when limiting freedom of expression. The Board takes note of recommendation no. 6 in the Pro-Navalny Protests in Russia decision, in response to which Meta explored ways of notifying users of potential violations to the Community Standards before the company takes an enforcement action. The company has informed the Board that when the company’s automated systems detect with high confidence a potential violation in content that a user is about to post, Meta may inform the user that their post might violate the policy, allowing the user to better understand Meta’s policies, and then to decide whether to delete and post their content again without the violating language. Meta added that over the 12-week period from July 10, 2023, to October 1, 2023, across all notification types, the company notified users across more than 100 million pieces of content, with over 17 million notifications relating to enforcement of the Bullying and Harassment Community Standard. Across all notifications, users opted to delete their posts more than 20% of the time. The Board notes that all information is aggregated and de-identified to protect user privacy, and that all metrics are estimates, based on best information currently available for a specific point in time. The Board considers the implementation of such measures an important step towards user education and empowerment, and additional control for users over their own experiences on Meta’s platforms. 6. The Oversight Board’s Decision The Oversight Board overturns Meta’s decision to take down the content in the first case, requiring the post to be restored, and upholds Meta’s decisions to take down the content in the second and third cases. 7. Recommendations Content Policy 1. Meta should amend its Hate Speech Community Standard, adding the section marked as “new” below. 
The amended Hate Speech Community Standard would then include the following or other substantially similar language to that effect: “Do not post Tier 1 Content targeting a person or group of people (including all groups except those who are considered non-protected groups described as having carried out violent crimes or sexual offenses or representing less than half of a group) on the basis of their aforementioned protected characteristic(s) or immigration status in written or visual form with dehumanizing speech in the form of comparisons to or generalizations about criminals: [NEW] Except when the actors (e.g., police, military, army, soldiers, government, state officials) and/or crimes (e.g., atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court) imply a reference to state rather than targeting people based on nationality.” The Board will consider this recommendation implemented when Meta updates the public-facing Hate Speech Community Standard and shares the updated specific guidance with its reviewers. Enforcement 2. To improve transparency around Meta’s enforcement, Meta should share the results of the internal audits it conducts to assess the accuracy of human review and performance of automated systems in the enforcement of its Hate Speech policy with the public. It should provide the results in a way that allows these assessments to be compared across languages and/or regions. The Board will consider this recommendation implemented when Meta includes the accuracy assessment results as described in the recommendation in its Transparency Center and in the Community Standards Enforcement Reports." bun-fu50knak,Candidate for Mayor Assassinated in Mexico,https://www.oversightboard.com/decision/bun-fu50knak/,"December 12, 2024",2024,December,"Elections,News events,Violence",Dangerous individuals and organizations,Upheld,Mexico,"In four cases of videos showing the assassination of Mexican mayoral candidate José Alfredo Cabrera Barrientos, the Board notes how Meta treated posts differently when three of them should have benefited from the same outcome – to remain up under the newsworthiness allowance.",47816,7354,"Multiple Case Decision December 12, 2024 In four cases of videos showing the assassination of Mexican mayoral candidate José Alfredo Cabrera Barrientos, the Board notes how Meta treated posts differently when three of them should have benefited from the same outcome – to remain up under the newsworthiness allowance. Upheld FB-ZZ1PC8GA Platform Facebook Topic Elections,News events,Violence Standard Dangerous individuals and organizations Location Mexico Date Published on December 12, 2024 Overturned IG-KWS5YL10 Platform Instagram Topic Elections,News events,Violence Standard Dangerous individuals and organizations Location Argentina Date Published on December 12, 2024 Upheld IG-ZZR570SK Platform Instagram Topic Elections,News events,Violence Standard Dangerous individuals and organizations Location Mexico Date Published on December 12, 2024 Upheld IG-BGIFPMQ2 Platform Instagram Topic Elections,News events,Violence Standard Dangerous individuals and organizations Location Mexico Date Published on December 12, 2024 Candidate for Mayor Assassinated in Mexico Decision
In four cases of videos showing the assassination of Mexican mayoral candidate José Alfredo Cabrera Barrientos, the Board notes how Meta treated posts differently when three of them should have benefited from the same outcome – to remain up under the newsworthiness allowance. These three posts were shared by news outlets clearly reporting on a political assassination ahead of Mexico’s elections: Meta left two up but removed one. Taking down reports on issues being debated by the public limits access to essential information and hinders free speech. This is concerning given the risks that news outlets face in Mexico when reporting on state corruption and organized crime. While there was an uneven application of the newsworthiness allowance in these cases, the Board also sets out its concerns about the effectiveness of the allowance itself. To address this, the Board reiterates its recent recommendation from the Footage of Moscow Terrorist Attack decision, calling for an exception to be made to the rule that does not allow third-party imagery showing the moment of designated attacks on visible victims. This updated approach would help ensure fairer treatment for all users. About the Cases In May 2024, four pieces of content about the assassination of a candidate running for mayor in the Mexican state of Guerrero were either posted by or reshared from news media accounts in Latin America. All four posts include similar videos showing José Alfredo Cabrera Barrientos on the campaign trail before a gun is aimed at him, followed by blurry images and sounds of gunshots. The first two cases involve posts shared by large media organizations. The caption for the first post discusses how many candidates have been murdered during the election cycle, while the audio for the video includes a statement by the state prosecutor’s office explaining that Cabrera Barrientos was under protection when killed. It was viewed about 59,000 times. The second post includes a warning about the video’s sensitivity and a caption reporting on the Governor of Guerrero’s statement condemning the murder. It was viewed more than a million times. As Meta had designated this assassination as a violating violent event under its Dangerous Organizations and Individuals policy, another version of the video had already been added to a Media Matching Service (MMS) bank, which is programmed to remove the same content. Under the policy, users are not allowed to share third-party imagery depicting the moment of such designated attacks on visible victims. The first two posts, which were identified by the MMS bank and referred to Meta’s subject matter experts for additional review, were left up despite breaking Meta’s rules. They were given a newsworthiness allowance, occasionally granted for content Meta decides has high public interest value. Both posts remain on Meta’s platforms with “Mark as Disturbing” warning screens and newsworthy labels, but were referred to the Board. In the third and fourth cases, users appealed Meta’s decisions to remove their posts to the Board. The third post involved a reshare of the video, with a message imposed on it stating that an uncensored version was available on Telegram. This had 17,000 views. The fourth post included a caption noting who had been shot and injured at the scene. It had 11,000 views. After an MMS bank identified both posts, they were removed. The assassination of Cabrera Barrientos took place on the final day of campaigning ahead of Mexico’s nationwide elections on June 2. 
Political violence has been a feature of recent elections in the country, with organized crime partially responsible. This has led candidates to drop out of election races, fearful for their lives. Key Findings While Meta was right to keep up the first two posts on its platforms as newsworthy content, the Board finds the company was not right to take down the post in the fourth case from Instagram. This post also had high public interest value. There was no material difference to justify a different outcome. Even after the Board selected the fourth case, Meta failed to apply the same newsworthiness allowance, stating this post sensationalized the footage by informing users it had gone viral. However, this detail is included alongside other information about the shooting, including details about the number of casualties, the Governor’s statement and the fact the shooter was killed at the event. Although Cabrera Barrientos is visible and identifiable, he was a public figure attending an election rally, so the privacy concerns reduce and the public interest value outweighs risks of harm. On the third post, which directed users to a Telegram link for an uncensored, graphic version of the video to get around Meta’s prohibition on sharing third-party imagery of attacks on visible victims, the majority of the Board agrees with Meta that this content posed greater risks to security and privacy – and should have been taken down. For the majority, Meta was right not to grant a newsworthiness allowance, especially given the post had no additional caption or commentary indicating its purpose was to inform others or condemn the assassination. A minority of the Board disagrees, finding that the third post should also qualify for the newsworthiness allowance, as it is similar to the others. As the Board recently noted in its Footage of Moscow Terrorist Attack decision, imagery of designated attacks can be shared for multiple reasons. While Meta is concerned about such content glorifying, supporting or representing criminal groups’ activities, the rule that does not allow users to share third-party imagery of designated attacks on visible victims is leading to removal of content with low or no risk of harm. Of relevance to these cases, experts have noted that criminal groups in Mexico do not generally use videos of political assassinations for recruitment purposes, although they may share them to intimidate. Furthermore, the Board found no evidence of this footage having been recorded by the perpetrators or being used to inspire copycat behavior. While the Board found the newsworthiness allowance should be applied to the fourth post, it notes that the allowance is rarely used since there are limited ways for Meta to identify content to benefit from it. In combination with the multiple factors that need to be considered to grant the allowance, this increases the risks of the allowance’s random application, to the detriment of users. This is why the Board believes that a change to Meta’s policy, as highlighted in our Footage of Moscow Terrorist Attack decision, is preferable to Meta’s current approach. The Oversight Board’s Decision The Oversight Board upholds Meta’s decisions in the first three cases. The Board overturns Meta in the fourth case, requiring the post to be restored with a “Mark as Disturbing” warning screen. 
The Board reiterates its recommendation from the recent Footage of Moscow Terrorist Attack decision, stating that Meta should allow, with a “Mark as Disturbing” warning screen, third-party imagery of a designated event showing the moment of attacks on visible but not personally identifiable victims for news reporting, awareness raising or to condemn. *Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background On May 30, 2024, four different accounts posted about the assassination of José Alfredo Cabrera Barrientos, who was running for mayor in the municipality of Coyuca de Benitez in the Mexican state of Guerrero. He had been shot and killed the day before during a campaign rally. All four pieces of content, one on Facebook and three on Instagram, were either posted by or reshared from news media accounts based in Latin America. The posts include similar videos, showing Cabrera Barrientos shaking hands with constituents before a gun is aimed at him. Blurred or blurry images follow the sound of multiple gunshots and people screaming. Each post is accompanied by a caption, in Spanish, providing facts about the shooting. Meta designated the assassination of Cabrera Barrientos as a violating violent event under its Dangerous Organizations and Individuals policy. This means, among other things, that users are not allowed to share third-party imagery depicting the moment of such designated attacks on visible victims. Meta’s subject matter experts had previously assessed another version of the video as violating and added it to a Media Matching Service (MMS) bank that was programmed to remove this content. The first post was shared by a large media organization and includes a caption stating that 23 candidates for political office have been murdered during Mexico’s current election cycle. The audio accompanying the footage provides more details, including a statement by the state prosecutor’s office explaining the shooter had been killed at the event and the fact that Cabrera Barrientos was under protection when he was killed. The post was viewed about 59,000 times. The second post, also shared by a large media organization, includes a warning added by the user that the video is sensitive. The caption reports on a statement by the Governor of Guerrero, in which she condemns the killing and expresses condolences to the family. It was viewed more than a million times. These two posts were referred to the Board by Meta. After being identified by an MMS bank programmed to automatically remove this content, the posts were escalated to Meta’s subject matter experts for additional review. The Board has previously described systems where this type of escalation might occur (see, for example, Meta’s Cross-Check Program policy advisory opinion). The subject matter experts determined the posts violated the Dangerous Organizations and Individuals policy. However, they granted a newsworthiness allowance to keep the posts on the platform due to their public interest value. The posts therefore remained on the platform with a “Mark as Disturbing” warning screen and newsworthy label. In the third case, a user reshared content from a different media organization, without adding anything to it. The reshared video captures the candidate moments before his assassination, including the moment when a gun is aimed at him. The caption provides information about the assassination without any additional context. 
There is a message imposed on the video, which is restated in the caption, instructing viewers that an “uncensored” video is available on Telegram. It was viewed about 17,000 times. The fourth post was shared by a media organization, with a caption noting that one of the attackers was shot at the scene and, in addition to the candidate, three others were injured. It was viewed about 11,000 times. These posts were removed after an MMS bank identified them. Both users in these two cases appealed to the Board. The Board notes the following context in reaching its decision. José Alfredo Cabrera Barrientos was the candidate for a coalition of the opposition political parties PRI-PAN-PRD, running for the position of Coyuca de Benitez’s Mayor. The assassination took place on the final day of campaigning ahead of nationwide elections on June 2, 2024. At the time, Cabrera Barrientos was under special protection measures , with a security team in place. Reports, including the first and fourth posts in these cases, indicate that one attacker was shot and killed at the event. At least one other person suspected of being involved was taken into custody and later found dead while in detention. This assassination took place within a wider context of political violence in Mexico. During the 2018 election cycle, organized crime was reportedly responsible for about half of the political violence, as “politicians or political candidates are identified as rivals when they don’t cooperate with criminal groups, which can turn them into targets for assassination or threats.” During the 2021 election cycle, United Nations (UN) and regional human rights experts reported 250 political murders in the pre-electoral and campaigning period in Mexico. Those experts further noted “at least 782 other politically motivated attacks – ranging from death threats to attempted murder – against politicians.” This violence has a chilling effect on candidates. According to UN human rights experts, in the 2021 electoral cycle “many candidates dropped out, citing fears for their lives.” International and regional experts further highlighted the impact this has on “the right of citizens to elect the candidate of their choice.” In the most recent 2024 cycle, over 8,000 candidates for office reportedly dropped out of their races, an increase from previous elections. The context of political violence was reported to be a contributing factor. The Inter-American Commission on Human Rights ( IACHR ) also condemned violence against candidates: “Since last year, [it] has observed with concern the occurrence of a series of acts of violence, including murders, threats, and kidnappings against pre-candidates, candidates, and leaders or activists of different political movements or affiliations.” According to the IACHR, from March 2024 to May 24, 2024, at least 15 pre-candidates or candidates were murdered, along with nine other individuals who had either expressed interest in running or were unofficial candidates. According to the Committee to Protect Journalists (CPJ) and the Global Initiative Against Transnational Organized Crime , Mexico is one of the most dangerous countries in the world for journalists. According to Freedom House’s 2024 Report on Mexico : “Gangs have engaged in threats and violence against bloggers and online journalists who report on organized crime. 
Self-censorship has increased, with many newspapers in violent areas avoiding publishing stories concerning organized crime.” Journalists trying to report on the link between government officials and criminal gangs have been murdered, leading to further silencing and fear. Criminal gangs in Mexico are reportedly “active on Facebook… [and use the platform to intimidate] rival groups and civilians.” However, experts consulted by the Board indicate that criminal groups in Mexico do not generally use videos of political assassinations as a recruitment tool but may share violent imagery, to intimidate opponents, including journalists. 2. User Submissions The news outlet that posted the content in the second case, which Meta kept up on Instagram as newsworthy content, submitted a statement to the Board. The submission states that information about the assassination is important to share, given the electoral context in Mexico. The post included important factual background about the assassination and reported on the statement released by the Governor of Guerrero. The users who posted the content in the third and fourth cases appealed Meta’s removal decisions to the Board. In their statements to the Board, they say they are reporting important news about violence and terrorism. Both express frustration that they have been censored. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Dangerous Organizations and Individuals Community Standard The Dangerous Organizations and Individuals policy rationale states that, in an effort to prevent and disrupt real-world harm, Meta does not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on its platforms. The Community Standard also prohibits “content that glorifies, supports, or represents events that Meta designates as violating violent events,” including “terrorist attacks” and “multiple-victim violence or attempted multiple-victim violence.” Meta prohibits: “(1) glorification, support or representation of the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims ,” (emphasis added). According to internal guidelines for reviewers, Meta removes imagery depicting the moment of attacks on visible victims “regardless of sharing context.” Meta does not require the victim to be visible at the same time as the violence, as long as it is clear the violence is directed at the victim, who is visible at some point in the footage. Violent and Graphic Content Community Standard The Violent and Graphic Content policy rationale states that the company understands people “have different sensitivities with regard to graphic and violent imagery.” Meta therefore removes the most graphic content while allowing and adding a warning label to other graphic content. This policy allows, with a “Mark as Disturbing” warning screen, “imagery (both videos and still images) depicting a person’s violent death (including their moment of death or the aftermath) or a person experiencing a life threatening event.” The warning screen limits visibility to users aged over 18 and doesn’t recommend the content to users who do not follow the account. The policy prohibits such imagery when it shows dismemberment, visible innards, burning or throat slitting. 
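To make the tiering just described easier to follow, here is a minimal, hypothetical sketch of the warning-screen logic under the Violent and Graphic Content rules summarized above. The type, field and function names are illustrative assumptions for this sketch, not Meta’s internal schema.

```python
# Hypothetical illustration only: a simplified decision helper mirroring the
# Violent and Graphic Content rules described above.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ALLOW = auto()
    ALLOW_WITH_WARNING_SCREEN = auto()  # "Mark as Disturbing": 18+ only, not recommended to non-followers
    REMOVE = auto()


@dataclass
class GraphicImagery:
    depicts_violent_death_or_life_threat: bool
    shows_dismemberment: bool = False
    shows_visible_innards: bool = False
    shows_burning: bool = False
    shows_throat_slitting: bool = False


def violent_graphic_content_action(item: GraphicImagery) -> Action:
    # The most graphic depictions are removed outright.
    if any([item.shows_dismemberment, item.shows_visible_innards,
            item.shows_burning, item.shows_throat_slitting]):
        return Action.REMOVE
    # Depictions of a violent death or life-threatening event stay up behind a
    # "Mark as Disturbing" warning screen.
    if item.depicts_violent_death_or_life_threat:
        return Action.ALLOW_WITH_WARNING_SCREEN
    return Action.ALLOW


# Example: footage of the shooting without the prohibited graphic elements would be
# kept behind a warning screen under this policy, before the separate Dangerous
# Organizations and Individuals analysis is applied.
print(violent_graphic_content_action(
    GraphicImagery(depicts_violent_death_or_life_threat=True)))
```

Note that in these cases the decisive analysis ran under the Dangerous Organizations and Individuals policy and the newsworthiness allowance described next, with this policy supplying the warning-screen treatment for the posts that remained up.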
Newsworthiness Allowance In certain circumstances, the company will allow content that may violate its policies to remain on the platform if it is “ newsworthy and if keeping it visible is in the public interest.” When making the determination, “[Meta will] assess whether that content surfaces an imminent threat to public health or safety, or gives voice to perspectives currently being debated as part of a political process.” According to Meta, its analysis is informed by country-specific circumstances, the nature of the speech and the political structure of the country affected. Meta can also apply a warning screen to content that it keeps up under this allowance and limit users under 18 from viewing the content. Lastly, the company states: “Newsworthy allowance can be ‘narrow,’ in which an allowance applies to a single piece of content, or ‘scaled,’ which may apply more broadly to something like a phrase.” II. Meta’s Submissions Meta designated the assassination of José Alfredo Cabrera Barrientos as a violating violent event under its Dangerous Organizations and Individuals policy soon after the attack. Meta determined that all four posts violated the company’s policy prohibiting “third-party imagery depicting the moment of [designated] attacks on visible victims.” The company explained that it generally removes all such designated imagery, regardless of the context in which it is shared, for two main reasons. The first reason is safety. According to the company, removing this content helps to limit copycat behaviors (imitative behaviors) and avoid the spread of content that raises the profile of and may have propaganda value to the perpetrators. The second reason is the privacy and dignity of victims and their families. The company also aims to protect the dignity of any victims and their loved ones “who did not consent to being the subject of public curiosity and media attention.” Designating certain attacks allows Meta to quickly remove content under its Dangerous Organizations and Individuals policy across its platforms in response to key events. Meta stated that it grants few newsworthiness allowances for content that violates this policy. According to Meta, given the concerns its policy aims to address, these allowances are typically narrow in scope and generally limited to footage shared by recognized media outlets for news reporting. For the first two cases, the company issued a newsworthiness allowance considering the “wide national reach” of the two news outlets that posted the content, and the fact that the footage was contextualized with captions. The company assessed the posts as having high public interest value, due to the relevance of the violence and insecurity associated with the election cycle to the public debate. Meta did not rule out all risks, due to “the proximity to election day, particularly of copycat attacks against other candidates in areas that lack security, as well as a potential risk of dignitary harm to the family of the candidate.” Nevertheless, the company considered the fact that the news organizations took “editorial steps to avoid sharing imagery in a sensationalist way,” and “included captions that contextualized the footage within the broader context of how the violence and insecurity have impacted the electoral cycle and shared information on the official law enforcement response to the incident.” The first post does not include the exact moment of the shooting and the second post includes its own warning screen. 
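Before turning to how the allowance was applied in these cases, the following is a rough, hypothetical sketch of how a “narrow” allowance and the treatments attached to it might be represented; the field names and example values are assumptions for the sketch, not Meta’s systems.

```python
# Hypothetical illustration only: a record of a newsworthiness allowance and the
# treatments attached to it, as described above. Field names are assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class NewsworthinessAllowance:
    scope: str                      # "narrow" (one piece of content) or "scaled" (e.g., a phrase)
    public_interest_rationale: str  # why keeping the content visible is in the public interest
    warning_screen: bool = True     # "Mark as Disturbing"; also restricts viewing to users over 18
    newsworthy_label: bool = True   # informs users the post is allowed for public awareness
    content_ids: List[str] = field(default_factory=list)


# Example modeled loosely on the first two posts: a narrow allowance that keeps the
# reports up behind a warning screen with a newsworthy label.
allowance = NewsworthinessAllowance(
    scope="narrow",
    public_interest_rationale="News reporting on election-related violence during the campaign period",
    content_ids=["post_1", "post_2"],  # placeholder identifiers, not real IDs
)
```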
When Meta granted a “narrow” newsworthiness allowance for these two posts, the company applied a label (or “newsworthy inform treatment”) to let users know the posts were allowed for the purpose of public awareness. It also applied a “Mark as Disturbing” warning screen, which prevented users under 18 from viewing the content. Any user who reshared these two specific posts also benefited from the allowance. No posts by other accounts received a newsworthiness allowance related to footage of the assassination. All other content identified by the Media Matching Service (MMS) bank as violating footage of the Cabrera Barrientos shooting was automatically removed from Meta’s platforms. Meta configured the MMS bank to remove content without applying a strike, “to ensure enforcement was proportional given the possibility users could be sharing the footage to raise awareness about or condemn the attack.” The third and fourth posts were therefore removed by the MMS bank without a strike being applied to the users’ accounts. Once the Board brought these two posts to Meta’s attention, the company confirmed they did not merit a newsworthiness allowance. The third and fourth posts were not shared by “well-known news outlets, nor did they contextualize the video in the same way” as the first and second posts. Meta took note of the fact that the third post directed users to “uncensored images” on Telegram and that the fourth post emphasized that the imagery had gone viral on social media. The company found this sensationalized the footage. According to Meta, the company’s decisions to remove the third and fourth posts were in line with the conditions of legality, legitimacy, and necessity and proportionality. First, Meta reiterated the necessity of generally removing imagery showing the moment of attack of designated events, given the risks that the content may promote copycat behavior and advance the aims of perpetrators. Removing the content, but not applying a strike to these users, was the least restrictive means of addressing the risk of harm. The Board asked questions on how many newsworthiness allowances the company issued for imagery of the assassination, the status of accounts and pathways for users to benefit from the newsworthiness allowance, whether Meta had specialized teams in place to address heightened risks during the election cycle and how these teams were prepared. Meta responded to all questions. 4. Public Comments The Oversight Board received 10 public comments that met the terms for submission . Seven of the comments were submitted from Latin America and the Caribbean, two from the United States and Canada, and one from Europe. To read public comments submitted with consent to publish, click here . The submissions covered the following themes: electoral and political violence during Mexico’s 2024 general election; the impact of political violence on democratic processes; how Meta should moderate content and adjust its policy on third-party sharing of violating violent events imagery; the effectiveness of the newsworthiness allowance; the role of social media in providing information about election processes; the use of social media by criminal organizations; general information about standards in Mexico for depicting political violence in news reporting, and the importance of freedom of speech in the context of elections in Mexico. 5. 
5. Oversight Board Analysis The Board selected these cases to address how political violence is depicted on Meta’s platforms and its potential impact on electoral processes. These cases fall within the Board’s strategic priority of Elections and Civic Space. The Board analyzed Meta’s decisions in these cases against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of these cases for Meta’s broader approach to content governance. 5.1 Compliance with Meta’s Content Policies I. Content Rules All four posts violate Meta’s prohibition on “third-party imagery depicting the moment of [designated] attacks on visible victims.” Meta designated the assassination of José Alfredo Cabrera Barrientos immediately after the event on May 29, 2024. All four posts include footage showing Cabrera Barrientos moving through the crowd, as well as the moment the gun is pointed at him and immediately afterwards, to the sound of gunshots and people screaming. The rule, set out in the Dangerous Organizations and Individuals Community Standard and further explained in Meta’s internal guidelines, prohibits such footage regardless of the context in which it is shared. Meta was right to keep the first two posts on its platforms as newsworthy content, applying a “Mark as Disturbing” warning screen and a newsworthy label. Under its policies, Meta should also have allowed the fourth post to remain on Instagram due to its public interest value. There was no material difference between these posts to justify a different outcome. After the Board selected these cases, despite the content being reviewed by subject matter experts empowered to grant newsworthiness allowances (or to apply other measures available only on escalation), Meta still failed to correct this differential treatment. This goes against the principle of treating users fairly. The content in these three posts shows a shooting at a campaign event in an election cycle during which political violence was a central issue. The first, second and fourth posts provided information about the shooting, including the number of casualties and statements released by the Governor in response. The Board disagrees with Meta that the fourth post sensationalizes the footage by informing users it had gone viral on social media. Rather than sensationalizing, this highlights the post’s significance to the public. That information is included along with other relevant details on the number of casualties, including that the shooter was killed at the event, and the statement released by the Governor of Guerrero. When journalists limit their coverage of key events, such as the killing of a politician, public access to critical information suffers. Given the significant risks that news outlets and journalists face in Mexico, ensuring the accessibility of this type of news on online platforms is vital, especially during an election period. Consequently, the Board considers threats against journalists, and the resulting self-censorship, as relevant context for its newsworthiness analysis. Additionally, while the victim is fully visible and identifiable in the footage, the fact that he was a public figure lessens the privacy concerns in this case. He was attending a public campaign rally during an election and was not depicted in a humiliating or degrading manner. For these three posts, the public interest value outweighs the risks of harm.
For the third post, the majority of the Board agrees with Meta that the content poses greater risks and it finds Meta’s decision not to grant a newsworthiness allowance was reasonable. The content provides a video of the assassination without any additional information or caption that suggests an intent to report, raise awareness or condemn the attack. On the contrary, the message imposed on the video, and restated in the caption, informs viewers that an “uncensored” version of the video is available on Telegram and provides a link to this platform. The majority of the Board finds that the post aims explicitly to circumvent the prohibition on sharing third-party imagery of attacks on visible victims by directing users to violating content on an external platform. For these reasons, the majority agrees that Meta was right not to grant a newsworthiness allowance in this case. Additionally, in its research, the Board also verified that the linked Telegram channel highlights extremely violent footage, including imagery of beheadings and, relevant to this case, graphic imagery of the candidate’s assassination. For a minority of Board Members, the third post, being similar to the others, also deserves the newsworthiness allowance. Insofar as the decision of the majority relied on the fact that this post included a hyperlink to another platform, the minority believes that a hyperlink, by itself, should not be seen as “publication” of the content to which it refers. When Meta granted a newsworthiness allowance for the first two posts, the company added a newsworthy label to inform users that the posts were allowed for public awareness. The Board has previously recommended that Meta notify users when content remains on the platform due to a newsworthiness allowance (see Colombia Protests decision, recommendation no. 4, and Sudan Graphic Video decision, recommendation no. 4). The Board welcomes this practice, as it provides people with valuable context for why policy-violating content is allowed to stay on the platform. 5.2 Compliance with Meta’s Human Rights Responsibilities The Board finds that keeping the first two posts up, with a warning screen and a newsworthy label, and removing the third post was consistent with Meta’s human rights responsibilities. However, the Board finds removing the fourth post was not consistent with Meta’s human rights responsibilities. Freedom of Expression (Article 19 ICCPR) Meta’s content moderation practices can have adverse impacts on the right to freedom of expression. Article 19 of the International Covenant on Civil and Political Rights (ICCPR) provides broad protection for this right, given its importance to political discourse, and the Human Rights Committee has noted that it also protects expression that may be considered “deeply offensive,” ( General Comment No. 34 , paras. 11, 13 and 38). Article 19’s protection is “particularly high” for “public debate in a democratic society concerning figures in the public and political domain,” ( General Comment No. 34 , para. 34). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, in relation both to the individual content decisions under review and to Meta’s broader approach to content governance. 
As the UN Special Rapporteur on freedom of opinion and expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” ( A/74/486 , para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly ( General Comment No. 34 , para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” ( General Comment No. 34 , para. 25). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific ( A/HRC/38/35 , para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance regarding their enforcement. The Board previously discussed and recommended how Meta could better structure its rules around designated events in its Footage of Moscow Terrorist Attack decision. Here, the Board reiterates that while Meta should improve this policy, its rule prohibiting third-party footage of designated events on visible victims is sufficiently clear for users to understand that content like this is prohibited. The footage shared in these posts depicts a shooting that targeted the candidate and resulted in multiple victims. The policy provides sufficiently clear notice to users that this kind of footage can be designated. II. Legitimate Aim Meta’s Dangerous Organizations and Individuals policy aims to “prevent and disrupt real-world harm.” In several decisions, the Board has found that this policy pursues the legitimate aim of protecting the rights of others, such as the right to life ( ICCPR , Article 6) and the right to non-discrimination and equality ( ICCPR , Articles 2 and 26) because it covers organizations that promote hate, violence and discrimination as well as designated violent events motivated by hate. See Referring to Designated Dangerous Individuals as “Shaheed,” Sudan’s Rapid Support Forces Video Captive , Hostages Kidnapped from Israel and Greek 2023 Elections Campaign decisions. Meta’s policies also pursue the legitimate aim of protecting the right to privacy (ICCPR, Article 17) of identifiable victims and their families (see Video After Nigeria Church Attack decision). III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The Board recognizes that, in developing the designation policy, Meta has erred on the side of safety and privacy. Meta explained that its current policy approach allows the company to swiftly remove this content through MMS banks, which helps disrupt the spread of perpetrator propaganda and can limit copycat behavior. 
Removing this content also helps protect the privacy and dignity of victims and their families when victims are visible. As the Board recently noted in the Footage of Moscow Terrorist Attack decision, a narrower rule risks underenforcement of content depicting violent events. It could also allow footage to be reused for harmful purposes that Meta may struggle to detect and remove. In some contexts, the risks of incitement or repurposing of such imagery do justify erring on the side of safety. However, the Board also emphasized in its Footage of Moscow Terrorist Attack decision that imagery of designated attacks can serve multiple purposes. Not all content depicting a designated attack, as in three of these cases, serves to glorify, support or represent criminal groups’ activities. Such content does not always have the outcomes Meta aims to prevent. Policies that err toward overenforcement, regardless of context, pose risks to freedom of expression, access to information and public participation. This rule does lead to removal of content with low or no risk of harm. To address potential overenforcement, Meta has several policy tools. Three of those tools are especially pertinent here, though others exist as well: withholding strikes, the newsworthiness allowance and the warning screens discussed below. First, it can remove content without applying strikes or other penalties that might restrict the user. Withholding strikes on content enforced by MMS banks mitigates the risks of limiting access for users through feature limits or account suspension and serves as an important tool for ensuring proportionality. Second, Meta can also apply newsworthiness allowances to permit designated content with limited potential to create risks to public safety and the dignity of those depicted. However, for the newsworthiness allowance to be an effective mitigation measure against overenforcement, it must be effectively applied to relevant content. In previous cases and the policy advisory opinion on cross-check, the Board has identified multiple obstacles to the effectiveness of the allowance (see Meta’s Cross-Check Program policy advisory opinion, Sudan’s Rapid Support Forces Video Captive, Armenian Prisoners of War Video). The newsworthiness allowance can only be applied on escalation and not by at-scale moderators. Because Meta’s at-scale moderators are not instructed or empowered to identify and escalate content that could benefit from the newsworthiness allowance, there are limited pathways for Meta to identify content it should consider for the allowance. For news outlets, journalists and others reporting on public interest issues who are not enrolled in Meta’s cross-check program and lack access to Meta’s internal teams, it is difficult to reach those within the company empowered to consider and apply the allowance. Additionally, the decision to grant the allowance requires considering multiple factors to balance public interest and potential harm, leading to a lack of predictability and increasing the risk of arbitrariness in its application, to the detriment of users. The effect is that the allowance is rarely used (see Sudan Graphic Video decision). From June 1, 2023, through June 1, 2024, Meta reported 32 allowances. In these cases, for example, two posts were identified and escalated while one was not, despite similarity in content and context, undermining the fair treatment of users. The Board finds that in these specific cases, given the context in Mexico, it was neither necessary nor proportionate to remove the first, second and fourth posts.
The first, second and fourth posts do not contain elements suggesting risks of recruitment or incitement to copycat behavior. Experts consulted by the Board stated that criminal groups in Mexico do not generally use videos of political assassinations for recruitment, but may share such content to intimidate. The Board found no evidence that the footage in these cases was recorded by the perpetrator, or that the shooter or criminal groups shared these specific posts to inspire copycat behavior, spread the perpetrators’ propaganda or glorify their violent acts. On the contrary, these three posts were shared by news outlets reporting on a political assassination at a campaign rally days before an upcoming election. Removing reports on issues being debated and scrutinized by the public, such as violence and the state’s response, would limit access to essential information and hinder free speech, while providing marginal gains in safety. In its Footage of Moscow Terrorist Attack decision, the Board noted that images of attacks often evoke stronger reactions than abstract descriptions. Images humanize victims, elicit moral outrage, sympathy and awareness of violence, and encourage accountability. Given the significant risks journalists and news outlets face in Mexico when reporting on state corruption and organized crime, limiting their access to social media is especially concerning. Additionally, as the victim was a public figure engaged in public acts and he was not depicted in a humiliating or degrading manner, the privacy interests at stake are more limited. In these three cases, applying a “Mark as Disturbing” warning screen, under Meta’s Violent and Graphic Content Community Standard, is a less restrictive means to protect the rights to safety and privacy. When Meta applies a warning screen, several consequences follow. All users must click through a screen to view content, and it is not available to users under the age of 18. Furthermore, the content is then removed from recommendations to users who do not follow the account (see Al-Shifa Hospital and Hostages Kidnapped From Israel decisions). These measures ensure that child users are not exposed to the content and limit the reach of this content to users who have sought it out. For the third post, the majority of the Board considers that the content presents greater risks to security and privacy. In that post, the user reshared content from a media account that included a message directing viewers to an “uncensored” video on Telegram. The majority agrees with Meta that removing the post is necessary and proportionate to protect safety. By sharing the post with a link to view graphic imagery of an individual’s death with no additional caption or commentary, the user gave no clear indication that their purpose was to inform others or to condemn the violence. Lacking such indications, and linking to uncensored footage, the post clearly suggests that the user was aiming to circumvent Meta’s Community Standards regarding Dangerous Organizations and Individuals. A minority of Board Members disagree, asserting that removing the third post was neither necessary nor proportionate. On the proportionality of Meta’s response, the Board welcomes the fact that the company did not apply strikes against the users who posted the two pieces of content that were removed and determined that, in some circumstances, there is no need for additional penalization in the form of a strike.
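The consequences the Board describes for a “Mark as Disturbing” warning screen reduce to three simple rules: a click-through interstitial for every viewer, no access for users under 18, and exclusion from recommendations for viewers who do not follow the posting account. The following minimal sketch encodes only those three stated rules; the function and field names are hypothetical and not Meta’s API.

```python
# Illustrative sketch of the three stated effects of a "Mark as Disturbing"
# warning screen, as described in the decision; names are hypothetical.
from dataclasses import dataclass


@dataclass
class Viewer:
    age: int
    follows_author: bool


@dataclass
class Visibility:
    viewable: bool                     # can the viewer open the post at all
    behind_interstitial: bool          # must click through a warning first
    eligible_for_recommendation: bool  # can it be recommended to this viewer


def visibility_with_warning_screen(viewer: Viewer) -> Visibility:
    if viewer.age < 18:
        # The content is not available to users under 18.
        return Visibility(False, False, False)
    return Visibility(
        viewable=True,
        behind_interstitial=True,  # every adult viewer must click through
        # Recommended only to people who already follow the posting account.
        eligible_for_recommendation=viewer.follows_author,
    )


print(visibility_with_warning_screen(Viewer(age=17, follows_author=True)))
print(visibility_with_warning_screen(Viewer(age=30, follows_author=False)))
```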
The Board emphasizes the value of separating Meta’s enforcement actions on content from the penalties given to users. It also recognizes that withholding strikes constitutes an important tool for achieving proportionality (see Iranian Make-up Video for a Child Marriage decision), insofar as the requirement of proportionality takes into account the restriction’s imposition not only on the interests of others, including listeners, but also on the interests of the speaker (UN Special Rapporteur on freedom of expression, Special Rapporteur Communication No. USA 6/2017 , pg. 3). The Board analyzed these cases in accordance with the newsworthiness allowance, as it reflects Meta’s current policy approach to these posts. However, as previously mentioned, the newsworthiness allowance has multiple limitations in accessibility and predictability. Public comments highlighted similar concerns, as well as the fear that users may self-censor to avoid account level penalties ( PC-30727 Digital Speech Lab). For these reasons, the Board considers that the newsworthiness allowance is not the most effective or least restrictive approach available to Meta. The Board recently highlighted these same concerns on the prohibition of third-party imagery depicting the moment of designated attacks on visible victims in the Footage of Moscow Terrorist Attack decision. That decision concluded that the most effective way to protect freedom of expression while mitigating harm and the risk of copycat behavior would be to establish an exception within the policy. This exception would permit third-party imagery of a designated event depicting the moment of attacks on visible victims, when shared in the contexts of news reporting, condemnation or awareness-raising. The content would have a “Mark as Disturbing” warning screen. Meta is currently assessing this recommendation. The Board reiterates that its proposed approach would better respect rights. To meet Meta’s safety concerns, the company could also require that users posting content for news reporting, condemnation or awareness-raising make their intent clear, as it does under the Dangerous Organizations and Individuals policy. The Board notes Meta defines awareness-raising in its guidance as “sharing, discussing or reporting new information ... for the purpose of improving the understanding of an issue or knowledge of a subject that has public interest value. Awareness raising … should not aim to incite violence or spread hate or misinformation,” (see Reporting on Pakistan Parliament Speech and Communal Violence in Indian State of Odisha decisions). The company could continue to remove unclear or ambiguous content, deferring to safety concerns. Meta could also choose to apply the exception only on-escalation, if clear protocols for identifying content are provided. While the Board consistently expresses concerns about the effectiveness of escalations-only policies (see Sudan's Rapid Support Forces Video Captive and Sudan Graphic Video decisions), it believes that a clearly articulated and policy-specific exception enforced on-escalation is preferable to relying on the newsworthiness allowance (see Armenian Prisoners of War Video decision). Under the framework proposed by the Footage of Moscow Terrorist Attack decision, the same outcome would be reached here without the need for the application of the newsworthiness allowance. The first, second and fourth posts would remain on platform as news reporting. 
Given that the intent behind the third post was not to report, raise awareness or condemn, the third post should be removed. While the Footage of Moscow Terrorist Attack decision addressed third-party footage with visible but not personally identifiable victims, the victim in this case is identifiable. However, given that he is a public figure at a public event, and he is not depicted in a humiliating or degrading manner, the privacy interests involved are similarly reduced and the content should benefit from the recommended exception. By limiting reliance on the rarely granted and unpredictable newsworthiness allowance, a clear policy exception for news reporting, condemnation and awareness-raising would help Meta to treat users fairly. 6. The Oversight Board’s Decision The Oversight Board upholds Meta’s decisions to leave up the first and second posts, and to remove the third post. The Oversight Board overturns Meta’s decision to take down the fourth post, requiring it to be restored with a “Mark as Disturbing” warning screen. 7. Recommendations Content Policy The Oversight Board reiterates its previous recommendation in the Footage of Moscow Terrorist Attack decision: Meta should allow, with a “Mark as Disturbing” warning screen, third-party imagery of a designated event showing the moment of attacks on visible but not personally identifiable victims when shared in news reporting, condemnation and awareness-raising contexts (Footage of Moscow Terrorist Attack decision, recommendation no. 1). *Procedural Note: Return to Case Decisions and Policy Advisory Opinions" bun-ih313zhj,Gender identity and nudity,https://www.oversightboard.com/decision/bun-ih313zhj/,"January 17, 2023",2023,January,"Health,LGBT,Sex and gender equality",Sexual solicitation,Overturned,United States,The Oversight Board has overturned Meta's original decisions to remove two Instagram posts depicting transgender and non-binary people with bare chests.,61328,9310,"Multiple Case Decision January 17, 2023 The Oversight Board has overturned Meta's original decisions to remove two Instagram posts depicting transgender and non-binary people with bare chests. Overturned IG-AZHWJWBW Platform Instagram Topic Health,LGBT,Sex and gender equality Standard Sexual solicitation Location United States Date Published on January 17, 2023 Overturned IG-PAVVDAFF Platform Instagram Topic Health,LGBT,Sex and gender equality Standard Sexual solicitation Location United States Date Published on January 17, 2023 Gender identity and nudity public comments The Oversight Board has overturned Meta’s original decisions to remove two Instagram posts depicting transgender and non-binary people with bare chests. It also recommends that Meta change its Adult Nudity and Sexual Activity Community Standard so that it is governed by clear criteria that respect international human rights standards. About the case In this decision, the Oversight Board considers two cases together for the first time. Two separate pieces of content were posted by the same Instagram account, one in 2021, the other in 2022. The account is maintained by a US-based couple who identify as transgender and non-binary. Both posts feature images of the couple bare-chested with the nipples covered. The image captions discuss transgender healthcare and say that one member of the couple will soon undergo top surgery (gender-affirming surgery to create a flatter chest), which the couple are fundraising to pay for. 
Following a series of alerts by Meta’s automated systems and reports from users, the posts were reviewed multiple times for potential violations of various Community Standards. Meta ultimately removed both posts for violating the Sexual Solicitation Community Standard, seemingly because they contain breasts and a link to a fundraising page. The users appealed to Meta and then to the Board. After the Board accepted the cases, Meta found it had removed the posts in error and restored them. Key findings The Oversight Board finds that removing these posts is not in line with Meta’s Community Standards, values or human rights responsibilities. These cases also highlight fundamental issues with Meta’s policies. Meta’s internal guidance to moderators on when to remove content under the Sexual Solicitation policy is far broader than the stated rationale for the policy, or the publicly available guidance. This creates confusion for users and moderators and, as Meta has recognized, leads to content being wrongly removed. In at least one of the cases, the post was sent for human review by an automated system trained to enforce the Adult Nudity and Sexual Activity Community Standard. This Standard prohibits images containing female nipples other than in specified circumstances, such as breastfeeding and gender confirmation surgery. This policy is based on a binary view of gender and a distinction between male and female bodies. Such an approach makes it unclear how the rules apply to intersex, non-binary and transgender people, and requires reviewers to make rapid and subjective assessments of sex and gender, which is not practical when moderating content at scale. The restrictions and exceptions to the rules on female nipples are extensive and confusing, particularly as they apply to transgender and non-binary people. Exceptions to the policy range from protests, to scenes of childbirth, and medical and health contexts, including top surgery and breast cancer awareness. These exceptions are often convoluted and poorly defined. In some contexts, for example, moderators must assess the extent and nature of visible scarring to determine whether certain exceptions apply. The lack of clarity inherent in this policy creates uncertainty for users and reviewers, and makes it unworkable in practice. The Board has consistently said Meta must be sensitive to how its policies impact people subject to discrimination (see for example, the “ Wampum belt ” and “ Reclaiming Arabic words ” decisions). Here, the Board finds that Meta’s policies on adult nudity result in greater barriers to expression for women, trans, and gender non-binary people on its platforms. For example, they have a severe impact in contexts where women may traditionally go bare-chested, and people who identify as LGBTQI+ can be disproportionately affected, as these cases show. Meta’s automated systems identified the content multiple times, despite it not violating Meta’s policies. Meta should seek to develop and implement policies that address all these concerns. It should change its approach to managing nudity on its platforms by defining clear criteria to govern the Adult Nudity and Sexual Activity policy, which ensure all users are treated in a manner consistent with human rights standards. It should also examine whether the Adult Nudity and Sexual Activity policy protects against non-consensual image sharing, and whether other policies need to be strengthened in this regard. 
The Oversight Board's decision The Oversight Board overturns Meta's original decision to remove the posts. The Board also recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decisions in two cases of Instagram posts removed by Meta. Meta has acknowledged that its original decisions in both cases were wrong. These cases raise important concerns about how Meta’s policies disproportionately impact the expressive rights of both women and LGBTQI+ users of its platforms. The Board recommends that Meta define clear, objective, rights-respecting criteria to govern the entirety of its Adult Nudity and Sexual Activity policy, ensuring equal treatment of all people consistent with international human rights standards and avoiding discrimination on the basis of sex or gender identity. Meta should first conduct a comprehensive human rights impact assessment to review the implications of the adoption of such criteria, including broadly inclusive stakeholder engagement across diverse ideological, geographic and cultural contexts. To the degree that this assessment identifies any potential harms, implementation of the new policy should include a mitigation plan for addressing them. The Board further recommends that Meta clarify its public-facing Sexual Solicitation policy and narrow its internal enforcement guidance to better target such violations. 2. Case description and background These cases concern two content decisions made by Meta, which the Oversight Board is addressing together in this decision. Two separate images with captions were posted on Instagram by the same account, which is jointly maintained by a US-based couple. Both images feature the couple, who stated in the posts, and in their submissions to the Board, that they identify as transgender and non-binary. Meta removed both posts under the Sexual Solicitation Community Standard. In both cases, Meta’s automated systems identified the content as potentially violating. In the first image, posted in 2021, both people are bare-chested and have flesh-colored tape covering their nipples. In the second image, posted in 2022, one person is clothed while the other person is bare-chested and covering their nipples with their hands. The captions accompanying these images discuss how the person who is bare-chested in both pictures will soon undergo top surgery (gender-affirming surgery that creates a flatter chest). They describe their plans to document the surgery process and discuss transgender healthcare issues. They announce that they are holding a fundraiser in order to pay for the surgery because they have had difficulty securing insurance coverage for the procedure. In the first case, the image was first automatically classified as unlikely to be violating; the report was closed without being reviewed and the content initially remained on the platform. Three users then reported the content for pornography and self-harm. These reports were reviewed by human moderators who found the post to be non-violating. When the content was reported by a user for a fourth time, another human reviewer found that the post violated the Sexual Solicitation Community Standard and removed it. In the second case, the post was identified twice by Meta’s automated systems and then sent for human review where it was found to be non-violating both times.
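The enforcement history just described follows a pattern of score-based routing: when an automated classifier scores content as unlikely to violate, the report is closed without review, and otherwise it is queued for human review. The sketch below is a purely illustrative rendering of that routing logic; the threshold value and names are invented and say nothing about how Meta’s systems are actually configured or tuned.

```python
# Minimal illustration of score-threshold routing for reports; the threshold
# and names are hypothetical.
from enum import Enum


class Outcome(Enum):
    CLOSE_WITHOUT_REVIEW = "close_without_review"
    SEND_TO_HUMAN_REVIEW = "send_to_human_review"


VIOLATION_THRESHOLD = 0.5  # invented value, for illustration only


def route_report(classifier_score: float) -> Outcome:
    """Close low-scoring reports automatically; queue the rest for humans."""
    if classifier_score < VIOLATION_THRESHOLD:
        return Outcome.CLOSE_WITHOUT_REVIEW
    return Outcome.SEND_TO_HUMAN_REVIEW


# A post scored as unlikely to violate is closed without any human review,
# while a higher-scoring flag from a different classifier can send the same
# post back into the review queue.
print(route_report(0.12))  # Outcome.CLOSE_WITHOUT_REVIEW
print(route_report(0.87))  # Outcome.SEND_TO_HUMAN_REVIEW
```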
Two users then reported the content, but each report was closed automatically without being reviewed by a human and the content remained on Instagram. Finally, Meta's automated systems identified the content a third time and sent it for human review. The last two times, Meta’s automated Adult Nudity and Sexual Activity classifier flagged the content, but the reason for these repeated reviews is unclear. This final human reviewer found the post violated the Sexual Solicitation Community Standard and removed it. The account owners appealed both removal decisions to Meta, and the content was reviewed by human reviewers in both cases. However, these reviews did not lead to Meta restoring the posts. The account owners then appealed both removal decisions to the Board. The Board is considering these two cases together, a first for the Board. The benefit of doing so is to identify similar issues in Meta’s content policies and processes and to offer solutions that address them. After the Board selected these posts and Meta was asked to provide a justification for its decision to remove the content, Meta identified the removals as ""enforcement errors"" and restored the posts. When considering why these cases represent important issues, the Board notes as relevant context the high volume of public comments received in these cases, many of which were from people identifying as trans, non-binary or cisgender women who explained that they were personally affected by enforcement errors and issues similar to those present in these cases. The Board has also noted as relevant context the academic research, also cited in public comments, by Haimson et al., Witt, Suzor and Huggins, and two reports by Salty on algorithmic bias and censorship of marginalized communities. These studies found that enforcement errors in the two Community Standards discussed in these cases disproportionately affect women and the LGBTQI+ community. A co-author of one of these studies is a member of the Oversight Board. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with recommendations to which Meta must respond (Charter Article 3, Section 4; Article 4). When the Board selects cases like this one, where Meta acknowledges that it made an error after the Board identifies the case, the Board reviews the original decision. This is to increase understanding of the policy parameters and content moderation processes that contributed to the error and to address issues the Board identifies with the underlying policies. The Board also aims to make recommendations to lessen the likelihood of future errors and treat users more fairly moving forward. When the Board identifies cases that raise similar issues, they may be assigned simultaneously to a panel to be deliberated together. A binding decision will be made in respect of each piece of content. 4. Sources of authority The Oversight Board considered the following authorities and standards: I. Oversight Board decisions:
II. Meta’s content policies: These cases involve Instagram's Community Guidelines and Facebook's Community Standards. Meta's Transparency Centre states that “Facebook and Instagram share Content Policies. This means that if content is considered violating on Facebook, it is also considered violating on Instagram.” Sexual Solicitation Instagram’s Community Guidelines state that “offering sexual services” is not allowed. This provision then links to Facebook’s Community Standard on Sexual Solicitation. In the policy rationale on Sexual Solicitation, Meta states: “We draw the line, however, when content facilitates, encourages or coordinates sexual encounters or commercial sexual services between adults. We do this to avoid facilitating transactions that may involve trafficking, coercion and non-consensual sexual acts. We also restrict sexually explicit language that may lead to sexual solicitation because some audiences within our global community may be sensitive to this type of content, and it may impede the ability for people to connect with their friends and the broader community.” Facebook’s Community Standard on Sexual Solicitation states that Meta prohibits both explicit and implicit solicitation. Implicit solicitation has two criteria, both of which must be met for content to violate the policy. The first criterion is “offer or ask,” which is “Content that implicitly or indirectly (typically through providing a method of contact) offers or asks for sexual solicitation.” The second criterion is “suggestive elements,” which is “Content that makes the aforementioned offer or ask using one of the following sexually suggestive elements.” The elements listed include “regional sexualized slang” and “poses.” Adult Nudity and Sexual Activity Instagram's Community Guidelines state that users should: “Post photos and videos that are appropriate for a diverse audience. We know that there are times when people might want to share nude images that are artistic or creative in nature, but for a variety of reasons, we don't allow nudity on Instagram. This includes photos, videos and some digitally-created content that show sexual intercourse, genitals and close-ups of fully-nude buttocks. It also includes some photos of female nipples, but photos in the context of breastfeeding, birth giving and after-birth moments, health-related situations (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest are allowed.” This section links to Facebook’s Adult Nudity and Sexual Activity policy, which provides more detail on these rules. As part of the policy rationale of the Adult Nudity and Sexual Activity Community Standard, Meta explains: “We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content. Additionally, we default to removing sexual imagery to prevent the sharing of non-consensual or underage content.” Facebook’s Adult Nudity and Sexual Activity policy also states: “Do not post: Uncovered female nipples except in the context of breastfeeding, birth giving and after-birth moments, medical or health context (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest.” Users can also post imagery of genitalia when shared in a “medical or health context” (which includes gender confirmation surgery) but a label will be applied warning people that the content is sensitive.
There are also at least 18 additional internal guidance factors about nipples and these exceptions. III. Meta’s values: Meta's values are outlined in the introduction to the Facebook Community Standards, where the value of ""Voice"" is described as ""paramount"": The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits ""Voice"" in service of four values, two of which are relevant here: ""Safety"": We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. ""Dignity"": We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade them. IV. International human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in these cases was informed by the following human rights standards: 5. User submissions In their submissions for these cases, the users state that they believe this content was removed because of transphobia. They write that, if the Board were to affirm that this content should remain on the platform, the decision would contribute to making Instagram a more hospitable space for LGBTQI+ expression. 6. Meta’s submissions Meta explained in its decision rationale that both content removals were enforcement errors and that neither post violated its Sexual Solicitation policies. Meta states: “the only offer or ask is for donations to a fundraiser or to visit a website to buy t-shirts, neither of which relates to sexual solicitation.” Meta also states that neither post violates its Adult Nudity and Sexual Activity Standard. The rationale states that the internal “reviewer guidance specifically addresses how to action on non-binary, gender neutral, or transgender nudity.” The content in these cases was shared in an “explicitly non-binary or transgender context as evidenced by the overall topic of the content (undergoing top surgery) and the hashtags used.” Meta concluded that ""even if the nipples in these cases were visible and uncovered, they would not violate our Adult Nudity and Sexual Activity policy."" Meta also acknowledged that in both images, the nipples are “fully obscured.” Given the time elapsed since the content was removed, Meta could not tell the Board what policy or policies all the various automated systems that identified the content as potentially violating were programmed to enforce. In one case, Meta was able to explain that the content was enqueued for review twice by Adult Nudity and Sexual Activity classifiers. Meta also could not explain why the reviewers thought the content violated the Sexual Solicitation policy.
The rationale acknowledges that Meta is “aware that some content reviewers may incorrectly remove content as implicit sexual solicitation (even though it is not) based on an overly-technical application of our internal reviewer guidance.” The Board asked Meta 18 questions, and Meta answered all of them. 7. Public comments The Oversight Board considered 130 public comments related to these cases. Ninety-seven of the comments were submitted from United States & Canada, 19 from Europe, 10 from Asia Pacific & Oceania, one from Latin America and the Caribbean, one from the Middle East and North Africa, one from Sub-Saharan Africa and one from Central & South Asia. The submissions covered the following themes: erroneous removals of content from trans, non-binary, and female users; the unfairness and inequality of gender-based distinctions to determine what forms of nudity are permitted on the platform; confusion over what content is permissible under the Adult Nudity and Sexual Activity, and Sexual Solicitation Community Standards; and the importance of social media for expression in societies where LGBTQI+ rights are being threatened. To read public comments submitted for these cases, please click here. Several comments submitted have not been included as they contained personally identifying information regarding individuals other than the commenter. 8. Oversight Board analysis The Board looked at the question of whether these posts should be restored through three lenses: Meta's content policies, the company's values and its human rights responsibilities. The Board selected these cases as the removal of non-violating content posted by people who identify with marginalized groups affects their freedom of expression. This is particularly significant as Instagram can be an important forum for these groups to build community. These cases demonstrate how enforcement errors may have a disproportionate impact on certain groups and may signify wider issues in policy and enforcement that should be fixed. 8.1 Compliance with Meta’s content policies The Board finds these posts do not violate any Meta content policy. While the Community Guidelines apply to Instagram, Meta also states that “Facebook and Instagram share content policies. Content that is considered violating on Facebook is also considered violating on Instagram.” The Facebook Community Standards provide more detail and are linked in the Guidelines. a. Sexual Solicitation The Sexual Solicitation Community Standard states that implicit sexual solicitation requires two elements: Implicit offer or ask. An implicit offer or ask is defined in the Sexual Solicitation Community Standard as “Content that implicitly or indirectly (typically through providing a method of contact) offers or asks for sexual solicitation.” In Meta’s ""Known Questions,"" which provide additional internal guidance to reviewers, the list of contact information that triggers removal as an implicit offer includes social media profile links and “links to subscription-based websites (for example, OnlyFans.com or Patreon.com).” In these cases, the content provided a link to a platform where the users were hosting a fundraiser to pay for surgery. Because Meta’s internal criteria defining ""implicit offer or ask"" are very broad, this link would technically qualify as an “offer or ask” under Meta’s reviewer guidance despite not violating the public facing standard, which indicates the offer or ask must be for something sexual. Sexually suggestive element. 
The Community Standard provides a list of sexually suggestive elements which includes poses. The Known Questions provide a list, described by Meta as exhaustive, of what are characterized as sexually suggestive poses, including nude “female breasts covered either digitally or by human body parts or objects.” In both images, the Board notes there are breasts covered by human body parts (hands) or objects (tape). In these cases, the content of the posts makes clear that the subjects of the photo identify as trans and non-binary, meaning that the breasts depicted belong to individuals who do not identify as women. The Board also finds the content is not sexually suggestive. On that basis, the second element required to violate the Sexual Solicitation policy - a sexually suggestive element such as a sexual pose (which includes a covered female breast) - was not met. Because the second element was not satisfied, the posts did not violate this standard. Applying the public version of the first element (which indicates the offer/ask must be for something sexual) also indicates that these images would not constitute sexual solicitation. b. Adult Nudity and Sexual Activity The Adult Nudity and Sexual Activity Community Standard states that users should not post images of “uncovered female nipples except in the context of breastfeeding, birth giving and after-birth moments, medical or health context (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest.” Meta’s Known Questions further state that reviewers should allow “imagery of nipples when shared in an explicitly female-to-male transgender, non-binary, or gender-neutral context (e.g., a user indicates such gender identity), regardless of size or shape of breast.” Neither image in these cases violates this Community Standard. First, neither of the images feature uncovered nipples. In both images, the individuals have covered their nipples with either their hands or tape. Second, had the nipples been uncovered, the Board notes the images were shared with accompanying text that made clear the individuals identify as non-binary. This policy is therefore not violated. 8.2 Compliance with Meta’s values The Board finds that the original decisions to remove these posts were inconsistent with Meta's values of ""Voice"" and ""Dignity"" and did not serve the value of ""Safety."" Enforcement errors that disproportionately affect groups facing discrimination pose a serious threat to “Voice” and “Dignity.” While Meta’s human rights arguments discussed “Safety,” particularly related to non-consensual image sharing, sex trafficking, and child abuse, the Board finds these removals did not advance “Safety.” 8.3 Compliance with Meta’s human rights responsibilities Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including discussion of human rights and expression which people may find offensive ( General Comment 34 , para. 11). The right to freedom of expression is guaranteed to all people without discrimination as to “sex” or “other status” ( ICCPR , Article 2, para 1). The Human Rights Committee has confirmed in cases such as Nepomnyashchiy v Russia ( CCPR/C/123/D/2318/2013 ) that the prohibition on discrimination includes discrimination on the grounds of gender identity. The content relates to important social issues. 
For these users, Instagram provides a forum to discuss and represent their gender expression, offering a forum to make connections and derive support. The content may also directly affect the users’ ability to pursue gender confirmation surgery, as both posts explain that one person will undergo top surgery and share a fundraiser to offset the surgery costs. Article 19 requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality ( ICCPR , Article 19, para. 3). Relying on the UNGPs framework, the UN Special Rapporteur on freedom of opinion and expression has called on social media companies to ensure that their content rules are guided by the requirements of Article 19, para. 3, ICCPR ( A/HRC/38/35 , paras. 45 and 70). The Board has adopted this framework to analyze Meta’s policies and enforcement. In this case, the Board finds that Meta has not met its responsibilities to create and enforce policies that align with these standards. The internal criteria applied to remove content under the Sexual Solicitation policy are more expansive than the stated rationale for the policy, with overenforcement consequences that Meta itself has recognized. The Adult Nudity and Sexual Activity Community Standard disproportionately impacts women and LGBTQI+ users and relies on subjective and speculative perceptions of sex and gender that are not practicable when engaging in content moderation at scale. The Board analyzes these shortcomings and recommends Meta begin a comprehensive process to address these problems. I. Legality (clarity and accessibility of the rules) Rules restricting expression must be clear and accessible so that both those responsible for enforcing them, and users, know what is allowed. Both Community Standards considered in these cases fall short of that standard. a. Sexual Solicitation The Board finds that the Sexual Solicitation Community Standard contains overbroad criteria in the internal guidelines provided to reviewers. This poorly tailored guidance contributes to over-enforcement by reviewers and confusion for users. Meta acknowledged this, as it explained to the Board that applying its internal guidance could “lead to over-enforcement"" in cases where the criteria for implicit sexual solicitation are met but it is clear that there was “no intention to solicit sex.” The confusion is reflected in both elements of this policy. In relation to the ‘offer or ask’ component of the Sexual Solicitation Community Standard, the public-facing rules refer to a “method of contact” for the soliciting party. However, the guidance for moderators, the Known Questions, state that a “method of contact” for an implicit ‘offer or ask’ includes social media profile links or links to third-party subscription-based websites such as Patreon. It is not made clear to users that any link to another social media profile, third-party payment platform or fundraising link (such as Patreon or GoFundMe) could mean that their post is treated as a solicitation. This confusion is reflected in the many public comments the Board received from people who did not understand why content including such third-party links was removed or led to their accounts being banned. The second criterion, requiring a sexually suggestive element, is broad and vague, as well as inconsistent with Meta’s Adult Nudity and Sexual Activity policy. 
The public-facing Community Standard includes “sexually suggestive poses” as a sexually suggestive element. The Known Questions then provide a detailed list of “sexually suggestive poses,” which includes being topless and covering breasts with hands or objects. Users will likely not be able to predict that any image with covered breasts is considered a sexually suggestive pose. This confusion is compounded by the fact that the Adult Nudity policy permits topless photos where the nipples are covered. In this respect, content that is considered sexual under one policy is not considered sexual under another policy. In addition to user uncertainty, the fact that reviewers repeatedly reached different outcomes about this content suggests a lack of clarity for moderators on what content should be considered sexual solicitation. As Meta acknowledges, the application of its internal guidance on the two elements of implicit solicitation is removing content that does not seek sexual acts. In the longer term, erroneous removals will likely be best addressed by modifying the scope of this policy. In the short term, however, the Board recommends that Meta revise its internal guidelines to ensure that the criteria reflect the public-facing rules and require a clearer connection between the ""offer or ask"" and the ""sexually suggestive element."" Meta should also provide users with more explanation of what constitutes an ""offer or ask"" for sex and what constitutes a sexually suggestive pose in the public Community Standards. b. Adult Nudity and Sexual Activity The Adult Nudity and Sexual Activity Standard is premised on sex and gender distinctions that are difficult to implement and contains exceptions that are poorly defined. Certain rules in the policy will be confusing to users, who do not know what is allowed. This also causes confusion for moderators, who must make subjective assessments based on unavoidably incomplete information and rapidly apply a rule with numerous factors, exclusions, and presumptions. Despite using language focusing on specific body parts instead of gender (and allowing users to choose from a wide range of gender identities on their profile), most Meta rules do not explain how the company handles content depicting intersex, trans or non-binary people. For example, the policy refers to “male and female genitalia,” “female breasts” and “female nipples,” but it is unclear how these descriptions are applied to people with bodies and identities that may not align with these definitions. Many trans and non-binary people submitted public comments to the Board stating that users do not know if their content is assessed and categorized according to their gender identity, the sex they were assigned at birth, or aspects of their physical appearance. The current rules require human reviewers to quickly assess both a user’s sex, as this policy applies to “female nipples,” and their gender identity, as there are exceptions based on whether the depicted person is non-binary, gender neutral, transgender, or posting in a gender confirmation surgery context. Perceptions of sex and gender require the interpretation of contextual clues and appearance, both of which are subjective determinations conducive to errors. This approach is further complicated by Meta’s “default to female principle,” whereby more restrictive policies applicable to female (as opposed to male) nudity are applied in situations of doubt.
The Known Questions state that where there is no clear context and the person in the image “presents as female OR male-to-female transgender context exists, then default to female nudity and apply the relevant policy.” The restrictions and exceptions to the rules on nipples perceived as female are extensive and confusing. Exceptions range from acts of protest to scenes of childbirth and breastfeeding, to medical and health contexts, including post-mastectomy images and breast cancer awareness. The exceptions are often undefined or poorly defined. The list of exceptions has also grown substantially over time and can be expected to continue to grow as expression evolves. When it comes to women’s breasts, Meta’s Adult Nudity and Sexual Activity policy makes the default assumption that such depictions constitute sexual imagery. Yet the expanding list of exceptions reflects that, under many circumstances recognized in the policy, images of women’s breasts are not sexually suggestive. Even within each exception, numerous questions arise. For example, the gender confirmation surgery exception is of particular importance to trans and non-binary users, but Meta does not explain the scope of this exception in its public-facing rules. This has resulted in many public comments expressing confusion over whether permitted content under the exception could include pre-surgery photos (to create a before-and-after image) and images of trans women who have received breast augmentations. The internal guidelines and Known Questions make clear that this exception is narrower than the public guidance may be construed to imply. Meta’s policies are premised on binary distinctions between male and female, creating challenges when Meta tries to articulate its gender confirmation surgery exception. In its responses to the Board, Meta explained that the gender confirmation surgery exception means that it allows “uncovered female nipples before the individual has top surgery to remove their breasts when the content is shared in an explicitly female-to-male transgender, non-binary, or gender-neutral context.” The rules further state that “Nipples of male-to-female transgender women having undergone a breast augmentation (top surgery) are prohibited, unless scarring over nipple is present.” The internal guidelines on surgical scarring and nipples are even more convoluted. The rules for mastectomies, for example, permit “Instances where the nipple is reconstructed from other tissue or stencilled or tattooed” and “instances where at least one surgically removed breast is present, even if the other bare female nipple is visible.” Even more confusingly, the rules state that “For mastectomies, scarring includes depiction of the area where the removed breast tissue used to be. The actual surgical scar does not need to be visible.” Reviewers will likely struggle to apply rules that require them to rapidly assess sex-specific characteristics of the depicted person to decide whether to apply the female nipple rules, then the person’s gender to determine if some exceptions apply, and then to consider whether the content depicts the precursor or aftermath of a surgical procedure, which surgical procedure, and the extent and nature of the visible scarring, to determine whether other exceptions may apply. The same image of female-presenting nipples would be prohibited if posted by a cisgender woman but permitted if posted by an individual self-identifying as non-binary.
The Board also notes additional nipple-related exceptions based on contexts of protest, childbirth, the period after birth and breastfeeding, which it did not examine here but which must also be assessed and presumably involve additional internal criteria. Given the importance of expressive rights about matters of gender, physical health, childbirth and parenting, the current complex patchwork of exceptions creates undue uncertainty for users and holds the potential for misapplied rules, as evidenced by this case. The lack of clarity for users and moderators inherent in this policy makes the standard unworkable. As further discussed below, the Board believes that Meta should adopt an approach to adult nudity that ensures that all people are treated without discrimination on the basis of sex or gender identity. II. Legitimate aim ICCPR Article 19 provides that when states restrict expression, they may only do so in furtherance of legitimate aims, which are set forth as: “respect for the rights or reputations of others . . . [and] the protection of national security or of public order (ordre public), or of public health and morals.” This decision examines Meta’s rationales for limiting speech in its policies in light of these standards. a. Sexual Solicitation Meta explains that its Sexual Solicitation policy is intended to prevent users from using Facebook or Instagram to facilitate “transactions that may involve trafficking, coercion and non-consensual sexual acts,” which could occur off-platform. This is an example of protecting the rights of others, which is a legitimate aim. b. Adult Nudity and Sexual Activity Meta provided several rationales for particular aspects of its Adult Nudity and Sexual Activity policy, including preventing the spread of non-consensual content, protecting minors where the age of the person is unclear and the fact that “some people in our community may be sensitive to this type of content.” Meta also provided an explanation of its general principles on nudity to the Board. It states that “In drafting our policy, Meta considered (1) the private or sensitive nature of the imagery; (2) whether consent was given in the taking and sharing of nude images; (3) the risk of sexual exploitation; and (4) whether the disclosure of such images could lead to harassment off-platform, particularly in countries where such images may be culturally offensive.” Most of these objectives align with protecting the rights of others. However, Meta’s rationale of protecting “community sensitivity” merits further examination. This rationale has the potential to align with the legitimate aim of “public morals.” That said, the Board notes that the aim of protecting “public morals” has sometimes been improperly invoked by governmental speech regulators to violate human rights, particularly those of members of minority and vulnerable groups. The Human Rights Committee has cautioned that “the concept of morals derives from many social, philosophical and religious traditions; consequently, limitations... for the purpose of protecting morals must be based on principles not deriving exclusively from a single tradition” (Human Rights Committee, General Comment 34). While human rights law does recognize that public morals can constitute a legitimate aim of limitations on free expression for states, and public nudity restrictions exist around the world, Meta emphasizes aims other than “community sensitivities” in the specific context of this case.
Meta stated that “[a]lthough [its] nudity policy is consistent with the protection of public morals [… it] is not ultimately based on this aim because moral standards around nudity differ so widely across cultures and would not be implementable at scale.” For example, in many communities and parts of the world, depictions of uncovered transgender and non-binary breasts might well be considered to transgress community sensitivities. Yet Meta does not restrict such expression. Moreover, the Board is concerned about the known and recurring disproportionate burden on expression that has been experienced by women, transgender, and non-binary people due to Meta’s policies (see below). For these reasons, the Board focuses on the other aims beyond “community sensitivities” that Meta has advanced in examining its human rights responsibilities. It should be noted that some of the reasons Meta provides for its nudity policy reflect a default assumption that women’s breasts are sexually suggestive. The Board received public comments from many users who expressed concern about the presumptive sexualization of women’s, trans and non-binary bodies, when no comparable assumption of sexualization is applied to images of cisgender men (see, e.g., Public Comment 10624 submitted by InternetLab). The Board received many public comments in this case through its normal case outreach processes. As a body committed to offering a measure of accountability to Meta’s user base and key stakeholders, the Board considers comments seriously as a part of its deliberations. As with all cases, we understand that these comments may not be representative of global opinion. The Board appreciates the experiences and expertise shared through comments and continues to take steps to increase the breadth of its outreach to communities that may not currently be participating in this process. Finally, the Board recognizes that Meta may legitimately factor in the importance of preventing certain harms that can have gendered impacts. As noted by the United Nations Special Rapporteur on violence against women, it is ""important to acknowledge that the Internet is being used in a broader environment of widespread and systemic structural discrimination and gender-based violence against women and girls"" (A/HRC/38/47). Further, surveys indicate that ""90 per cent of those victimized by non-consensual digital distribution of intimate images are women"" (A/HRC/38/47). Meta should seek to limit gendered harms, both in the over-enforcement and under-enforcement of nudity prohibitions. III. Necessity and proportionality The Board finds that Meta’s policies, as framed and enforced, capture more content than necessary. Neither policy is proportionate to the issues it is trying to address. a. Sexual Solicitation The Sexual Solicitation policy’s definitions of an implicit ""offer or ask"" and sexually suggestive poses are overbroad and bound to capture a significant amount of content unrelated to sexual solicitation. Meta itself acknowledges the risk of erroneous enforcement, stating it is “aware that some content reviewers may incorrectly remove content as implicit sexual solicitation (even though it is not) based on an overly-technical application of [its] internal reviewer guidance.” Meta continued: Currently, based on our Known Questions, we consider sharing, mentioning, or providing contact information of social or digital identities to be an implicit offer or ask for sexual solicitation.
[…] However, applying this guidance can lead to over-enforcement in cases where, for instance, a model is perceived by a reviewer as posing in a sexually suggestive way (meets the “sexually suggestive element” criterion) and tags the photographer to give them credit for the picture (meets the “offer or ask” criterion). This type of content is non-violating because there is no intention to solicit sex, but it may still be removed (contrary to the policy) because it otherwise meets the two criteria outlined above. UNESCO, in a report discussing education in the digital space, described the risk of mistaken over-enforcement, noting that “strict regulations concerning the sharing of explicit images means that, in some cases, educational materials published online to support learning about the body, or sexual relationships, may be mistaken by moderators for inappropriate, explicit content and therefore removed from generic web platforms.” The Board also notes the many public comments it received discussing erroneous removals under this Standard. For example, ACON (an HIV education NGO in Australia) writes that content that promoted HIV prevention messaging in a sex-positive way and content promoting education workshops have been removed for sexual solicitation. This has resulted in the NGO choosing language that avoids Meta removals, instead of the language best suited to reach its targeted communities (Public Comment 10550). This was echoed by Joanna Williams, a researcher who found that nine out of twelve of the sexual health organizations she interviewed reported being negatively affected by Meta’s moderation in this area (Public Comment 10613). b. Adult Nudity and Sexual Activity In addition to the challenges in establishing enforceable and scalable rules based on Meta’s perception of sex and gender identity, as described above, the Board also finds that Meta’s policies on adult nudity impose disproportionate restrictions on some types of content and expression. The policies mandate the removal of content when less restrictive measures could achieve the stated policy goals. Meta already uses a diverse range of enforcement actions aside from removal, including applying warning screens and age-gating content to only permit users over the age of 18 to view it. Further, it already employs such measures within its Adult Nudity and Sexual Activity policy, including for artistic depictions of sexual activity. Meta may also wish to engage automated and human moderators to make more refined, context-specific determinations of when nude content is actually sexual, regardless of the gender of the body it depicts. Meta could further employ a wide range of policy interventions to limit the visibility of nude content to users who do not wish to see it by enabling greater user control. Meta also has a number of dedicated policies on issues it is also addressing through the nudity policy (such as the Adult Sexual Exploitation and the Child Sexual Exploitation, Abuse and Nudity policies) that could be strengthened. The Board notes that Meta’s enforcement practices reportedly result in a high number of false positives, or mistaken removals of non-violating content. Meta’s last Community Standards Enforcement report for Instagram (April-June 2022) disclosed that 21% of the Adult Nudity and Sexual Activity removals that were appealed led to the content being restored.
The Board also received a high number of public comments concerning the mistaken removal of content under the Adult Nudity Policy. Non-discrimination There is evidence that Meta’s policies and enforcement relating to the Adult Nudity and Sexual Activity policy can lead to disproportionate impacts specifically for women and LGBTQI+ people. These impacts are reflected in both policy and enforcement and limit the ways in which groups can express themselves, resist prejudice and increase their visibility in society. While this case concerned trans and nonbinary users, the enforcement errors in this case stem from an underlying policy that also impacts women, especially as Meta adopts a ‘default to female’ approach for nude content. Therefore, this section considers how Meta’s policies impact both LGBTQI+ people and women. The large volume of public submissions in this case provided many illustrations of the impact these policies can have. The right to freedom of expression is guaranteed to all people without discrimination as to ""sex"" or ""other status"" (Article 2, para. 1, ICCPR). This includes sexual orientation and gender identity ( Toonen v. Australia (1994); A/HRC/19/41 , para. 7). The Human Rights Committee’s jurisprudence notes that “not every differentiation based on the grounds listed in article 26 of the Covenant amounts to discrimination, as long as it is based on reasonable and objective criteria and in pursuit of an aim that is legitimate under the Covenant.” Nepomnyashchiy v Russia, Human Rights Committee, 2018, para. 7.5 ( CCPR/C/123/D/2318/2013 ). The Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) prohibits “any distinction, exclusion or restriction made on the basis of sex which has the effect or purpose of impairing or nullifying the recognition, enjoyment or exercise by women […] on a basis of equality of men and women, of human rights and fundamental freedoms in the political, economic, social, cultural, civil or any other field” (CEDAW, Art. 1). The Board notes that international human rights bodies have not addressed the human rights implications of either permitting or prohibiting consensual adult nudity and its potential discriminatory impacts. The UN Guiding Principles state that ""business enterprises should pay special attention to any particular human rights impacts on individuals from groups or populations that may be at heightened risk of vulnerability or marginalization"" ( UNGPs , Principles 18 and 20). The Special Rapporteur on freedom of expression has urged tech companies to “actively seek and take into account the concerns of communities historically at risk of censorship and discrimination” ( A/HRC/38/35 , para 48). The United Nations Working Group on Business and Human Rights has also recommended that technology companies ensure that “artificial intelligence and automation do not have disproportionate adverse impacts on women’s human rights” ( Gender Dimensions Handbook ). Given the importance of social media platforms as an arena for expression for individuals subject to discrimination, the Board has consistently articulated its expectation that Meta be particularly sensitive to the possibility of wrongful removal of content by, about or depicting members of these groups. 
As the Board noted in the ""Wampum belt"" decision (2021-012-FB-UA) regarding artistic expression from Indigenous persons, it is not sufficient to evaluate the performance of Meta's enforcement of Facebook's Hate Speech policy on the user population as a whole – effects on specific groups must be taken into account. Similarly, in the “Reclaiming Arabic words” case, the Board confirmed that “the over-moderation of speech by users from persecuted minority groups is a serious threat to their freedom of expression” and expressed concern about how exemptions in Meta’s policies (in that case, the Hate Speech policy) were applied to expression from marginalized groups (2022-003-IG-UA). Meta’s choices result in disparate opportunities for expression being made available to women, trans, and gender non-binary people on its platforms. Meta’s current Adult Nudity and Sexual Activity policy treats female breasts and nipples as inherently sexual, and thus subject to prohibition, unless they have been or will be operated on surgically or are in the act of breastfeeding. Instead of taking steps to ensure that censorship does not disproportionately impact some groups, Meta's policy entrenches and perpetuates such impacts on these groups. These cases highlight the disproportionate impact of Meta’s policy choices on people who identify as LGBTQI+, as content was identified multiple times by Adult Nudity and Sexual Activity classifiers despite falling outside the scope of the policy. The Board believes that these cases are emblematic of broader problems. For example, the Haimson et al. study found that transgender people report high levels of content being removed and accounts being deleted, typically due to nudity and sexual content. The enforcement of Meta’s policy choices also has a disproportionate impact on women. A study by Witt, Suzor, and Higgins found that up to 22% of images of women’s bodies that were removed from Instagram were apparent false positives. The differential impact on women’s bodies was also noted in public comments (see, e.g., Public Comment 10616 by Dr. Zahra Stardust). This default position on nudity also has a severe impact in contexts where women may traditionally go bare-chested. The Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has urged that companies engage with indigenous groups around the world to “develop better indicators for taking into account cultural and artistic context when assessing content featuring nudity” (para. 54, A/HRC/38/35). In addition to these policy-related non-discrimination concerns, these cases also raised enforcement-related non-discrimination concerns. Meta should be mindful that content from users who identify with marginalized groups is at greater risk of repeated or malicious reporting, where users report non-violating content in order to burden or harass them. These issues were also raised by several public comments (see, e.g., Public Comment 10596 by GLAAD and Public Comment 10628 by The Human Rights Campaign Foundation). These cases highlighted that multiple reports that generate multiple reviews can increase the likelihood of mistaken removals. Indeed, in this case, most of the user reports resulted in human reviews that found the content to be non-violating, but the content continued to be reported until reviewers mistakenly determined it to be violating and removed it. Meta should seek to develop and implement policies that help ameliorate all these concerns.
These could include more uniform policies with respect to nudity that apply without discrimination on the basis of sex or gender identity. They might also include more contextualized determinations of what content is sexual, as long as such determinations avoid reliance on discriminatory criteria. The Board notes that Meta has a dedicated policy to address non-consensual intimate imagery in its Adult Sexual Exploitation policy. This has been an arena for prioritized enforcement by the company (see, for example, its introduction of automated detection technology to stop non-consensual images being repeatedly posted). When Meta considers changing its approach to managing nudity on the platform, Meta should closely examine the degree to which the Adult Nudity and Sexual Activity policy protects against the sharing of non-consensual imagery and whether changes in the Adult Sexual Exploitation policy or its enforcement may be needed to strengthen its efficacy. The Board also recognizes that Meta may have a legitimate interest in limiting sexual or pornographic content on its platform. But the Board believes that relevant business objectives can and should be met with approaches that treat all users without discrimination. Some of the Board members believe that Meta should seek to reduce the discriminatory impact of its current policies by adopting an adult nudity policy that is not based on differences of sex or gender. They noted the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) provisions on eliminating gendered stereotypes (see, for example, Articles 5 and 10) and Meta’s explicit commitment to CEDAW in its corporate human rights policy. They concluded that, in the context of the nudity policy, these international human rights standards and Meta’s own commitment to non-discrimination support eliminating stereotyped distinctions. This group of members has concluded that shifting towards a policy that is not based on sex or gender differences would be the best way for Meta to uphold its human rights responsibilities as a business, given its corporate values and commitments. These members note that norms and policies should evolve to address conventions that have discriminatory impacts, as many forms of discrimination were or continue to be widespread social convention. There was some disagreement among Board Members on these issues. Some members agreed in principle that Meta should not rely on sex or gender to limit expression but were deeply sceptical of Meta’s capacity to effectively address non-consensual intimate imagery and other potential harms without a sex- and gender-conscious nudity policy. Other members of the Board believe that because applicable human rights principles on non-discrimination allow for distinctions on the grounds of protected characteristics so long as they are “based on reasonable and objective criteria and in pursuit of an aim that is legitimate under the Covenant” (Nepomnyashchiy v Russia, Human Rights Committee, 2018, para. 7.5, CCPR/C/123/D/2318/2013), a sex and gender-neutral nudity policy is not required and could cause or exacerbate other harms. The Board Members who support a sex and gender-neutral adult nudity policy recognize that under international human rights standards as applied to states, distinctions on the grounds of protected characteristics may be made based on reasonable and objective criteria and when they serve a legitimate purpose.
They do not believe that the distinctions within Meta’s nudity policy meet that standard. They further note that, as a business, Meta has made human rights commitments that are inconsistent with an approach that restricts online expression based on the company’s perception of sex and gender. The Adult Nudity and Sexual Activity Community Standard disproportionately impacts women and LGBTQI+ users and relies on subjective and speculative perceptions of sex and gender that are not practicable when engaging in content moderation at scale. Viewed comprehensively, given the confusion around the rules and their enforcement, and the disproportionate and discriminatory impact of Meta’s current Adult Nudity and Sexual Activity policy, the Board recommends that Meta define clear, objective, rights-respecting criteria to govern the entirety of its Adult Nudity and Sexual Activity policy, ensuring treatment of all people that is consistent with international human rights standards, including without discrimination on the basis of sex or gender identity. Meta should first conduct a comprehensive human rights impact assessment to review the implications of the adoption of such criteria, which includes broadly inclusive stakeholder engagement across diverse ideological, geographic and cultural contexts. To the degree that this assessment identifies any potential harms, implementation of the new policy should include a mitigation plan for addressing them. The Board requests a report on the assessment and plan six months from the date of issue of this decision. 9. Oversight Board decision The Oversight Board overturns Meta's original decisions to remove both posts and requires them to be restored. 10. Policy advisory statement Content policy 1. In order to treat all users fairly and provide moderators and the public with a workable standard on nudity, Meta should define clear, objective, rights-respecting criteria to govern the entirety of its Adult Nudity and Sexual Activity policy, ensuring treatment of all people that is consistent with international human rights standards, including without discrimination on the basis of sex or gender identity. Meta should first conduct a comprehensive human rights impact assessment to review the implications of the adoption of such criteria, which includes broadly inclusive stakeholder engagement across diverse ideological, geographic and cultural contexts. To the degree that this assessment identifies any potential harms, implementation of the new policy should include a mitigation plan for addressing them. 2. In order to provide greater clarity to users, Meta should provide more explanation of what constitutes an ""offer or ask"" for sex (including links to third party websites) and what constitutes sexually suggestive poses in the public Community Standards. The Board will consider this recommendation implemented when an explanation of these terms with examples is added to the Sexual Solicitation Community Standard. Enforcement 3.
In order to ensure that Meta’s internal criteria for its Sexual Solicitation policy do not result in the removal of more content than the public-facing policy indicates and so that non-sexual content is not mistakenly removed, Meta should revise its internal reviewer guidance to ensure that the criteria reflect the public-facing rules and require a clearer connection between the ""offer or ask"" and the ""sexually suggestive element."" The Board will consider this implemented when Meta provides the Board with its updated internal guidelines that reflect these revised criteria. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology." bun-ij9kg9e9,Australian Electoral Commission Voting Rules,https://www.oversightboard.com/decision/bun-ij9kg9e9/,"May 9, 2024",2024,,"Elections,Governments,Misinformation",Coordinating harm and publicizing crime,Upheld,Australia,"The Oversight Board has upheld Meta’s decisions to remove two separate Facebook posts containing the same screenshot of information posted on X by the Australian Electoral Commission, ahead of Australia’s Indigenous Voice to Parliament Referendum.",42825,6598,"Multiple Case Decision May 9, 2024 The Oversight Board has upheld Meta’s decisions to remove two separate Facebook posts containing the same screenshot of information posted on X by the Australian Electoral Commission, ahead of Australia’s Indigenous Voice to Parliament Referendum. Upheld FB-0TGD816L Platform Facebook Topic Elections,Governments,Misinformation Standard Coordinating harm and publicizing crime Location Australia Date Published on May 9, 2024 Upheld FB-8ZQ78FZG Platform Facebook Topic Elections,Governments,Misinformation Standard Coordinating harm and publicizing crime Location Australia Date Published on May 9, 2024 The Oversight Board has upheld Meta’s decisions to remove two separate Facebook posts containing the same screenshot of information posted on X by the Australian Electoral Commission (AEC), ahead of Australia’s Indigenous Voice to Parliament Referendum. Both posts violated the rule in the Coordinating Harm and Promoting Crime Community Standard that prohibits content calling for illegal participation in a voting process. These cases show how information out of context can impact people’s right to vote. The Board recommends that Meta more clearly explain its voter and/or census fraud-related rules by publicly providing its definition of “illegal voting.” About the Cases On October 14, 2023, Australia held its Indigenous Voice to Parliament Referendum. Days before, a Facebook user posted in a group a screenshot of an X post from the AEC’s official account.
The information shown included: “If someone votes at two different polling places within their electorate, and places their formal vote in the ballot box at each polling place, their vote is counted.” In addition, another comment taken by the user from the same X thread explained that the secrecy of the ballot prevents the AEC from “knowing which ballot paper belongs to which person,” while also stating “the number of double votes received is incredibly low.” However, the screenshot does not show all the information shared by the AEC, including that voting multiple times is an offence. The caption for the post stated: “vote early, vote often, and vote NO.” A second post shared by a different Facebook user contained the same screenshot but had text overlay with the statement: “so you can vote Multiple times. They are setting us up for a ‘Rigging’ … smash the voting centres … it’s a NO, NO, NO, NO, NO.” The Voice Referendum asked Australians whether the Constitution should be amended to give greater representation in parliament to the Aboriginal and Torres Strait Islander peoples. Voting is compulsory in Australia, with the AEC reporting turnout of about 90% in every election and referendum since 1924. Multiple voting is illegal and a type of electoral fraud. After Meta’s automated systems detected both posts, human reviewers removed them for violating Meta’s Coordinating Harm and Promoting Crime policy. Both users appealed. Key Findings The Board finds that both posts violated the Coordinating Harm and Promoting Crime rule that prohibits content “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process.” In the first case, the phrase “vote often,” in combination with the AEC’s information on counting of multiple votes, is a clear call to engage in illegal voting. Voting twice is a type of “illegal voting,” as per Meta’s internal guidelines. In the second case, the use of the phrase “smash the voting centres,” alongside the rest of the text overlay, can be understood as advocating for people to flood polling places with multiple votes. Neither of the posts benefit from the policy exceptions on condemning, awareness raising, news reporting or humorous or satirical contexts. Specifically, on awareness raising, the posts do not fall under this exception since they go beyond discussing the AEC’s X post and instead decontextualize information to imply the AEC says that voting more than once is allowed. Preventing users from calling on others to engage in voter fraud is a legitimate aim of protecting the right to vote. The Board regards political speech as a vital component of democratic processes. In these cases, both users were directly engaging in the public debate sparked by the referendum but their calls for others to engage in illegal behavior impacted the political rights of people living in Australia, particularly the right to vote. So, while the calls to “vote No” are protected political speech, the phrases “vote often” and “smash the voting centres” are a different matter. The Board finds that Meta was correct to protect democratic processes by preventing voter fraud attempts from circulating on its platforms, given the frequent claims that the Voice Referendum was rigged. The Board acknowledges Meta’s efforts on the Voice Referendum. The company proactively identified potentially violating content under the voting interference rules of the Coordinating Harm and Promoting Crime and Misinformation Community Standards. 
The phrases “double vote” and “vote multiple times” were the keywords that activated the company’s keyword-based detection system in this case. According to Meta, the system is adapted to local contexts. Based on the information shared, the Board notes that initiatives like these should be consistently applied across the globe, in countries undergoing elections, although Meta is encouraged to develop success metrics for assessing how effective keyword-based detection is. Finally, the Board finds that the public-facing rules of the Coordinating Harm and Promoting Crime Community Standard are not clear enough. They do not include what is available to reviewers in Meta’s internal guidelines, namely the company’s definitions of “illegal voting.” Since it is crucial that users can engage on social media to discuss public-interest issues about democratic events, Meta needs to clearly inform users of the rules. The Oversight Board’s Decision The Oversight Board upholds Meta’s decisions in both cases to remove the content. The Board recommends that Meta: More clearly explain the voter and/or census fraud-related policy lines under the Coordinating Harm and Promoting Crime Community Standard, including by publicly providing its definition of “illegal voting.” * Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board upholds Meta’s decisions to remove two separate posts on Facebook containing a screenshot of a post by the Australian Electoral Commission (AEC) on X, previously known as Twitter. The screenshots from the AEC posted by the Facebook users included the following language: “If someone votes at two different polling places within their electorate, and places their formal vote in the ballot box at each polling place, their vote is counted.” In the first Facebook post, the screenshot was accompanied by a caption stating “vote early, vote often, and vote NO.” In the second Facebook post, the screenshot was accompanied by text overlay, which included: “so you can vote Multiple times … they are setting us up for a ‘Rigging’ … smash the voting centres… it’s a NO, NO, NO, NO, NO.” The caption also contained a “stop” emoji followed by the words “Australian Electoral Commission.” The Board finds that both posts violated the Coordinating Harm and Promoting Crime Community Standard, which prohibits “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process, except if shared in condemning, awareness raising, news reporting, or humorous or satirical contexts.” The Board finds that none of the exceptions apply. These cases raise broader concerns around the sharing of decontextualized information against the backdrop of democratic processes, such as elections and referenda, with the potential to impact people’s right to vote. The Board recommends that Meta more clearly explain the voter and/or census fraud-related policy lines under the Coordinating Harm and Promoting Crime Community Standard to clarify what constitutes an “illegal participation in a voting or census process.” 2. Case Description and Background On October 14, 2023, Australia held its Indigenous Voice to Parliament Referendum (hereinafter “Voice Referendum”). Days before the vote, a Facebook user in a group they administered shared a post with a screenshot of an X post from the official account of the Australian Electoral Commission (AEC).
The AEC’s post on X included the following language: “If someone votes at two different polling places within their electorate, and places their formal vote in the ballot box at each polling place, their vote is counted.” The screenshot also shows another comment from the same thread on X, which explains that the secrecy of the ballot prevents the AEC from “knowing which ballot paper belongs to which person,” while also reassuring people that “the number of double votes received is incredibly low.” However, the screenshot does not show all the information shared by the AEC, including that voting multiple times is an offence in Australia. A caption accompanied the first Facebook post, stating: “vote early, vote often, and vote NO”. Another post containing the same screenshot of the AEC’s post on X was shared a day later by a different Facebook user on their profile. It was accompanied by text overlay, which included the following statement: “so you can vote Multiple times. They are setting us up for a ‘Rigging’ ... smash the voting centres ... it's a NO, NO, NO, NO, NO.” The caption also contained a “stop” emoji followed by the words “Australian Electoral Commission.” Both posts were proactively detected by Meta. The phrases “double vote” and “vote multiple times” were the keywords that activated the company’s “keyword-based pipeline initiative” in this case. This keyword-based detection approach is a systematic procedure deployed by Meta to proactively identify “content potentially violating including, but not limited to, content related to voter and census interference.” Both posts were then automatically lined up for human review. Following human review, both posts were removed for violating the Coordinating Harm and Promoting Crime policy. Meta also applied a standard strike and a 30-day feature limit to both user profiles, which prevented the users from posting or commenting in Facebook groups, creating news groups, or joining Messenger rooms. The Board noted the following context in reaching its decisions in these cases: The Voice Referendum asked whether Australia’s Constitution should be amended to recognize the First Peoples of Australia “by establishing a body called the Aboriginal and Torres Strait Islander Voice,” which would have been able to “make representations to the Parliament and the Executive Government of the Commonwealth on matters relating to Aboriginal and Torres Strait Islander peoples.” Relevant background information about the Voice Referendum includes the fact that the Aboriginal and Torres Strait Islander peoples in Australia are among the most socially and economically disadvantaged groups in the country, experiencing high levels of unemployment, lower participation in higher education, poor health outcomes ( both physical and mental health ), far shorter life expectancy than other Australians and high levels of incarceration. Aboriginal and Torres Strait Islander peoples also face discrimination and are disproportionately impacted by gender and police violence. Prime Minister Anthony Albanese campaigned in favor of the constitutional amendment (supporting “Yes”), while Australia's main opposition coalition campaigned against it (supporting “No”). The proposal was rejected nationally and by a majority in all six states, thus failing to secure the double majority needed to amend the Australian Constitution. Voting is compulsory in Australia and the AEC reports that voter turnout has been approximately 90% in every general election and referendum since 1924. 
Multiple voting is a type of electoral fraud both at state and federal levels, based on the Commonwealth Electoral Act 1918 and the Referendum (Machinery Provisions) Act 1984. In response to allegations of multiple voting in the Voice Referendum, the AEC posted a lengthy thread on X, which stated that multiple voting is “very rare” and outlined the measures the AEC has in place to prevent the practice. The AEC explains on its website that to counter double voting, identical certified lists of all voters for a division are issued to each polling place. When electors are issued with a set of ballot papers, their names are marked off the certified list held at that issuing point. If an elector goes to another issuing point to cast another ordinary vote, another copy of the certified list for that division will be marked to signify that the person has been issued with ballot papers. Immediately following voting day, each identical certified list for each division is scanned to check for instances of multiple marks against any names. The AEC then investigates and writes to each elector suspected of multiple voting. The response leads to the issue being resolved for reasons such as “polling official error” or “language or literacy difficulties,” or because the person is “elderly and confused and voted more than once due to forgetting they had already cast a vote.” Cases that cannot be resolved are further investigated by the AEC and may be forwarded to the Australian Federal Police for consideration. In 2019, the AEC testified that multiple voting was a “very small problem”: only 0.03% of the 91.9% turnout were multiple mark-offs, and the majority of multiple voting instances were mistakes by voters who were elderly, had poor literacy skills or had a low comprehension of the electoral process. The AEC reiterated the “negligible” rate of occurrence of multiple voting in Australia in its public comment submission to the Board. According to the AEC, only 13 cases of apparent multiple voting out of a total of 15.5 million votes were referred to the Australian Federal Police for further investigation in the context of the 2022 federal election (PC-25006; see also PC-25007). According to experts consulted by the Board, claims that the Voice Referendum was rigged were frequent, with some posts accompanied by #StopTheSteal and #RiggedReferendum hashtags. Journalistic reporting similarly highlighted that claims of voter fraud in the context of the Voice Referendum were common. Based on social media monitoring tools deployed by experts consulted by the Board, as of February 2024, screenshots of the AEC’s posts on X had been shared on Meta’s platforms over 475 times, receiving thousands of reactions and at least 30,000 views. 3. Oversight Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation.
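The certified-list check the AEC describes above is, in effect, a duplicate-detection pass over the mark-off records from every polling place in a division. The sketch below is illustrative only and assumes hypothetical record structures; it is not AEC software, but it shows the shape of the scan for multiple marks described above.

```python
from collections import defaultdict

def find_suspected_multiple_voters(markoffs):
    """Scan mark-off records from all polling places in a division and return
    electors whose names were marked more than once.

    `markoffs` is an iterable of (elector_id, polling_place) pairs, a hypothetical
    stand-in for the scanned certified lists described above.
    """
    marks = defaultdict(list)
    for elector_id, polling_place in markoffs:
        marks[elector_id].append(polling_place)
    # Electors marked off at more than one issuing point are flagged for follow-up
    # (the AEC writes to them; unresolved cases may go to the Australian Federal Police).
    return {elector: places for elector, places in marks.items() if len(places) > 1}

# Example: elector "E-102" is marked off at two polling places and is flagged.
records = [("E-101", "Town Hall"), ("E-102", "Town Hall"), ("E-102", "North School")]
print(find_suspected_multiple_voters(records))  # {'E-102': ['Town Hall', 'North School']}
```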
When the Board identifies cases that raise similar issues, they may be assigned to a panel as a bundle to deliberate together. A binding decision will be made in respect of each piece of content. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in these cases: I. Oversight Board Decisions II. Meta’s Content Policies Meta’s Coordinating Harm and Promoting Crime policy rationale states that it aims to “prevent and disrupt offline harm and copycat behaviour” by prohibiting content “facilitating, organizing, promoting, or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals.” The policy prohibits users from posting content “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process, except if shared in condemning, awareness raising, news reporting, or humorous or satirical contexts.” There are also types of voter- or census-interference content that can be removed under the policy provided there is additional context to justify it. These include “calls for coordinated interference that would affect an individual’s ability to participate in an official election or census,” as well as “threats to go to an election site to monitor or watch voters or election officials’ activities if combined with a reference to intimidation.” Meta’s Violence and Incitement policy is aimed at preventing “potential offline harm” that may be related to content posted on Meta’s platforms. It prohibits “threats that could lead to death (and other forms of high-severity violence)” as well as “threats to take up weapons or bring weapons to a location or forcibly enter a location” such as “polling places or locations used to count votes or administer an election.” It also prohibits threats of violence “related to voting, voter registration, or the administration or outcome of an election; even if there is no target.” Meta’s Misinformation policy articulates how the company treats different categories of misinformation. Under one of these categories, Meta removes, “in an effort to promote election and census integrity,” “misinformation that is likely to directly contribute to a risk of interference with people’s ability to participate in those [political] processes.” That includes “misinformation about who can vote, qualifications for voting, whether a vote will be counted, and what information or materials must be provided in order to vote.” III. Meta’s Human Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User Submissions In their statements to the Board, both users claimed they were merely sharing information posted by the AEC. The user who made the second post additionally asserted that their post served as a “warning to others” that the “election may be fraudulent” for allowing multiple voting since people “don’t need to show ID” to have their names marked off the list. 6. 
Meta’s Submissions Meta determined that both posts violated the policy line on “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process” of the Coordinating Harm and Promoting Crime Community Standard. Based on Meta’s internal guidelines to content reviewers, Meta’s voting interference policies apply to both elections and “official referenda that are organized by a nationally designated authority.” The term “illegal voting” includes, “but is not limited to” the following: “(a) voting twice; (b) fabricating voting information to vote in a place where you are not eligible; (c) fabricating your voting eligibility; and (d) stealing ballots.” With respect to the first post, Meta emphasized the phrase “vote often” is “usually understood to mean illegally voting more than once in an election.” The company also found that the phrase was not intended as humor or satire, since the user was calling for people to vote “NO,” which in Meta’s view constituted a serious attempt to promote the user’s political preference. The company also shared with the Board that when reviewing content about elections at-scale, it is not always able to gauge the intent of users who post potential satire. With respect to the second post, Meta found the phrase “smash the voting centres” to be violating. The company explained the user’s call “could be read as advocacy to inundate the election with duplicate voting,” which is prohibited by the Coordinating Harm and Promoting Crime policy line against “advocating ... to illegally participate in a voting or census process.” According to Meta, if interpreted literally to mean a call to destroy the voting center buildings, the phrase would violate the Violence and Incitement policy, given that the policy prohibits: (i) threats of high-severity violence against a building that could lead to death or serious injury of any person present at the targeted place; and (ii) threats of violence “related to voting, voter registration, or the administration or outcome of an election; even if there is no target.” Based on Meta’s internal guidance to content reviewers, threats to places must be stated in “explicit terms,” such as “blow up,” “burn down,” “shoot up,” and also generic terms such as “attack,” “ambush” and “destroy” for a piece of content to be considered violating under this policy. Meta published the company’s integrity efforts for the Voice Referendum in a blog post in July 2023. Meta additionally told the Board that it formed a cross-functional team to begin preparations for the referendum in April 2023. The team consisted of Asia Pacific-based teams, as per standard practice for national elections. Meta also formed a virtual Integrity Product Operations Center (IPOC) during the final week of campaigning before the vote to focus on the referendum during a period of likely heightened tension. The IPOC included additional operations teams to quickly respond to escalations and critical risks that arose in the lead up to voting day. Meta did not apply the Crisis Policy Protocol or any other policy levers for the Voice Referendum. 
Meta also explained the company’s “keyword-based pipeline initiative,” which identifies and automatically enqueues potentially violating content containing keywords, whether in text or images like screenshots, for human review through “a specialized digital pipeline that scans for specific keywords.” Meta told the Board that the list includes many words and phrases developed by Meta’s misinformation and regional teams. The primary function of this keyword-based detection system is to “ensure the integrity” of elections and referenda by “systematically identifying and manually reviewing relevant content.” The keyword-based detection system was activated, in this case, because of the virtual IPOC that was set up for the Voice Referendum. Meta implements the initiative globally. It is not confined to specific countries or regions but is adapted to local contexts. According to Meta, the list of keywords is “dynamic,” subject to change and “specific to the nature of each event.” The initiative seeks to actively enforce the following areas of Meta’s Community Standards: (i) the Coordinating Harm and Promoting Crime policy addressing “voter and/or census fraud, including offers to buy or sell votes with cash gifts, and statements advocating or instructing illegal participation in voting or census processes;” and (ii) the Misinformation policy focusing on voter or census interference, including “misinformation about voting or census dates, locations, times, methods, voter qualifications, vote counting and required voting materials.” The keyword-based detection system for the Voice Referendum was not designed to actively enforce other content policies concerning elections or voting, such as those under the Violence and Incitement Community Standard. However, if content flagged by the initiative violates other Community Standards, it is also subjected to enforcement upon human review. With respect to this case content, the phrases “double vote” and “vote multiple times” were the keywords that activated Meta's detection system. The term “double vote” was not directly used in the Facebook posts but appeared in the screenshot of the AEC’s post on X. Any content containing these keywords, whether as text or in images like screenshots, is “automatically flagged and queued for human review to proactively monitor for voter suppression-related speech.” The Board asked Meta 12 questions in writing. The questions related to Meta’s voting interference content policies, the keyword-based detection system and protocols that Meta adopted for moderating content relating to the Voice Referendum. Meta answered all questions. 7. Public Comments The Oversight Board received five public comments that met the terms for submission. Three were submitted from the Asia-Pacific and Oceania region (all from Australia), one from the United States and Canada, and one from Europe. To read the public comments submitted with consent to publish, please click here . The submissions covered the following themes: the sociohistorical context leading to the Voice Referendum, history of voter fraud in Australia, the spread of misleading and decontextualized information during the Voice Referendum, and Meta’s content policies and enforcement practices on misinformation more generally. 8. Oversight Board Analysis The Board examined whether these posts should be removed by analyzing Meta’s content policies, human rights responsibilities and values. 
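As a rough illustration of how a keyword-based pipeline of the kind described above can work, the sketch below matches a configurable keyword list against a post’s caption and any text already extracted from attached images (for example, a screenshot of the AEC’s X post), and enqueues matches for human review. The keyword list, field names and queue are hypothetical; Meta has not published its implementation.

```python
from dataclasses import dataclass, field

# Hypothetical, event-specific keyword list; Meta describes its real list as
# "dynamic" and developed by its misinformation and regional teams.
KEYWORDS = ["double vote", "vote multiple times"]

@dataclass
class Post:
    post_id: str
    caption: str
    screenshot_text: str = ""          # text already extracted from attached images
    matched_keywords: list = field(default_factory=list)

def matches(post: Post, keywords=KEYWORDS):
    """Return the keywords found in the caption or in text from attached images."""
    haystack = f"{post.caption} {post.screenshot_text}".lower()
    return [kw for kw in keywords if kw in haystack]

def enqueue_for_human_review(post: Post, review_queue: list, keywords=KEYWORDS):
    """Flag the post and add it to a review queue; the removal decision stays with
    human reviewers, as in the cases described above."""
    found = matches(post, keywords)
    if found:
        post.matched_keywords = found
        review_queue.append(post)
    return bool(found)

# Example: the keyword appears only in the screenshot text, yet the post is still enqueued.
queue = []
p = Post("fb-1", "vote early, vote often, and vote NO",
         screenshot_text="the number of double votes received is incredibly low")
enqueue_for_human_review(p, queue)
print([(x.post_id, x.matched_keywords) for x in queue])  # [('fb-1', ['double vote'])]
```

As in the cases at hand, matching on text inside a screenshot means a post can be flagged even when its own caption contains none of the listed keywords.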
The Board also assessed the implications of this case for Meta’s broader approach to content governance. The Board selected these cases to examine Meta’s content moderation policies and enforcement practices on misleading or decontextualized voting information and voter fraud, given the historic number of elections in 2024. These cases fall within the Board’s strategic priority of Elections and Civic Space. 8.1 Compliance with Meta’s Content Policies I. Content Rules The Board finds that both posts violated the Coordinating Harm and Promoting Crime policy , which prohibits the advocacy of illegal participation in a voting or census process. The phrase “vote often” in the first post, when shared together with the AEC’s post on X about the counting of multiple votes, is a clear call to engage in such practice. Pursuant to Meta’s internal guidelines to content reviewers, “voting twice” is a form of “illegal voting.” The second post also violates the Coordinating Harm and Promoting Crime policy. It contained a screenshot of the X post and was accompanied with text overlay saying, “so you can vote multiple times.” It also urges people to “smash the voting centres.” The user could be simply attempting to express their frustration with the AEC for supposedly allowing people to “vote multiple times.” The phrase, however, when read together with the rest of the text overlay on the screenshot claiming the AEC was condoning multiple voting and accusing it of setting up people for a “rigging,” can be more reasonably understood as advocating for people to flood the polling place with multiple votes. In the context of the Australian elections, where voting is mandatory and the turnout is over 90%, a call for people to vote once is an unlikely interpretation of “smash the voting centres,” especially when this call follows a claim that people “can vote multiple times.” This is further supported by the user’s request for people to repeatedly vote “No” (“NO, NO, NO, NO, NO”). When read as a whole and in the context of the Australian elections, the post thus constitutes a call to vote twice, which amounts to “illegal voting,” prohibited by the Coordinating Harm and Promoting Crime policy. The Board recognizes that while it is a possibility that the posts could have been made satirically, their satirical intent is not explicit. The Board does not believe that the posts were implicitly satirical based on the language of the captions, and the text overlay on the images. While the degree of certainty in the call to action is different for both posts, each of them includes a plea to engage in multiple – hence “illegal” – voting. Given the risks associated with voter fraud attempts in electoral contexts, the Board believes that Meta’s humor or satire exception should only apply, in such circumstances, to content that is explicitly humorous. Therefore, neither of the posts qualifies for this exception. The posts also do not qualify for the awareness-raising exception under the Coordinating Harm and Promoting Crime policy. The screenshots and much of the user-created content were designed to call attention to the possibility of voter fraud based on the AEC’s statement. However, they went beyond and actively encouraged others to illegally participate in the Voice Referendum through multiple voting, rather than just discussing the AEC’s posts on X. The posts did not contain additional context provided by the AEC, in the same thread on X, that voting multiple times is an offence in Australia. 
Therefore, rather than raising awareness around the possibility of multiple voting, both posts decontextualized the AEC’s communication to imply that the AEC is saying that it is permissible to vote more than once. Unlike Meta, the Board does not believe a more literal reading of the word “smash” (meaning the destruction of buildings) is applicable in this case, given the lack of signals pointing in that direction (e.g., context of conflict or heightened tensions with widespread circulation of content directly inciting violence). Therefore, the Board concludes that the second post does not violate Meta’s Violence and Incitement policy. The Board also assessed both pieces of content against Meta’s Misinformation policy, given that they decontextualize the AEC’s communication. The Board concluded, however, that the Coordinating Harm and Promoting Crime Community Standard is the applicable policy in this case because both users are encouraging others to engage in voter fraud. II. Enforcement Action The Board acknowledges Meta’s integrity efforts for the Voice Referendum, including the keyword-based detection system adopted by Meta. The company explained the system was deployed for proactively identifying potentially violating content under the voting interference policy lines of the Coordinating Harm and Promoting Crime and Misinformation Community Standards. According to Meta, the keyword-based detection system is adapted to local contexts and contains market-specific terms. Based on the information Meta shared with the Board about how the initiative works, the Board appreciates that the keyword-based detection system was deployed and seems to have worked in this case. Initiatives like this one need to be consistently applied across the globe, in all countries undergoing elections and other democratic processes. The Board also believes that this initiative should encompass voting interference and related policies under the Violence and Incitement Community Standard. Given the limitations of keyword-based approaches to the detection of harmful content, the Board will continue to evaluate the efficacy of Meta’s system in other election-related cases. In this regard, the Board encourages Meta to develop success metrics for assessing how effective the keyword-based detection system is, along with other election integrity efforts, in identifying potentially violating content under election-relevant policies. This would be in line with the Board’s recommendation in the Brazilian General’s Speech decision for Meta to “develop a framework for evaluating the company’s election integrity efforts.” 8.2 Compliance with Meta’s Human-Rights Responsibilities Freedom of Expression (Article 19 ICCPR) Article 19 of the International Covenant on Civil and Political Rights (ICCPR) provides for broad protection for expression of “all kinds.” This right includes the ""freedom to seek, receive and impart information and ideas of all kinds.” The UN Human Rights Committee has highlighted that the value of expression is particularly high when it discusses political issues, candidates and elected representatives (General Comment No. 34, para. 13). This includes expression that is “deeply offensive,” critical of public institutions and opinions that may be erroneous (General Comment No. 34, para. 11, 38 and 49). The UN Human Rights Committee has emphasized that freedom of expression is essential for the conduct of public affairs and the effective exercise of the right to vote (General Comment No. 34, para. 20). 
The Committee further states that the free communication of information and ideas about public and political issues among citizens is essential for the enjoyment of the right to take part in the conduct of public affairs and the right to vote under Article 25 ICCPR (General Comment No. 25, para. 25). In this case, both users were engaging with the referendum, a matter of public interest, to share their views on what the outcome should be, thereby directly participating in the public debate triggered by the referendum process. When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As in previous cases (e.g., Armenians in Azerbaijan, Armenian Prisoners of War Video), the Board agrees with the UN Special Rapporteur on freedom of expression that, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” (A/74/486, para. 41). In doing so, the Board attempts to be sensitive to ways in which the human rights responsibilities of a private social media company may differ from those of a government implementing its human rights obligations. I. Legality (Clarity and Accessibility of the Rules) Rules restricting expression should be clearly defined and communicated, both to those enforcing the rules and those impacted by them (General Comment No. 34, para. 25). Users should be able to predict the consequences of posting content on Facebook and Instagram. The UN Special Rapporteur on freedom of expression has highlighted the need for “clarity and specificity” in content-moderation policies (A/HRC/38/35, para. 46). The public-facing language of the Coordinating Harm and Promoting Crime Community Standard is not sufficiently clear for users. Given the importance of users being able to engage on social media to discuss issues of public interest in the context of democratic events, Meta needs to make sure that users are clearly informed of the applicable rules. This will help users anticipate whether content they are posting is potentially violating. In this regard, the Board finds that the clarification in the internal guidelines of what constitutes “illegal voting” should be incorporated into the public-facing Coordinating Harm and Promoting Crime Community Standard. II. Legitimate Aim Restrictions on freedom of expression must pursue a legitimate aim (Article 19, para. 3, ICCPR), including the protection of “public order” and the “rights of others.” The Coordinating Harm and Promoting Crime policy aims to “prevent and disrupt offline harm and copycat behaviour” by removing content “facilitating, organizing, promoting or admitting to certain criminal or harmful activities.” Protecting the right to vote and to take part in the conduct of public affairs is an aim that Meta’s Coordinating Harm and Promoting Crime policy can legitimately pursue, especially in the context of elections (Article 25, ICCPR). The Board finds that preventing users from calling on others to engage in voter fraud serves the legitimate aim of protecting the right to vote. General Comment No. 
25 on the right to vote sets forth that “there should be independent scrutiny of the voting and counting process” so that “electors have confidence in the security of the ballot and the counting of votes,” (para. 20). Additionally, “the principle of one person, one vote must apply,” which means that “the vote of one elector should be equal to the vote of another” (para. 21). The Board also notes that the policy helps preserve “public order” by protecting polling places and democratic processes from voter interference more broadly. III. Necessity and Proportionality Under ICCPR Article 19, para. 3, necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). As part of their human rights responsibilities, social media companies should consider a range of possible responses to problematic content beyond deletion to ensure restrictions are narrowly tailored (A/74/486, para. 51). The Board finds that Meta’s removal of both posts from Facebook complied with the requirements of necessity and proportionality. The Board notes the content was posted days before an upcoming referendum that marked a significant constitutional moment in Australia, especially for the Aboriginal and Torres Strait Islander peoples. On the one hand, political speech is a vital component of democratic processes and both users were directly engaging in the public debate sparked by the referendum. On the other hand, the users’ calls for others to engage in illegal behavior in the context of the referendum impacted the political rights of people living in Australia, particularly the right to vote and to take part in the conduct of public affairs. Applying these standards to the case content, the calls to “vote No” in both posts are clearly protected political speech. However, the phrase “vote often” in the first post and the phrase “smash the voting centres” in the second post are a different matter, given that they actively encouraged others to illegally participate in the Voice Referendum through multiple voting, as explained in more detail under Section 8.1 above. Experts consulted by the Board noted that claims the Referendum was rigged were frequent, while journalistic reporting highlighted that claims of voter fraud were common. Therefore, the Board finds that Meta was correct to err on the side of protecting democratic processes by preventing voter fraud attempts from circulating on Meta’s platforms (General Comment No. 25). The circulation of voter fraud-related content may create an environment where the integrity of electoral processes is at risk. However, a minority of the Board find that the removal of the post urging people to “smash the voting centres” does not pass the necessity and proportionality test, given Meta’s failure to establish a “direct and immediate connection between the expression and the threat,” (General Comment No. 34, para. 35). For this minority, because the user’s call for people to “smash the voting centres” is an ambiguous call for people to vote multiple times, the connection with the voter fraud threat was not direct and immediate. The Board believes that Meta’s approach of expecting clarity from users when enforcing exceptions is a sensible one for assessing whether content was shared in a condemning, awareness-raising, news-reporting, or humorous or satirical context. 
There was no clear indication in the posts under analysis by the Board that the phrases “vote often” and “smash the voting centres” were meant rhetorically, instead of clearly advocating for multiple voting – an action that put the integrity of the Voice Referendum at risk. Therefore, both removals were necessary and proportional responses from Meta. Additionally, a minority of the Board is not convinced that content removal is the least intrusive means available to Meta to address voter fraud-related speech, and finds that Meta’s failure to demonstrate otherwise does not satisfy the requirement of necessity and proportionality. The Special Rapporteur has stated “just as States should evaluate whether a limitation on speech is the least restrictive approach, so too should companies carry out this kind of evaluation. And, in carrying out the evaluation, companies should bear the burden of publicly demonstrating necessity and proportionality,” ( A/74/486 , para. 51). For this minority, Meta should have publicly demonstrated why removal of such posts is the least intrusive means of the many tools it has at its disposal to avert likely near-term harms, such as voter fraud. If it cannot provide such a justification, then Meta should be transparent in acknowledging that its speech rules depart from UN human rights standards and provide a public justification for doing so. The minority believe that the Board would then be positioned to consider Meta’s public justification and a public dialogue would ensue without risking the distortion of existing UN human rights standards. 9. Oversight Board Decision The Oversight Board upholds Meta’s decisions to take down both pieces of content. 10. Recommendations Content Policy 1. To ensure users are fully informed about the types of content prohibited under the “Voter and/or census fraud” section of the Coordinating Harm and Promoting Crime Community Standard, Meta should incorporate its definition of the term “illegal voting” into the public-facing language of the policy prohibiting: “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process, except if shared in a condemning, awareness raising, news reporting, or humorous or satirical contexts.” The Board will consider this recommendation implemented when Meta updates its public-facing Coordinating Harm and Promoting Crime Community Standard to reflect the change. *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by the majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Return to Case Decisions and Policy Advisory Opinions" bun-jexzgiy5,Reports on the War in Gaza,https://www.oversightboard.com/decision/bun-jexzgiy5/,"June 4, 2024",2024,,"Freedom of expression,Journalism,War and conflict",Dangerous individuals and organizations,Overturned,"Israel,Palestinian Territories","In these two summary decisions, the Board reviewed two posts reporting on the war in Gaza.",5118,751,"Multiple Case Decision June 4, 2024 In these two summary decisions, the Board reviewed two posts reporting on the war in Gaza. 
Overturned FB-VXKB1TZ5 Platform Facebook Topic Freedom of expression,Journalism,War and conflict Standard Dangerous individuals and organizations Location Israel,Palestinian Territories Date Published on June 4, 2024 Overturned IG-50OFM0LV Platform Instagram Topic Freedom of expression,Journalism,War and conflict Standard Dangerous individuals and organizations Location Israel,Palestinian Territories Date Published on June 4, 2024 Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary In these two summary decisions, the Board reviewed two posts reporting on the war in Gaza. After the Board brought these two appeals to Meta’s attention, the company reversed its original decisions and restored both posts. About the Cases In the first case, an Instagram user posted a short video in February 2024 from a Channel 4 News (UK) report on the killing of a Palestinian child. The video has a caption that expressly indicates it does not promote dangerous organizations and individuals and that it is a story about a Palestinian family and humanitarian workers. In the second case, a Facebook user posted a video in January 2024 from Al-Jazeera reporting on the war in Gaza. The clip contains reporting and analysis on hostage release negotiations between Israel and Hamas. Meta originally removed the posts from Instagram and Facebook, respectively, citing its Dangerous Organizations and Individuals policy. Under this policy, the company removes “glorification,” “support” and “representation” of designated entities, their leaders, founders or prominent members, and unclear references to them. In their appeals to the Board, both users stated that the videos were reports from media outlets and did not violate Meta’s Community Standards. After the Board brought these two cases to Meta’s attention, the company determined that the posts did not violate its policies and restored both pieces of content to its platforms. Board Authority and Scope The Board has authority to review Meta’s decisions following appeals from the users whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Cases These cases highlight errors in enforcement of an exception to the Dangerous Organizations and Individuals policy that allows content “reporting on, neutrally discussing or condemning dangerous organizations and individuals and their activities,” in order to safeguard a space for “social and political discourse.” This kind of error undermines genuine efforts to report on and raise awareness about the ongoing conflict in Gaza and other conflict-affected regions. 
The Board has issued several recommendations to improve enforcement of Meta’s Dangerous Organizations and Individuals policy. In a policy advisory opinion, the Board asked Meta to “explain the methods it uses to assess the accuracy of human review and the performance of automated systems in the enforcement of its Dangerous Organizations and Individuals policy,” (Referring to Designated Dangerous Individuals as “Shaheed,” recommendation no. 6). The Board has also urged Meta to “assess the accuracy of reviewers enforcing the reporting allowance under the Dangerous Organizations and Individuals policy in order to identify systemic issues causing enforcement errors,” a recommendation for which Meta reported implementation but did not publish information to demonstrate it (Mention of the Taliban in News Reporting, recommendation no. 5). Furthermore, the Board has recommended that Meta “add criteria and illustrative examples to Meta’s Dangerous Organizations and Individuals policy to increase understanding of exceptions, specifically around neutral discussion and news reporting,” a recommendation for which Meta demonstrated implementation through published information (Shared Al Jazeera Post, recommendation no. 1). Decision The Board overturns Meta’s original decisions to remove the two pieces of content. The Board acknowledges Meta’s corrections of its initial errors once the Board brought these cases to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-kj6lo858,Greek 2023 Elections Campaign,https://www.oversightboard.com/decision/bun-kj6lo858/,"March 28, 2024",2024,,Elections,Dangerous individuals and organizations,Upheld,"Australia,Greece","The Oversight Board reviewed two Facebook posts together, both shared around the time of Greece’s June 2023 General Election. The Board has upheld Meta’s decisions to remove the content in both cases for violating the company’s Dangerous Organizations and Individuals policy.",45976,7033,"Multiple Case Decision March 28, 2024 The Oversight Board reviewed two Facebook posts together, both shared around the time of Greece’s June 2023 General Election. The Board has upheld Meta’s decisions to remove the content in both cases for violating the company’s Dangerous Organizations and Individuals policy. Upheld FB-368KE54E Platform Facebook Topic Elections Standard Dangerous individuals and organizations Location Australia,Greece Date Published on March 28, 2024 Upheld FB-3SNBY3Q2 Platform Facebook Topic Elections Standard Dangerous individuals and organizations Location Greece Date Published on March 28, 2024 Greek Translation To read this decision in Greek, click here. In reviewing two cases about Facebook content posted around the time of the June 2023 General Election in Greece, the Board has upheld Meta’s removal of both posts. Both were removed for violating the company’s Dangerous Organizations and Individuals policy. The first case involved an electoral leaflet that included a statement in which a lawful candidate aligned himself with a designated hate figure, while in the second case an image of a designated hate entity’s logo was shared. The majority of the Board find these removals to be consistent with Meta’s human rights responsibilities. 
However, the Board recommends that Meta clarify the scope of the policy’s exception allowing content to be shared in the context of “social and political discourse” during elections. About the Cases These two cases involve content posted on Facebook by different users around the time of the June 2023 General Election in Greece. In the first case, a candidate for the Spartans party in Greece posted an image of their electoral leaflet. On it, there is a statement that Mr. Ilias Kasidiaris – a Greek politician sentenced to 13 years in prison for directing the criminal activities and hate crimes of Golden Dawn – supports the Spartans. Mr. Kasidiaris and other members of the far-right Golden Dawn party had been persecuting migrants, refugees and other minority groups in Greece before the party was declared a criminal organization in 2020. Ahead of his sentencing in 2020, Mr. Kasidiaris founded a new political party called National Party – Greeks. Later, in May 2023, the Greek Supreme Court disqualified National Party – Greeks from running in the 2023 elections since, under Greek law, parties with convicted leaders are banned from participating. Although Mr. Kasidiaris has been banned from Facebook since 2013 for hate speech, he uses other social media platforms in prison. This is how he declared his support for the Spartans about a couple of weeks before the June election. The Spartans, which won 12 seats, acknowledged the part that Mr. Kasidiaris played in driving its party’s success. In the second case, another Facebook user posted an image of the logo of National Party – Greeks, which also includes the Greek word for “Spartans.” Golden Dawn, National Party – Greeks and Mr. Kasidiaris are designated as Tier 1 hate organizations and a Tier 1 hate figure respectively, under Meta’s Dangerous Organizations and Individuals policy. Both posts were reported to Meta. The company determined separately that both posts violated its Dangerous Organizations and Individuals Community Standard, removed the content and applied a severe strike and 30-day restriction to both accounts. The two different Facebook users who posted the content appealed to Meta, but the company again found it to be violating. Both users then appealed separately to the Board. Key Findings First Case The majority of the Board find the post violated the Dangerous Organizations and Individuals policy (as written in June 2023) because the user broke the rule that prohibits “praise” of a designated entity. He did this by “ideologically aligning” himself with Mr. Kasidiaris, who is designated by Meta as a hate figure. As this rule included an explicit example of ideological alignment, this would have been sufficiently clear to users and content moderators. Even after the latest policy update, this post would still fall under the prohibition on “positive references” to Mr. Kasidiaris. Furthermore, the majority of Board Members note that removing this post did not infringe on the public’s right to know about this endorsement. The public had plentiful other opportunities, including in local and regional media, to learn about this expression of support by Mr. Kasidiaris for the Spartans party. A minority, however, find that violation of the rule on ideological alignment was not directly obvious because Mr. Kasidiaris was endorsing the lawful candidate, not vice versa. 
These Board Members also believe the exception for “newsworthiness” should have been applied to keep this content on Facebook so that voters could have access to the fullest possible information on which to make their decisions. Second Case The majority of the Board find the image violated the Dangerous Organizations and Individuals policy because it shared a symbol of National Party – Greeks, a designated organization, and should have been removed. No context was provided by the user to allow for the exceptions on “reporting on, neutrally discussing or condemning” to be applied. However, there are also Board Members in the minority who believe simply sharing logos associated with a designated entity, when there are no other violations or context of harmful content, should be allowed. Overall Concerns In the Board’s view, the policy exception for “social and political discourse” about designated entities during elections needs to be made clearer publicly. The Board also remains concerned about the lack of transparency around Meta’s designation of hate entities, which makes it challenging for users to understand which organizations or individuals they are allowed to align with ideologically or whose symbols they can share. The Oversight Board’s Decision The Oversight Board has upheld Meta’s decisions to remove both posts. The Board recommends that Meta: *Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board reviewed together two Facebook posts concerning the June 2023 General Election in Greece. The first case involves a Greek electoral candidate’s post, in which he shared details about his electoral campaign and an image of his electoral leaflet that featured an endorsement by a politician designated as a hate figure under Meta’s Dangerous Organizations and Individuals Community Standard. The second case concerns a post sharing the logo of a Greek party, National Party – Greeks, which is also a designated entity, with the word “Spartans” in Greek as part of the image. Meta removed both for violating Meta’s Dangerous Organizations and Individuals Community Standard. The majority of the Board uphold Meta’s decisions to remove the content in both cases, finding that these removals conformed with Meta’s policies and human rights responsibilities. The Board recommends that Meta clarify the scope of its new “social and political discourse” exception to its Dangerous Organizations and Individuals Community Standard in elections. 2. Case Description and Background These cases involve content posted on Facebook by different users in Greece around the time of the June 2023 General Election , the second set of elections to take place in the country that year following the failure of any party to secure a majority in elections in May. In the first case, a Facebook user, who was a candidate for the Spartans party in Greece, posted an image of his electoral leaflet, containing his photo and name, along with a caption in Greek describing his campaign’s progress ahead of the elections, including his preparations and engagement with the public. The leaflet included a statement that Mr. Ilias Kasidiaris supports the Spartans. Mr. Kasidiaris, a Greek politician, was sentenced to 13 years in prison for directing the activities of Golden Dawn. Golden Dawn was declared a criminal organization in 2020 for its responsibility for hate crimes, including the murder of a Greek rap singer. 
In 2013, two Golden Dawn members were found guilty of murdering a Pakistani migrant worker. Mr. Kasidiaris and other Golden Dawn members had been actively engaged in persecuting migrants, refugees and other minority and vulnerable groups. During a 2012 Golden Dawn rally, Mr. Kasidiaris called the Roma community “human trash” and asked his supporters to “fight [...] if they wanted their area to become clean,” (see public comments, e.g., PC-20008 from ACTROM - Action for and from the Roma). Before being sentenced in 2020, Mr. Kasidiaris founded a new political party called National Party – Greeks. On May 2, 2023, the Greek Supreme Court disqualified National Party – Greeks from running in the 2023 general elections in light of recently adopted amendments to the Greek constitution that ban parties with convicted leaders from participating in elections. Several international and regional media outlets reported that ahead of the June 2023 elections, Mr. Kasidiaris had declared his support for the Spartans from prison using his social media accounts. Mr. Kasidiaris, who was banned from Facebook in 2013 for hate speech, mainly uses other social platforms now. In the second case, a different Facebook user posted an image of the National Party – Greeks’ logo, which also includes the Greek word that translates as “Spartans.” The Spartans party was founded in 2017 by Vasilis Stigkas and, according to the European Center for Populism Studies, promotes a far-right ideology and is a successor to the Golden Dawn party. The Spartans did not run in the May 2023 elections, but the party did apply to participate in the second set of elections in June that year. Greek law requires political parties to submit applications in order to participate in the national parliamentary elections, which subsequently have to be certified by a court. On June 8, 2023, the Greek Supreme Court issued a decision allowing 26 parties, four alliances and two independent candidates to participate in the June 2023 election, including Spartans. Mr. Stigkas, who won one of 12 seats for the Spartans party (4.65%), stated that Mr. Kasidiaris’ support “drove their success.” Civic space in Greece has been marked by increasing threats and attacks perpetrated by extremist groups and private individuals, who target the human rights of refugees, migrants, LGBTQIA+ communities and religious minorities. Scholars of Greek politics, human rights defenders and local NGOs are concerned that far-right groups, including those affiliated with Golden Dawn, use mainstream social media platforms to spread misinformation and hate speech, actively operating online and offline, with their impact extending beyond what is visible on platforms such as Facebook (see public comments, e.g., PC-20017 from Far Right Analysis Network). Freedom House’s annual Freedom in the World (2023) report ranked Greece as Free with a score of 86/100, noting that the media environment remains highly free and non-governmental organizations generally operate without interference from the authorities. Still, recent studies published by the Reuters Institute for the Study of Journalism, the International Press Institute and the Incubator for Media Education and Development highlight a significant decline in trust in Greek media, in particular in journalists and broadcasting media. This is largely due to concerns about political and business influence on journalism, coupled with the increasing digital spread of media. 
These studies also reveal concerns about manipulation of information, censorship and the decrease of media independence. Both posts were reported to Meta, which, after human review, determined the content in both cases violated Facebook’s Dangerous Organizations and Individuals Community Standard . It applied a severe strike and 30-day restriction to both accounts, preventing them from using live video and ad products, without suspending the accounts. Both Facebook users who posted the content appealed, but Meta again found the content to be violating. The two users then separately appealed to the Board. 3. Oversight Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When the Board identifies cases that raise similar issues, they may be assigned to a panel as a bundle to deliberate together. A binding decision will be made with regard to each piece of content. The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions II. Meta’s Content Policies The policy rationale for the Dangerous Organizations and Individuals Community Standard explains that in “an effort to prevent and disrupt real-world harm,” Meta does not “allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence” on its platforms. Meta assesses “these entities based on their behavior both online and offline – most significantly, their ties to violence.” According to the policy rationale, organizations and individuals designated under Tier 1 of the Dangerous Organizations and Individuals Community Standard fall into three categories: terrorist organizations, criminal organizations and hate entities. 
Tier 1 focuses on entities that engage in serious offline harms, including “organizing or advocating for violence against civilians, repeatedly dehumanizing or advocating for harm against people based on protected characteristics, or engaging in systematic criminal operations.” The policy rationale notes that Tier 1 designations result in the most extensive enforcement as Meta believes these entities have “the most direct ties to offline harm.” Meta defines a “hate entity” as an “organization or individual that spreads and encourages hate against others based on their protected characteristics.” Meta states that the entity’s activities are characterized “by at least some of the following behaviors: violence, threatening rhetoric, or dangerous forms of harassment targeting people based on their protected characteristics; repeated use of hate speech; representation of hate ideologies or other designated hate entities; and/or glorification or support of other designated hate entities or hate ideologies.” Under Tier 1 of the Dangerous Organizations and Individuals policy as in force in June 2023, Meta did not allow “leaders or prominent members of these organizations to have a presence on the platform, symbols that represent them to be used on the platform or content that praises them or their acts.” At that time, “praise” was defined as any of the following: “speak positively about a designated entity or event” or “aligning oneself ideologically with a designated entity or event.” Following December 2023 updates to the Dangerous Organizations and Individuals policy, the company now removes “glorification, support and representation of Tier 1 entities, their leaders, founders or prominent members, as well as unclear references to them.” This includes “unclear humor, captionless or positive references that do not glorify the designated entity’s violence or hate.” Meta requires users to clearly state their intent when sharing content that discusses designated entities or their activities. The Dangerous Organizations and Individuals policy allows users to report on, neutrally discuss or condemn designated organizations or individuals or their activities. Meta updated this exception in August 2023 to clarify that users may share content referencing dangerous organizations and individuals or their activities in the context of “social and political discourse.” As Meta publicly announced in a newsroom blog post , the updated “social and political discourse” exception includes content shared in the context of elections. The Board’s analysis of the content policies was also informed by Meta’s value of voice, which the company describes as “paramount,” as well as its value of safety. Newsworthiness Allowance Meta defines the newsworthiness allowance as a general policy exception that can be applied across all policy areas within the Community Standards, including to the Dangerous Organizations and Individuals policy. It allows otherwise violating content to be kept on the platform if the public interest value in doing so outweighs the risk of harm. According to Meta, such assessments are made only in “rare cases,” following escalation to its Content Policy team. This team assesses whether the content in question surfaces an imminent threat to public health or safety or gives voice to perspectives currently being debated as part of a political process. This assessment considers country-specific circumstances, including whether elections are underway. 
While the speaker's identity is a relevant consideration, the allowance is not limited to content posted by news outlets. III. Meta’s Human Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User Submissions The author of each post in these two cases appealed Meta’s decision to remove their content to the Board. In their submission to the Board, the user in the first case stated they were a candidate from a legitimate Greek political party participating in the Greek parliamentary elections and noted that as a result of the strike applied to their account, they were unable to manage their Facebook page. The user in the second case claimed they shared the logo of the Spartans party, expressing their surprise about the removal of their post. 6. Meta’s Submissions Meta told the Board that the decisions to remove the content in both cases were based on its Dangerous Organizations and Individuals Community Standard . Meta informed the Board that Golden Dawn, National Party – Greeks and Mr. Kasidiaris are designated as Tier 1 Hate Organizations and as a Tier 1 Hate Figure respectively. The designation of National Party – Greeks occurred on May 5, 2023. In response to the Board’s questions, Meta noted that the company designates entities in an independent process based on a set of designation signals. Meta stated that the Facebook user in the first case praised a designated entity by speaking positively about Mr. Kasidiaris. Expressing “ideological alignment” was listed as an example of prohibited praise. Meta explained that the post’s caption indicated the user was distributing leaflets in support of their own parliamentary campaign and their own party, the Spartans. However, the leaflet also stated that Mr. Kasidiaris “supports the party Spartans,” explicitly highlighting that Mr. Kasidiaris, a designated individual, had endorsed the user’s political party. For Meta, this user publicly aligned themselves with Mr. Kasidiaris by promoting the latter’s endorsement. Meta informed the Board that following the December 2023 update to the Dangerous Organizations and Individuals policy, the post in the first case would violate the rule prohibiting “positive references that do not glorify the designated entity's violence or hate.” The post did not contain any explicit glorification of Mr. Kasidiaris or his violent or hateful activities. In the second case, Meta considered the sharing of the National Party – Greeks’ logo as praise for the party, which is a designated entity, without any accompanying explanatory caption, so it removed the content. Meta informed the Board that following the December 2023 update to the Dangerous Organizations and Individuals policy, the post in the second case would be removed as the user shared a reference (a symbol) of the National Party – Greeks without an accompanying explanatory caption, although it did not contain any explicit glorification of Mr. Kasidiaris or his violent or hateful activities. 
Meta found that neither post would have benefited from the Dangerous Organizations and Individuals exception in force at the time in June 2023 as neither user clearly indicated their intent to “report on, neutrally discuss or condemn” a designated entity or their actions. According to Meta, this remained the case after the August 2023 changes to that exception, which reframed the exception as permitting “social and political discourse.” In response to the Board’s questions, Meta stated that the “social and political discourse” exception was introduced to permit some types of “content containing explicit context relating to a set of defined categories such as elections,” which it would have previously removed under the policy. When a designated entity is officially registered and enrolled in a formal electoral process, Meta was concerned that by removing all praise or references of the entity, this would unduly restrict people’s ability to discuss the election and candidates. However, the exception was never intended to encompass substantive support such as providing tangible operational or strategic advantage to a designated entity by distributing official campaign material, official propaganda or allowing official channels of communication on their behalf. In response to a question from the Board, Meta explained the social and political discourse exception attempts to strike a balance between allowing discussion of designated entities participating in an election while preserving safety by removing substantive support for or glorification of these entities. Meta noted that it intentionally focused the allowance on entities that are registered and formally enrolled in the election process. This is because the allowance aims to permit discussion of candidates who are running for office, while removing glorification of a designated entity’s hate or violence or providing any substantive support to a designated entity. Meta added that “the purpose of creating this allowance was to enable users to express their opinion about their electoral preferences if the designated entity was running in elections, not to allow designated entities to circumvent existing electoral processes and the company’s enforcement to share their agendas.” For the second case, Meta concluded that the social and political discourse exception under its updated policy would not apply because sharing a symbol or logo of National Party – Greeks with text that identifies Spartans, without additional commentary (e.g., a caption condemning or neutrally discussing National Party – Greeks), does not clearly indicate the user’s intent. Moreover, the exception also did not apply in the second case as the National Party – Greeks, a designated entity, was disqualified from participating in the Greek elections. The Board asked Meta five questions in writing. Questions related to the application of Meta’s “social and political discourse” allowance under the Dangerous Organizations and Individuals policy; the transparency of the designation process and the list of designated entities under the policy. Meta answered the five questions. 7. Public Comments The Oversight Board received 15 public comments that met the terms for submission. Thirteen were submitted from Europe and two from the United States and Canada. To read the public comments submitted with consent to publish, click here . 
The submissions covered the following themes: the political context in Greece, including discussion of Greek political parties; 2023 elections in Greece and the impact of social media on election results; far right and extremist groups in Greece and other European countries, and their use of social-media platforms; recent legislative amendments in Greece and their impact on 2023 elections; and the importance of the transparency of entity lists under Meta’s Dangerous Organizations and Individuals policy. 8. Oversight Board Analysis The Board selected these cases to assess the impact of Meta’s Dangerous Organizations and Individuals Community Standard on freedom of expression and political participation, especially during elections when designated entities or persons associated with them may be active in political discourse. The cases fall under the Board’s strategic priorities of Elections and Civic Space and Hate Speech Against Marginalized Groups. The Board examined whether this content should be restored by analyzing Meta’s content policies, human rights responsibilities and values. 8.1 Compliance With Meta’s Content Policies The Board upholds Meta’s decisions to remove the content in both cases. First Case: An Electoral Candidate’s Campaign Leaflet The Board notes that Meta’s commitment to voice is paramount and is of heightened importance in electoral contexts. The Board emphasizes that to provide voters with access to the fullest information to cast their vote, Meta should allow public discourse among the electorate, candidates and parties on the activities of designated entities. The Board finds that this post fell under Meta’s prohibition of “praise” of a designated entity that was in force in June 2023 because the user ideologically aligned themselves with Mr. Kasidiaris, a designated hate figure under Tier 1 of the Dangerous Organizations and Individuals policy. This was clearly described in the relevant Community Standard as conduct that Meta considers to be an example of prohibited “praise.” Following the December 30, 2023, policy changes, the content would fall under the prohibition on positive references to a designated entity that do not glorify the designated entity’s violence or hate. For a minority of Board Members, the application of the rule on ideological alignment was not directly obvious because Mr. Kasidiaris was endorsing (i.e., “praising” or “referencing”) the user, rather than vice versa. It requires some level of inference that the user was in effect reciprocating that endorsement, and thus fell afoul of Meta’s policy on ideological alignment. A minority of the Board consider that while this post violated the Dangerous Organizations and Individuals policy and did not fall under any policy exception in force in June 2023, Meta should have applied its newsworthiness allowance to keep this content on the platform, given that the public interest in the post outweighed the risk of harm. The post directly informed voters about a convicted criminal’s endorsement of an electoral candidate, which is relevant and valuable information in the electoral context, especially during the second set of elections, given the participation of a new party. 
These Board Members note that following the August 2023 updates to the Dangerous Organizations and Individuals policy, under the “social and political discourse” exception, Meta should allow lawful candidates in elections to express in neutral terms their ideological alignment with designated entities, absent any inclusion of hate speech or incitement of specific harm. This will enable voters to have the fullest possible information on which to make a decision. Second Case: The Logo of National Party – Greeks and the Slogan “Spartans” The majority of the Board find that the content violates the Dangerous Organizations and Individuals Community Standard because it shared a symbol of National Party – Greeks, which is a designated hate entity. This post does not fall under the policy exception, in force in June 2023, as there are no contextual indications that the user intended to feature the logo of National Party – Greeks alongside the name of a lawful party, the Spartans, to “report on, neutrally discuss or condemn” National Party – Greeks or their activities. The majority of the Board distinguish these posts from the content in the Nazi Quote case where contextual cues allowed the Board to conclude that the user’s post neutrally discussed a designated hate entity. In that case, the user referenced a quote from a known historical figure that did not show ideological alignment with the person but attempted to draw “comparisons between the presidency of Donald Trump and the Nazi regime.” No such context is present in this case. Following the December 30, 2023, policy changes, the content in this case would be removed for sharing a reference (symbol) of a designated entity without an explanatory caption. A minority of the Board consider that this post should not be found to violate the Dangerous Organizations and Individuals policy. They note that simply sharing logos associated with a designated entity, absent other violations or context of harmful intent, should be allowed on the platform. 8.2 Compliance With Meta’s Human Rights Responsibilities The Board finds that Meta’s decisions to remove the content in both cases were consistent with the company’s human rights responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19 (2) of the ICCPR provides for broad protection of expression, including “to seek, receive and impart information and ideas of all kinds.” Protected expression includes “political discourse,” “commentary on public affairs” and expression that may be considered “deeply offensive” ( General Comment No. 34 (2011), para. 11). In an electoral context, the right to freedom of expression also covers access to sources of political commentary, including local and international media, and “access of opposition parties and politicians to media outlets” ( General Comment No. 34 (2011), para. 37). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. 
As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” (report A/74/486, para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality under international human rights law requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, para. 25). Restrictions on expression should be formulated with sufficient precision to enable individuals to regulate their conduct accordingly (Ibid). As applied to Meta, the company should provide guidance to users as to what content is permitted on the platform and what is not. Additionally, rules restricting expression “may not confer unfettered discretion on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” (A/HRC/38/35, para. 46). For the first case, the Board notes that the examples of “praise” were added to the public-facing language of the Dangerous Organizations and Individuals policy in response to the Board’s recommendation no. 2 in the Nazi Quote case. The explicit example prohibiting “aligning oneself ideologically with a designated entity or event” made Meta’s rule sufficiently clear and accessible for the user in the first case and the content reviewers implementing the rule. The Board notes that this example was removed in the December 2023 update. In relation to the second case, the Board agrees that Meta’s policy against sharing symbols of designated entities, unless the user clearly states their intent to report on, neutrally discuss or condemn designated entities, is sufficiently clear and meets the legality test. The Board further finds that, as applied to the second case, the Dangerous Organizations and Individuals policy exception, both before and after the August 2023 revisions, meets the legality test. The Board is, nonetheless, concerned about the lack of transparency around the designation of hate entities and which entities are included under Tier 1 of the Dangerous Organizations and Individuals policy. This makes it challenging for users to understand which entities they are or are not permitted to express ideological alignment with, or whose symbols they can share. Tier 1 terrorist organizations include entities and individuals designated by the United States government as Foreign Terrorist Organizations (FTOs) or Specially Designated Global Terrorists (SDGTs), and criminal organizations include those designated by the United States government as Specially Designated Narcotics Trafficking Kingpins (SDNTKs). The U.S. government publishes lists of FTO, SDGT and SDNTK designations, which correspond to at least some of Meta’s Dangerous Organizations and Individuals designations. However, Meta’s full list of Tier 1 “hate entity” designations is not based on an equivalent public U.S. list. The Board called for transparency around the Tier 1 entity list in the Nazi Quote case, which Meta declined to provide for “safety reasons.” In response to recommendation no. 
1 in the Shared Al Jazeera Post case, following the August 2023 update, the public-facing language of Meta’s Dangerous Organizations and Individuals policy has been supplemented with several examples of the application of the exception. The Board finds that the full scope of the updated exception is not clear to users, as none of the examples illustrates the application of the policy exception in the context of elections. In circumstances of shrinking civic space and threats to media freedom globally, social media platforms serve as an invaluable information source. Given the uncertainty about the scope of the updated policy exception during electoral periods, users in such contexts could be unsure what types of discussion they can engage in on electoral candidates and their supporters, who may also be Tier 1 designated entities. The Board finds that Meta’s prohibition of “praise” in the form of ideological alignment, as well as its prohibition on sharing symbols of designated entities, as in force in June 2023, met the legality standard. However, the extent of “social and political discourse” about designated entities permitted in the electoral context requires further clarification. II. Legitimate Aim Restrictions on freedom of expression must pursue a legitimate aim, which includes the protection of the rights of others and the protection of public order and national security. According to the policy rationale, Meta’s Dangerous Organizations and Individuals policy aims to “prevent and disrupt real-world harm.” In several decisions, the Board has found that Meta’s Dangerous Organizations and Individuals policy pursues the legitimate aim of protecting the rights of others (see Nazi Quote; Mention of the Taliban in News Reporting; Punjabi Concern Over the RSS in India). The Board finds that in these two cases, Meta’s policy pursues the legitimate aim of protecting the rights of others, such as the right to non-discrimination and equality (ICCPR, Articles 2 and 26), the right to life (ICCPR, Article 6), the prohibition of torture, inhuman and degrading treatment (ICCPR, Article 7), and the right to participate in public affairs and the right to vote (ICCPR, Article 25). III. Necessity and Proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected,” (General Comment No. 34, paras. 33-34). Elections are crucial for democracy, and the Board acknowledges that Meta’s platforms have become a virtually indispensable medium in most parts of the world for political discourse, especially in election periods. Given its close relationship with democracy, political speech “enjoys a heightened level of protection,” (General Comment No. 37, paras. 19 and 32). International freedom of expression mandate holders have noted that “digital media and platforms should make a reasonable effort to adopt measures that allow users to access a diversity of political views and perspectives,” (Joint Declaration 2022). The UN Special Rapporteur on freedom of peaceful assembly and of association stated that “the freedom of political parties to expression and opinion, particularly through electoral campaigns, including the right to seek, receive and impart information, is as such, essential to the integrity of elections,” (A/68/299, para. 38 (2013)). 
However, to mitigate adverse human rights impacts, it is crucial to distinguish between protected political speech and political expression that can be restricted because it may further harm. In this regard, as noted by the Board, Meta has a responsibility to identify, prevent, mitigate and account for adverse human rights impacts for use of its platforms (UNGPs, Principle 17). The UN Special Rapporteur on freedom of peaceful assembly and of association underlined that a political party or any of its candidates can be lawfully prohibited if “they use violence or advocates for violence or national, racial or religious hatred constituting incitement to discrimination, hostility or violence,” (ICCPR, Article 20, ICERD, Article 5). Any restrictions under Article 20, ICCPR, and Article 5, ICERD, must meet the standards of necessity and proportionality under Article 19, para. 3, ICCPR (General Comment 34, para. 50-52; CERD/C/GC/35 , para. 24-25). First Case: An Electoral Candidate’s Campaign Leaflet The majority of the Board consider that Meta's decision to remove the first post under its Dangerous Organizations and Individuals policy satisfies the principles of necessity and proportionality. The majority acknowledge the importance of freedom of expression during elections, including users’ rights to share and receive information. However, these Board Members find that Meta was justified in removing the post of an electoral candidate expressing ideological alignment with a designated hate figure. This prohibition, coupled with the allowance for users to “report on, neutrally discuss or condemn” designated entities or their activities, including endorsements of this kind during elections, is in line with Meta’s human rights commitments. In this case, these Board Members understand that removing this post from Meta’s platform did not disproportionately restrict the public’s right to know the information contained therein. Given there were multiple local and regional media reports on the endorsement of the designated entity, convicted for leading a criminal organization connected with hate crimes, the public had other opportunities to learn about this expression of support to the candidate’s party. These media reports would have qualified for the policy exception, which allows for lawful discussion in electoral contexts, without furthering any real-world harm. Meta’s responsibility to prevent, mitigate and address adverse human rights impacts is heightened in electoral and other high-risk contexts, and requires the company to establish effective guardrails against harm. Meta has a responsibility both to allow political expression and to avoid serious risks to other human rights. Given the potential risk of its platforms being used to incite violence in the context of elections, Meta should continuously ensure the effectiveness of its election integrity efforts (see Brazilian General’s Speech ). In view of the multiple elections around the world, Meta’s careful enforcement of the Dangerous Organizations and Individuals policy, especially its updated policy exception in electoral contexts, is imperative. For some Board Members, a lawful candidate’s post publicizing the support offered by a Tier 1 designated entity is not information about the candidate’s program, but an act of association with a prohibited party. Such publications can be used to circumvent Meta’s prohibition of Tier 1 designated entities from using its services and undermine the democratic process (ICCPR, Article 5). 
Furthermore, in the present case, where the public had sufficient opportunities to learn about the existing alliances, the removal of the candidate’s post was not disproportionate. For a minority, removing the content in the first case disproportionately interfered with users’ rights to share and receive information during an election. These Board Members highlight that Meta’s “commitment to expression is paramount” and that, in this case, the company erred by prioritizing safety over voice. The electorate should have access to information about candidates and their activities, and a party that has been allowed by the Greek Supreme Court to participate in an election should likewise have the widest latitude in what information its candidates can publish. In this case, since the Spartans is a newer party, voters may not yet know much about it. At the same time, given reports of decreasing trust in the media in Greece (see section 2 above), voters should have the opportunity to hear directly from lawful candidates. This is especially needed when candidates or their parties receive support or allegiance from entities disqualified from running for elections or those that may be designated under the Dangerous Organizations and Individuals policy. These Board Members note that a social media platform should not become the arbiter of what voters are and are not allowed to know about a candidate or party. They consider that, given the importance of the electoral context, removal of the content in the first case was not the least intrusive means and was a disproportionate restriction of the candidate’s speech and the electorate’s right of access to information. Instead, in line with Meta’s values and human rights commitments, the company should have kept the post up under its newsworthiness allowance. Given the content was an electoral post from a lawful candidate directly informing the electorate about his campaign and the support from Mr. Kasidiaris, published during the elections in Greece, the public’s interest in knowing more about the parties and candidates outweighed the risk of harm. Second Case: The Logo of National Party – Greeks and the Slogan “Spartans” In the second case, the majority of the Board find that Meta’s removal of the content was necessary and proportionate as the post shared a symbol of a designated hate entity. In the absence of any contextual cues that the content was shared to report on, neutrally discuss or condemn a designated entity, the removal was justified. A minority of the Board consider that Meta erred in removing this content. This minority note that a contextual analysis is required when determining if the content is harmful. Removal of a post simply sharing a symbol of a designated entity, without any indication of incitement to violence or unlawful action, is disproportionate and cannot be the least intrusive means to protect against harm. 9. Oversight Board Decision The Oversight Board upholds Meta’s decisions to take down the posts in both cases. 10. Recommendations Content Policy 1. To provide greater clarity to users, Meta should clarify the scope of the policy exception under the Dangerous Organizations and Individuals Community Standard, which allows for content “reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities” to be shared in the context of “social and political discourse.” Specifically, Meta should clarify how this policy exception relates to election-related content. 
The Board will consider this implemented when Meta makes this clarification change in its Community Standard. *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social-media trends, also provided analysis. Return to Case Decisions and Policy Advisory Opinions" bun-kobfl44h,Cartoons About College Protests,https://www.oversightboard.com/decision/bun-kobfl44h/,"September 12, 2024",2024,,"Freedom of expression,Politics,Protests",Dangerous individuals and organizations,Overturned,United States,"In these two summary decisions, the Board reviewed two posts containing cartoons about college protests.",6924,1022,"Multiple Case Decision September 12, 2024 In these two summary decisions, the Board reviewed two posts containing cartoons about college protests. Overturned FB-TO38JZ4O Platform Facebook Topic Freedom of expression,Politics,Protests Standard Dangerous individuals and organizations Location United States Date Published on September 12, 2024 Overturned FB-0F4BU4NY Platform Facebook Topic Freedom of expression,Politics,Protests Standard Dangerous individuals and organizations Location United States Date Published on September 12, 2024 Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary In these two summary decisions, the Board reviewed two posts containing cartoons about college protests. Meta removed both posts on initial review. However, after the Board brought these cases to Meta’s attention for additional review, the company reversed its original decisions and restored both posts. About the Cases In the first case, in May 2024 a Facebook user posted a cartoon depicting two people. The first one is wearing a headband with 'Hamas' written on it and holding a weapon. The person is saying that they will kill Jewish people until Israel is ""wiped off the map."" The second person is dressed in a T-shirt with ""college"" written on it. That person is holding a book and saying, ""It's obvious they just want to live in peace."" In the second case, in April 2024 a Facebook user posted a cartoon that shows a family eating together. The son figure looks like Adolf Hitler and wears a shirt emblazoned “I [heart] Hamas.” The father figure expresses concern about how college has changed his son. Both users posted the content in the context of the Israel-Gaza conflict-related protests taking place at universities across the United States. 
Meta originally removed both posts from Facebook under its Dangerous Organizations and Individuals (DOI) policy. Under the DOI policy, the company removes ""Glorification,"" ""Support,"" and ""Representation"" of designated entities, their leaders, founders, or prominent members, and unclear references to them. In their appeals to the Board, both users stated that they posted a political cartoon that does not violate Meta's Community Standards. After the Board brought these two cases to Meta's attention, the company determined that the posts did not violate its policies and restored both pieces of content to its platform. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Cases These cases highlight errors in the enforcement of the exception to Meta’s Dangerous Organizations and Individuals policy that allows content “reporting on, neutrally discussing or condemning dangerous organizations and individuals and their activities,” in order to safeguard a space for “social and political discourse.” The Board has issued several recommendations to increase transparency around the enforcement of Meta’s Dangerous Organizations and Individuals policy and its exceptions. The Board has also issued recommendations to address enforcement challenges associated with this policy. These include a recommendation to “assess the accuracy of reviewers enforcing the reporting allowance under the Dangerous Organizations and Individuals policy in order to identify systemic issues causing enforcement errors.” While Meta reported it had implemented this recommendation, it did not publish information to demonstrate this (Mention of the Taliban in News Reporting, recommendation no. 5). The Board also recommended that Meta “add criteria and illustrative examples to Meta’s Dangerous Organizations and Individuals policy to increase understanding of exceptions, specifically around neutral discussion and news reporting,” a recommendation for which Meta demonstrated implementation through published information (Shared Al Jazeera Post, recommendation no. 1). Furthermore, in a policy advisory opinion, the Board asked Meta to “explain the methods it uses to assess the accuracy of human review and the performance of automated systems in the enforcement of its Dangerous Organizations and Individuals policy,” (Referring to Designated Dangerous Individuals as “Shaheed,” recommendation no. 6). Meta reframed this recommendation. The company shared information about the audits it conducts to assess the accuracy of its content moderation decisions and how this informs areas for improvement. Meta did not, however, explain the methods it uses to perform these assessments, nor has the company committed to sharing the outcome of such assessments. Additionally, the Board has issued a recommendation regarding Meta’s handling of satirical content. This includes a recommendation for Meta to “make sure it has adequate procedures in place to assess satirical content and relevant context properly. 
This includes providing content moderators with: (i) access to Facebook’s local operation teams to gather relevant cultural and background information; and (ii) sufficient time to consult with Facebook’s local operation teams and to make the assessment. [Meta] should ensure that its policies for content moderators incentivize further investigation or escalation where a content moderator is not sure if a meme is satirical or not,” a recommendation Meta reported implementation on but did not publish information to demonstrate implementation ( Two Buttons Meme decision , recommendation no. 3). The Board also recommended that Meta include the satire exception to the public language of the Hate Speech Community Standard , a recommendation for which Meta demonstrated implementation through published information, ( Two Buttons Meme decision , recommendation no. 2). Decision The Board overturns Meta’s original decisions to remove the two pieces of content. The Board acknowledges Meta’s corrections of its initial errors once the Board brought these cases to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-lj939ea3,Criticism of EU Migration Policies and Immigrants,https://www.oversightboard.com/decision/bun-lj939ea3/,"April 23, 2025",2025,,"Discrimination,Freedom of expression,Marginalized communities",Add the term “murzyn” to its Polish market slur list.,Overturned,Germany,"The majority of the Board has found that two pieces of immigration-related content, posted ahead of the June 2024 European Parliament elections, violate the Hateful Conduct policy and Meta should take them down.",39920,6259,"Multiple Case Decision April 23, 2025 The majority of the Board has found that two pieces of immigration-related content, posted ahead of the June 2024 European Parliament elections, violate the Hateful Conduct policy and Meta should take them down. Overturned FB-ZQQA0ZIP Platform Facebook Topic Discrimination,Freedom of expression,Marginalized communities Location Germany Date Published on April 23, 2025 Overturned FB-0B8YESCO Platform Facebook Topic Discrimination,Freedom of expression,Marginalized communities Location Poland Date Published on April 23, 2025 Criticism of EU Migration Policies and Immigrants, Polish Translation Criticism of EU Migration Policies and Immigrants PDF Criticism of EU Migration Policies and Immigrants, German Translation To read the full decision in Polish, click here . Aby zapoznać się z tą decyzją w języku polskim, kliknij tutaj . To read the full decision in German, click here . Klicken Sie hier , um diese Entscheidung auf Deutsch zu lesen. The majority of the Board has found that two pieces of immigration-related content, posted on Facebook ahead of the June 2024 European Parliament elections, violate the Hateful Conduct policy and Meta should take them down. The Board recognizes the right to free expression is paramount when assessing political discussions and commentary. However, content such as these two posts contributed to heightened risks of violence and discrimination in the run-up to an election, in which immigration was a major political issue and anti-migrant sentiment was on the rise. For the majority, it is necessary and proportionate to remove them. One post by a Polish political party intentionally uses racist terminology to harness anti-migrant sentiment. The other post generalizes immigrants as gang rapists, a claim that, when repeated, whips up fear and hatred. 
Additional Note: Meta’s January 7, 2025, revisions did not change the outcome in these cases, though the Board took the rules at the time of posting and the updates into account during deliberation. On the broader policy and enforcement changes hastily announced by Meta in January, the Board is concerned that Meta has not publicly shared what, if any, prior human rights due diligence it performed in line with its commitments under the UN Guiding Principles on Business and Human Rights. It is vital Meta ensures adverse impacts on human rights globally are identified and prevented. About the Case The first case involves a meme posted on the official Facebook page of Poland’s far-right political alliance, Confederation. In the meme, Polish Prime Minister Donald Tusk looks into a door’s peephole, while a Black man walks up behind him. Polish text says: “Good evening, did you vote for Platform? I’ve brought the murzyn from the immigration pact.” Platform is Tusk’s political party, the Civic Platform coalition, while the pact is the European Union’s Pact on Migration and Asylum. The Polish word, “murzyn,” used to describe Black people, is widely considered to be derogatory. The caption criticizes the EU pact and encourages people to vote for Confederation in the European elections to stop “uncontrolled immigration.” This content has been viewed around 170,000 times. In the second case, a German Facebook page describing itself as against left-leaning groups posted an AI-generated image of a blonde-haired, blue-eyed woman holding up her hand in a stop gesture. German text says people shouldn’t come to the country anymore because no more “gang rape specialists” are needed due to the Green Party’s immigration policy. There is also a non-hyperlinked address for an article, titled “Non-German suspects in gang rapes,” on the German Parliament’s website. This post has been viewed around 9,000 times. Both posts were reported for hate speech. Meta found no violations, leaving them on Facebook. Users then appealed the cases to the Board. Key Findings The majority of the Board finds that both posts violate the renamed Hateful Conduct policy, while a minority finds no violations in either. The Polish post contains the word “murzyn,” which the majority considers to be a discriminatory slur, used to attack Black people based on race. Meta’s January 7 changes did not impact its rule on slurs, defined as “words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic.” Implying the inferiority or uncleanliness of Black people, the term’s offensive nature is recognized in the main Polish language dictionaries. It is also notable that Black-led and Polish-speaking civil society movements have played an important role in raising awareness of the term’s discriminatory and harmful impacts. The majority notes that Meta does not currently include “murzyn” as a slur, recommending this be changed and calling on the company to more accurately enforce its slurs policy. A minority of the Board disagrees, finding the term does not meet Meta’s definition of a slur and clearer evidence is needed that it inherently creates an atmosphere of exclusion and intimidation. 
The majority also finds that the German post is violating because it contains a Tier 1 attack, generalizing that the majority of immigrants are “gang rape specialists.” This rule, which does not allow allegations of “serious immorality and criminality” based on immigration status, including by calling people “sexual predators,” remains unchanged since January 7. For this rule to apply, posts must target more than 50% of a group, with Meta’s internal guidance (not available publicly) advising that reviewers leave up content when it is unclear if this condition has been met. This is why Meta left up the German post. The majority of the Board disagrees with Meta’s assessment. It recommends the company change this rule to require users to clearly indicate they are targeting less than half of a group, for example, by using qualifiers such as “some.” A minority of the Board disagrees, finding the German post does not state or imply that all or most immigrants are gang rapists. Finally, the majority notes it is appropriate for Meta to consider the effects on human rights of such hateful conduct accumulating on its platforms. A minority disagrees with the majority, finding that removals would only have been justified if the posts constituted incitement to likely and imminent violence and discrimination. These two posts called for no action, other than participation in an election and discussion of public interest around immigration. The Oversight Board’s Decision The Oversight Board overturns Meta in both cases. The Board recommends that Meta: *Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background The Oversight Board has reviewed two cases involving content posted on Facebook ahead of the June 2024 European Parliament elections, in which immigration was a key issue. In May of that year, the European Union (EU)’s Pact on Migration and Asylum was adopted, establishing new rules to manage migration in Europe. The first case involves a meme posted by an administrator of the official Facebook page of Poland’s far-right political alliance, Confederation (Konfederacja Wolność i Niepodległość). The image shows the country’s Prime Minister Donald Tusk looking into a door viewer (or peephole), as a Black man walks up behind him. Polish text over the image says: “Good evening, did you vote for Platform? I've brought the murzyn from the immigration pact.” Platform refers to Tusk’s centrist Civic Platform coalition, which came into power in December 2023. “Murzyn,” the Polish word used to describe Black people in the text, is widely considered to be a derogatory slur in Poland, although Meta does not prohibit it. The caption criticizes the EU pact and encourages people to vote for Confederation in the European elections to stop immigrants being allowed into Poland and the EU. The post has been viewed around 170,000 times, shared less than 500 times and has under 500 comments. In the second case, the administrator of a German Facebook page described as being against left-leaning groups posted an image that appears to be AI-generated. The image shows a blonde-haired, blue-eyed woman holding up her hand in a stop gesture, with both a stop sign and the German flag in the background. German text over the image says people should no longer come to Germany as they don’t need any more “gang rape specialists,” due to the Green Party’s immigration policy. 
This is followed, in much smaller text, by a non-hyperlinked website address for an article on the German Parliament’s website titled “Non-German suspects in gang rapes.” The post has been viewed about 9,000 times and shared less than 500 times. Ten Facebook users reported the Polish post and one reported the German post, all for hate speech. Meta left both posts on Facebook and, after each decision was unsuccessfully appealed to Meta, both cases were appealed to the Board. On January 7, 2025, Meta announced revisions to its Hate Speech policy, renaming it the Hateful Conduct policy. These changes, to the extent relevant to these cases, will be described in Section 3 and analyzed in Section 5. The Board notes content is accessible on Meta’s platforms on a continuing basis, and updated policies are applied to all content present on the platform, regardless of when it was posted. The Board therefore assesses the application of policies as they were at the time of posting, and, where applicable, as since revised (see also the approach in Holocaust Denial ). 2. User Submissions The user who appealed against the Polish post cited academic references to support their position that “murzyn” is a pejorative and derogatory term that perpetuates racial stereotypes and discrimination. The user who appealed against the German post noted that it appears to claim all refugees are criminals and rapists. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Hateful Conduct (previously named Hate Speech) Community Standard Meta defines hateful conduct in the same way that it previously defined “hate speech,” as “direct attacks against people” on the basis of protected characteristics, including national origin, race and ethnicity. The policy continues to treat immigration status as a “quasi-protected characteristic.” This means Meta only protects immigrants from the most severe attacks under Tier 1 of the policy. On January 7, Meta added an explanation to the policy rationale that people sometimes “call for exclusion or use insulting language in the context of discussing political or religious topics,” including immigration. Meta explicitly states that its “policies are designed to allow room for these types of speech.” Tier 1 prohibits “allegations of serious immorality and criminality,” giving sexual predators and violent criminals as examples. The policy previously prohibited allegations about less serious forms of criminality, but this has been moved from Tier 1 to Tier 2. Tier 2 does not provide such protections to migrants – therefore, Meta now allows assertions that most migrants are, for example, thieves. Tier 2 continues to prohibit calls for exclusion but this protection also does not extend to migrants. Tier 1 states that its prohibitions do not apply if content targets less than half of a group. Meta’s internal guidance to moderators explains how to treat direct attacks that refer to less than 100% of a target group, including on the basis of immigration status. If the content contains a quantifier like “most” indicating it refers to more than 50% of the group, then Tier 1 prohibitions apply. If it is unclear whether the content refers to more than 50% of the group, then the content is permitted. Accordingly, content asserting that all or most migrants in a country are rapists or violent criminals is prohibited, but content asserting that some of them are rapists or violent criminals is allowed. 
Tier 1 of the Hateful Conduct policy continues to prohibit “content that describes or negatively targets people with slurs.” Slurs are defined as “words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic, often because these words are tied to historical discrimination, oppression and violence.” Meta sets out how it develops, enforces and updates its slur list on its Transparency Center . II. Meta’s Submissions Meta left both posts up, finding neither violated its renamed Hateful Conduct policy. Meta confirmed the January 7 changes did not impact its decisions because its policies on racial slurs and generalizations comparing migrants to sexual predators or violent criminals have not changed. Meta stated that the Polish post did not violate the policy because it does not contain a violating attack under Tier 1. Meta does not designate “murzyn” as a slur in the Polish market. Meta explained that the term was last considered for categorization in 2023, but was not added because Meta determined its use was historically neutral and, though it can be used contemptuously, its similarity to other words could lead to overenforcement. Regarding the German post, Meta found the content was not violating as “it is unclear whether the content is calling all, most, or some migrants gang rape specialists.” For Meta, the content does not state or imply that all or most migrants will commit gang rape. Meta also noted that the article referred to in the post does not support the conclusion that it is attacking the majority of immigrants coming to Germany. Finally, while Meta acknowledged both posts may be read as exclusionary, the company explained that neither violates its prohibition on “calls for exclusion” as Tier 2 prohibitions do not provide protections on the basis of immigration status. The Board asked questions on Meta’s Hateful Conduct policy, the company’s slur lists, and how it assesses content from political parties and anti-migrant speech in the context of elections. Meta responded to all questions. 4. Public Comments The Oversight Board received 18 public comments that met the terms for submission . Of these, 15 were submitted from Europe, two from the United States and one from Sub-Saharan Africa. Because the public comments period closed before January 7, 2025, none of the comments addresses the policy changes Meta made on that date. To read public comments submitted with consent to publish, click here . The submissions covered the following themes: whether “murzyn” is a discriminatory slur; anti-immigrant rhetoric on social media; links between online hate speech and offline violence; the importance of being able to discuss immigration issues; and the rise of conspiracy theories in political rhetoric on migration issues. 5. Oversight Board Analysis The Board selected these cases to examine how Meta ensures freedom of expression in discussions around immigration, while also respecting the human rights of migrants in the context of an election. These cases fall within the Board’s strategic priorities of Hate Speech Against Marginalized Groups and Elections and Civic Space. The Board analyzed Meta’s decisions in these cases against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of these cases for Meta’s broader approach to content governance. 
5.1 Compliance With Meta’s Content Policies The majority of the Board finds that both the Polish and German posts violate the Hateful Conduct policy and should be removed from Facebook. A minority finds no violations, however, in either post under the Hateful Conduct policy. The Board’s outcome did not change as a result of Meta’s January 7 changes. The majority of the Board finds that “murzyn” is a discriminatory slur within the meaning of Meta’s policy because it is used to attack Black people on the basis of their race, inherently creating an atmosphere of discriminatory exclusion and intimidation. The Board notes it is overwhelmingly used online as part of derogatory statements about Black people (also see public comments, including from the Institute for Strategic Dialogue – PC-30797, PC-30795 and PC-30790). Experts consulted by the Board explained the term is used in idioms and proverbs that imply the inferiority or uncleanliness of Black people, based on race. Black-led and Polish-speaking civil society movements in Poland played a key role in raising awareness of the term’s discriminatory and harmful impacts. Harms, including perpetuating negative stereotypes and legitimizing discriminatory treatment by portraying Black people as the “other” within society, result from the term’s derogatory nature and associations with inferiority. For the majority, it is especially compelling that a term is viewed as both derogatory and harmful by the marginalized group that it refers to. For this reason, Meta should be more systematic and thorough in its consultations with impacted groups when auditing its slur list, and more broadly when updating its policies. The Board notes that contemporary understandings of the term matter. While some Polish speakers maintain that the term is neutral, the Polish Language Council issued guidance in 2020 that it is archaic, pejorative and should not be used in the public sphere. Experts also noted that while the term may have been perceived as neutral in the 20th century, it had negative and pejorative connotations prior to this. For example, the word was previously used to mean “ a slave ,” tying it directly to one of history’s worst examples of discrimination, oppression and violence, clearly meeting Meta’s definition of a slur. The main Polish language dictionaries have now updated their definitions of the term to recognize it as offensive. For these reasons, the majority finds that use of the term creates an atmosphere of exclusion and intimidation. As such, the Board issues a recommendation to ensure Meta more accurately enforces its slurs policy moving forward. The majority also notes that, had the post not used this slur, it would have been permissible under Meta’s content policies (see the Board’s Armenians in Azerbaijan decision). A minority of the Board disagrees that the Polish post is violating, finding the term does not meet Meta’s definition of a slur. While the term may be seen as offensive and derogatory, this is insufficient to find that it should be considered a banned term. For the minority, Meta’s policy requires clearer evidence that the use of the term inherently creates an atmosphere of exclusion and intimidation. There should be more than correlative ties to periods of historic discrimination, oppression and violence (in other times and places), but evidence that its use has been and continues to be intrinsic to the infliction of those harms. 
The majority of the Board finds that the German post constitutes a Tier 1 attack by generalizing that the majority of immigrants are “gang rape specialists.” This prohibition remains unchanged following Meta’s January 7 policy changes. For the majority, the characterization of immigrants entering the country as “gang rape specialists,” without any qualifying language (e.g., “some” or “too many”), clearly conveys a generalized attack on all immigrants. Contrary to Meta’s assessment, the fact that the post includes the website address (which is not hyperlinked and appears in smaller text) of an article titled “Non-German suspects in gang rapes,” does not affect this conclusion. Instead, it supports the majority’s conclusion. The text in the post only includes the title of the article, which, rather than conveying the nuances discussed in the article’s fuller analysis, implies that “non-Germans” are generally the suspects of gang rapes. For more accurate enforcement of the Hateful Conduct policy, the majority of the Board recommends Meta should reverse its default presumption that unless content clearly refers to more than 50% of a group, it will be considered non-violating (e.g., “immigrants are gang rapists” should be presumed as a generalization and therefore be violating). Meta should require users posting content that could violate the Hateful Conduct policy to clearly indicate they are targeting less than 50% of a group (e.g., “some immigrants are gang rapists”). A minority of the Board finds that, while the German post is deeply offensive, it is not a generalization prohibited by Meta’s revised Hateful Conduct policy or the pre-January 7 version. The content does not state or imply that all or most immigrants are gang rapists. This group of Board Members is also concerned that the majority’s recommendation would place an undue burden on users having to explain their positions. The article referenced in the post, “Non-German suspects in gang rapes,” does not support a conclusion that the post is attacking the majority of immigrants, as it includes a nuanced discussion of why immigrants may be over-represented in official statistics on the perpetration of gang rapes. The minority notes that the post addresses a valid subject of discussion, especially in the context of an election where immigration, and in particular the relationship between migrants and crime, is a pivotal issue. Meta’s January 7 changes to the Hateful Conduct policy rationale make it clear the company intends its policies to provide more space for freedom of expression when discussing immigration. 5.2 Compliance With Meta’s Human Rights Responsibilities The majority of the Board finds that the removal of both posts, as required by a proper interpretation of Meta’s content policies, is also consistent with Meta’s human rights responsibilities. A minority of the Board disagrees, finding that removal is not consistent with these responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including views about politics, public affairs and human rights ( General Comment No. 34 , paras. 11-12). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). 
These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights (UNGPs), which Meta itself has committed to in its Corporate Human Rights Policy. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. Under UNGPs Principle 13, companies should “avoid causing or contributing to adverse human rights impacts through their own activities” and “prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services.” As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” ( A/74/486 , para. 41). At the same time, when company rules differ from international standards, companies should give a reasoned explanation of the policy difference in advance, in a way that articulates the variation (ibid., at para 48). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” ( Ibid. ). When applied to private actors’ governance of online speech, rules should be clear and specific ( A/HRC/38/35 , para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance regarding their enforcement. The Board concludes there are no legality issues with the Hateful Conduct rules as applied to these cases. However, the Board is concerned that a recent version of this policy, following a December 2023 update, was being enforced for many months globally while only available in U.S. English, until the Board questioned Meta on this. Users accessing the Transparency Center from any other market would, by default, be accessing an outdated translation of the policy. The Board again encourages Meta to pay greater attention to ensuring its rules are accessible in all languages as swiftly as possible following any policy changes (see Punjabi Concern Over the RSS in India ). II. Legitimate Aim Any restriction on freedom of expression should pursue one or more of the legitimate aims of the ICCPR, which include the “rights of others” (Article 19, para. 3, ICCPR). In several decisions, the Board has found that Meta’s Hate Speech (renamed Hateful Conduct) policy aims to protect the right to equality and non-discrimination, a legitimate aim that is recognized by international human rights standards (see e.g., Knin Cartoon and Myanmar Bot ). This continues to be the legitimate aim of the Hateful Conduct policy. III. 
Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The value of expression is particularly high when discussing matters of public concern and the right to free expression is paramount in the assessment of political discourse and commentary on public affairs. People have the right to seek, receive and impart ideas and opinions of all kinds, including those that may be controversial or deeply offensive (General Comment 34, para. 11). In the Politician’s Comments on Demographic Changes decision, the Board found that, while controversial, the expression of this opinion on immigration did not include direct dehumanizing or hateful language towards vulnerable groups, or a call for violence. The majority of the Board finds that removal of both posts is necessary and proportionate. This is guided by the six factors outlined in the Rabat Plan of Action in assessing risks posed by potential hate speech. For the Polish post, the word “murzyn” is used generally and in this case to denigrate people on the basis of their race. The term’s repeated use on Meta’s platforms creates an environment in which discrimination and violence against Black people is more likely. Here, the slur is not used in a permissible context, either self-referentially in an empowering way, or to condemn or raise awareness of someone else’s hate speech. For the majority, the cumulative effects of repeated use of this slur on Meta’s platforms are comparable to the dehumanizing use of “blackface,” as discussed by the majority in the Zwarte Piet case. It is much more obvious, however, in this post that the user is intentionally invoking racist terminology to harness anti-migrant sentiment by mobilizing anti-Black stereotypes (whereas in Zwarte Piet removal was justified without there being hostile intent). For these reasons, the majority of the Board finds that removal of the post would be necessary regardless of when it was shared. It additionally notes that in this instance, in the run-up to an election with high levels of anti-migrant sentiment, there were heightened risks of violence and discrimination. Experts consulted by the Board highlighted that vigilante groups in Poland have organized on social media to form “ civic patrols ,” which target foreigners and people with foreign accents with offline violence and intimidation, including attacks on migrant accommodation. According to the OSCE , the police recorded 893 hate crimes in Poland in 2023, with racist and xenophobic motivation the highest recorded category. Research has also previously found that hate crimes in Poland were most often experienced by people of African descent. In this context, it is notable that the speaker is a political party with a sizable following and vote share in Poland. It has a broad reach (this post had around 170,000 views) and the ability to influence supporters to take action and attract media coverage. While it is of course important that a political party can freely campaign in an election, including by raising concerns about immigration, it can do this without using racial slurs (see Armenians in Azerbaijan ). The German post shares a similar context to the Polish post. 
It was also shared immediately before elections during which immigration was a major political issue, with high levels of anti-migrant sentiment present. Consistent with Meta’s policies, the majority considers it necessary and proportionate to remove statements generalizing that the majority of immigrants are gang rapists. Crimes against migrants and anti-migrant online discourse were on the rise in Germany at the time. The United Nations High Commissioner for Human Rights has previously “expressed alarm at the often extraordinarily negative portrayal in many countries of migrants, but also of minority groups by the media, politicians and other actors in society [calling] for measures to curb growing xenophobic attitudes,” (A/HRC/22/17/ADD.4, para. 3). Experts consulted by the Board noted that anti-immigrant rhetoric in Germany, often voiced and amplified on social media, may have contributed to attacks on immigrants and minorities (also see public comments PC-30803, PC-30797 and PC-30790). The 2024 riots in the United Kingdom also highlighted how social media content on topics like race and immigration can contribute to offline violence. The German post intentionally generalizes immigrants as sexual predators, a claim that, repeated over and over, whips up fear and hatred, laying the foundations for inciting discrimination and violence against this group. Had Meta notified the users in both these cases as to why their posts were potentially violating, they could have contributed to the political debate without using racial slurs or engaging in degrading generalizations. Specificity in notifications when content is removed is important, but Meta should also explore increasing the use of prompts that invite users, prior to posting, to reconsider language that may potentially violate the company’s policies. In the Pro-Navalny Protests in Russia case, the Board recommended that Meta notify users of the reason their content was violating, so they could repost without the violating part. In response to this recommendation, Meta has introduced notifications to users that their posts might be violating, giving them the opportunity to delete and repost content before any enforcement action is taken. Meta shared that, over a 12-week period in 2023, users opted to delete their post more than 20% of the time, decreasing the amount of violating content through self-remediation. The majority emphasizes that, in reaching its decisions on both posts, the standards for content moderation by a social media company should not be compared so directly to the standards limiting states’ application of punitive law. Meta is not engaged in an after-the-fact detailed investigation of whether a crime was committed but is operating in real time with incomplete information. Were it to wait until violence or discrimination is imminent before acting, it would be too late for it to prevent harm in accordance with its responsibilities under the UNGPs. Both the challenge of assessing the impact of each piece of content at scale and the unpredictable nature of online virality justify Meta taking a more cautious approach to moderation. The majority reiterates that Meta as a private actor may remove hate speech that falls short of the threshold of incitement to imminent discrimination or violence, where this meets the ICCPR Article 19(3) requirements of necessity and proportionality (see South Africa Slurs). 
Were Meta to allow all hate speech that falls short of incitement, as foreseen under Article 20 of the ICCPR, its platforms would become an intolerable and unsafe place for minorities and marginalized groups to express themselves. In these cases, it may cause not only migrants but anyone who is not white to withdraw from public discourse, having a chilling effect that diminishes the value of pluralism and access to information for all people. It is therefore appropriate that Meta’s approach to content moderation considers the effects on human rights of hateful content accumulating on its platforms, even when in isolation those posts do not incite imminent violence or discrimination (see Depiction of Zwarte Piet, Communal Violence in Indian State of Odisha, Armenians in Azerbaijan and Knin Cartoon). The majority notes that less severe interventions, such as labels, warning screens or other measures to reduce dissemination, would not provide adequate protection against the cumulative effects of leaving content of this nature on the platform (see Depiction of Zwarte Piet and Knin Cartoon). A minority of the Board finds that the removal of neither the Polish nor the German post is necessary and proportionate. They note that both posts may be offensive, but neither reaches the threshold of incitement to likely and imminent acts of violence, discrimination or hostility. For the minority, the concept of cumulative harms is not based on principles flowing from international freedom of expression standards. Rather, it is so elastic as to depart from requirements of basic causation, emptying the necessity and proportionality evaluation of substance. Compared to using the Rabat Plan of Action in a strict sense to assess the necessity and proportionality of content removal based on whether speech poses the likelihood of imminent harm, the cumulative harms concept essentially abandons this key factor. With respect to these posts, it is significant that neither called for action other than participation in an election and discussion of public interest matters around immigration. It is essential that users are able to express their opinions on the most pressing political issues facing their countries, including immigration. The minority notes that a wide array of content moderation tools is available to Meta beyond the binary “leave up/take down” choice, with less intrusive means than removals available to mitigate potential harms. When faced with the binary up/down choice, a minority would accord more weight to the importance of the electorate having full access to the views of political candidates and parties in the context of an election, and to the heightened risks that private censorship can pose to expression and democratic processes. Perceptions of unfairness and bias in the moderation of political views threaten the legitimacy of platform governance more broadly. Meta should take inspiration from the Rabat Plan, which also has a focus on positive policy measures, to consider less intrusive means than censorship to ensure potential harms are averted. Access to Remedy The users who reported these posts were not informed that those reports (or appeals) were not prioritized for review. The Board reiterates concerns raised previously (see Explicit AI Images of Female Public Figures) that users may be unaware that their report or appeal was not prioritized for review. 
Given Meta’s January 7 announcement that it now plans to focus automated systems on tackling “illegal and high-severity violations,” and rely more on user reports for “less severe” policy violations, the demands of reviewing user reports may increase. It will be crucial that Meta is able to accurately prioritize and actually review the volume of reports it receives so that its policies are fairly enforced. When user reports are not prioritized for review, users should be informed that no review has taken place. Human Rights Due Diligence Principles 13, 17(c) and 18 of the UNGPs require Meta to engage in ongoing human rights due diligence for significant policy and enforcement changes, which the company would ordinarily do through its Policy Product Forum, including engagement with impacted stakeholders. The Board is concerned that Meta’s January 7, 2025, policy and enforcement changes were announced hastily, in a departure from regular procedure, with no public information shared as to what, if any, prior human rights due diligence it performed. Now that these changes are being rolled out globally, it is important that Meta ensures adverse impacts of these changes on human rights are identified, mitigated and prevented, and publicly reported. This should include a focus on how groups may be differently impacted, including immigrants, refugees and asylum seekers. In relation to enforcement changes, due diligence should be mindful of the possibilities of both overenforcement (Call for Women’s Protest in Cuba, Reclaiming Arabic Words) and underenforcement (Holocaust Denial, Homophobic Violence in West Africa, Post in Polish Targeting Trans People). 6. The Oversight Board’s Decision The Oversight Board overturns Meta’s decisions to leave up the content in both cases. 7. Recommendations Content Policy 1. As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact the rights of immigrants, in particular refugees and asylum seekers, with a focus on markets where these populations are at heightened risk. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity. The Board will consider this recommendation implemented when Meta provides the Board with robust data and analysis on the effectiveness of its prevention or mitigation measures on the cadence outlined above, and when Meta reports on this publicly. Enforcement 2. Meta should add the term “murzyn” to its Polish market slur list. The Board will consider this recommendation implemented when Meta informs the Board this has been done. 3. When Meta audits its slur lists, it should ensure it carries out broad external engagement with relevant stakeholders. This should include consulting with impacted groups and civil society. The Board will consider this recommendation implemented when Meta amends its explanation of how it audits and updates its market-specific slur lists on its Transparency Center. 4. 
To reduce instances of content that violates its Hateful Conduct policy, Meta should update its internal guidance to make it clear that Tier 1 attacks (including those based on immigration status) are prohibited, unless it is clear from the content that it refers to a defined subset of less than half of the group. This would reverse the current presumption that content refers to a minority unless it specifically states otherwise. The Board will consider this recommendation implemented when Meta provides the Board with the updated internal rules. *Procedural Note: Return to Case Decisions and Policy Advisory Opinions" bun-qbblz8wi,Mention of Al-Shabaab,https://www.oversightboard.com/decision/bun-qbblz8wi/,"November 22, 2023",2023,,War and conflict,Dangerous individuals and organizations,Overturned,Somalia,"In this summary decision, the Board reviewed two posts referring to the terrorist group Al-Shabaab.",6074,921,"Multiple Case Decision November 22, 2023 In this summary decision, the Board reviewed two posts referring to the terrorist group Al-Shabaab. Overturned FB-41ERXHF1 Platform Facebook Topic War and conflict Standard Dangerous individuals and organizations Location Somalia Date Published on November 22, 2023 Overturned FB-XP5K3L52 Platform Facebook Topic War and conflict Standard Dangerous individuals and organizations Location Somalia Date Published on November 22, 2023 This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors, and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not involve a public comment process, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary In this summary decision, the Board reviewed two posts referring to the terrorist group Al-Shabaab. After the Board brought these two appeals to Meta’s attention, the company reversed its original decisions and restored both posts. Case Description and Background For the first case, in July 2023, a Facebook user, who appears to be a news outlet, posted a picture showing a weapon and military equipment lying on the ground at soldiers' feet with a caption saying ""Somali government forces"" and ""residents"" undertook a military operation and killed Al-Shabaab forces in the Mudug region of Somalia. For the second case, also in July 2023, a Facebook user posted two pictures with a caption. The first picture shows a woman painting a black color over a blue pillar. The second picture shows a black Al-Shabaab emblem painted over the pillar. The caption says, “the terrorists that used to hide have come out of their holes, and the world has finally seen them.” Harakat al-Shabaab al-Mujahideen, popularly known as Al-Shabaab , or “the Youth,”(in Arabic) is an Islamist terrorist group with links to al-Qa’ida working to overthrow the Somali government. The group mainly operates in Somalia and has carried out several attacks in neighboring countries . 
Meta originally removed both posts from Facebook, citing its Dangerous Organizations and Individuals (DOI) policy, under which the company removes content that ""praises,” “substantively supports,” or “represents” individuals and organizations the company designates as dangerous. However, the policy recognizes that “users may share content that includes references to designated dangerous organizations and individuals to report on, condemn or neutrally discuss them or their activities.” In their appeals to the Board, both users argued that their content did not violate Meta’s Community Standards. The user in the first case described their account as a news outlet and stated that the post is a news report about the government operation against the terrorist group Al-Shabaab. The user in the second case stated that the aim of the post is to inform and raise awareness about the activities of Al-Shabaab and condemn it. After the Board brought these two cases to Meta’s attention, the company determined that the posts did not violate its policies. Although the posts refer to Al-Shabaab, a designated dangerous organization, they do not praise Al-Shabaab but instead report on and condemn the group. Meta concluded that its initial removals were incorrect as the posts fell within the exception to the DOI policy and restored both pieces of content to the platform. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Case Significance These cases highlight over-enforcement of Meta's DOI policy in a country experiencing armed conflict and terrorist attacks. This kind of error undermines genuine efforts to condemn, report on, and raise awareness about terrorist organizations, including alleged human rights abuses and atrocities committed by such groups. Previously, the Board has issued several recommendations regarding Meta's DOI policy. These include a recommendation to “assess the accuracy of reviewers enforcing the reporting allowance under the DOI policy to identify systemic issues causing enforcement errors,” on which Meta showed progress towards implementation (“Mention of the Taliban in News Reporting,” recommendation no. 5). The Board has also recommended that Meta “add criteria and illustrative examples to Meta’s DOI policy to increase understanding of exceptions, specifically around neutral discussion and news reporting,” a recommendation for which Meta demonstrated implementation through published information (“Shared Al Jazeera Post,” recommendation no. 1). Furthermore, the Board has recommended Meta ""implement an internal audit procedure to continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes,” (Breast Cancer Symptoms and Nudity, recommendation no. 5). Meta described this recommendation as work it already does but did not publish information to demonstrate implementation. Decision The Board overturns Meta’s original decisions to remove the two pieces of content. 
The Board acknowledges Meta’s correction of its initial error once the Board brought these cases to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-sf915isw,Thai Hostage Negotiator Interview,https://www.oversightboard.com/decision/bun-sf915isw/,"April 18, 2024",2024,,"News events,Violence,War and conflict",Dangerous individuals and organizations,Overturned,"Israel,Palestinian Territories,Thailand","The Board reviewed two Facebook posts containing near identical segments of a Sky News video interview with a Thai hostage negotiator describing his experience of working to free hostages captured by Hamas. After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and restored each of the posts.",7592,1128,"Multiple Case Decision April 18, 2024 The Board reviewed two Facebook posts containing near identical segments of a Sky News video interview with a Thai hostage negotiator describing his experience of working to free hostages captured by Hamas. After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and restored each of the posts. Overturned FB-XO941WWQ Platform Facebook Topic News events,Violence,War and conflict Standard Dangerous individuals and organizations Location Israel,Palestinian Territories,Thailand Date Published on April 18, 2024 Overturned FB-U3Y5VV2E Platform Facebook Topic News events,Violence,War and conflict Standard Dangerous individuals and organizations Location Israel,Palestinian Territories,Thailand Date Published on April 18, 2024 This is a summary decision . Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary The Board reviewed two Facebook posts containing near identical segments of a Sky News video interview with a Thai hostage negotiator describing his experience of working to free hostages captured by Hamas. After the Board brought the appeals to Meta’s attention, the company reversed its original decisions and restored each of the posts. About the Cases In December 2023, two users appealed Meta’s decisions to remove their posts, each containing near identical clips of an interview broadcast by Sky News in November 2023. This video features a negotiator from Thailand who led an unofficial team of negotiators and helped secure the release of Thai nationals taken hostage by Hamas on October 7 in Israel. In the clip, the interviewee describes his part in the negotiations. He says he believes the Thai hostages and all hostages “were well taken care of” by Hamas since they follow Islamic law and that Hamas had set no conditions on the Thai captives’ release. The negotiator, who sympathizes with Palestinian people, cites decades of what he described as Israeli mistreatment of Palestinians in the Occupied Territories. 
He asserts that Hamas was “targeting soldiers” and affirms Hamas was justified in taking hostages “to help the Palestinians” and “to get the world’s attention focused on the Israeli treatment of Palestinians.” In their appeals to the Board, both users said they posted the video to bring attention to the Thai negotiator’s statements, but for different reasons. One user said their intent was to highlight an interview that “shows Hamas in a more balanced light” in contrast to common attitudes of the “Western propaganda machine.” The user indicates in the caption that the negotiator is refusing to stick to the narrative and further explains that their post was previously censored because the content mentioned a particular political organization. On the second post, the user, who had posted the video without a caption or further commentary, said they were “calling out collaborators who lie and manipulate” in support of Hamas. Meta initially removed the posts from Facebook, citing its Dangerous Organizations and Individuals policy under which the company prohibits “glorification” (previously “praise”), “support” and “representation” of individuals and organizations it designates as dangerous. However, the policy recognizes that “users may share content that includes references to designated dangerous organizations and individuals in the context of social and political discourse. This includes content reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities.” Once the Board brought these cases to Meta’s attention, the company determined that the posts do “not include any captions that glorify, support, or represent a dangerous organization or individual.” Additionally, the video was “previously shared by Sky News, and other news outlets, on Facebook… and [therefore] falls under the scope of its news reporting carveout.” Meta restored both posts. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Significance of Cases These cases emphasize the persistent over-enforcement of the company’s Dangerous Organizations and Individuals Community Standard , as highlighted in previous decisions by the Board. Continued errors in applying this policy reduce users’ access to neutral commentary, news reporting and condemnatory posts, to the detriment of freedom of expression. In a previous decision, the Board urged Meta to “include more comprehensive data on Dangerous Organizations and Individuals Community Standard error rates in its transparency report,” ( Öcalan’s Isolation , recommendation no. 12), which the company declined to implement after a feasibility assessment. Furthermore, the Board recommended the inclusion of “criteria and illustrative examples to Meta’s Dangerous Organizations and Individuals Community Standard policy to increase understanding of exceptions, specifically around neutral discussion and news reporting,” in order to provide greater guidance to human reviewers ( Shared Al Jazeera Post , recommendation no. 1). 
This recommendation is particularly relevant as it concerned the removal of a news post about a threat of violence from the Izz al-Din al-Qassam Brigades, the military wing of the Palestinian group Hamas. The Board has also advised Meta to “assess the accuracy of reviewers enforcing the reporting allowance… to identify systemic issues causing enforcement errors,” (Mention of the Taliban in News Reporting, recommendation no. 5). While Meta has reported implementation of both recommendations, for Mention of the Taliban in News Reporting, recommendation no. 5, it has not published information to demonstrate this. The Board has recommended improvements to moderation of posts containing videos, calling on the company to adopt “product and/or operational guideline changes that allow more accurate review of long-form videos,” (Cambodian Prime Minister, recommendation no. 5). In response, Meta stated that it would “continue to iterate on new improvements for our long-form video review processes and metric creation and evaluation” in its Q3 2023 Quarterly Update on the Oversight Board. The Board believes that full implementation of these recommendations could reduce the number of enforcement errors under Meta’s Dangerous Organizations and Individuals policy. Decision The Board overturns Meta’s original decisions to remove the content. The Board acknowledges Meta’s correction of its initial errors once the Board brought these cases to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" bun-xgwxj6hs,Anti-Colectivos Content in Post-Election Venezuela,https://www.oversightboard.com/decision/bun-xgwxj6hs/,"September 5, 2024",2024,,"Elections,Freedom of expression,Protests",Violence and incitement,Upheld,Venezuela,"In this expedited case bundle, the Oversight Board reviews two videos containing violent language against the colectivos, state-linked informal armed groups in Venezuela, in the context of the protests following the July 2024 presidential elections.",27324,4180,"Multiple Case Decision September 5, 2024 In this expedited case bundle, the Oversight Board reviews two videos containing violent language against the colectivos, state-linked informal armed groups in Venezuela, in the context of the protests following the July 2024 presidential elections. Upheld IG-BLFI4MP4 Platform Instagram Topic Elections,Freedom of expression,Protests Standard Violence and incitement Location Venezuela Date Published on September 5, 2024 Overturned FB-SV81R3HF Platform Facebook Topic Elections,Freedom of expression,Protests Standard Violence and incitement Location Venezuela Date Published on September 5, 2024 1. Case Description Following Venezuela’s presidential election on July 28, 2024, the country has been in turmoil. After Venezuela’s election authorities announced that current President Nicolás Maduro had won the election in widely disputed results, thousands of people protested, and Maduro in turn called for an “iron fist” response. Online, the government has moved to restrict access to some social media platforms and encouraged citizens to report protesters to authorities. Offline, thousands have been detained and more than two dozen killed, with state-supported armed groups known as “colectivos” involved in the crackdown. In the weeks after the election, Meta’s moderators noted an influx of anti-colectivos content. 
This has raised critical questions about the balance the company must strike in moderating posts that could contain vital political criticism and raise awareness of human rights abuses in a repressive environment yet may also employ violent language during such a volatile period. The two cases in this bundle involve videos posted after the July 2024 presidential election and during the ongoing protests that followed. Both posts reference colectivos. In the first case, an Instagram user posted a video in Spanish without a caption. The video appears to be taken from inside an apartment complex showing a group of armed men on motorbikes pulling up to it. A woman can be heard shouting that the colectivos are trying to enter the building. The person filming shouts “Go to hell! I hope they kill you all!” Meta found this content did not violate its Violence and Incitement policy because, in the company’s view, the expression was a conditional or aspirational statement against a violent actor rather than a call to action. In the second case, a Facebook user shared a video that appears to be taken from a moving motorcycle. It shows a group of men on motorbikes, presumably colectivos, and people running on the street. The man filming shouts that the colectivos are attacking them. The video has a caption in Spanish calling out the security forces for not defending the people and saying that the security forces should go and “kill those damn colectivos.” Meta removed this post under the Violence and Incitement policy as a call to action to commit high-severity violence. 2. Expedited Case Background and Context Since Maduro came to power in 2013, the country has undergone an economic and political crisis with continued repression of opposition and dissent (UN High Commissioner on Human Rights report on Venezuela, A/HRC/53/54 , November 2023) through enforced disappearances and arbitrary detentions, torture, and sexual or gender-based violence . The situation has recently worsened due to the ongoing electoral crisis in the country. Venezuela held presidential elections on July 28, 2024, with incumbent candidate, President Nicolás Maduro and the opposition Democratic Unitary Platform’s candidate Edmundo González Urrutia, dominating the contest. In the early hours of July 29, 2024, the president of Venezuela’s National Electoral Council (CNE) proclaimed Maduro the winner with no explanation as to how it had counted the votes. The CNE has not published a breakdown of the results from polling stations across the country, as required by Venezuelan law, or other evidence to substantiate this claim. The results have been widely disputed. The UN Panel of Electoral Experts sent by the UN Secretary-General, at the invitation of the CNE of Venezuela to follow and report on the election process , stated that the CNE’s results reporting “fell short of the basic transparency and integrity measures that are essential to holding credible elections.” The Carter Center , a civil society group that monitors elections and was similarly invited by the CNE to observe the presidential election, also found these “did not meet international standards of electoral integrity and cannot be considered democratic,” and further noted the CNE’s “failure to announce disaggregated results by polling station constitutes a serious breach of electoral principles.” Thousands have protested Maduro’s claims of victory. 
Street protests, as well as criticism on social media, in the weeks following the election have been met with fierce repression by the state, with the state-linked colectivos joining this crackdown, inducing a climate of widespread fear. Between July 28 and August 8, in the context of the protests, the UN has reported 23 deaths, mostly by gunfire. The government has also detained more than 2,000 people, including more than 100 children and adolescents. Protesters, leaders, members and supporters of political parties, journalists, and human rights defenders considered or perceived by the authorities to be in opposition, and individuals who participated in protests or expressed their opinions on social media have been targeted and harassed by state forces and colectivos. According to the Inter-American Commission on Human Rights (IACHR), the demonstrations have been harshly repressed by state forces and colectivos, and while most of the reported deaths are attributed to state forces, at least six of those are attributed to the colectivos. The Commission further stated that the colectivos “act with the consent, tolerance, or acquiescence of the State.” The UN Independent International Fact-Finding Mission on Venezuela, as well as the UN High Commissioner for Human Rights, have issued statements highlighting and expressing concern about state repression, including violence perpetrated by security forces and the colectivos during these protests. Since 2019, through various reports on the situation of human rights in the Bolivarian Republic of Venezuela (A/HRC/41/18, A/HRC/44/20, A/HRC/48/19 and A/HRC/53/54), the Office of the UN High Commissioner for Human Rights (OHCHR) stated that “pro-government armed civilian groups” called colectivos “contribute to [a] system [of targeted repression and persecution on political grounds] by exercising social control in local communities and supporting security forces in repressing demonstrations and dissent.” The OHCHR has documented attacks by armed colectivos against political opponents, demonstrators and journalists, with security forces making “no effort to prevent these attacks,” and has called upon the Venezuelan government to “disarm and dismantle” armed colectivos and “ensure investigations into their crimes.” Both the UN and Inter-American system Special Rapporteurs on freedom of expression have noted their concerns about the lack of freedom of expression in Venezuela: “There are worrying limitations on the exercise of freedom of expression in Venezuela, marked by the harassment and persecution of dissident voices, particularly journalists, media workers and independent media outlets, as well as social leaders and human rights defenders. Restrictive measures have also been reported in the digital space in Venezuela, notably via unjustified internet shutdowns and targeted content blocking against independent media outlets. The closure of media outlets, and/or seizure of their equipment, ordered by the government are increasingly limiting the access of citizens to reliable information from independent sources, while accentuating a general environment of self-censorship among the media.” The IACHR and civil society organizations have also noted that following the 2024 presidential elections, there have been reports of harassment and persecution strategies enabled by the use of technology. 
The government has intensified its digital surveillance and censorship measures, using tools such as VenApp to report on dissenting activities and to dox demonstrators, video surveillance to monitor protests, and patrolling drones to provoke widespread fear. 3. Justification for Expedited Review and Meta’s Response Meta referred both pieces of content to the Board on August 15, 2024 for a decision on an expedited basis. The Oversight Board’s Bylaws provide for expedited review in “exceptional circumstances, including when content could result in urgent real-world consequences,” and decisions are binding on Meta (Charter, Art. 3, section 7.2; Bylaws, Art. 2, section 2.1.2). The expedited process does not include the extensive research, consultations or public comments that would be undertaken for standard cases. The case is decided on the information available to the Board at the time of deliberation and is decided by a five-member panel without a full vote of the Board. Meta informed the Board that following the widespread protests against the announced election results and subsequent crackdowns by state actors and colectivos, the company noted an increase in content containing violent speech against colectivos on its platforms. It described colectivos as a general term for armed paramilitary-style groups closely aligned with the regime in Venezuela, that have engaged in clashes with protestors following the election. In this context, colectivos are considered violent actors by the company. Meta’s policies distinguish between permitted “statements expressing a hope that violent actors will be killed,” and prohibited “calls for action against violent actors.” Through this distinction, internally known as the “violent actor carveout,” the company aims to balance “legitimate discussion on topics of public importance” with “safety concerns.” Meta finds this balance “particularly difficult” in the context of violent threats against colectivos for several reasons: “(1) the heightened voice concerns around people seeking to raise awareness of the colectivos, sometimes in a self-defense context, (2) the limited outlets for free expression, and (3) the role of colectivos in the violent crackdowns against protesters.” While, in general, the company views “aspirational or conditional threats of violence, including expression of hope that violence will be committed, directed at terrorists and other violent actors” as “non credible, absent specific evidence to the contrary,” it removes “statements of intent or calls for action” to commit violence, irrespective of the target, to ensure the most severe threats are captured. Nonetheless, Meta acknowledged that in the context of Venezuela, the speech at issue in the Facebook post and posts like it express the perspective of people who may feel victimized and unsafe due to the presence of colectivos in their daily lives, and do not have other places to express their fear and frustration, given limited outlets for free expression in the country. At the same time, as the situation in Venezuela remains volatile, the company chose to err on the side of safety and remove the Facebook content applying the letter of its Violence and Incitement policy. While colectivos are not vulnerable targets, but organized, heavily armed groups, the company explained it was concerned that allowing calls for action and statements of intent to kill these groups could nevertheless contribute to a heightened risk of offline violence in an ongoing crisis. 
Finally, because Meta recognized that the two pieces of content express similar sentiment, it asked for the Board’s input on this distinction, particularly in the context of the post-electoral crisis in Venezuela. The Board also notes that since 2021, Meta has reduced the distribution of political content on its platforms. This means that, unless a user proactively searches for it, Meta will not recommend this type of content on its platforms. The company defines political content as generally including posts discussing politics, laws, elections and other social topics, which presumably includes content similar to the posts addressed in these cases. The Board accepted these cases on an expedited basis because of the importance of Meta’s platforms to freedom of expression during the ongoing crisis in Venezuela, where government repression of protests has led to escalating violence and human rights violations. It is important that Meta’s policies and enforcement measures allow for political dissent while not contributing to violence in the country. Both cases fall within the Board’s elections and civic space as well as crisis and conflict strategic priorities. 4. User Submission Meta notified the users about their respective cases being referred to the Board. The users were invited to submit a statement, but did not provide one. 5. Decision In the first case, the Board upholds Meta’s decision to leave the content on Instagram. In the second case, the Board overturns Meta’s decision to remove the content from Facebook. It finds that in the context of the ongoing crisis in Venezuela, allowing both pieces of content is consistent with Meta’s content policies, values and human rights responsibilities. 5.1 Compliance with Meta’s Content Policies The Board finds that neither post violates Meta’s content policies. Meta’s Violence and Incitement policy prohibits threats of violence, defined as “statements or visuals representing an intention, aspiration, or call for violence against a target.” Previously, Meta acknowledged in its policy rationale that it presumed that “aspirational or conditional threats of violence” that target violent actors are “non-credible, absent specific evidence to the contrary.” Following the Board's decision in the Haitian Police Station Video case, which noted that this principle was not reflected in a rule, Meta updated its rules on April 25, 2024, to include an exception that allows “threats when shared in awareness-raising or condemning context, […] or certain threats against violent actors, like terrorist groups.” This exception is relevant to the cases, as Meta informed the Board that it considers colectivos to be violent actors. In the first case, the Board agrees with Meta’s decision to keep the content on Instagram. It finds the statement “Go to hell! I hope they kill you all!” to be an aspirational statement that is allowed under the violent actor exception or carveout. The Board agrees with Meta’s assessment that the colectivos have engaged in violent acts against perceived government opponents. The video contains a wish for violence to be carried out against the colectivos, and the post falls squarely within the violent actor exception for aspirational statements. However, in the second case, the Board disagrees with Meta that the statement that security forces should “kill those damn colectivos” in the Facebook post is a threatening call for action. 
While the Board understands the rationale underlying Meta’s general approach to threats targeting violent actors, which distinguishes between permitted “statements expressing a hope that violent actors will be killed,” and prohibited “calls for action against violent actors,” it finds this content similar to the Instagram post and, in the context in which it was posted, should also be understood as an aspirational statement eligible for the violent actor exception. The phrase “kill those damn colectivos” was part of a broader caption calling on the security forces to defend people against violence being perpetrated by paramilitary groups, in the context of a video that shows a group of men, presumably colectivos, on motorbikes, and people running on the street, with a man shouting that the colectivos are attacking them. In response to the Board’s questions, Meta explained that the reference to the security forces in the content did not impact its decision to remove the post as the company does not allow “calls for action targeting violent actors” regardless of who is being asked to perpetrate violence. It further explained that the company is not generally in a position to determine whether actors referenced in a post are authorized to use high-severity violence or whether the use of such force would be justified in a given situation. The Board understands Meta’s reasons for taking this approach, absent specific context. However, in this case and in the context of the ongoing crisis in Venezuela, the Board finds that the reference to the security forces in the video, and the fact that the user is calling them out for not defending the people from the violence perpetrated by the colectivos, are both relevant to understanding the content as a whole. This context makes the threat, which read literally could be understood as a call for action, not credible, and thus aspirational for several reasons. First, the security forces are linked to the colectivos, and both are engaged in repression of the opposition (see Section 2 above). The security forces are therefore extremely unlikely to attack, or even to be perceived as willing to attack the colectivos in the current context of Venezuela. Second, the person posting the content appears to be fleeing the colectivos. As Meta noted in its referral, anti-colectivos content is arising amid their participation in a violent crackdown on largely non-violent protests. The user posting this content is a private individual, with no significant influence or authority over others (unlike in the Tigray Communication Affairs Bureau decision ). Further, the people in the video appear to be the target of violence or harassment by the colectivos, as opposed to a source of violence against the colectivos. Given the reasons above, while the caption expressly calls for security forces to “kill the damn colectivos,” the statement is better interpreted, with both the context of the video and the wider crisis in Venezuela, as an expression of fear and frustration, on one of the limited avenues for free expression in the country. The Board acknowledges Meta’s concern that allowing this type of expression could contribute to a heightened risk of offline violence in an ongoing crisis. 
However, given the specific context of Venezuela, in which widespread repression and violence is carried out by state forces jointly with colectivos, and where there are strong restrictions on people’s rights to freedom of expression and peaceful assembly, it is fundamental to allow people to freely express their dissent, anger or desperation, even resorting to strong language. Statements such as those contained in this post are thus better understood in the current context in Venezuela, as non-credible aspirational statements, eligible for the violent actor exception. The Board acknowledges that in crisis situations, where the stakes are high both in leaving up harmful content and removing protected political speech, Meta should adapt generalized enforcement guidance to be more responsive to the realities of how people targeted by state-backed violence express themselves on its platforms. In this regard, Meta has developed a Crisis Policy Protocol , allowing it to implement time-limited adaptations to its policies and how they are enforced. When, as in this case, Meta designates a crisis situation, it should assess the specific power dynamics of the crisis at hand and the likelihood of real-world harm to determine the extent to which violent expressions of anger or desperation are likely to constitute credible threats or lead to offline violence, or if they should be understood as aspirational, absent specific evidence to the contrary. The Board believes that the present context in Venezuela justifies the activation of this protocol to ensure Meta respects the voice of protesters and others targeted by state-backed violence. Specifically, there should be an expansion of guidance around how to define “aspirational or conditional statements of violence” against some violent actors. This expansion of enforcement guidance should be subject to regular review, with input from potentially affected groups and relevant stakeholders. 5.2 Compliance with Meta’s Human Rights Responsibilities The Board finds that keeping the post on Instagram and restoring the post to Facebook aligns with Meta’s human rights responsibilities. Article 19 of the ICCPR guarantees the freedom to share, seek, and receive information and ideas “of all kinds.” Protected expression includes “political discourse,” commentary on public affairs, and “discussion of human rights,” ( General Comment No. 34 , 2011, para. 11; General Comment No. 37 , 2020, para. 32). Moreover, government actors are “legitimately subject to criticism and political opposition” ( General Comment No. 34 , 2011, para. 38). Access to social media is crucial in Venezuela, where longstanding repression of opposition voices and independent media has only become more acute in the present crisis. As “digital gatekeepers,” social media platforms have a “profound impact” on public access to information ( A/HRC/50/29 , para. 90; See Mention of the Taliban in News Reporting , Iran Protest Slogan decisions). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human-rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. 
In doing so, the Board attempts to be sensitive to how those rights may be different when applied to a private social media company compared to when they are applied to a government. Nonetheless, as the UN Special Rapporteur on freedom of expression has stated, while companies do not have the obligations of governments, “their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (report A/74/486 , para. 41). The principle of legality requires that rules restricting freedom of expression should be accessible and sufficiently clear to provide guidance as to what is permitted and what is not. The Board finds that, as applied to these cases, the violent actor exception or carveout to Meta’s incitement rules is sufficiently clear, especially after the April 25, 2024 updates. Nonetheless, as mentioned above, in crisis situations, Meta should adapt its generalized enforcement guidance to be more responsive to contextual factors that impact how people targeted by state-backed violence express themselves on its platforms. Similarly, the Board has previously found that in seeking to “prevent potential offline violence” by removing content that poses “a genuine risk of physical harm or direct threats to public safety,” the Violence and Incitement Community Standard serves the legitimate aims of protecting the right to life (Article 6, ICCPR) and the right to security of person (Article 9 ICCPR, General Comment No. 35 , para. 9; See Reporting on Pakistani Parliament Speech , Tigray Communication Affairs Bureau , Hostages Kidnapped From Israel , Iran Protest Slogan decisions). The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” ( General Comment No. 34, para. 34 ). The Board finds that it was not necessary to remove either post. As detailed in Section 5.1 above, various contextual factors made clear that neither post should be understood as a call to others to engage in violence, and, importantly, it was neither imminent nor likely that violence would result from these statements. The people who posted the content are private individuals sharing their direct experiences of the violence or harassment the colectivos are inflicting on them. In this context, their posts can be understood as condemning the security forces (in the Facebook case) and a cry of fear and desperation, calling for help in a time of crisis and uncertainty (in both). Both posts depict and describe how the colectivos are attacking or harassing people and criticize these actions, and in the context of Venezuela, the imminence or even likelihood of harm posed by content like this is low. The targets of aspirational violence are state-backed forces that have contributed to the longstanding repression of civic space and other human rights violations in Venezuela, including in the present post-election crisis. By contrast, the civilian population has largely been the target of human rights abuses. As previously mentioned in this decision, the posts were published in the context of high social and political tension characterized by a wave of repression following the highly disputed results of the 2024 presidential election. 
In both posts, which express very similar sentiments, private individuals resort to strong language to express their fear, anger and desperation regarding the actions of the colectivos, and the lack of response by the security forces (in the Facebook case). The removal of content such as the one in the Facebook case, which in context does not constitute a credible threat, has a significant negative impact on the people denouncing the actions of colectivos, who face enormous constraints on free speech and on holding state and state-backed actors accountable. The Board is also deeply concerned that in the context of Venezuela, the company’s policy to reduce the distribution of political content could undermine the ability of users expressing political dissent and raising awareness about the situation in Venezuela to reach the widest possible audience. Should this be the case, the Board believes that a policy lever could be included in its Crisis Policy Protocol to ensure that political content, especially around elections and post-electoral protests, is eligible for the same reach as non-political content. Finally, the Board has repeatedly affirmed the importance of evaluating context to ensure political speech is protected, especially in countries in conflict or that face significant constraints on freedom of expression, as in Venezuela (see the Colombia Protests , Iran Protest Slogan and Call for Women’s Protest in Cuba decisions). Meta should therefore also use the Crisis Policy Protocol to enable responses to situations like those seen in Venezuela. Particularly in contexts with repression of democratic dissent, when the threats appear to be non-credible, and the likelihood of such content leading to offline violence is low, Meta should adjust its policy and enforcement guidance accordingly, subject to regular review, with input from potentially affected groups and relevant stakeholders. Return to Case Decisions and Policy Advisory Opinions" bun-zr5os2ko,Footage of Moscow Terrorist Attack,https://www.oversightboard.com/decision/bun-zr5os2ko/,"November 19, 2024",2024,,"News events,Violence",Violent and graphic content,Overturned,Russia,"The Board has overturned Meta’s decisions to remove three Facebook posts showing footage of the March 2024 terrorist attack in Moscow, requiring the content to be restored with “Mark as Disturbing” warning screens.",49758,7685,"Multiple Case Decision November 19, 2024 The Board has overturned Meta’s decisions to remove three Facebook posts showing footage of the March 2024 terrorist attack in Moscow, requiring the content to be restored with “Mark as Disturbing” warning screens. Overturned FB-A7NY2F6F Platform Facebook Topic News events,Violence Standard Violent and graphic content Location Russia Date Published on November 19, 2024 Overturned FB-G6FYJPEO Platform Facebook Topic News events,Violence Standard Violent and graphic content Location Russia Date Published on November 19, 2024 Overturned FB-33HL31SZ Platform Facebook Topic News events,Violence Standard Violent and graphic content Location Russia Date Published on November 19, 2024 Russian Translation Footage of Moscow Terrorist Attack Decision PDF To read the full decision in Russian, click here . Чтобы прочитать это решение на русском языке, нажмите здесь . To download a PDF of the full decision, click here. 
The Board has overturned Meta’s decisions to remove three Facebook posts showing footage of the March 2024 terrorist attack in Moscow, requiring the content to be restored with “Mark as Disturbing” warning screens. While the posts violated Meta’s rules on showing the moment of designated attacks on visible victims, removing them was not consistent with the company’s human rights responsibilities. The posts, which discussed an event that was front-page news worldwide, are of high public interest value and should be protected under the newsworthiness allowance, according to the majority of the Board. In a country such as Russia, with a closed media environment, the accessibility of such content on social media is even more important. Each post contains clear language condemning the attack and showing solidarity with or concern for the victims, with no clear risk of leading to radicalization or incitement. Suppressing matters of vital public concern based on unsubstantiated fears that they could promote radicalization is not consistent with Meta’s responsibilities to free expression. As such, Meta should allow, with a “Mark as Disturbing” warning screen, third-party imagery of a designated event showing the moment of attacks on visible but not identifiable victims when shared for news reporting, condemnation and raising awareness. About the Cases The Board has reviewed three cases together involving content posted on Facebook by different users immediately after the March 22, 2024, terrorist attack at a concert venue and retail complex in Moscow. The first case featured a video showing part of the attack inside the retail complex, seemingly filmed by a bystander. While the attackers and people being shot were visible but not easily identifiable, others leaving the building were identifiable. The caption asked what is happening in Russia and included prayers for those impacted. The second case featured a shorter clip of the same footage, with a caption warning viewers about the content and stating there is no place in the world for terrorism. The third case involved a post shared on a Facebook group page by an administrator. The group’s description expresses support for former French presidential candidate Éric Zemmour. The post included a still image from the attack, which could have been taken from the same video, showing armed gunmen and victims. Additionally, there was a short video of the retail complex on fire, filmed by someone driving past. The caption stated that Ukraine had said it had nothing to do with the attack, while pointing out that nobody had claimed responsibility. The caption also included a statement of support for the Russian people. Meta removed all three posts for violating its Dangerous Organizations and Individuals policy, which prohibits third-party imagery depicting the moment of such attacks on visible victims. Meta designated the Moscow attack as a terrorist attack on the day it happened. According to Meta, the same video shared in the first two cases had already been posted by a different user and then escalated to the company’s policy or subject matter experts for additional review earlier in the day. Following that review, Meta decided to remove the video and added it to a Media Matching Service (MMS) bank. The MMS bank subsequently determined that the content in the first two cases matched the banked video that had been tagged for removal and automatically removed it. In the third case, the content was removed by Meta following human review. 
The attack carried out on March 22, 2024 in Moscow’s Crocus City Hall claimed the lives of at least 143 people. An affiliate of the Islamic State, ISIS-K, claimed responsibility soon after the attack. According to experts consulted by the Board, tens of millions of Russians watched the video of the attack on state-run media channels, as well as Russian social media platforms. While Russian President Vladimir Putin claimed there were links to Ukraine and support from Western intelligence for the attack, Ukraine has denied any involvement. Key Findings While the posts were either reporting on, raising awareness of or condemning the attacks, Meta does not apply these exceptions under the Dangerous Organizations and Individuals policy to “third-party imagery depicting the moment of [designated] attacks on visible victims.” As such, it is clear to the Board that all three posts violate Meta’s rules. However, the majority of the Board finds that removing this content was not consistent with Meta’s human rights responsibilities, and the content should have been protected under the newsworthiness allowance. All three posts contained subject matter of pressing public debate related to an event that was front page news worldwide. There is no clear risk of the posts leading to radicalization or incitement. Each post contains clear language condemning the attack, showing solidarity with or concern for the victims, and seeking to inform the public. In combination with the lack of media freedom in Russia, and the fact the victims are not easily identifiable, this further moves these posts in the direction of the public interest. Suppressing content on matters of vital public concern based on unsubstantiated fears it could promote radicalization is not consistent with Meta’s responsibilities to free expression. This is particularly the case when the footage has been viewed by millions of people and accompanied by allegations that the attack was partly attributable to Ukraine. The Board notes the importance of maintaining access to information during crises particularly in Russia, where people rely on social media to access information or to raise awareness among international audiences. While, in certain circumstances, removing content depicting identifiable victims is necessary and proportionate (e.g., in armed conflict when victims are prisoners of war), as the victims in these cases are not easily identifiable, restoring the posts with an age-gated warning screen is more in line with Meta’s human rights responsibilities. Therefore, Meta should amend its policy to allow third-party imagery of visible but not personally identifiable victims when clearly shared for news reporting, condemnation or awareness raising. A minority of the Board disagrees and would uphold Meta’s decisions to remove the posts from Facebook. For the minority, the graphic nature of the footage and the fact that it shows the moment of attack and, in this case, death of visible victims, makes removal necessary for the dignity of the victims and their families. In addition, the Board finds that the current placement of the rule on footage of violating violent events under the Dangerous Organizations and Individuals policy creates confusion for users. 
While the “We remove” section implies that condemnation and news reporting are permissible, other sections state that perpetrator-generated imagery and third-party imagery of the moment of attacks on visible victims are prohibited, without specifying that Meta will remove such content even if it condemns or raises awareness of attacks. The Oversight Board’s Decision The Oversight Board overturns Meta’s decisions to remove the three posts, requiring the content to be restored with “Mark as Disturbing” warning screens. The Board also recommends that Meta: * Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background The Oversight Board has reviewed three cases together involving content posted on Facebook by different users immediately after the March 22, 2024, terrorist attack at a concert venue and retail complex in Moscow. Meta’s platforms have been blocked in Russia since March 2022, when a government ministry labeled the company an “extremist organization.” However, Meta’s platforms remain accessible to people through Virtual Private Networks (VPNs). In the first case, a Facebook user posted a short video clip on their profile accompanied by a caption in English. The video showed part of the attack from inside the retail complex, with the footage seemingly taken by a bystander. Armed people were shown shooting unarmed people at close range, with some victims crouching on the ground and others fleeing. The footage was not high resolution. While the attackers and people being shot were visible but not easily identifiable, others leaving the building were identifiable. In the audio, gunfire could be heard, with people screaming. The caption asked what is happening in Russia and included prayers for those impacted. When Meta removed the post within minutes of it being posted, it had fewer than 50 views. In the second case, a different Facebook user posted a shorter clip of the same footage, also accompanied by an English caption, which warned viewers about the content, stating there is no place in the world for terrorism. When Meta removed the post within minutes of it being posted, it had fewer than 50 views. The third case involves a post shared on a group page by an administrator. The group’s description expresses support for former French presidential candidate Éric Zemmour. The post included a still image from the attack, which could have been taken from the same video, showing armed gunmen and victims. Additionally, there was a short video of the retail complex on fire, filmed by someone driving past. The French caption included the word “Alert” alongside commentary on the attack, such as the reported number of fatalities. The caption also stated that Ukraine had said it had nothing to do with the attack, while pointing out that nobody had claimed responsibility for it. The caption concluded with a comparison to the Bataclan terrorist attack in Paris and a statement of support for the Russian people. When Meta removed the post the day after it was posted, it had about 6,000 views. The company removed all three posts under its Dangerous Organizations and Individuals Community Standard, which prohibits sharing all perpetrator-generated content relating to designated attacks as well as footage captured by or imagery produced by third parties (e.g., bystanders, journalists), depicting the moment of terrorist attacks on visible victims. Meta designated the Moscow attack as a terrorist attack on the same day it happened. 
According to Meta, the same video shared in the first two cases had already been posted by a different user and then escalated to the company’s policy or subject matter experts for additional review earlier on in the day. Following that review, Meta decided to remove the video and added it to a Media Matching Service (MMS) bank. The MMS bank subsequently determined that the content in the first two cases matched the banked video that had been tagged for removal and automatically removed it. Meta did not apply a strike or a feature limit to the users’ profiles as the bank was configured to remove content without imposing a strike. In the third case, the content was removed by Meta following human review, with the company applying a strike that resulted in a 30-day feature limit. The feature limit applied to the user prevented them from creating content on the platform, creating or joining Messenger rooms, and advertising or creating live videos. It is unclear why the MMS system did not identify this content. In all three cases, the users appealed to Meta. Human reviewers found each post violating. After the Board selected these cases for review, Meta confirmed its decisions to remove all three posts were correct but removed the strike in the third case. The Board notes the following context in reaching its decision. The attack carried out on March 22, 2024 in Moscow’s Crocus City Hall claimed the lives of at least 143 people. An affiliate of the Islamic State, ISIS-K, claimed responsibility soon after the attack. Russian investigators quickly charged four men. Russian officials stated they had 11 people in custody, including the four alleged gunmen, and claimed to have found a link between the attackers and Ukraine although Ukraine has denied any involvement. ISIS-K emerged in 2015 from disaffected fighters of the Pakistani Taliban. The group has been fighting the Taliban in Afghanistan, as well as carrying out targeted attacks in Iran, Russia and Pakistan. According to reporting , the group has “released a flood of anti-Russian propaganda, denouncing the Kremlin for its interventions in Syria and condemning the Taliban for engaging with the Russian authorities decades after the Soviet Union invaded Afghanistan.” According to experts consulted by the Board, tens of millions of Russians watched the video of the attack on state-run media channels, as well as Russian social media platforms. Russian President Vladimir Putin claimed there were links to Ukraine and support from Western intelligence for the attack. According to a public opinion survey conducted by the Levada Center in Russia from April 18-24, almost all respondents said they knew of the attack and were following the story closely, while half believed that the Ukrainian intelligence services were involved. According to research commissioned by the Board, the video shared in these cases was circulated widely online, including by Russian and international media accounts. Researchers found some posts on Facebook with the footage and isolated instances of accounts possibly affiliated with or supportive of ISIS celebrating the attack. Researchers report that social media platforms with less rigorous content moderation contain significantly more perpetrator-generated content. In 2024, VK, WhatsApp and Telegram were the most widely used platforms in Russia. 
The government exerts significant control of the media environment, with direct or indirect authority over “all national television networks and most radio and print outlets.” Since the invasion of Ukraine, the “government also began restricting access to [a] wide variety of websites, including those of domestic and foreign news outlets. More than 300 media outlets have been forced to suspend their activities.” The government also severely restricts reporting access for foreign media outlets and has subjected affiliated journalists to false charges, arrests and prison. 2. User Submissions The users in all three cases appealed to the Board. In their statements, they explained that they shared the video to warn people in Russia to stay safe. They said that they condemn terrorism, and that Meta should not prevent them from informing people of real events. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Dangerous Organizations and Individuals Community Standard The Dangerous Organizations and Individuals policy rationale states that, in an effort to prevent and disrupt real-world harm, Meta does not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on its platforms. The Community Standard prohibits “content that glorifies, supports, or represents events that Meta designates as violating violent events,” including terrorist attacks. Nor does it allow “(1) glorification, support or representation of the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims ,” (emphasis added). The Community Standard provides the following examples of violating violent events: “terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders, or hate crimes.” However, it does not provide specific criteria for designation or a list of designated events. According to internal guidelines for reviewers, Meta removes imagery depicting the moment of attacks on visible victims “regardless of sharing context.” Violent and Graphic Content Community Standard The Violent and Graphic Content policy rationale states that the company understands people “have different sensitivities with regard to graphic and violent imagery,” and that Meta removes the most graphic content, also adding a warning label to other graphic content to warn people. This policy allows, with a “Mark as Disturbing” warning screen, “imagery (both videos and still images) depicting a persons’ violent death (including their moment of death or the aftermath) or a person experiencing a life threatening event.” The policy prohibits such imagery when they depict dismemberment, visible innards, burning or throat slitting. Newsworthiness Allowance In certain circumstances, the company will allow content that may violate its policies to remain on the platform if it is “ newsworthy and if keeping it visible is in the public interest.” When making the determination, “[Meta will] assess whether that content surfaces an imminent threat to public health or safety, or gives voice to perspectives currently being debated as part of a political process.” The analysis is informed by country-specific circumstances, considering the nature of the speech and political structure of the country affected. “For content we allow that may be sensitive or disturbing, we include a warning screen. 
In these cases, we can also limit the ability to view the content to adults, ages 18 and older. Newsworthy allowance can be ‘narrow,’ in which an allowance applies to a single piece of content or ‘scaled,’ which may apply more broadly to something like a phrase.” II. Meta’s Submissions Meta found all three posts violated its Dangerous Organizations and Individuals policy prohibiting third-party imagery depicting the moment of such attacks on visible victims. Meta finds “removing this content helps to limit copycat behaviors and avoid the spread of content that raises the profile of and may have propaganda value to the perpetrator.” Additionally, the company aims to “protect the dignity of any victims who did not consent to being the subject of public curiosity and media attention.” According to Meta, as with all policy forums, the company will consider a range of sources in making a decision, including academic research, external stakeholder feedback, and insights from internal policy and operational teams. Meta also explained that it will allow such violating content under the newsworthy allowance on a limited basis. However, in these three cases, the company did not apply the allowance as it concluded that the public interest value of permitting the content to be distributed did not outweigh the risk of harm. Meta considered the fact that the footage exposed visible victims and was shared shortly after the attacks. In its view, displaying this footage was not necessary to condemn or raise awareness. Meta recognizes that removing this kind of content regardless of context “can risk over-enforcement on speech and may limit information and awareness about events of public concern, particularly when coupled with commentary condemning, raising awareness, or neutrally discussing such attacks.” The current default approach is that the company configures MMS banks to remove all content that matches banked content, regardless of caption, without applying a strike. The approach prevents the distribution of the offending content without applying a penalty, recognizing that many users may be sharing depictions of a crisis for legitimate reasons or without nefarious motives. Meta conducted a formal policy development process regarding designated violent attack imagery, including videos depicting terrorist attacks. That process concluded this year, after the content in these three cases was posted. As a result of this process, Meta adopted the following approach: after an event is designated, Meta will remove all violating event imagery (perpetrator-generated or third-party showing moment of attacks on victims) without strikes in all sharing contexts for longer periods than the current protocol. After this period, only imagery shared with glorification, support or representation will be removed and receive a severe strike. The company stated that this approach is the least restrictive means available to mitigate harms to the rights of others, including the right to privacy and protecting the dignity of the victims and their families. The Board asked Meta questions on whether the company considered the impact in countries with closed media environments of prohibiting all perpetrator and third-party imagery of moment of attacks on visible victims; whether there are policy levers in the Crisis Policy Protocol relevant to designated events; and the outcome of Meta’s policy development process on imagery of designated events. Meta responded to all questions. 4. 
Public Comments The Oversight Board received six public comments that met the terms for submission . Five of the comments were submitted from the United States and Canada and one from West Africa. To read public comments submitted with consent to publish, click here . The submissions covered the following themes: risks of overenforcement; use of graphic videos by designated entities and risk of radicalization; the psychological harms from proliferation of graphic content; the challenge of distinguishing between perpetrator-produced and third-party footage; the importance of social media for timely information during crises; the value of such content for documentation by the public, journalists and researchers; the option of age-gated warning screens; and the need to clarify definitions in the Dangerous Organizations and Individuals and Violent and Graphic Content policies. 5. Oversight Board Analysis The Board analyzed Meta’s decisions in these cases against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of these cases for Meta’s broader approach to content governance. 5.1 Compliance With Meta’s Content Policies I. Content Rules It is clear to the Board that all three posts (two videos and one image) violate Meta’s prohibition on “third-party imagery depicting the moment of [designated] attacks on visible victims.” Meta designated the March 22 attack in Moscow under its Dangerous Organizations and Individuals policy before the posts were shared. The rule, as set out in the Community Standard and further explained in the internal guidelines, prohibits all such footage of attacks regardless of the context in or the caption with which it is shared (see Hostages Kidnapped From Israel decision). The video showed armed individuals shooting unarmed people at close range, with some victims crouching on the ground and others fleeing. The video included audio with gunfire and sounds of people screaming. The third post captured the same event in a still image. While all three posts were either reporting on, raising awareness about or condemning the attacks, Meta does not apply its exception for these purposes under the prohibition on third-party imagery of moment of attacks on visible victims. However, the majority of the Board finds that all three posts are at the heart of what the newsworthy allowance aims to protect. The content depicts an event that was front page news worldwide. Each piece of content was shared soon after the attack and included information intended for the public. During this time, when facts about what had happened, who might be responsible and how the Russian government was responding were all subjects of pressing debate and discussion, the public interest value of this content was especially high. Images and videos such as these allow citizens the world over to form their own impressions of events without having to rely entirely on content filtered through governments, media or other outlets. The Board considers the lack of media freedom in Russia and its impact on access to information relevant to its analysis, given that they underscore the importance of content that can help to facilitate an informed public. The fact that victims are visible, but not identifiable, in all three posts helps to further tilt this content in the direction of the public interest, as weighed against the privacy and dignity interests at stake. 
For additional analysis and the minority view, relevant to the Board’s decision, see the human rights section below. II. Transparency According to Meta, the company has a set of Crisis Policy Protocol levers to address “over-enforcement as needed in crisis situations.” However, it did not use these levers as the attack in Moscow was not designated as a crisis under the protocol. Meta created the Crisis Policy Protocol in response to a recommendation from the Board that the company should develop and publish a policy that governs Meta’s response to crises or novel situations ( Former President Trump’s Suspension , recommendation no. 19). The Board then called on Meta to publish more information about the Crisis Policy Protocol ( Tigray Communication Affairs Bureau , recommendation no. 1). In response, Meta published this explanation on its Transparency Center but still declined to publicly share the protocol in full. The Board finds the short explanation shared publicly is not sufficient to allow the Board, users and the public to understand the Crisis Policy Protocol. The Board has already stressed the importance of such a protocol for ensuring an effective and consistent response by Meta to crises and conflict situations. A 2022 “ Declaration of principles for content and platform governance in times of crisis” – developed by NGOs Access Now, Article 19, Mnemonic, the Center of Democracy and Technology, JustPeace Labs, Digital Security Lab Ukraine, Center for Democracy and Rule of Law (CEDEM) and the Myanmar Internet Project – identifies the development of a crisis protocol as a key tool for effective content governance during crisis. The Board and the public are in the dark, however, as to why the Crisis Policy Protocol was not applied in this case, and how the treatment of the content might have differed if it had been. Therefore, greater transparency is necessary about when and how the protocol is used, results of the audits and assessments the company carries out about the effectiveness of the protocol and any changes to policies or systems that address identified shortcomings. In accordance with the UN Guiding Principles on Business and Human Rights (UNGPs), companies should “track the effectiveness of their [mitigation measures]” (Principle 20) and “communicate this externally” (Principle 21). Without such disclosures it is impossible for the Board, the Meta user base or civil society to understand how well the protocol is working or how its efficacy might be enhanced. 5.2 Compliance With Meta’s Human Rights Responsibilities The Board finds that although the posts do violate Meta’s Dangerous Organizations and Individuals policy, removing this content was not consistent with Meta’s policies, its commitment to the value of voice or its human rights responsibilities. Freedom of Expression (Article 19 ICCPR) On March 16, 2021, Meta announced its Corporate Human Rights Policy , in which it outlines its commitment to respecting rights in accordance with the UN Guiding Principles on Business and Human Rights (UNGPs). The UNGPs, endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. These responsibilities mean, among other things, that companies should “avoid infringing on the human rights of others and should address adverse human rights impact with which they are involved,” (Principle 11, UNGPs). 
Companies are expected to: “(a) Avoid causing or contributing to adverse human rights impacts through their own activities, and address such impacts when they occur; (b) Seek to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts,” (Principle 13, UNGPs). Meta’s content moderation practices can have adverse impacts on the right to freedom of expression. Article 19 of the International Covenant on Civil and Political Rights (ICCPR) provides broad protection for this right, given its importance to political discourse, and the Human Rights Committee has noted that it also protects expression that may be “deeply offensive,” ( General Comment No. 34 , paras. 11, 13 and 38). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, in relation both to the individual content decisions under review and to Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of opinion and expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” ( A/74/486 , para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” ( Ibid. ). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules and content reviewers should have clear guidance regarding their enforcement. The Board finds the current placement of the rule on footage of violating violent events under the Dangerous Organizations and Individuals policy likely creates confusion for users. The “We remove” section of the policy states: “We remove Glorification of Tier 1 and Tier 2 entities as well as designated events. For Tier 1 and designated events, we may also remove unclear or contextless references if the user’s intent was not clearly indicated.” The line specifically explaining the prohibition on perpetrator-generated and third-party imagery (which is a separate policy from the above) appears in the “Policy rationale” and the section marked “Types and tiers of dangerous organizations” under the Community Standard. 
The language in the “We remove” section implies that condemnation and news reporting is permissible, whereas the language in the other sections (policy rationale and types/tiers) states that perpetrator-generated imagery and third-party imagery of moment of attacks on visible victims is prohibited, and does not specify that Meta will remove such content regardless of the motive or framing with which the content is shared (e.g., condemnation or awareness raising). The placement of the rule and lack of clarity in the scope of applicable exceptions creates unnecessary confusion. Meta should move the rule on footage of designated events under the “We remove” section, creating a new section for violating violent events. II. Legitimate Aim Meta’s Dangerous Organizations and Individuals policy aims to “prevent and disrupt real-world harm.” In several decisions, the Board has found that this policy pursues the legitimate aim of protecting the rights of others, such as the right to life (ICCPR, Article 6) and the right to non-discrimination and equality (ICCPR, Articles 2 and 26), because it covers organizations that promote hate, violence and discrimination as well as designated violent events motivated by hate. See Referring to Designated Dangerous Individuals as “Shaheed,” Sudan’s Rapid Support Forces Video Captive , Hostages Kidnapped from Israel and Greek 2023 Elections Campaign decisions. Meta’s policies also pursue the legitimate aim of protecting the right to privacy of identifiable victims and their families (see Video After Nigeria Church Attack decision). III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). In these three cases, the majority of the Board finds there is no clear and actual risk of these three posts leading to radicalization and incitement. Each post contains clear language condemning the attack, showing solidarity with or concern for the victims, and seeking to inform the public. The videos were posted immediately after the attack, with the caption for the first post explicitly showing support for the victims and indicating that the person who posted the content was doing so to share information to better understand what had happened. The person who posted the second post expressed solidarity with the victims in Russia, condemning the violence. And the third post provided information along with a still image and a brief video, reporting that nobody had yet claimed responsibility and that Ukraine had stated it had nothing to do with the attack; content contradicting propaganda widely disseminated by Russian state media. Suppressing content on matters of vital public concern based upon unsubstantiated fears that it could promote radicalization is not consistent with Meta’s free expression responsibilities, especially when the same footage has been viewed by millions of people accompanied by allegations that the attack was partly attributable to Ukraine. The Board takes note of the importance of maintaining access to information during crises and the closed media environment in Russia, where people rely on social media to access information or to raise awareness among international audiences. 
Allowing such imagery with a warning screen, under Meta’s Violent and Graphic Content Community Standard, provides a less restrictive means of protecting the rights of others (see the less restrictive means analysis in full below). That policy allows, with a “Mark as Disturbing” warning screen, “imagery (both videos and still images) depicting a person’s violent death (including their moment of death or the aftermath) or a person experiencing a life threatening event.” Additionally, as the Board has previously held, when victims of such violence are identifiable in the image, the content “more directly engages their privacy rights and the rights of their families,” (see Video After Nigeria Church Attack decision). In that decision, in which the content showed the gruesome aftermath of a terrorist attack, the majority of the Board decided that removing the content was neither necessary nor proportionate, restoring the post with an age-gated warning screen. The footage at issue in these three posts is not high resolution, and the attackers and people being shot are visible but not easily identifiable. In certain circumstances, removal of content depicting identifiable victims will be the necessary and proportionate measure (e.g., in armed conflict when victims are prisoners of war or hostages subject to special protections under international law). However, in these three cases, given that victims are neither easily identifiable nor seen in a humiliating or degrading manner, restoring the posts with an age-gated warning screen is more in line with Meta’s human rights responsibilities. A minority of the Board disagrees and would uphold Meta’s decision to remove the three posts from Facebook. The minority agrees that the content in this case, captured by someone at the venue and shared to report on or to condemn an attack, is not likely to incite violence or promote radicalization. However, for the minority, the graphic nature of the footage, with sounds of gunfire and victims’ screams, and the fact that it shows the moment of attack and, in this case, the death of visible if not easily identifiable victims, mean that the privacy and dignity of the victims and their families make removal necessary. In the aftermath of terrorist attacks, when footage of violence spreads quickly and widely, and can re-traumatize survivors and the families of the deceased, the minority believes that Meta is justified in prioritizing the privacy and dignity of the victims and their families above the public interest value of allowing citizens access to newsworthy content. For the minority, the newsworthiness of the content counts against it remaining on the platform. The minority maintains that the attack of March 22 was widely covered in Russia as well as by international media. Therefore, in the view of the minority, allowing this footage on Meta’s platforms was not necessary to ensure access to information about the attack. Users who wished to comment on the attack or challenge the government’s narrative attributing it to Ukraine could have done so without sharing the most graphic moments of the footage. The Board understands that in developing and adopting its policy on imagery of terrorist attacks during the recent policy development process, Meta has erred on the side of safety and privacy, adopting reasoning similar to that of the minority on the latter. 
The company explained there is a risk of adversarial behavior, for example, the repurposing of third-party footage by violent actors, and there are enforcement challenges in terms of moderating content at-scale that mean that a more permissive approach would increase these risks. The company also highlighted the risks to the privacy and dignity of victims of these attacks and their families, when victims are visible. A public comment submitted by the World Jewish Congress highlights similar considerations to those articulated by Meta. Referring to the online proliferation of videos of the October 7, 2023, attack by Hamas on Israel, the submission notes that “in such events, the understanding of who is a ‘bystander’ or ‘third party’ is problematic, as many accomplices were filming and distributing terrorist content,” (PC-29651). The Board acknowledges that, in the digital age, videography and photography are tools employed by some terrorists in order to document and glorify their acts. But not all video of attacks involving designated entities is created with this purpose, calibrated to yield this effect or seen as such by viewers. Imagery that is not produced by perpetrators or their supporters is not created for the purpose of glorification or promotion of terrorism. When recorded by a bystander, a victim, an independent journalist or through a CCTV camera, the imagery itself is not intended to and generally less likely to sensationalize and fetishize violence (i.e., footage recorded through a headcam of the perpetrator is different than footage captured by a CCTV camera or a bystander). It will capture the horror of violence but may not in its presentation trivialize or promote it. While there are risks of imagery of attacks being repurposed to encourage glorification of violence or terrorism and copycat behavior, absent signs of such recasting, a blanket ban overlooks the potential for video documenting violent attacks to trigger sympathy for victims, foster accountability and build public awareness of important events, potentially steering anger or contempt towards the perpetrators, and putting the public on notice about the brutal nature of terrorist groups and movements. In the policy advisory opinion on Referring to Designated Dangerous Individuals as “Shaheed,” the Board noted several UN Security Council resolutions calling on states to address incitement to terrorist acts and raising concerns about the use of the internet by terrorist organizations. See UN Security Council Resolution 1624 (2005), UNSC Resolution 2178 (2014) and UNSC Resolution 2396 (2017). Meta’s approach may be understood as an effort to address these concerns. However, as the Board also noted in that policy advisory opinion, the UN Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism has warned against adopting overbroad rules and spoke of the impact of focusing on the content of speech rather than the “causal link or actual risk of the proscribed result occurring,” (Report A/HRC/40/52 , para 37). See also Joint Declaration on the Internet and on Anti-Terrorism Measures of the UN Special Rapporteur on freedom of expression, the OSCE Representative on freedom of the media and the OAS Special Rapporteur on freedom of expression (2005). 
The Board agrees there is a risk that creating exceptions to the policy could lead to underenforcement of content depicting terror attacks, and of footage being reused for malign purposes that Meta will not be able to identify and remove effectively. The Board commends Meta for seeking to address the risk of its platforms being used by violent actors to recruit and radicalize individuals, and to address the harms to the privacy and dignity of victims. However, as the three posts in these cases demonstrate, images of attacks can serve multiple functions and there are risks to freedom of expression, access to information and public participation from a policy that errs on the side of overenforcement, when less restrictive means are available to enable a more proportionate outcome. When Meta applies an age-gated warning screen, content is not available to users under the age of 18, other users have to click through to view the content, and the content is then removed from recommendations to users who do not follow the account (see Al-Shifa Hospital and Hostages Kidnapped From Israel decisions). Meta can rely on MMS banks to automatically apply a “Mark as Disturbing” warning screen to all content that contains identified imagery. These measures can mitigate the risks of content going viral or reaching particularly vulnerable or impressionable users who have not sought it out. A warning screen thus lessens the likelihood that the content will provide unintended inspiration for copycat acts. A warning screen does not fully mitigate risks of footage being repurposed by bad actors. However, once the risk of virality is mitigated, Meta has other, more targeted, tools to identify repurposing by bad actors and remove such content from the platform (e.g., internal teams proactively looking for such content and Trusted Partner channels). A more targeted approach will undoubtedly require additional resources. Given the extent of Meta’s resources, and the impact on expression and access to information of the current approach, a more targeted approach is warranted. Images of attacks can communicate and evoke moral outrage, create a sense of solidarity with victims and provide a mechanism for sharing information with those on the ground or international audiences. There are also some indications that there is a greater tendency to help or a stronger emotional response from people when they can see a picture or a video of a specific victim versus when the information is presented through abstract description or mere numbers. In a country with a closed media environment where the government exerts significant control over what the people see and how information is presented, the accessibility on social media of content with strong public awareness interest and political salience is even more important. The majority concludes that the prohibition and removal of all third-party imagery of attacks on visible but not personally identifiable victims, when shared for news reporting, awareness raising and condemnation is not a necessary nor a proportional measure. When the video/image is perpetrator-generated, shows personally identifiable victims in degrading circumstances or depicts particularly vulnerable victims (e.g., hostages or minors), or lacks a clear awareness-raising, reporting or condemning purpose, it may be appropriate for Meta to err on the side of removal. 
But a rule prohibiting all third-party imagery of attacks on visible victims, regardless of the reason for and context in which the post is shared, eschews a more proportionate and less restrictive approach, when it is not clear that such a heavy-handed approach is necessary. Meta should allow, with a “Mark as Disturbing” interstitial, third-party imagery showing moment of attacks on visible but not identifiable victims when shared in news reporting and condemnation contexts. This would be in line with the Dangerous Organizations & Individuals policy rationale, which states: Meta’s policies are designed to allow room for … references to designated organizations and individuals in the context of social and political discourse [including] content reporting on, neutrally discussing or condemning dangerous organizations and individuals and their activities.” However, given the different types of violent attacks that are eligible for designation and that the context of a given situation may present especially high risks of copycat behavior or malicious use, Meta should utilize expert human review in evaluating specific situations and enforcing the policy exception recommended by the Board. For the minority, Meta’s current policy prohibiting all imagery of designated attacks depicting visible victims is in line with the company’s human rights responsibilities and the principles of necessity and proportionality. When graphic footage of an attack depicts visible victims, even where victims are not easily identifiable, the aim of protecting the right to privacy and dignity of survivors and victims far outweighs the value of voice, in the view of the minority. Even content recorded by a third-party can harm the privacy and dignity of victims and their families. And applying a warning screen to content showing the death of a person, as the majority recommends, does not protect the privacy or dignity of the victims or their families from those who opt to move past the screen. As the minority in the Video After Nigeria Church Attack decision stated, when terrorist attacks occur, videos of this nature frequently go viral, compounding the harm and increasing risk of re-traumatization. Meta should act quickly and at-scale in order to prevent and mitigate the harms to the human rights of victims, survivors and their families. This also serves the broader public purpose of countering the widespread terror that perpetrators of such attacks seek to instill, knowing that social media will amplify their psychological impacts. Additionally, as the Board has indicated in prior decisions, Meta could ease the burden on users and mitigate risks to privacy by providing users with more specific instructions or access within its products to, for instance, face-blurring tools for videos depicting visible victims of violence (see News Documentary on Child Abuse in Pakistan decision). 6. The Oversight Board’s Decision The Oversight Board overturns Meta’s decisions to take down the three posts, requiring the content to be restored with “Mark as Disturbing” warning screens. 7. Recommendations Content Policy 1. To ensure its Dangerous Organizations and Individuals Community Standard is tailored to advance its aims, Meta should allow, with a “Mark as Disturbing” warning screen, third-party imagery of a designated event showing the moment of attacks on visible but not personally identifiable victims when shared in news reporting, condemnation and awareness-raising contexts. 
The Board will consider this recommendation implemented when Meta updates the public-facing Dangerous Organizations and Individuals Community Standard in accordance with the above. 2. To ensure clarity, Meta should include a rule under the “We remove” section of the Dangerous Organizations and Individuals Community Standard and move the explanation of how Meta treats content depicting designated events out of the policy rationale section and into this section. The Board will consider this recommendation implemented when Meta updates the public-facing Dangerous Organizations and Individuals Community Standard moving the rule on footage of designated events to the “We remove” section of the policy. Return to Case Decisions and Policy Advisory Opinions" fb-0nlir3fz,Political Korean Poem,https://www.oversightboard.com/decision/fb-0nlir3fz/,"April 4, 2024",2024,,"Art / Writing / Poetry, Freedom of expression, Politics",Hateful conduct,Overturned,"Japan, South Korea","A user appealed Meta’s decision to remove an image on Facebook of a Korean poem called “The Scream of General Hong Beom-Do” written by Lee Dong Soon. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.",5994,933,"Overturned April 4, 2024 A user appealed Meta’s decision to remove an image on Facebook of a Korean poem called “The Scream of General Hong Beom-Do” written by Lee Dong Soon. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Summary Topic Art / Writing / Poetry, Freedom of expression, Politics Community Standard Hateful conduct Location Japan, South Korea Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Case Summary A user appealed Meta’s decision to remove an image on Facebook of a Korean poem called “The Scream of General Hong Beom-Do” written by Lee Dong Soon. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In September 2023, a Facebook user posted an image of a Korean poem entitled “The Scream of General Hong Beom-Do” by Lee Dong Soon, which criticizes an attempt by the authorities to relocate the bust of the general. The poem artistically expresses Hong Beom-Do’s sentiment on the proposed relocation of his bust, and it includes the term “wae-nom” (왜놈), which literally translates as “person from Japan.” However, it was historically used by Koreans as a general term to refer to Japanese invaders during the Japanese occupation of Korea. Over the years since, it has been frequently employed as an offensive, derogatory term meaning “Japanese bastards” or bad people. The post was viewed less than 500 times. Hong Beom-Do was a prominent figure in early-20th-century Korea while the region was under the rule of Japan. 
He was an activist and general who led the Korean Independence Army to several notable victories in battles against Japanese forces. The user posted this content during a period of intensifying ideological conflict among politicians regarding a proposal to relocate the bust of the general from the Korean Military Academy because of his past involvement with Soviet communist forces. The Defense Ministry’s rationale for relocating his bust has faced significant public pushback. Lee Dong Soon also posted the poem on Facebook, but it was removed by Meta for violating its Hate Speech policy , a move that caused controversy. After the poem was taken down, users began a movement to share the poem more widely on Facebook. Meta initially removed the user’s post from Facebook under its Hate Speech Community Standard , for content that targets “a person or group of people [based on their] protected characteristic(s) [through] cursing.’’ The policy defines cursing as “profane terms or phrases ... with the intent to insult.’’ After the Board brought this case to Meta’s attention, the company determined that the term “wae-nom” in this poem was not employed as a curse word, but rather as a description of Japanese soldiers as invaders. Therefore, the content did not violate the Hate Speech Community Standard and its removal was incorrect. The company then restored the content to Facebook. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case illustrates the challenges faced by Meta in enforcing its Hate Speech policy, particularly when dealing with artistic expression and historical references. This case bears similarities to a prior decision, the Russian Poem case, in which the Board overturned Meta’s initial decision to remove a post under its Hate Speech policy that insulted Russians and compared the Russian army invading Ukraine to Nazis. In this decision, the Board noted that failure during content moderation at-scale to consider the context of Russia’s invasion of Ukraine hindered users’ abilities to express views on public interest issues. The Board has also observed in multiple cases, such as in the Reclaiming Arabic Words and Praise Be to God decisions, that problems of cultural and linguistic misunderstanding can lead to improper enforcement of Meta’s policies. The Board has issued recommendations to improve enforcement of Meta’s Hate Speech policy with relevant cultural context. In a previous decision, the Board asked Meta to “conduct accuracy assessments focused on Hate Speech policy allowances that cover artistic expression and about human rights violations (e.g., condemnation, awareness raising),” ( Wampum Belt , recommendation no. 3). Meta implemented this recommendation, as demonstrated through published information. The Board believes that full implementation of these recommendations could contribute to decreasing the number of enforcement errors under the Hate Speech policy. 
These errors are frequently connected to the lack of nuance, context and culturally specific linguistic analyses. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-14uy7pvn,Sudan’s Rapid Support Forces Video Captive,https://www.oversightboard.com/decision/fb-14uy7pvn/,"April 11, 2024",2024,,"War and conflict","Dangerous individuals and organizations, Hate speech, Violence and incitement",Overturned,Sudan,"The Oversight Board has overturned Meta’s original decision to leave up a video that shows armed men in Sudan, from the Rapid Support Forces (RSF), detaining someone in the back of a military vehicle.",50221,7674,"Overturned April 11, 2024 The Oversight Board has overturned Meta’s original decision to leave up a video that shows armed men in Sudan, from the Rapid Support Forces (RSF), detaining someone in the back of a military vehicle. Standard Topic War and conflict Community Standard Dangerous individuals and organizations, Hate speech, Violence and incitement Location Sudan Platform Facebook The Oversight Board has overturned Meta’s original decision to leave up a video that shows armed men in Sudan, from the Rapid Support Forces (RSF), detaining someone in the back of a military vehicle. The video violates both the Dangerous Organizations and Individuals and Coordinating Harm and Promoting Crime Community Standards. The Board is concerned that Meta did not remove the content – which shows a prisoner of war and includes support for a group designated by the company as dangerous – quickly enough. This indicates broader issues around both effective content enforcement during armed conflicts and how content revealing the identity (“outing”) of a prisoner of war is reviewed. The Board calls on Meta to develop a scalable solution to proactively identify content outing prisoners of war during an armed conflict. About the Case On August 27, 2023, a Facebook user posted a video of armed men in Sudan detaining a person in the back of a military vehicle. A man speaking in Arabic identifies himself as a member of the RSF and claims the group has captured a foreign national, likely a combatant associated with the Sudanese Armed Forces (SAF). The man goes on to say they will deliver him to the RSF leadership, and that they intend to find and capture the leaders of the SAF as well as any of the SAF’s foreign associates in Sudan. The video includes derogatory remarks about foreign nationals and leaders of other nations supporting the SAF, while the accompanying caption states in Arabic, “we know that there are foreigners fighting side by side with the devilish Brotherhood brigades.” In April 2023, an armed conflict broke out in Sudan between the RSF paramilitary group and the SAF, which is the official government’s military force. Approximately 7.3 million people have been displaced because of the conflict, with more than 25 million facing severe food insecurity. 
Sudanese human rights organizations have reported that the RSF has detained more than 5,000 people, keeping them in inhumane conditions. There are reports that both sides have committed war crimes and crimes against humanity. Meta has designated the RSF under its Dangerous Organizations and Individuals Community Standard. Shortly after the video was posted, three Facebook users reported the content, but due to a low severity (likelihood of violating community standards) score and a low virality (number of views) score, these reports were not prioritized for human review and the content was left up. One of the users appealed, but the appeal was closed because of Meta’s COVID-19 automation policies. The same user then appealed to the Oversight Board. After the Board brought the case to Meta’s attention, the company removed the Facebook post under its Dangerous Organizations and Individuals Community Standard, also applying both a standard and a severe strike to the profile of the person who posted the video. Key Findings The content violates Meta’s Dangerous Organizations and Individuals Community Standard because it contains support for a group designated by the company as a Tier 1 dangerous organization – specifically by “channeling information or resources, including official communications” on the organization’s behalf. The man seen speaking in the video identifies himself as part of the RSF, describes its activities, speaks of the actions the group is taking and directly names the RSF commander, Mohamed Hamdan Dagalo. The Board finds that removal of this content, which includes threats to anyone who opposes or challenges the RSF, is necessary and proportionate. In previous decisions, the Board has emphasized its concern around the lack of transparency of Meta’s designated organizations and individuals list. Given the situation in Sudan, where the RSF has de facto influence or control over parts of the country, civilians who rely on Facebook, including the RSF’s communications channels, for critical security and humanitarian information, could be at greater risk through the restrictions placed on those communications channels. Additionally, the Board finds this content violates the Coordinating Harm and Promoting Crime policy because it shows a captured man who is fully visible and described in the video as a “foreign captive” associated with the SAF. Meta’s policy does not allow for the identity of a prisoner of war to be exposed during an armed conflict. Removing the video is necessary given the specific rules of international humanitarian law to protect detainees in armed conflict. The Board is concerned that this content was not identified and removed for violating Meta’s rule against outing prisoners of war. This lack of enforcement is likely because this rule is currently enforced on escalation-only, meaning human reviewers moderating content at-scale cannot take action themselves. In fact, the rule can only be enforced if brought to the attention of Meta’s escalations-only teams by some other means, for example, Trusted Partners or content with significant press coverage. Finally, the Board is also concerned that Meta failed to remove this content immediately or shortly after it was posted. Meta’s automated systems failed to correctly identify a violation in this video, indicating a broader issue of enforcement. The Board believes that changes need to be made to allow more content supporting dangerous organizations to be sent for human review when it relates to armed conflicts. 
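The escalation-only design described above can be pictured as a routing constraint: the policy line exists, but at-scale reviewers have no enforcement action mapped to it, so a report can only be actioned if it reaches an internal escalation team through a separate channel. The short Python sketch below is purely illustrative and uses hypothetical names (ESCALATION_ONLY_RULES, ReviewerTier, route_report); it is one reading of the architecture the Board describes, not Meta's code.

from enum import Enum, auto

class ReviewerTier(Enum):
    AT_SCALE = auto()      # first-line reviewers working through the ordinary report queue
    ESCALATION = auto()    # internal policy or subject-matter experts

# Hypothetical identifier for the escalation-only rule on outing prisoners of war
# under the Coordinating Harm and Promoting Crime policy.
ESCALATION_ONLY_RULES = {"chpc:outing_prisoner_of_war"}

def can_action(rule: str, tier: ReviewerTier) -> bool:
    # At-scale reviewers cannot act on escalation-only rules, even if they spot a violation.
    if rule in ESCALATION_ONLY_RULES:
        return tier is ReviewerTier.ESCALATION
    return True

def route_report(rule: str, tier: ReviewerTier) -> str:
    if can_action(rule, tier):
        return "enforce"
    # Otherwise the content is only actioned if something surfaces it to an escalation
    # team, for example a Trusted Partner flag or significant press coverage.
    return "needs_escalation"

On this reading, nothing in the ordinary report queue triggers the escalation step, which is why content violating the outing rule can remain up unless one of those side channels surfaces it.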
The Oversight Board’s Decision The Oversight Board has overturned Meta’s original decision to leave up the video. The Board recommends that Meta: *Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board overturns Meta’s original decision to leave up a video that shows armed men, who describe themselves as Rapid Support Forces (RSF) members, detaining a person in the back of a military vehicle. The RSF members describe the captive, whose face can be seen clearly, as a foreign national associated with the Sudanese Armed Forces (SAF). Meta has designated the RSF under its Dangerous Organizations and Individuals Community Standard. After the Board selected the case, Meta reviewed its original decision and removed the video for violating its Dangerous Organizations and Individuals Community Standard that prohibits “support” for designated entities, specifically “channeling information or resources, including official communications, on behalf of a designated entity or event.” The post also violates Meta’s Coordinating Harm and Promoting Crime policy, which prohibits content revealing the identity of a prisoner of war in an armed conflict; in this case, the person detained in the vehicle. The Board is concerned that Meta did not remove the content quickly enough, which could indicate there are broader issues of effective policy enforcement during armed conflicts. 2. Case Description and Background On August 27, 2023, a Facebook user posted a video showing armed men in Sudan detaining a person in the back of a military vehicle. In the video, a man, who is not the user who posted the content, identifies himself in Arabic as a member of the RSF paramilitary group. He claims the group has captured a foreign national, likely a combatant associated with the SAF, and that they intend to deliver him to the RSF leadership. The man also states they intend to find the leaders of the SAF forces and their foreign associates in Sudan, that they will capture anyone working against the RSF and that they remain loyal to their own leader, Mohamed Hamdan Dagalo. The video includes derogatory remarks about foreign nationals and leaders of other nations supporting the SAF. The video was accompanied by a caption, also in Arabic, that translates as “we know that there are foreigners from our evil neighbor fighting side by side with the devilish Brotherhood brigades.” Shortly after the video was posted, the user edited the caption. The edited caption translates as “we know that there are foreigners fighting side by side with the devilish Brotherhood brigades.” The post had fewer than 100 reactions, 50 comments and 50 shares, while the person who posted the content has about 4,000 friends and 32,000 followers. Shortly after it was posted, other Facebook users reported the content, but these reports were not prioritized for human review and the post was kept up on the platform. One of these users appealed Meta’s decision but the appeal was again closed without review. The same user then appealed to the Oversight Board. After the Board brought the case to Meta’s attention in October 2023, and following a review by Meta’s policy subject matter experts, the company removed the post from Facebook under its Dangerous Organizations and Individuals policy. 
Following removal of the content, Meta applied a severe strike in addition to a standard strike to the profile of the person who posted the content because a severe strike results in different, and stricter, penalties than a standard strike. The accumulation of standard strikes can lead to an ascending severity of penalties. When the content posted infringes Meta’s more severe policies, such as the Dangerous Organizations and Individuals policy, the company may apply additional, more severe restrictions on top of the standard restrictions. For example, users may be restricted from creating ads and using Facebook Live for set periods of time. The Board considered the following context in reaching its decision on this case. The armed conflict in Sudan started in April 2023 between the SAF – the military forces of the internationally recognized government, led by General Abdel Fattah al-Burhan – and the paramilitary group, the RSF, led by Mohamed Hamdan Dagalo , generally known as “Hemedti.” Shortly after the beginning of the conflict, the SAF declared the RSF a rebel group and ordered its dissolution. The war in the country has been classified as a non-international armed conflict. As of November 2023, according to Sudan War Monitor , the RSF was controlling most of West Darfur, the area around the capital Khartoum and parts of North and West Kordofan, while the SAF was in control of most of the Nile Valley and the country’s eastern provinces and ports. The U.S. Treasury Department sanctioned Abdelrahim Hamdan Dagalo, an RSF figurehead and brother of Mohamed Hamdan Dagalo, on September 6, 2023. Meta independently designated the RSF as a Tier 1 terrorist organization almost one month earlier on August 11, 2023, under its Dangerous Organizations and Individuals policy. At the time of publishing this decision, Meta’s designation of the RSF remains in place. According to the United Nations, since April 2023 approximately 7.3 million people have been displaced because of the conflict, with women and children representing about half of that total. Over 25 million people, including more than 14 million children, are facing severe food insecurity and need humanitarian assistance . Gender-based violence, sexual violence, harassment, sexual exploitation and trafficking are all escalating. It is estimated that disease outbreaks and the decline of the health system have resulted in around 6,000 deaths across Sudan. In October 2023, the UN Human Rights Council adopted a resolution to urgently establish an independent international fact-finding mission to Sudan, with a mandate to investigate and establish the facts and circumstances of alleged human rights and international humanitarian law violations committed during the conflict. Sudanese human rights organizations have reported that the RSF had detained more than 5,000 people in the capital Khartoum, keeping them in degrading, inhumane conditions of detention, with a lack of access to basic necessities essential for human dignity. According to multiple sources, including the International Criminal Court and the U.S. Department of State , there are reports that members of both the SAF and the RSF have committed genocide, crimes against humanity and war crimes in Sudan. Additionally, the reports mention that the RSF and allied militias have committed war crimes by ethnically targeting Masalit communities in Sudan and the Chad border. 
Experts specializing in Middle East and North Africa studies, consulted by the Board, highlighted reports that both sides are also responsible for widespread abuses against detainees, including inhumane conditions , illegal and arbitrary detentions , ethnic targeting , sexual violence , killing and using hostages as human shields . Experts consulted by the Board noted that Meta’s designation of the RSF as a dangerous organization led to the organization’s dissemination of information, including harmful narratives, being limited. However, this designation also encouraged the RSF to explore other tactics for sharing information, like resorting to the use of non-official personal pages and accounts, including to post content about detainees. This made it harder for observers to effectively monitor or counter the group’s activities. Experts also noted the designation of the RSF contributed to information asymmetry and hampered access to information for civilians. For example, people would be less likely to receive RSF updates about the security conditions in certain areas through Meta’s platforms (see public comment from Civic Media Observatory, PC -24020). Sudanese civilians and media rely on social media platforms, Facebook in particular, for acquiring crucial information and updates about social, political, military and humanitarian developments and spreading these beyond Sudan; finding routes to safety within the country or to flee Sudan; finding crucial information on military operations or violent outbreaks to learn about the military actions being taken in certain locations and to seek shelter or take refuge from those actions (see public comment from Civic Media Observatory, PC -24020); seeking humanitarian and medical help; and learning about hostages and prisoners of war . 3. Oversight Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person who previously reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions II. Meta’s Content Policies The Board’s analysis was informed by Meta’s commitment to voice , which the company describes as “paramount,” and its values of safety, privacy and dignity. After the Board identified this case for review, Meta removed the content for violating the Dangerous Organizations and Individuals policy for support of a designated entity. The content also violated the Coordinating Harm and Promoting Crime policy on depicting identifiable prisoners of war in an armed conflict. As Meta has informed the Board in previous cases, when the content violates several policies, the company enforces under the most severe violation. In this case, Meta considered the Dangerous Organizations and Individuals policy violation to be the most severe. 
Dangerous Organizations and Individuals According to the Dangerous Organizations and Individuals policy rationale, in an effort to prevent and disrupt real-world harm, Meta does not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on its platforms. Meta assesses these entities based on their behavior both online and offline, most significantly, their ties to violence. At the time the content in this case was posted, the policy prohibited “praise, substantive support and representation” of designated entities. “Substantive support” covered “channeling information or resources, including official communications, on behalf of a designated entity or event,” by “directly quoting a designated entity without [a] caption that condemns, neutrally discusses or is a part of news reporting.” On December 29, 2023, Meta updated the policy line for “substantive support.” The updated version stipulates that Meta removes “glorification, support and representation of Tier 1 entities.” Meta added two sub-categories: “material support” and “other support.” The rule for “channeling” now appears under “other support.” Coordinating Harm and Promoting Crime According to the policy rationale, the Coordinating Harm and Promoting Crime Community Standard aims to “disrupt offline harm and copycat behaviour” by prohibiting people from “facilitating, organizing, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals.” This Community Standard prohibits “outing: exposing the identity of a person and putting them at risk of harm.” Among the groups protected from “outing,” the policy lists “prisoners of war, in the context of an armed conflict.” III. Meta’s Human Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. Significantly, the UNGPs impose a heightened responsibility on businesses operating in a conflict setting (“Business, human rights and conflict-affected regions: towards heightened action,” A/75/212 ). The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User Submissions The user who appealed the company’s decision to keep the content up stated that the post includes misleading information and scenes and threats of violence by the RSF in Sudan’s capital, Khartoum. The user asked that the content be removed because it poses a danger to people in Sudan. 6. Meta’s Submissions Meta told the Board that its initial decision to keep the content up was because its automated systems did not prioritize the content for human review. According to Meta’s Transparency Center , in general, reports are dynamically prioritized for review based on factors such as the severity of the predicted violation, the content’s virality and the likelihood that the content will violate the Community Standards. Reports that are consistently ranked lower in priority than others in the queue will typically be closed after 48 hours. 
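The dynamic prioritization Meta describes can be read as a ranking function over queued reports combined with a time-based auto-close rule. The Python sketch below is a minimal illustration under stated assumptions: the field names (severity, virality, violation_likelihood) and the scoring weights are placeholders, and only the 48-hour window comes from Meta's public description; this is not the company's implementation.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class Report:
    content_id: str
    created_at: datetime
    severity: float              # predicted severity of the suspected violation (0-1), assumed scale
    virality: float              # normalised measure of views/shares (0-1), assumed scale
    violation_likelihood: float  # classifier estimate that the content violates policy (0-1)
    closed: bool = False

def priority_score(r: Report) -> float:
    # Placeholder weights; Meta does not disclose how the factors are combined.
    return 0.5 * r.severity + 0.3 * r.virality + 0.2 * r.violation_likelihood

def triage(queue: List[Report], review_capacity: int, now: datetime) -> List[Report]:
    # Send the highest-ranked open reports to human review; auto-close stale, low-ranked ones.
    open_reports = sorted((r for r in queue if not r.closed), key=priority_score, reverse=True)
    to_review = open_reports[:review_capacity]
    for r in open_reports[review_capacity:]:
        if now - r.created_at > timedelta(hours=48):
            r.closed = True
    return to_review

Under a scheme like this, a report on low-view content with a low predicted violation score never clears the review cut-off and is eventually closed without any person seeing it, which matches the enforcement path Meta describes for the reports in this case.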
Shortly after the content in this case was posted, three Facebook users reported the content four times for “terrorism,” “hate speech” and “violence.” Due to a low severity and a low virality score, these reports were not prioritized for human review and the content was left on the platform. One of these users appealed Meta’s decision to keep the content up. According to the company, that appeal was automatically closed due to COVID-19 automation policies, which Meta introduced at the beginning of the pandemic in 2020 to reduce the volume of reports being sent to human reviewers, while keeping open potentially “high-risk” reports (see the Holocaust Denial decision). When the report was auto-closed, the content was not escalated to policy or subject matter experts for additional review. Meta explained that following the Board selecting this case, the company decided to remove the post because it violated the Dangerous Organizations and Individuals Community Standard. Meta concluded that by posting a video that shows a self-proclaimed member of the RSF speaking about the organization’s activities, without a caption that “condemns, neutrally discusses or is a part of news reporting,” the user violated the “substantive support” policy line by “channeling information” about a Tier 1 designated entity. Meta therefore removed it from the platform. The Board asked Meta 13 questions in writing. Questions related to Meta’s enforcement measures for content related to Sudan’s conflict, automated systems and ranking models, the processes related to the designation of dangerous organizations and individuals, the rationale for designating the RSF a Tier 1 terrorist organization and the impact of this decision on access to information in Sudan. Meta answered all questions. 7. Public Comments The Oversight Board received 16 public comments that met the terms for submission. Ten of them were submitted from the Middle East and North Africa, three from Europe and one each from Latin America and the Caribbean, Sub-Saharan Africa and the United States and Canada. To read the public comments submitted with consent to publish, click here . The submissions covered the following themes: the RSF’s treatment of hostages, detainees and civilians; the RSF’s alleged abuses and instances of violence in the region; the risks of exposing identifiable hostages and detainees on social media; the RSF’s use of social media; the importance of social media for civilians in Sudan; the consequences of Meta’s designation of the RSF on the information environment in Sudan; and Meta’s prioritization of content for automated and human review in conflict situations. 8. Oversight Board Analysis The Board examined whether this content should be removed by analyzing Meta’s content policies, human rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance. The Board selected this case because it offered the opportunity to explore how social media companies should respect access to information in countries such as Sudan where information can be vital during an ongoing conflict, especially for civilians, yet dangerous organizations can also use these platforms to further their violent mission and promote real-world harm. Additionally, the case provides the Board with the opportunity to assess how Meta protects detainees in armed conflicts in line with international humanitarian law. 
The case primarily falls into the Board’s Crisis and Conflict Situations strategic priority, but also touches on Automated Enforcement of Policies and Curation of Content. 8.1 Compliance With Meta’s Content Policies I. Content Rules Dangerous Organizations and Individuals Community Standard The Board finds that the content in this case violates the Dangerous Organizations and Individuals policy because it supports a designated Tier 1 organization. Meta informed the Board that it removed the content in this case because it contained “substantive support” for a Tier 1 terrorist organization. The company explained that substantive support includes “channeling information or resources, including official communications, on behalf of a designated entity” by “directly quoting a designated entity without [a] caption that condemns, neutrally discusses or is a part of news reporting.” In this case, the video shows a person who identifies himself as a member of the RSF, speaks of the RSF’s activities and the actions that will be taken, and names the RSF commander. Additionally, Meta’s internal guidelines provide a non-exhaustive list of examples of written or visual elements that show substantive support. This includes posts where “the content features, or claims to feature, a leader, spokesperson, or a known or self-proclaimed member of a designated entity speaking about the organization or its cause.” On December 29, 2023, Meta updated the Dangerous Organizations and Individuals policy line for “substantive support.” The updated version stipulates that Meta removes “glorification, support and representation of Tier 1 entities.” Although Meta has substituted “substantive support” with “support,” these changes do not impact the analysis in this case or how Meta would enforce against this content. Coordinating Harm and Promoting Crime policy The Board finds that the content also violates the Coordinating Harm and Promoting Crime Community Standard. Meta’s policy prohibits exposing the identity of a prisoner of war during an armed conflict. According to Meta, this policy is enforced on escalation only and does not include an exception for content raising awareness about prisoners of war or condemning their treatment. Meta defines a prisoner of war as “a member of the armed forces who has been captured or fallen into the hands of an opposing power during or immediately after an armed conflict.” The Board understands this rule to apply equally to international armed conflicts and non-international armed conflicts. In this case, the Board finds that the video shows an identifiable individual described by the armed members of the RSF who have detained him as a “foreign captive” associated with the SAF, which is the main opponent of the RSF in Sudan’s ongoing conflict. Therefore, the content violates the policy and should be removed. 8.2 Compliance With Meta’s Human Rights Responsibilities Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including political expression. This right includes the “freedom to seek, receive and impart information and ideas of all kinds.” These rights are to be respected during active armed conflicts and should continue to inform Meta’s human rights responsibilities, alongside the mutually reinforcing and complementary rules of international humanitarian law that apply during such conflicts (General Comment 31, Human Rights Committee, 2004, para. 
11; Commentary to UNGPs, Principle 12 ; see also UN Special Rapporteur’s report on Disinformation and freedom of opinion and expression during armed conflicts, Report A/77/288, paras. 33-35 (2022); and OHCHR report on International legal protection of human rights in armed conflict (2011) at page 59). The UN Special Rapporteur on freedom of expression has stated that “during armed conflict, people are at their most vulnerable and in the greatest need of accurate, trustworthy information to ensure their own safety and well-being. Yet, it is precisely in those situations that their freedom of opinion and expression, which includes ‘the freedom to seek, receive and impart information and ideas of all kinds,’ is most constrained by the circumstances of war and the actions of the parties to the conflict and other actors to manipulate and restrict information for political, military and strategic objectives,” (Report A/77/288, para. 1). The Board recognizes the importance of ensuring that people can freely share information about conflicts, especially when social media is the main source of information, while simultaneously ensuring content that is likely to fuel or incite further offline violence is removed. I. Legality (Clarity and Accessibility of the Rules) The principle of legality under international human rights law requires rules that limit expression to be clear and publicly accessible (General Comment No.34, para. 25). Restrictions on expression should be formulated with sufficient precision to enable individuals to regulate their conduct accordingly ( Ibid. ). As applied to Meta, the company should provide guidance to users as to what content is permitted on the platform and what is not. Additionally, rules restricting expression “may not confer unfettered discretion on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” (A/HRC/38/35, para. 46). Dangerous Organizations and Individuals The UN Special Rapporteur on freedom of expression has raised concerns with social media platforms’ rules prohibiting “praise” and “support,” finding the terms “excessively vague,” (A/HRC/38/35, para. 26). The Board has previously criticized the Dangerous Organizations and Individuals policy’s lack of clarity. Meta does not publicly share the list of entities that it designates under the policy. The company explains that it chooses to designate entities that have been designated by the United States government as Foreign Terrorist Organizations (FTOs) or Specially Designated Global Terrorists (SDGTs). While Meta’s full list of Tier 1 terrorist designations is created by the company and extends beyond U.S. designations, the Board understands a substantial proportion of Meta’s designated Tier 1 terrorist entities are on the FTOs and SDGTs lists. While the U.S. government lists are public, Meta’s Community Standards only reference the FTOs and SDGTs frameworks and do not provide a link to these U.S. government lists. The Board therefore recommends Meta hyperlink the U.S. Foreign Terrorist Organizations and Specially Designated Global Terrorists lists in its Community Standards to improve transparency and clarity for users. However, in this case, the RSF has not been designated by the United States government, meaning the public would not know that Meta had designated one of the parties to this conflict. 
This lack of transparency on designations means the public may not know whether their content could be potentially violating. In the Nazi Quote decision, the Board recommended that Meta “provide a public list of the organizations and individuals designated ‘dangerous’ under the Dangerous Organizations and Individuals Community Standard.” Meta declined to implement this recommendation after a feasibility assessment. The Board is concerned that given the situation in Sudan, the designation of the RSF as a Tier 1 terrorist organization, together with the lack of transparency around that designation, means that people in Sudan are not aware, and receive no notice, that one party to the conflict is prohibited from having a presence on the platform, which may disproportionately affect access to information in Sudan. The Board believes that Meta should be more transparent when making decisions in regions affected by armed conflicts, with restricted civic space, where reliable sources of information available to civilians are limited, media freedom is under threat and civil society is fragile. Given the RSF’s de facto influence and control over parts of the country (see section 2), and the reliance of Sudanese civilians on Facebook to access critical security and humanitarian information, including from the RSF’s communications channels, the Board finds that the unpredictable consequences of Meta’s lack of transparency about designating parties to the conflict put the local population’s physical security at additional risk. In the Referring to Designated Individuals as “Shaheed” policy advisory opinion, the Board addresses this issue of transparency around Meta’s lists of designated entities. In recommendation no. 4, the Board urges Meta to “explain the procedure by which entities and events are designated” in more detail. It should also “publish aggregated information on the total number of entities within each tier of its designation list, as well as how many were added and removed in the past year,” (Referring to Designated Individuals as “Shaheed,” recommendation no. 4). The Board re-emphasizes this recommendation, urging more transparency. For a minority of the Board, although Meta did not publicly announce the RSF’s designation, abuses committed by the RSF are widely known, including alleged war crimes and crimes against humanity, and were extensively reported as the conflict escalated (see section 2). Given the public’s awareness of these abuses, users could reasonably expect that sharing content recorded or disseminated by the RSF could breach Meta’s Community Standards. Considering the specific context in Sudan, the Board concludes that the rule applied in this case – prohibiting “substantive support” by “channeling information or resources, including official communications, on behalf of a designated entity or event” – is clearly explained by Meta’s Dangerous Organizations and Individuals policy, is accessible to users and therefore meets the legality test. The Board reaches the same conclusion for the updated policy rules on “support,” published by Meta on December 29, 2023, which substituted “substantive support” with “support.” The Board notes that although Meta added two sub-categories, “material support” and “other support,” with the policy line for “channeling” now appearing under “other support,” the rule itself did not materially change.
Coordinating Harm and Promoting Crime Community Standard The Board finds that Meta’s rule prohibiting “outing prisoners of war” is sufficiently clear and accessible to users, satisfying the legality principle. II. Legitimate Aim Any limitation on expression should pursue one of the legitimate aims listed in the ICCPR, which include national security, public order and respecting the rights of others. According to its rationale, the Dangerous Organizations and Individuals policy aims to “prevent and disrupt real-world harm” and does not allow “organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Meta.” The Board has previously recognized that the Dangerous Organizations and Individuals policy pursues the aim of protecting the rights of others, including the right to life, security of person, and equality and non-discrimination (Article 19, para. 3, ICCPR; see also the Punjabi Concern Over the RSS in India and Nazi Quote decisions). The Board has also previously found that the purpose of the Dangerous Organizations and Individuals policy of preventing offline harm is a legitimate aim (see the Öcalan’s Isolation decision). The Coordinating Harm and Promoting Crime policy serves the legitimate aim of protecting the rights of others (Article 19, para. 3, ICCPR), including the right to life, privacy and protection from torture or cruel, inhuman or degrading treatment. In this case, the legitimacy of the aim underlying the prohibition on depicting identifiable prisoners of war is informed by rules of international humanitarian law that call for the protection of life, privacy and dignity of prisoners of war (Common Article 3, Geneva Conventions; also see Armenian Prisoners of War Video ), and the fact that the hostilities in Sudan have been qualified as an armed conflict (see section 2). III. Necessity and Proportionality Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected,” (General Comment 34, para. 34). Social media companies should consider a range of possible responses to problematic content beyond deletion to ensure restrictions are narrowly tailored (A/74/486, para. 51). As the Board has previously highlighted in the Tigray Communication Affairs Bureau decision, the UNGPs impose a heightened responsibility on businesses operating in conflict settings. In the Armenian Prisoners of War Video decision, the Board found that “in a situation of armed conflict, the Board’s freedom of expression analysis is informed by the more precise rules in international humanitarian law.” Dangerous Organizations and Individuals The Board finds that removing the content in this case is necessary and proportionate. Prohibiting content that directly quotes a self-proclaimed member of a designated organization, involved in widespread violence against civilians, when there is no caption condemning, neutrally discussing or indicating the post is part of news reporting, is necessary. In this case, the post shares a video showing a member of the RSF describing the activities and plans of the group, including threatening anyone who opposes or challenges them. 
Spreading this kind of information on behalf of a designated organization on Facebook, especially in the context of the armed conflict in Sudan, with the RSF implicated in widespread violence, war crimes and crimes against humanity (see section 2), could lead to a heightened risk of real-world harm. Under such circumstances, no measure short of content removal will address the risk of harm, and removal is the least restrictive means of protecting the rights of others. The Board is concerned that Meta failed to remove this content immediately or shortly after it was posted, acting only when the Board selected this case, two months later. Meta informed the Board that despite being reported by multiple users, its automated systems gave this content a low score, which meant it was not prioritized for human review. Meta’s automated systems use a variety of features when determining what action to take on a piece of content, including machine-learning classifiers that score content on the probability of a violation, the severity of the potential violation and the virality of the content. If added to a queue for review, these features may also be used to prioritize or rank the order in which content is reviewed. According to Meta, the content in this case was not prioritized because the company’s systems did not detect a violation in the video and predicted that the content would receive a low number of views; the report was therefore automatically closed without review once the 48-hour period had passed. Specifically, in ranking this video, two cross-problem classifiers generated predicted severity ranking scores, and both scores were low. The Board is concerned that Meta’s automated detection errors in this case may indicate broader issues, in particular the classifiers’ failure to identify content supporting the RSF, a designated entity not allowed to have a presence on the platform, and depicting a member identifying himself as belonging to the designated entity – without a caption that condemns, neutrally discusses or is a part of news reporting. In response to the Board’s question about what caused the classifiers’ failure to detect the violation in this case, Meta noted it could not identify what exactly factored into this content receiving a low score. The Board therefore concludes that Meta should take the necessary steps to enhance its automated detection and prioritization of content by auditing the training data used in its video content understanding classifier to evaluate whether it has sufficiently diverse examples of content supporting designated organizations in the context of armed conflicts, including different languages, dialects, regions and conflicts. Meta should ensure this change allows more content to be queued for human review. This will likely require increasing human review capacity to ensure Meta is able to effectively address an increase in the volume of content requiring review following the outbreak of a conflict. This adjustment will help the company calibrate how its automated systems respond to challenges related to armed conflicts, and better identify and address content involving dangerous organizations in these contexts, enhancing the effectiveness of its enforcement measures. Additionally, the Board finds that Meta failed to establish a sustainable mechanism to adequately enforce its content policies during the war in Sudan.
In the Weapons Post Linked to Sudan’s Conflict case, Meta explained that it did not set up an Integrity Product Operations Center for Sudan, which is used to respond to threats in real time, because the company was able to “handle the identified content risks through the current processes.” Meta reiterated a similar position in this case. Previously, the Board recommended that, in order to “improve enforcement of its content policies during periods of armed conflict, Meta should assess the feasibility of establishing a sustained internal mechanism that provides the expertise, capacity and coordination required to review and respond to content effectively for the duration of a conflict (see Tigray Communication Affairs Bureau decision, recommendation no. 2).” In August 2023, Meta informed the Board that it set up “a team to address crisis coordination and provide dedicated operations oversight throughout the lifecycle of imminent and emerging crises. We have since fulfilled staffing requirements and are now in the process of ramping up this team for their operational execution responsibilities before, during, and after high risk events and elections. All operational logistics for the team have been established, and the team will be fully live across all regions in the coming months. We will continue to improve its execution framework as we encounter conflict incidents and assess the effectiveness of this structure. We now consider this recommendation complete and will have no further updates.” However, in response to the Board’s question, Meta noted that it has not established such a mechanism for the conflict in Sudan, although the company considers the recommendation complete. Coordinating Harm and Promoting Crime Community Standard The necessity and proportionality of removing this content under the policy on outing prisoners of war is informed by the more specific rules of international humanitarian law (see Armenian Prisoners of War Video). Common Article 3 to the Geneva Conventions prohibits “outrages upon personal dignity, in particular humiliating and degrading treatment” of detainees in international and non-international armed conflicts. Article 13 of the Geneva Convention (III) prohibits acts of violence or intimidation against prisoners of war, as well as exposing them to insults and public curiosity. Only in limited circumstances does international humanitarian law allow for the public disclosure of images of prisoners of war. As the ICRC notes in its guidance to media, if a “compelling public interest” or “the vital interest” of the prisoner requires it, images depicting prisoners of war may exceptionally be released so long as the dignity of the depicted prisoner is protected. When a prisoner is depicted in humiliating or degrading situations, the identity must be obscured “through appropriate methods, such as blurring, pixelating or otherwise obscuring faces and name tags,” (ICRC Commentary on Article 13 at p.1627). While the Board acknowledges that online tools are available for users to anonymize sensitive prisoner of war content, Meta does not currently provide users with such means to blur or obscure the faces of prisoners of war in video content published on its platform. These international humanitarian law prohibitions, and their narrowly drawn exceptions, are intended to protect detainees in conflict.
As the Board previously held in the Armenian Prisoners of War Video case, prohibiting the sharing of images of prisoners of war “is consistent with goals embodied in international humanitarian law,” and “where content reveals the identity or location of prisoners of war, removal will generally be proportionate considering the severity of harms that can result from such content.” In this case, removing the post was necessary given the rules of international humanitarian law and the risks present in the conflict in Sudan. As outlined in section 2 above, since the outbreak of the conflict, the RSF has detained thousands of civilians and members of the SAF’s forces or those suspected of providing them with support. There are reports of widespread violations of international humanitarian law, with detainees held in inhumane and degrading conditions, mistreated and even killed. Under such circumstances, and absent a compelling human rights reason for allowing this content to remain on the platform, removal is necessary and proportionate to ensure the dignity and safety of the prisoner. The Board is concerned, given the gravity of the potential harms and the heightened risks in an armed conflict, that this content was not identified and removed for violating Meta’s rule against outing (revealing the identity of) prisoners of war. The lack of enforcement likely stems from the fact that the rule prohibiting the outing of prisoners of war in an armed conflict is currently enforced on escalation only, meaning at-scale content moderators cannot enforce the policy. In the Armenian Prisoners of War Video decision, the Board held that the “rule requiring additional context to enforce, and thus requiring escalation to internal teams before it can be enforced, is necessary, because determining whether a person depicted is an identifiable prisoner of war in the context of an armed conflict requires expert consideration.” However, since that decision was published, the Board has learned that Meta’s at-scale moderators are not instructed or empowered to identify content that violates the company’s escalations-only policies, like the rule at issue in this case. In other words, the rule can only be enforced if content is brought to the attention of Meta’s escalations-only teams by some other means, e.g., through Trusted Partners or significant press coverage. In practice, this means that significant amounts of content identifying prisoners of war are likely left on the platform. This raises additional concerns about the accuracy of Meta’s automated detection enforcement, as escalations-only policies most likely do not produce enough human decisions to train an automated classifier. Therefore, while the Board finds that the rule prohibiting the outing of prisoners of war in an armed conflict is necessary, it also finds that Meta’s enforcement of the policy is not adequate to meet the company’s responsibility to respect the rights of prisoners of war. To ensure effective protection of the rights of detainees under international humanitarian law, the company should develop a scalable solution to enforce the policy. Meta should establish a specialized process or protocol to proactively identify such content during an armed conflict. Access to Remedy Meta informed the Board that the appeal in this case was automatically closed due to Meta’s COVID-19 automation policies, which meant the content was left on the platform.
In the Holocaust Denial case, the Board recommended that Meta “publicly confirm whether it has fully ended all COVID-19 automation policies put in place during the COVID-19 pandemic.” The Board is concerned that Meta’s COVID-19 automation policies, justified by the temporary reduction in human review capacity during the pandemic, are still in place. It reiterates its recommendation and urges Meta to publicly explain when it will no longer have reduced human reviewer capacity. 9. Oversight Board Decision The Oversight Board overturns Meta’s original decision to leave up the content. 10. Recommendations Enforcement 1. To ensure effective protection of detainees under international humanitarian law, Meta should develop a scalable solution to enforce the Coordinating Harm and Promoting Crime policy that prohibits outing prisoners of war within the context of armed conflict. Meta should set up a protocol for the duration of a conflict that establishes a specialized team to prioritize and proactively identify content outing prisoners of war. The Board will consider this implemented when Meta shares with the Board data on the effectiveness of this protocol in identifying content outing prisoners of war in armed conflict settings and provides updates on that effectiveness every six months. 2. To enhance its automated detection and prioritization of content potentially violating the Dangerous Organizations and Individuals policy for human review, Meta should audit the training data used in its video content understanding classifier to evaluate whether it has sufficiently diverse examples of content supporting designated organizations in the context of armed conflicts, including different languages, dialects, regions and conflicts. The Board will consider this recommendation implemented when Meta provides the Board with detailed results of its audit and the necessary improvements that the company will implement as a result. 3. To provide more clarity to users, Meta should hyperlink the U.S. Foreign Terrorist Organizations and Specially Designated Global Terrorists lists in its Community Standards, where these lists are mentioned. The Board will consider this recommendation implemented when Meta makes these changes to the Community Standards. Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world.
Return to Case Decisions and Policy Advisory Opinions" fb-1rwwjuat,Image of gender-based violence,https://www.oversightboard.com/decision/fb-1rwwjuat/,"August 1, 2023",2023,,"TopicHumor, Sex and gender equality, ViolenceCommunity StandardBullying and harassment","Policies and TopicsTopicHumor, Sex and gender equality, ViolenceCommunity StandardBullying and harassment",Overturned,Eritrea,The Oversight Board has overturned Meta’s original decision to leave up a Facebook post that mocks a target of gender-based violence,36955,5725,"Overturned August 1, 2023 The Oversight Board has overturned Meta’s original decision to leave up a Facebook post that mocks a target of gender-based violence Standard Topic Humor, Sex and gender equality, Violence Community Standard Bullying and harassment Location Eritrea Platform Facebook Public comments appendix Sorani Kurdish translation To read this decision in Sorani Kurdish, click here . بۆ خوێندنەوەی ئەم بڕیارە بە زمانی کوردیی سۆرانی، کرتە لێرە بکە. The Oversight Board has overturned Meta’s original decision to leave up a Facebook post that mocks a target of gender-based violence. While Meta has since recognized this post broke its rules on Bullying and Harassment, the Board has identified a gap in Meta’s existing rules which seems to allow content that normalizes gender-based violence by praising, justifying, celebrating or mocking it (for example, in cases where the target is not identifiable, or the picture is of a fictional character). The Board recommends that Meta undertake a policy development process to address this gap. About the case In May 2021, a Facebook user in Iraq posted a photo with a caption in Arabic. The photo shows a woman with visible marks of a physical attack, including bruises on her face and body. The caption begins by warning women about making a mistake when writing to their husbands. The caption states that the woman in the photo wrote a letter to her husband, which he misunderstood, according to the caption, due to the woman’s typographical error. According to the post, the husband thought the woman asked him to bring her a “donkey,” while in fact, she was asking him for a “veil.” In Arabic, the words for “donkey” and “veil” look similar (“حمار"" and “خمار""). The post implies that because of the misunderstanding caused by the typographical error in her letter, the husband physically beat her. The caption then states that the woman got what she deserved as a result of the mistake. There are several laughing and smiling emojis throughout the post. The woman depicted in the photograph is an activist from Syria whose image has been shared on social media in the past. The caption does not name her, but her face is clearly visible. The post also includes a hashtag used in conversations in Syria supporting women. In February 2023, a Facebook user reported the content three times for violating Meta’s Violence and Incitement Community Standard. If content is not reviewed within 48 hours, the report is automatically closed, as it was in this case. The content remained on the platform for nearly two years and was not reviewed by a human moderator. The user who reported the content appealed Meta’s decision to the Oversight Board. As a result of the Board selecting this case, Meta determined that the content violates the Bullying and Harassment policy and removed the post. Key findings The Board finds that the post violates Meta’s policy on Bullying and Harassment as it mocks the serious physical injury of the woman depicted. 
As such, it should be removed. However, this post would not have violated Meta’s rules on Bullying and Harassment if the woman depicted was not identifiable, or if the same caption had accompanied a picture of a fictional character. This indicates to the Board that there is a gap in existing policies that seems to allow content that normalizes gender-based violence. According to Meta, a recent policy development process on praise of violent acts focused heavily on identifying any existing enforcement gaps in treating praise of gender-based violence under various policies. As part of that process, Meta considered the policy on the issue of mocking or joking about gender-based violence. Meta informed the Board that the company determined that the Bullying and Harassment policy generally captures this content. However, as noted in the examples above, the Board finds that existing policies and their enforcement do not necessarily capture all relevant content. This case also raises concerns about how Meta is enforcing its rules on bullying and harassment. The content in this case, which included a photograph of a Syrian activist who had been physically attacked and was reported multiple times by a Facebook user, was not reviewed by a human moderator. This may indicate that Meta does not prioritize this type of violation for review. The Oversight Board’s decision The Oversight Board overturns Meta’s original decision to leave up the content. The Board recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decision to leave up a Facebook post that mocks a target of gender-based violence. Meta has acknowledged that its original decision was wrong, and that the content violates its policy on Bullying and Harassment . The Board recommends that Meta undertakes a policy development process to establish a policy aimed at addressing content that normalizes gender-based violence through praise, justification, celebration or mocking of gender-based violence. The Board understands that Meta is conducting a policy development process which, among other issues, is considering how to address praise of gender-based violence. This recommendation is in support of a more thorough approach to limiting the harms caused by the normalization of gender-based violence. 2. Case description and background In May 2021, a Facebook user in Iraq posted a photo with a caption in Arabic. The photo shows a woman with visible marks of a physical attack, including bruises on her face and body. The caption begins by warning women about making mistakes when writing to their husbands. The caption states that the woman in the photo wrote a letter to her husband, which the husband misunderstood, according to the caption, due to the woman’s typographical error in writing the letter. According to the post, the husband thought the woman asked him to bring her a “donkey,” while in fact, she was asking him for a “veil.” In Arabic, the words for “donkey” and “veil” look similar ( ""حمار"" and ""خمار”). The caption then mocks the situation and concludes that the woman got what she deserved as a result of the mistake. There are several laughing and smiling emojis throughout the post. According to several sources, the woman depicted in the photograph is a Syrian activist who had been imprisoned by the regime of Bashar Al-Assad and later beaten by individuals believed to be affiliated with the regime. 
Her image has been shared on social media in the past. The caption does not name her, but her face is clearly visible. The post also includes a hashtag which, according to experts consulted by the Board, is primarily used by pages and groups in Syrian conversations supporting women. The post had about 20,000 views and under 1,000 reactions. In February 2023, a Facebook user reported the content three times for violating the Violence and Incitement Community Standard. The reports were closed without human review, leaving the content on the platform. Meta told the Board it considers a series of signals to determine how to prioritize content for human review, which includes the virality of the content and how severe the company considers the violation type. If content is not reviewed within 48 hours, the report is automatically closed. In this case, the content remained on the platform for nearly two years before it was first reported. After it was reported, it was not reviewed by a human reviewer within 48 hours and thus the report was automatically closed. The user who reported the content appealed Meta’s decision to the Oversight Board. As a result of the Board selecting this case, Meta determined that the content violates the Bullying and Harassment policy and removed the post. The Board notes the following context in reaching its decision in this case. This content was posted by a user in Iraq. According to the World Health Organization, some 1.32 million people in Iraq are estimated to be at risk of different forms of gender-based violence. The majority are women and adolescent girls. Despite repeated calls by women’s groups to pass legislation in Iraq to combat domestic violence, a draft law remains stalled, and the current penal code allows for husbands to punish their wives as an exercise of a legal right and provides for lower sentencing for murder when connected to an ‘honour killing.’ The activist depicted in the photograph is from Syria. According to the United Nations , “[o]ver a decade of conflict in Syria has had a significant gendered impact on women and girls.” As many as 7.3 million Syrians, overwhelmingly women and girls, require services related to gender-based violence. An inadequate national legal framework and discriminatory practices are barriers to women’s protection and hinder effective accountability for violence against them. ( UN Syria Report , pages 5-9) The UN reports widespread impunity from prosecution for gender-based violence and stigmatization of victims or survivors of gender-based violence, leading to ostracization and further restrictions on participation in public life. The regime has targeted women associated with the opposition, subjecting them to torture and sexual abuse. According to a study conducted by UN Women, nearly half (49 per cent) of women internet users in eight nations in the League of Arab States reported feeling unsafe from online harassment. The same study found “33 per cent of women who experienced online violence report[ed] that some or all of their experiences of online violence moved offline.” Online violence was defined as including receiving unwanted images or symbols with sexual content; annoying phone calls, inappropriate or unwelcome communications; and receiving insulting and/or hateful messages. According to a UN Secretary-General report, online violence “impedes women’s equal and meaningful participation in public life through humiliation, shame, fear and silencing. 
This is a ‘chilling effect,’ whereby women are discouraged from actively participating in public life.” ( A/77/302 , para. 22) 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from a person who previously reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. When the Board selects cases like this one, where Meta subsequently acknowledges that it made an error, the Board reviews the original decision to increase understanding of the content moderation process and to make recommendations to reduce errors and increase fairness for people who use Facebook and Instagram. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions: The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies: The Bullying and Harassment Community Standard aims to prevent individuals being targeted on Meta’s platform through threats and different forms of malicious contact. According to Meta’s policy rationale, such behavior “prevents people from feeling safe and respected on Facebook.” The Community Standard is divided into tiers, with more protection provided for private individuals and limited scope public figures than for public figures. When the content was reviewed by Meta and the Board began its review, Tier 4 of the Bullying and Harassment Community Standard prohibited targeting private individuals or limited scope public figures with “content that praises, celebrates or mocks their death or serious physical injury.” The Community Standard defines limited scope public figures as “individuals whose primary fame is limited to their activism, journalism, or those who become famous through involuntary means.” Meta made this definition public in response to the Board’s recommendation in Pro-Navalny protests in Russia , 2021-004-FB-UA. Meta’s internal guidance for content moderators defines “mocking” as “an attempt to make a joke about, laugh at, or degrade someone or something.” On June 29, Meta updated the Community Standard. Under Tier 1 of the current policy, everyone is protected from “Celebration or mocking of [their] death or medical condition.” The Board’s analysis was informed by Meta's commitment to ""Voice,"" which the company describes as “paramount,” and its values of “Safety” and “Dignity.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. 
User submissions Following a Facebook user’s report of the content and appeal to the Oversight Board, both the user who created the content and the user who reported it were sent a message notifying them of the Board’s review and providing them with an opportunity to submit a statement to the Board. Neither user submitted a statement. 6. Meta’s submissions In its decision rationale, Meta explained that the content should have been removed from Facebook for violating the Bullying and Harassment policy. Meta’s regional team identified the woman depicted as a known Syrian activist who had been jailed for her activism. According to Meta, the photograph in the post shows the activist after she was beaten by individuals affiliated with the regime of Bashar Al-Assad. Based on the policy in place at the time, Meta stated that it considers the content to be mocking the serious physical injury of the woman depicted, and therefore, it violates the policy. Meta considers the woman depicted in the photograph to be a limited scope public figure, as her primary fame is limited to her activism. Meta understands the content to be joking about her injuries and implying that she “brought them upon herself due to ‘karma’.” According to Meta, the content also “makes up” a story about her having written a poorly worded letter, implying she lacks intelligence, when in reality she suffered these injuries as a result of a violent attack. Following the update in the policy, Meta told the Board the content remains violating, and the update does not impact the substantive protection provided by the policy to the woman depicted. According to Meta, “[t]he update to the Bullying and Harassment policy was intended to streamline the policy. The update did not change the protections afforded limited scope public figures, like the woman identified in the case content. The relevant line under which [Meta] removed this content was initially in Tier 4 of the policy, but as a result of the update it is now part of Tier 1.” The Board asked Meta 11 questions in writing. Questions related to Meta policies addressing content depicting gender-based violence, how Meta enforces the Bullying and Harassment policy, and any research on depictions of gender-based violence on social media and offline harms. Ten questions were answered and one question, asking for regional enforcement data for the Bullying and Harassment policy, was not answered. 7. Public comments The Oversight Board received 19 public comments for this case. Three comments were submitted from Middle East and North Africa, two comments were submitted from Central and South Asia, two comments were submitted from Asia Pacific and Oceania, three comments were submitted from Europe, eight comments were submitted from United States and Canada, and one from Latin America and Caribbean. The submissions covered the following themes: cyber harassment and targeting of women activists and public figures, the serious consequences of the digital dimension of gender-based violence on the safety, physical and psychological health and dignity of women, and the difficulty of bringing online violence against women to the attention of content moderators who may not understand the relevant regional dialect. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board examined whether this content should be removed by analyzing Meta's content policies, human rights responsibilities and values. 
The Board also assessed the implications of this case for Meta’s broader approach to content governance, providing recommendations on how Meta’s policies and enforcement processes can better respect Meta’s human rights responsibilities. The Board selected this appeal because it offers the opportunity to explore how Meta’s policies and enforcement address content that targets women human rights defenders and content that mocks gender-based violence, issues the Board is focusing on through its strategic priority of gender. 8.1 Compliance with Meta’s content policies I. Content rules The Board finds that the post violates Meta’s policy on Bullying and Harassment, both at the time and under the updated policy, and should be removed. The caption of the post read in conjunction with the image violates Meta’s policy because it mocks the serious physical injury or medical condition of the woman depicted. Meta’s internal guidance for content moderators defines “mocking” as “an attempt to make a joke about, laugh at, or degrade someone or something.” In the form of a joke, the content implies the woman depicted deserved to be physically attacked for making a typographical error in her request to her husband. According to Meta, the internal guidance provided to content moderators defines “medical condition” to include “serious injury.” Prior to the update on June 29, the public-facing policy prohibited mocking the “serious physical injury” of a private individual or a limited scope public figure, while the internal guidance provided to moderators prohibited mocking their “medical condition.” The Board asked Meta about the discrepancy in the terms used in the public-facing policy and the internal guidance. Meta acknowledged the discrepancy and amended its public facing policy to use the same term used in its internal guidance. The Board finds the post has multiple plausible interpretations. The woman may be targeted as a human rights defender or as a target of abuse, or both. The different interpretations are analyzed further below. Regardless of the interpretation, the gender of the depicted person, and the gendered nature of the mocking, the policy is violated in this case so long as the depicted person is identifiable. II. Enforcement action The Board is concerned about potential challenges in the enforcement of this Community Standard. First, the content in this case, which included a photograph of a Syrian activist who had been physically attacked and was reported multiple times by a Facebook user, was not reviewed by a human moderator. This may indicate that this type of violation is not prioritized for review. Second, the Board is concerned that enforcement of the policy, especially when it requires analyzing an image together with a caption, is challenging for Arabic language content. As the Board has previously explained, Meta relies on a combination of human moderators and machine learning tools referred to as classifiers to enforce its Community Standards. (See Wampum belt , 2021-012-FB-UA). In this case, Meta informed the Board that the company has a classifier targeting Bullying and Harassment for ‘General Language Arabic’. The Board notes that the independent human rights due diligence report published by BSR, which Meta commissioned in response to the Board’s recommendation in an earlier case , noted problems in Meta’s enforcement in Arabic. It found the company’s problems in enforcement may be due to inadequate sensitivity to different dialects of Arabic. 
The Board is concerned, based on findings by the independent human rights due diligence report and the lack of enforcement in this case, that there may be challenges with both the proactive and reactive paths for effective enforcement of this policy in the region. The lack of transparency on auditing of the classifiers enforcing this policy is also concerning to the Board. 8.2 Compliance with Meta’s human rights responsibilities The Board finds that Meta’s initial decision to leave up the post was inconsistent with its human rights responsibilities as a business. Freedom of expression (Article 19 ICCPR) Article 19, para. 2, of the International Covenant on Civil and Political Rights (ICCPR) provides that “everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.” Article 19 provides for broad protection of expression, including expression which people may find offensive (General Comment 34, para 11). While the right to freedom of expression is fundamental, it is not absolute. It may be restricted, but restrictions should meet the requirements of legality, legitimate aim, necessity, and proportionality (Article 19, para. 3, ICCPR). The Board has acknowledged that while the ICCPR does not create obligations for Meta as it does for states, Meta has committed to respect human rights as set out in the UNGPs. ( A/74/486 , paras. 47-48). Meta’s policies prohibit specific forms of discriminatory and hateful expression, absent a requirement that each individual piece of content incite direct and imminent violence or discrimination. The Special Rapporteur on free expression has noted that on social media, “the scale and complexity of addressing hateful expression presents long-term challenges.” ( A/HRC/38/35 , para. 28) The Board, drawing upon the Special Rapporteur’s guidance, has previously explained that such prohibitions would raise concerns if imposed by a government, particularly if enforced through criminal or civil sanctions. As the Board noted in its Knin Cartoon , Depiction of Zwarte Piet , and South African Slurs decisions, Meta can regulate such expression, demonstrating the necessity and proportionality of its actions due to the harm that results from the accumulation of content. I. Legality (clarity and accessibility of the rules) Any restriction on freedom of expression should be accessible and clear enough in scope, meaning and effect to provide guidance to users and content reviewers as to what content is and is not permitted on the platform. Lack of clarity or precision can lead to inconsistent and arbitrary enforcement of the rules ( General Comment No. 34 , para 25; A/HRC/38/35 , para. 46). The Board notes that Meta made changes to the Community Standard on June 29, aligning the terminology used in its public-facing Bullying and Harassment Community Standard and internal guidance provided to content moderators. Prior to this change, the Community Standard prohibited content mocking “serious physical injury” while the internal guidance prohibited mocking a “medical condition.” According to Meta, “medical condition” is the broader term. The Board welcomes this change as the use of different terminology may lead to confusion and inconsistent enforcement. 
However, the Board is concerned that it may not be clear to users that “medical condition” includes “serious physical injury” and recommends that Meta makes this clear to its users. II. Legitimate aim State restrictions on freedom of expression must pursue a legitimate aim, which includes the protection of the rights of others. The Human Rights Committee has interpreted the term “rights” to include human rights as recognized in the ICCPR and more generally in international human rights law ( General Comment 34 , para. 28). The Board finds that Meta’s Bullying and Harassment policy is directed towards the legitimate aim of respecting the rights of others, including the right to equality and non-discrimination, and to freedom of expression. Among other aims, these policies seek the legitimate aim of preventing the harms resulting from bullying and harassment, discrimination on the basis of sex or gender, and respecting the freedom of expression and access to Meta’s platform for those targeted by this expression. These aims are linked, as according to the Joint Declaration on Freedom of Expression and Gender Justice , “online violence against women has particular significance for freedom of expression” and “social media platforms have an obligation to ensure that online spaces are safe for all women and free from discrimination, violence, hatred and disinformation.” III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression ""must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected"" ( General Comment 34 , para. 34). Applied to the company, the Board finds that Meta’s policy, under which this content should have been removed, constitutes a necessary and proportionate response to protect users from targeted on-line bullying and harassment. The policy respects the equal right to freedom of expression of women human rights defenders who are often forced off the platform through such harassment. The UN Special Rapporteur on violence against women has reported that “[w]omen human rights defenders, journalists and politicians are directly targeted, threatened, harassed, or even killed for their work. They receive online threats, generally of a misogynistic nature, often sexualized and specifically gendered. The violent nature of these threats often leads [women] to suspend, deactivate or permanently delete their online accounts, or to leave the profession entirely” ( A/HRC/38/47 para. 29). The Special Rapporteur on the situation of human rights defenders has identified similar practices and harms specifically for women: “Women human rights defenders are often subjected to online harassment, violence and attacks, which include sexual violence, verbal abuse, sexuality baiting, doxing...and public shaming. Such abuse occurs in comments on news articles, blogs, websites and social media. The online terror and slander to which women are subjected can also lead to physical assault” ( A/HRC/40/60 (2019) para 45). According to a study conducted by UN Women, 70% of women activists and human rights defenders in Arab states reported feeling unsafe from online harassment. 
In this sense, women who take part in public life through activism or by running for office are disproportionately targeted online, which can lead to self-censorship and even withdrawal from public life. The Board finds this post mocks the depicted woman by using a gendered joke to belittle her, implying she deserved to be physically attacked. Such online harassment is widespread and leads to women being silenced and shut out of public life. It can also be accompanied by physical attacks. Removal of this content is therefore necessary, as less restrictive means would not prevent her image with a joke meant to belittle her from being disseminated. The Board also finds that this post is addressed to women and girls more broadly. It normalizes gender-based violence by implying that physically attacking women is funny and that men are entitled to use violence. The use of the hashtag also indicates the intention of the user to reach a broader group of women than the individual depicted. The Special Rapporteur on violence against women draws the connection between targeting women in public life and intimidation aimed at women more broadly: “In addition to the impact on individuals, a major consequence of online and ICT [Information and Communication Technology] facilitated gender-based violence is a society where women no longer feel safe either online or offline, given the widespread impunity for perpetrators of gender-based violence” ( A/HRC/38/47, para 29; A/HRC/RES/35/18 , para. 4, urging states to address gendered stereotypes that are a root cause of violence against women and discrimination). The Board is concerned that Meta’s existing policies do not adequately address content that normalizes gender-based violence by praising it or implying it is deserved. The content analyzed in this case was dealt with under the Bullying and Harassment policy, but this policy is not always adequate to limit the harm caused by material that, by referring more generally to gender-based violence, exacerbates discrimination and the exclusion of women from the public sphere online or offline. This same post would not violate the Bullying and Harassment policy if the woman depicted was not identifiable, or if the same caption had accompanied a picture of a fictional character. According to Meta, this content does not violate the Hate Speech policy because it “does not target a person or people on the basis of their protected characteristic.” This indicates to the Board that there is a gap in existing policies that seems to allow discriminatory content, including content that normalizes gender-based violence, to remain and be shared on the platform. According to Meta, a recent policy development process on praise of violent acts focused heavily on identifying any existing enforcement gaps in treating praise of gender-based violence under various policies. As part of that process, Meta considered the policy on the issue of mocking or joking about gender-based violence. Meta informed the Board that the company determined that the Bullying and Harassment policy generally captures this content. However, as noted in the examples above, the Board finds that existing policies and their enforcement do not necessarily capture all relevant content. According to the UN Special Rapporteur on violence against women, “[v]iolence against women is a form of discrimination against women and a human rights violation falling under CEDAW” ( A/HRC/38/47, para. 22). 
Online violence against women includes “any act of gender-based violence against women that is committed, assisted or aggravated in part or fully by the use of ICT…against a woman because she is a woman, or affects women disproportionately” ( A/HRC/38/47, para. 23). The Committee on the Elimination of Discrimination against Women (the UN body of independent experts monitoring the implementation of the Convention), in General Recommendation 35, called on states to adopt preventive measures, including by encouraging social media companies to strengthen self-regulatory mechanisms “addressing gender-based violence against women that takes place through their services and platforms” ( CEDAW/C/GC/35, para. 30(d)). Content that normalizes gender-based violence, by praising it or implying it is logical or deserved, validates violence and seeks to intimidate women, including women who seek to take part in public life (see Public Comment by Digital Rights Foundation, PC-11226). The message that the accumulation of such content delivers is that violence is acceptable and can be used to punish transgressions of gender norms. While academic studies showing causation are limited, multiple studies have shown a correlation between the normalization of gender-based violence and the increased occurrence of such violence. In multiple previous cases, the Board has recognized how certain content which may be discriminatory ( Depiction of Zwarte Piet , 2021-002-FB-UA) or hateful ( Knin cartoon , 2022-001-FB-UA; South African Slurs , 2021-011-FB-UA) can be removed due to its cumulative effect, without the need to show that each piece of content can cause direct and imminent physical harm. The Board has also noted that the accumulation of harmful content creates an environment in which acts of discrimination and violence are more likely ( Depiction of Zwarte Piet , 2021-002-FB-UA). The UN Special Rapporteur on Violence against Women called attention to the important role social media plays in addressing gender-based violence and the need to shape ICT policies and practices with an understanding of the “broader environment of widespread and systemic structural discrimination and gender-based violence against women and girls, which frame their access to and use of the Internet and other ICT” (A/HRC/38/47, para. 14). Gendered stereotypes promote violence and inadequate responses to it, which further perpetuates such discrimination. In several cases, the Committee on the Elimination of Discrimination against Women has found that when state authorities act on gendered stereotypes in their decision making, the state fails to effectively prevent or address gender-based violence (see Belousova v. Kazakhstan ; R.P.B. v. Philippines ; Jallow v. Bulgaria , para. 8.6; and L.R. v. Republic of Moldova ). In the context of a broader environment of gender-based discrimination and violence, Meta has a responsibility not to exacerbate threats of physical harm and the suppression of women’s speech and participation in society. Content like the post in this case normalizes gender-based violence by denigrating women and trivializing, excusing, or encouraging both public aggressions and domestic abuse. The cumulative effect of content that normalizes gender-based violence by encouraging or defending the use of violence, the harm to women’s rights and the perpetuation of an environment of impunity all contribute to a heightened risk of offline violence, self-censorship, and the suppression of women’s participation in public life.
Therefore, the Board recommends that Meta undertake a policy development process to establish a policy aimed at addressing content that normalizes gender-based violence through praise, justification, celebration or mocking of gender-based violence. 9. Oversight Board decision The Oversight Board overturns Meta's original decision to leave up the content. 10. Recommendations A. Content policy 1. To ensure clarity for users, Meta should explain that the term “medical condition,” as used in the Bullying and Harassment Community Standard, includes “serious physical injury.” While the internal guidance explains to content moderators that “medical condition” includes “serious physical injury,” this explanation is not provided to Meta’s users. The Board will consider this recommendation implemented when the public-facing language of the Community Standard is amended to include this clarification. 2. The Board recommends that Meta undertakes a policy development process to establish a policy aimed at addressing content that normalizes gender-based violence through praise, justification, celebration or mocking of gender-based violence. The Board understands that Meta is conducting a policy development process which, among other issues, is considering how to address praise of gender-based violence. This recommendation is in support of a more thorough approach to limiting the harms caused by the normalization of gender-based violence. The Board will consider this recommendation implemented when Meta publishes the findings of this policy development process and updates its Community Standards. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. Return to Case Decisions and Policy Advisory Opinions" fb-2ahd01lx,Metaphorical statement against the president of Peru,https://www.oversightboard.com/decision/fb-2ahd01lx/,"June 27, 2023",2023,,TopicPoliticsCommunity StandardViolence and incitement,Violence and incitement,Overturned,Peru,"A user appealed Meta’s decision to remove a Facebook post which included a metaphorical statement against Peru’s then-President Pedro Castillo. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.",5753,872,"Overturned June 27, 2023 A user appealed Meta’s decision to remove a Facebook post which included a metaphorical statement against Peru’s then-President Pedro Castillo. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Summary Topic Politics Community Standard Violence and incitement Location Peru Platform Facebook This is a summary decision . 
Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas of potential improvement in its policy enforcement. Case summary A user appealed Meta’s decision to remove a Facebook post that included a metaphorical statement against Peru’s then-President Pedro Castillo. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case description and background On November 24, 2022, a Facebook user from Peru posted content in Spanish stating that “we” will hang the then-president of Peru, Pedro Castillo, and compared this to the execution of Italian dictator Benito Mussolini. The post says that this was a “metaphorical” statement, not a threat to be feared, and referred to the potential “suspension” of the president by a vote of the legislature amidst corruption allegations. The post also states that Pedro Castillo does not need to worry about the user’s metaphorical statement because they are not “filosenderista” like Mr. Castillo, an idiomatic reference comparing the leftist president to Sendero Luminoso, a communist terrorist group from Peru. The user posted this content approximately two weeks before Peru’s Congress ultimately impeached Mr. Castillo, soon after he attempted to dissolve the country’s legislative body and install an emergency government. Meta initially removed the post from Facebook under its Violence and Incitement policy. In their appeal to the Board, the user stated that Meta had misinterpreted the text, which was not a call to violence, and that the post should be understood in the context of the presidential impeachment process being discussed at that time. Under Meta’s Violence and Incitement policy, the company removes “language that incites or facilitates serious violence” including “statements of intent to commit high-severity violence,” when Meta believes “there is a genuine risk of physical harm or direct threats to public safety.” The policy further explains that the company considers “language and context in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety.” After the Board brought this case to Meta’s attention, the company determined that the content did not violate its Violence and Incitement policy. Given the metaphorical nature of the statement and the context of impeachment proceedings against Pedro Castillo, who was president at the time, Meta concluded that the user appears to advocate “suspending” (or impeaching) the then-president, not committing violence against him. Therefore, the initial removal was incorrect, and Meta restored the content on Facebook. Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3).
The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Case significance The case highlights an inconsistency in how Meta enforces its Violence and Incitement policy as applied to political metaphorical statements, which can be a significant deterrent to open online expression about politicians. This underlines the importance of designing context-sensitive moderation systems with awareness of irony, satire, or rhetorical discourse, especially to protect political speech. That is why, in its case decisions, the Board has urged Meta: to execute proper procedures for evaluating content in its relevant context (“‘Two Buttons’ meme” recommendation no. 3); to allow users to indicate in their appeals whether the content falls under any of the exceptions to its policies (“‘Two Buttons’ meme” recommendation no. 4); to provide criteria for when threatening statements directed at heads of state are permitted to protect clearly rhetorical political speech (“ Iran protest slogan ” recommendation no. 1); and ultimately to develop and publish a policy that governs Meta’s response to crises or novel situations where its regular processes would not prevent or avoid imminent harm (“ Former President Trump’s suspension ” recommendation no. 18). Meta has committed to implement, or has implemented, all of these recommendations. Their complete implementation may help to decrease the error rate of content moderation in times of political crisis in which the value of voice is especially important. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-2rdrcavq,Nazi quote,https://www.oversightboard.com/decision/fb-2rdrcavq/,"January 28, 2021",2021,January,TopicPoliticsCommunity StandardDangerous individuals and organizations,Type of DecisionStandardPolicies and TopicsTopicPoliticsCommunity StandardDangerous individuals and organizationsRegion/CountriesLocationUnited StatesPlatformPlatformFacebook,Overturned,United States,The Oversight Board has overturned Facebook's decision to remove a post which the company claims violated its Community Standard on dangerous individuals and organisations.,18034,2819,"Overturned January 28, 2021 The Oversight Board has overturned Facebook's decision to remove a post which the company claims violated its Community Standard on dangerous individuals and organisations. Standard Topic Politics Community Standard Dangerous individuals and organizations Location United States Platform Facebook The Oversight Board has overturned Facebook’s decision to remove a post which the company claims violated its Community Standard on Dangerous Individuals and Organizations. The Board found that these rules were not made sufficiently clear to users. About the case In October 2020, a user posted a quote which was incorrectly attributed to Joseph Goebbels, the Reich Minister of Propaganda in Nazi Germany. The quote, in English, claimed that, rather than appealing to intellectuals, arguments should appeal to emotions and instincts. It stated that truth does not matter and is subordinate to tactics and psychology. There were no pictures of Joseph Goebbels or Nazi symbols in the post.
In their statement to the Board, the user said that their intent was to draw a comparison between the sentiment in the quote and the presidency of Donald Trump. The user first posted the content two years earlier and was prompted to share it again by Facebook’s “memory” function, which allows users to see what they posted on a specific day in a previous year, with the option of resharing the post. Facebook removed the post for violating its Community Standard on Dangerous Individuals and Organizations. Key findings In its response to the Board, Facebook confirmed that Joseph Goebbels is on the company’s list of dangerous individuals. Facebook claimed that posts which share a quote attributed to a dangerous individual are treated as expressing support for them, unless the user provides additional context to make their intent explicit. Facebook removed the post because the user did not make clear that they shared the quote to condemn Joseph Goebbels, to counter extremism or hate speech, or for academic or news purposes. Reviewing the case, the Board found that the quote did not support the Nazi party’s ideology or the regime’s acts of hate and violence. Comments on the post from the user’s friends supported the user’s claim that they sought to compare the presidency of Donald Trump to the Nazi regime. Under international human rights standards, any rules which restrict freedom of expression must be clear, precise and publicly accessible, so that individuals can conduct themselves accordingly. The Board does not believe that Facebook’s rules on Dangerous Individuals and Organizations met this requirement. The Board noted a gap between the rules made public through Facebook’s Community Standards and additional, non-public rules used by the company’s content moderators. In its publicly available rules, Facebook is not sufficiently clear that, when posting a quote attributed to a dangerous individual, the user must make clear that they are not praising or supporting them. Facebook’s policy on Dangerous Individuals and Organizations also does not provide clear examples that explain the meaning of terms such as “praise” and “support,” making it difficult for users to understand this Community Standard. While Facebook confirmed to the Board that Joseph Goebbels is designated as a dangerous individual, the company does not provide a public list of dangerous individuals and organizations, or examples of these. The Board also notes that, in this case, the user does not seem to have been told which Community Standard their content violated. The Oversight Board’s decision The Oversight Board overturns Facebook’s decision to remove the content and requires that the post be restored. In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1.Decision Summary The Oversight Board has overturned Facebook’s decision to remove a post which the company claims violated its Community Standard on Dangerous Individuals and Organizations. The Board found that these rules were not made sufficiently clear to users. 2. Case Description In October 2020, a user posted a quote which was incorrectly attributed to Joseph Goebbels, the Reich Minister of Propaganda in Nazi Germany. The quote, in English, claimed that there is no point in appealing to intellectuals, as they will not be converted and, in any case, yield to the stronger man in the street. As such, the quote stated, arguments should appeal to emotions and instincts. 
It ended by claiming that truth does not matter and is subordinate to tactics and psychology. There were no pictures of Goebbels or Nazi symbols in the post. The user first posted the content two years earlier and was prompted to share it again by Facebook’s “memory” function, which allows users to see what they posted on a specific day in a previous year, with the option of resharing the post. There were no user reports of the content. Facebook removed the post for violating the Community Standard on Dangerous Individuals and Organizations. The post comprised the quote and attribution to Goebbels alone. There was no additional commentary within the post indicating the user’s intent in sharing the content. In their statement to the Board, the user explained that their quote involved important social issues and that the content of the quote was “VERY IMPORTANT right now in our country as we have a ‘leader’ whose presidency is following a fascist model.” Their intent was to draw a comparison between the sentiment in the quote and the presidency of Donald Trump. The comments on the post suggest that the user’s friends understood this to be the case. 3. Authority and Scope The Board has the authority to review Facebook’s decision under Article 2 (Authority to Review) of the Board’s Charter and may uphold or reverse that decision under Article 3, Section 5 (Procedures for Review: Resolution) of the Charter. Facebook has not presented reasons for the content to be excluded in accordance with Article 2, Section 1.2.1 (Content Not Available for Board Review) of the Board’s Bylaws, nor has Facebook indicated that it considers the case to be ineligible under Article 2, Section 1.2.2 (Legal Obligations) of the Bylaws. 4. Relevant Standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards: The Community Standard on Dangerous Individuals and Organizations states that “in an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook.” It further states that Facebook will “remove content that expresses support or praise for groups, leaders or individuals involved in these activities.” II. Facebook’s Values: The Facebook values relevant to this case are outlined in the introduction to the Community Standards. The first is “Voice,” which is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits “Voice” in service of four other values. The Board considers that the value of “Safety” is relevant to this decision: Safety: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. III. Relevant Human Rights Standards considered by the Board: The UN Guiding Principles on Business and Human Rights ( UNGPs ), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. Drawing upon the UNGPs, the following international human rights standards were considered in this case: 5.
User Statement The user says that they first posted this content two years ago and were prompted to post it again by Facebook’s “memory” function. They explain that the post is important as the United States has a leader whose presidency is following a fascist model. They further state that their ability to use Facebook was restricted after they posted the content. 6. Explanation of Facebook’s Decision Facebook states that it treats content that quotes, or attributes quotes (regardless of their accuracy), to a designated dangerous individual as an expression of support for that individual unless the user provides additional context to make their intent explicit. It says that in this case, the user provided no additional context indicating that the quote was shared to condemn Goebbels, to counter extremism or hate speech, or as part of an academic or newsworthy discourse. Facebook says that it would not have removed this content if the user’s post made clear that it was shared for these reasons. Although comments made by others on the user’s post indicated that the user did not intend to praise or support Joseph Goebbels, Facebook explained that it only reviews the post itself when making a moderation decision. The content was not removed when originally posted because there were no user reports against it and it was not automatically detected. The Board also notes that, when Facebook informed the user that their post had been removed, the company did not tell them which Community Standard their post had violated. 7. Third party submissions The Oversight Board considered 12 public comments related to this case. Three of the comments were submitted from Europe and nine from the United States and Canada region. The submissions covered the following themes: compliance with the relevant Community Standards; whether this constitutes political speech; the role of Facebook’s “memory” function; the effect of sanctions on users; and feedback on improving the public comment process. 8. Oversight Board Analysis 8.1 Compliance with Community Standards The Board finds that Facebook’s decision to remove the user’s post does not comply with the Community Standard on Dangerous Individuals and Organizations . Facebook says that to prevent and disrupt real-world harm, it prohibits organizations and individuals (living or deceased) involved in organized hate from having a presence on Facebook. It also prohibits content that expresses support or praise for such groups, their leaders, or individuals involved in these activities. Facebook does not publish a list of individuals and organizations that it has designated as dangerous. In the decision rationale it provided to the Board, Facebook clarified certain aspects of the Dangerous Individuals and Organizations policy that are not outlined in the Community Standards. First, Facebook confirmed that the Nazi party (the National Socialist German Workers’ Party, active between 1920 and 1945) has been internally designated by Facebook as a hate organization since 2009. Joseph Goebbels, as one of the party’s leaders, is designated as a dangerous individual. Second, Facebook treats all content that supposedly quotes a designated dangerous individual as an expression of praise or support for that individual unless the user provides additional context to make their intent explicit. Third, Facebook determines compliance with the policy solely based on the text and/or imagery within the post itself, without assessing reactions or comments to the post.
In this case, the content involved a single quote attributed to Joseph Goebbels. The Board finds that the quote did not promote the ideology of the Nazi party and did not endorse the regime’s acts of hate and violence. Comments on the post from the user’s friends appear to support the user’s claim that the post sought to draw comparisons between the presidency of Donald Trump and the Nazi regime. The Board notes an information gap between the publicly available text of the Dangerous Individuals and Organizations policy and the additional internal rules applied by Facebook’s content moderators. The public text is not sufficiently clear that, when posting a quote attributed to a dangerous individual, the user must provide additional context in their post to make it explicit that they are not praising or supporting an individual or organization involved in organized hate. The Community Standards state a similar requirement for posts including symbols of designated organizations and individuals, but do not state the same for content praising or supporting them. As illustrated by this case, this results in the suppression of speech that poses no risk of harm. While the Board appreciates the importance of combatting the spread of Nazi ideology and hate speech, as well as the difficulty of pursuing such aims at scale, in this case the removal of the post clearly falls outside of the spirit of the policy. 8.2 Compliance with Facebook Values The Board finds that the removal does not comply with Facebook’s values. When considering content removed under the Dangerous Individuals and Organizations policy, the value of “Safety” is balanced against the “paramount” value of “Voice.” Facebook explains that “Safety” may be given more weight when content may lead to physical harm. In this case, however, considering the minimal benefit to the value of “Safety” from the user’s post, the Board finds that the removal unnecessarily undermined the value of “Voice.” 8.3 Compliance with Human Rights Standards Applying international human rights standards on the right to freedom of expression, the Board finds that the content must be restored. The value placed on the right to freedom of expression is particularly high in public debate about political figures, which was the subject of this post (ICCPR Article 19, para. 2, General Comment 34, para. 34). The right to freedom of expression is not absolute. Any restriction of the right must, however, meet the requirements of legality, legitimate aim, and necessity and proportionality. Facebook’s removal of the content failed both the first and third parts of this test. a. Legality Any rules restricting expression must be clear, precise and publicly accessible (General Comment 34, para. 25) to allow individuals to change their conduct accordingly. Facebook’s policy on Dangerous Individuals and Organizations falls short of the standard of legality. The policy lacks clear examples that explain the application of “support,” “praise” and “representation,” making it difficult for users to understand this Community Standard. This adds to concerns around legality and may create a perception of arbitrary enforcement among users. The Board is also concerned that in this case the user does not appear to have been informed which Community Standard they violated when their content was removed. Facebook also fails to provide a list of individuals and organizations designated as dangerous, or, at the least, examples of groups or individuals that are designated as dangerous.
Lastly, the policy fails to explain how Facebook ascertains a user’s intent, making it hard for users to foresee how and when the policy will apply and conduct themselves accordingly. b. Legitimate aim Article 19, para. 3 of the ICCPR states that legitimate aims include respect for the rights or reputations of others, as well as the protection of national security, public order, or public health or morals. Facebook’s rationale indicates that the aim of the Dangerous Individuals and Organizations policy in relation to what it terms “hate organizations” is to protect the rights of others. The Board is satisfied that the specific provisions on “hate organizations” aim to protect individuals from discrimination and protect them from attacks on life or foreseeable intentional acts resulting in physical or mental injury. c. Necessity and Proportionality Any restrictions “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). Context is key for assessing necessity and proportionality. The Board notes that there has reportedly been a global rise in support for and acceptance of neo-Nazi ideology, and recognizes the challenge Facebook faces in restricting the presence of “hate organizations” on the platform (Report A/HRC/38/53, 2018). The Board considers that it may be necessary, when moderating content about dangerous organizations at scale, to remove posts where there is insufficient context. In this case, the content of the quote and other users’ responses to it, the user’s location and the timing of the post during an election campaign are all relevant. Facebook’s approach requiring content moderators to review content without regard to these contextual cues resulted in an unnecessary and disproportionate restriction on expression. d. Equality and non-discrimination Any restrictions on expression must respect the principle of equality and non-discrimination (General Comment 34, paras. 26 and 32). The Board recognizes the importance of Facebook combatting Nazi ideology on the platform, particularly in the context of documented increases in support for such ideas and anti-Semitism around the world. However, removing content that sought to criticize a politician by comparing their style of governance to architects of Nazi ideology does not promote equality and non-discrimination. 9. Oversight Board Decision 9.1 Content Decision The Oversight Board overturns Facebook’s decision to take down the content, requiring the post to be restored. 9.2 Policy Advisory Statement The Board recommends that Facebook: *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and must be agreed by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. Return to Case Decisions and Policy Advisory Opinions" fb-33nk66fg,Anti-colonial leader Amílcar Cabral,https://www.oversightboard.com/decision/fb-33nk66fg/,"June 27, 2023",2023,,TopicFreedom of expressionCommunity StandardDangerous individuals and organizations,Dangerous individuals and organizations,Overturned,"Guinea-Bissau, Senegal","A user appealed Meta’s decision to remove a Facebook post that consisted of a poem referencing the Bissau-Guinean anti-colonial leader Amílcar Cabral.
After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post.",4594,705,"Overturned June 27, 2023 A user appealed Meta’s decision to remove a Facebook post that consisted of a poem referencing the Bissau-Guinean anti-colonial leader Amílcar Cabral. After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post. Summary Topic Freedom of expression Community Standard Dangerous individuals and organizations Location Guinea-Bissau, Senegal Platform Facebook This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas of potential improvement in its policy enforcement. Case summary A user appealed Meta’s decision to remove a Facebook post that consisted of a poem referencing the Bissau-Guinean anti-colonial leader Amílcar Cabral. After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post. Case description and background In January 2023, a Facebook user posted content in French commemorating the passing of Amílcar Cabral on the anniversary of his assassination in 1973. Cabral is world-renowned as a Pan-African thinker who led an ultimately successful revolutionary movement against Portuguese colonial rule in Guinea-Bissau and Cabo Verde. The post contained a poem praising Cabral’s contributions to the anti-colonial struggle and its impact across the African continent. The user claimed that the poem was written in 1973 and published in an African-Asian journal. Meta originally removed the post from Facebook, citing its Dangerous Organizations and Individuals (DOI) policy, under which the company removes content that “praises,” “substantively supports,” or “represents” individuals and organizations it designates as dangerous. In their appeal to the Board, the user stated that the poem is decades old and was posted to celebrate Amílcar Cabral. After the Board brought this case to Meta’s attention, the company determined that its removal was incorrect and restored the content to the platform. The company told the Board that the Bissau-Guinean leader Amílcar Cabral is not a designated individual in its DOI policy but could be mistakenly associated with another person who is designated. The post’s reference to the 1973 assassination indicates who the poster intended to reference. As a result of its review in this case, Meta said it improved its enforcement practice “to avoid false positive removals of content praising the non-designated individual Amílcar Cabral.” Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3).
The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Case significance The case highlights an error that can occur in Meta’s enforcement of its DOI policy. Across four previous decisions, the Board has issued 13 specific recommendations urging Meta to clarify this policy and its enforcement (“ Mention of the Taliban in news reporting ,” in September 2022, “ Shared Al Jazeera post ” in September 2021, “ Öcalan’s isolation ” in July 2021, and “ Nazi quote ” January 2021). Meta has implemented or reported progress on 10 of those recommendations, while it declined to implement three. Further progress by Meta in updating its DOI policy and improving related moderation systems may help to decrease the error rate of content moderation. In the case of Amílcar Cabral, the Board’s prompting led Meta to update its enforcement practice to avoid mistaken removals of content naming him, which will apply across the platform to posts similar to this one. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-3vis8b1r,Threat of Violence Against the Rohingya People,https://www.oversightboard.com/decision/fb-3vis8b1r/,"June 4, 2024",2024,,"TopicMarginalized communities, Race and ethnicity, War and conflictCommunity StandardViolence and incitement",Violence and incitement,Overturned,Myanmar,A user appealed Meta’s decision to leave up a comment under a Facebook post claiming the Rohingya people cause disturbances and are “tricksters.” The comment also called for the implementation of control measures against them.,5091,781,"Overturned June 4, 2024 A user appealed Meta’s decision to leave up a comment under a Facebook post claiming the Rohingya people cause disturbances and are “tricksters.” The comment also called for the implementation of control measures against them. Summary Topic Marginalized communities, Race and ethnicity, War and conflict Community Standard Violence and incitement Location Myanmar Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to leave up a comment under a Facebook post claiming the Rohingya people cause disturbances and are “tricksters.” The comment also called for the implementation of control measures against them as well as for their “total erasure.” This case highlights a recurring issue in the under-enforcement of the company’s Violence and Incitement policy, specifically regarding threats against vulnerable groups. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the post. About the Case In January 2024, a Facebook user commented on a post about the Rohingya people in Myanmar. The comment included a caption above an image of a pig defecating. 
In the caption, the user writes that “this group” (in reference to the Rohingya) are “tricksters” who “continue to cause various social problems.” The caption argues that the Myanmar government has taken the right course of action in “curbing” the Rohingya and calls for their “absolute erasure from the face of the Earth” for the sake of “national security and well-being.” In their statement to the Board, the user who appealed wrote that they live in Myanmar and expressed frustration at Meta’s lack of action against comments calling for genocide, such as the one in this case. They explained how they have witnessed first-hand how Meta’s inability to effectively moderate hate speech against the Rohingya people has led to offline violence, and how the Rohingya people are languishing in refugee camps. According to Meta’s Violence and Incitement policy, the company prohibits “threats of violence that could lead to death (or other forms of high-severity violence).” After the user appealed to Meta, the company initially left the content on the platform. After the Board brought this case to Meta’s attention, the company determined the comment “advocates lethal violence through ‘erasing’ Rohingya people from the Earth” and therefore violates the Violence and Incitement Community Standard. The company then removed the content from the platform. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case highlights issues in Meta’s repeated under-enforcement of violent and incendiary rhetoric against the Rohingya people. This is a well-known and recurring problem: in 2018, Meta commissioned an independent human rights assessment to ascertain the degree to which the company played a role in exacerbating disinformation campaigns and prejudice against the Rohingya. Meta’s inability to moderate content that endorses genocide and promotes ethnic cleansing of the marginalized Rohingya population has been documented by other civil society groups, such as Amnesty International in a report detailing Meta’s role in the atrocities committed against the community. In a previous decision, the Board recommended that “Meta should rewrite [its] value of ‘safety’ to reflect that online speech may pose risk to the physical security of persons and the right to life, in addition to the risks of intimidation, exclusion and silencing” ( Alleged Crimes in Raya Kobo , recommendation no. 1). Meta has implemented this recommendation in part. The Board urges Meta to improve its detection of, and enforcement against, speech that calls for violence against the Rohingya people. Decision The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention.
Return to Case Decisions and Policy Advisory Opinions" fb-4294t386,Fruit juice diet,https://www.oversightboard.com/decision/fb-4294t386/,"October 30, 2023",2023,,"TopicChildren / Children's rights, HealthCommunity StandardSuicide and self injury","Policies and TopicsTopicChildren / Children's rights, HealthCommunity StandardSuicide and self injury",Upheld,Italy,The Oversight Board has upheld Meta’s decisions to keep up two posts in which a woman shares her first-hand experience of a fruit juice-only diet.,40570,6168,"Upheld October 30, 2023 The Oversight Board has upheld Meta’s decisions to keep up two posts in which a woman shares her first-hand experience of a fruit juice-only diet. Standard Topic Children / Children's rights, Health Community Standard Suicide and self injury Location Italy Platform Facebook Fruit Juice Diet Public Comments Appendix The Oversight Board has upheld Meta’s decisions to keep up two posts in which a woman shares her first-hand experience of a fruit juice-only diet. The Board agrees that neither violates Facebook’s Suicide and Self-Injury Community Standard because they do not “provide instructions for drastic and unhealthy weight loss,” nor do they “promote” or “encourage” eating disorders. However, since both pages involved in these two cases were part of Meta’s Partner Monetization Program, the Board recommends that the company restrict “extreme and harmful diet-related content” in its Content Monetization policies. About the cases Between late 2022 and early 2023, two videos were posted to the same Facebook page, described as featuring content on life, culture and food in Thailand. In both, a woman is interviewed by a man about her experience of following a diet consisting only of fruit juice. The conversations take place in Italian. In the first video, the woman says she has experienced increased mental focus, improved skin and bowel movement, happiness and a “feeling of lightness” since starting the diet, while she also shares that she previously suffered from skin problems and swollen legs. She brings up the issue of anorexia but states her weight has normalized, after she initially lost more than 10 kilograms (22 pounds) due to her dietary changes. Around five months later, the man interviews the woman again in a second video, asking how she feels almost a year into observing this fruit juice-only diet. She responds by saying she looks young for her age, that she has not lost any more weight except for “four kilos of impurities,” and she encourages him to try the diet. She also states she will become a “fruitarian” upon breaking her fast, but that she is thinking about starting a “pranic journey,” which, according to her, means living “on energy” in place of eating or drinking regularly. Between them, the posts were viewed more than 2,000,000 times and received over 15,000 comments. The videos share details of the woman’s Facebook page, which experienced a significant increase in interactions following the second post. After both posts were reported multiple times for violating Facebook’s Suicide and Self-Injury Community Standard, and following human review that assessed the content as non-violating, they remained on Facebook. A separate user in each case then appealed Meta’s decision to the Board. Both the content creator’s Facebook page on which the two videos were posted and the Facebook page of the woman shown in the videos are part of Meta’s Partner Monetization Program.
This means the content creator and presumably the woman being interviewed earn money from posts on their pages, when Meta displays ads on their content. For this to happen, the pages would have passed an eligibility check and the content would have had to comply with both Meta’s Community Standards and its Content Monetization policies. Within its Content Monetization policies, Meta prohibits certain categories from being monetized on its platforms, even if they do not violate the Community Standards. Key findings The Board finds that neither of these posts violate the Suicide and Self-Injury Community Standard because they do not provide “instructions for drastic and unhealthy weight loss when shared together with terms associated with eating disorders,” and do not “promote, encourage, coordinate, or provide instructions for eating disorders.” While the Board notes a fruit juice-only diet can cover eating practices with different health consequences, depending on its duration and intensity, the videos did not include any eating disorder signal or reference in the sense required to violate Meta’s rules. Even the woman’s passing mention of a diet-related “pranic journey” – which the Board understands to be an extreme “breatharian” diet, considered medically dangerous by experts – was descriptive in nature, without any mention of weight. While Meta’s platforms should continue to be spaces in which users can share their lifestyle and diet experiences, the Board equally recognizes that content permissible under the Suicide and Self-Injury Community Standard may contribute to harm, even if it does not meet the threshold for removal. These harms could be particularly severe for some users, with adolescents, especially adolescent women and girls, vulnerable to developing an eating disorder. In this case, the Board finds the content in these videos promotes eating practices that may be dangerous in some circumstances. The Board also notes that despite the generally broad scope of Meta’s Content Monetization policies, content relating to eating practices, including extreme and harmful diet-related content, is not subject to reduced or restricted monetization. As such, the Board agrees that both videos do not violate these policies. However, the Board recommends that Meta should amend these policies to better meet its human rights responsibilities, given the research showing that users, especially adolescents, are vulnerable to harmful diet-related content. The majority of the Board considers the omission of “extreme and harmful diet-related content” as a restricted category in Meta’s Content Monetization policies a conspicuous and concerning one. With health and communications experts noting the ability of influencers to use first-hand narration styles to secure high engagement with their content – coupled with the ubiquity of wellness influencers – it is important that Meta should not provide financial benefits to create this type of content. For a minority of the Board, since demonetization may negatively impact expression on these issues, Meta should explore whether demonetization is the least intrusive means of respecting the rights of vulnerable users. 
For a separate minority of Board Members, demonetization is necessary but not sufficient; they find that Meta should additionally restrict extreme and harmful diet-related content to adults over the age of 18, and explore other measures such as putting a label on the content, to include reliable information on the health risks of eating disorders. The Oversight Board’s decision The Oversight Board upholds Meta’s decisions to leave up the two posts. The Board recommends that Meta: * Case summaries provide an overview of cases and do not have precedential value. 1. Decision summary The Oversight Board upholds Meta’s decisions to leave up two videos posted on the same Facebook page featuring an interview with a woman about her experience observing a fruit juice-only diet. Both videos were monetized with in-stream ads, which means the content creator earned a share of ad revenue and Meta presumably benefited from ad revenue. Meta assessed the videos against the Community Standards and the Content Monetization Policies, as both apply to monetized content, and found the videos violated neither. In upholding Meta’s decisions, the Board finds that both decisions complied with Meta’s Suicide and Self-Injury Community Standard because the videos did not contain any reference to eating disorders, which is required to violate the policy. The Board additionally determines that these decisions complied with Meta’s current Content Monetization Policies. However, the Board also finds that the content in these videos promotes eating practices that may be dangerous in some circumstances. For the majority of the Board, given the potential for harm, particularly for children and teenagers, Meta should remove the financial benefit for creating this type of content. To meet its human rights responsibilities, the majority of the Board recommends that Meta include “extreme and harmful diet-related content” as a restricted category in its Content Monetization Policies. For a minority of the Board, demonetization can negatively impact expression on these issues and Meta should explore whether demonetization is the least intrusive means of respecting the rights of vulnerable users. Some members are concerned that demonetization is a disproportionate restriction. For a separate minority, demonetization is necessary but not sufficient and Meta should additionally restrict extreme and harmful diet-related content to adults over the age of 18, and explore other measures such as putting a label on the content, to include information on the health risks of eating disorders. 2. Case description and background These cases concern two videos posted on the same Facebook page, described as featuring content about life, culture and food in Thailand. The page has about 130,000 followers. In both videos, a man interviews a woman in Italian about her experience observing a fruit juice-only diet, eating nothing solid. Each video shares the woman’s Facebook page at the end. Based on research commissioned by the Board, the woman’s Facebook page has 17,000 followers and features content about the lifestyle of the woman, including her diet. In the first video, posted in late 2022, the woman shares that she used to suffer from skin problems and swollen legs, which she described as huge and heavy. 
She claims that since starting the fruit juice-only diet, she has experienced increased mental focus, improved skin and bowel movement, happiness and a “feeling of lightness.” After saying that some users may comment that this is anorexia, the woman states that dietary changes are often accompanied by sudden weight loss, which explains why she initially lost more than 10 kilograms (22 pounds). The woman states her weight has now “normalized.” This post received 3,000 reactions, about 1,000 comments and over 200,000 views. In the second video, posted in early 2023, the same man asks the same woman, who appears extremely thin, how she feels after observing the fruit juice-only diet for almost a year. The woman laments that she would soon break her fast and eat solid fruit. When asked about her weight, the woman states she has not lost any more weight, but “four kilos of impurities.” The woman shares that she looks young for her age because of the diet and encourages the man to try the diet. She also shares that she would now be a “fruitarian” and wants to begin “prana,” which she describes as not eating or drinking regularly but instead living “only on energy.” This post received about 8,000 reactions, about 14,000 comments and over two million views. According to research commissioned by the Board, the woman’s Facebook page experienced a significant increase in interactions following this second post. Both posts were reported multiple times to Meta for violating its Suicide and Self-Injury Community Standard . A separate user in each case ultimately appealed to the Board. These users’ initial appeals to Meta to remove the content were immediately closed through automation because prior human reviews had found the content non-violating. The users then appealed the decisions further with Meta. In both cases, human reviewers again found both videos non-violating and left them on Facebook. The Facebook page where the videos were posted is part of Meta’s Partner Monetization Program . This means that the content creator earns money from the content they post on their Facebook page when Meta displays ads on this content. This means the page passed an eligibility check and content posted on these pages must comply with both Meta’s Community Standards and its Content Monetization Policies in order to display ads on content. Review of monetized content based on Meta’s Community Standards and Content Monetization Policies happens after the content is posted. According to Meta, both videos complied with the Community Standards and Content Monetization Policies. The Facebook page of the woman featured in the two videos is also part of Meta’s Partner Monetization Program and is also presumably earning money for similar types of content posted on her page. The Board noted the following context in reaching its decision in this case. First, experts consulted by the Board explained that eating disorders are “complex mental health conditions characterized by abnormal eating behaviors, negative body image, and distorted perceptions of food and weight.” According to the American Psychiatric Association , orthorexia is an obsession with “clean” or “pure” foods. As noted by the National Eating Disorders Association , orthorexia is not formally recognized as an eating disorder in the Diagnostic and Statistical Manual 5. While fruitarian and fruit juice-only diets are generally not classified as eating disorders, consuming only fruit juice for an extended period can be symptomatic of disordered eating. 
Fruit juice-only diets pose numerous health risks, a point highlighted by psychologists and dietetics and nutrition scholars who submitted public comments or were consulted by the Board. Impacts can vary depending on duration, the specifics of the diet and individual health. Additionally, some forms of prana diet involve eating, but one form of prana diet involves living only on one’s breath or life energy (also called “inedia” or “breatharianism”). According to experts consulted by the Board, this is considered an extreme form of diet with no legitimate health uses and is medically dangerous. Second, a growing body of research indicates that social media use, particularly time and frequency of use as well as exposure to content promoting idealized body images such as “fitspiration” and “thinspiration” trends, leads to body dissatisfaction, disordered eating and negative mental health outcomes. As explained by a study examining the impact of social media use on teens and young adults, “adolescence is a vulnerable period for the development of body image issues, eating disorders and mental health.” Most eating disorders begin in adolescence . Medical experts and research documenting the rise in number of adolescents admitted into hospitals for eating disorders over the course of the pandemic cited increased time on social media as a contributing factor. Social media recommender algorithms may lead adolescents to more extreme diet-related content encouraging them to internalize thinness as a beauty ideal. Many young people turn to influencers for health and fitness-related advice on social media despite their lack of medical credentials. Many social media influencers provide health-related advice under the guise of “wellness,” connecting external beauty and perceived well-being. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the person who previously reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When the Board identifies cases that raise similar issues, they may be assigned to a panel simultaneously to deliberate together. Binding decisions will be made with respect to each piece of content. The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies This case involves Meta’s Suicide and Self-Injury Community Standard, in addition to the company’s Content Monetization Policies. 
Under the Suicide and Self-Injury Community Standard , Meta defines “self-injury” as the “intentional and direct injuring of the body, including… eating disorders.” According to Meta, the policy allows people to discuss self-injury, including eating disorders, because the company wants to provide a “space where people can share their experiences, raise awareness about these issues, and seek support from one another.” Certain types of eating disorder-related content are prohibited, with two rules most relevant here. First, Meta removes content that “contains instructions for drastic and unhealthy weight loss when shared together with terms associated with eating disorders.” Second, Meta removes content that “promotes, encourages, coordinates, or provides instructions for eating disorders.” Both videos, which featured in-stream ads, were also subject to Meta’s Content Monetization Policies . Within these policies, Meta prohibits certain categories of content from being monetized on its platforms, even though they do not violate the Community Standards. These categories of content may receive reduced monetization or are ineligible for monetization. They include broad areas such as “objectionable content” and “debated social issues,” but none of the multiple examples across categories mention any type of diet or eating practice. The Board’s analysis of the content policies was informed by Meta’s value of “Voice” which the company describes as “paramount,” as well as its value of “Safety.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions The author of the two posts was notified of the Board’s review and provided with an opportunity to submit statements to the Board. The author of the posts did not submit a statement. In their appeals to the Board, the users who reported the case content stated that the content promotes an unhealthy lifestyle and may encourage others, especially teenagers, to do the same. They described the content as “inaccurate” and presenting anorexia “as a good thing,” which can pose health risks to people exposed to the content. 6. Meta’s submissions Meta told the Board that neither post violated the Suicide and Self-Injury rule that prohibits “content that contains instructions for drastic and unhealthy weight loss when shared together with terms associated with eating disorders.” According to Meta, the posts contained no reference to “drastic or unhealthy weight loss” nor to eating disorders. Additionally, Meta noted the woman described her experience “but does not instruct others to make the same choice.” When asked by the Board, Meta responded that neither post violated the policy line prohibiting “content that promotes, encourages, coordinates, or provides instructions for eating disorders,” again because the two videos did not reference any eating disorder. 
Meta explained that it does not want to over-enforce or under-enforce on content “related to body image or health status, as [its] platforms can be an important space for people to share their own experiences and journeys around self-image and body acceptance.” The company therefore requires references to eating disorder signals or terminology for content to violate the policy. Meta stated that the company does not consider fruitarian, fruit juice-only and prana diets to be eating disorders, based on the American Psychiatric Association’s classification of eating disorders as well as the company’s ongoing expert engagement. Meta told the Board that its Safety Policy team regularly engages with experts and advisory groups to learn about eating disorder trends and to update its list of violating eating disorder signals. According to Meta, the Regional Operations team also plays a large role in shaping the eating disorder signals list, providing examples of how proposed terms are used on the platform. Meta’s Content Policy team uses insights from the Regional Operations team’s work to shape the overall policy. The Board asked Meta nine questions in writing. Questions related to how the company defines eating disorders to enforce the policy; the rationale for requiring eating disorder terms to be referenced in the content for it to be considered violating; any financial incentive Meta has regarding the case content; and the process, internal research and stakeholders consulted, if any, for developing the policy. Meta answered all questions. 7. Public comments The Oversight Board received eight public comments, including from dietetics and eating disorders specialists in the United States, Canada and Europe. These comments noted the threats this diet poses to public health and to minors’ physical and mental health. To read public comments submitted for this case, please click here. 8. Oversight Board analysis The Board selected these cases to examine how Meta’s policies and enforcement practices address diet, fitness and eating disorder-related content on Facebook. The Board examined whether this content should be removed by analyzing Meta’s content policies, which include the Facebook Community Standards and Content Monetization Policies, in addition to Meta’s values and human rights responsibilities. 8.1 Compliance with Meta’s content policies I. Content rules The Board finds that these two posts do not violate Meta’s content policies. a. Suicide and Self-Injury Community Standard The Board finds that the content in these cases does not violate Meta’s Suicide and Self-Injury Community Standard. The first relevant rule prohibits “content that contains instructions for drastic and unhealthy weight loss when shared together with terms associated with eating disorders,” and the second relevant rule prohibits content promoting, encouraging, coordinating or providing instructions for eating disorders. Both rules therefore require an eating disorder reference for content to be considered violating. Meta’s internal guidance contains a non-exhaustive list of eating disorder signals that are considered violating. It includes references to recognized eating disorders, slang terms and physical descriptions, with significant focus on hashtags. The Board finds that the videos in these cases do not mention any eating disorder signal or eating disorder reference in the sense required to violate the policy. 
The Board found these to be challenging cases. It notes that a fruit juice-only diet could cover a wide range of eating practices with different health consequences depending on the duration and intensity of the practice. The Board believes there are harmful eating practices that do not meet the threshold to be removed as eating disorders, and both videos fall within that margin. The woman also mentioned a prana diet in the second post, which she described as living “only on energy.” Based on the woman’s description of the diet, the Board understands this would be an extreme “breatharian” diet, which experts consulted by the Board considered medically dangerous. However, the passing reference to prana was not accompanied by any mention of weight and was expressed descriptively, which is not the same as promoting, encouraging or instructing others to engage in the same practice. The Board therefore finds the two posts do not violate Meta’s Suicide and Self-Injury policy. As discussed in the analysis of Meta’s human rights responsibilities below, the Board finds that while the scope of prohibited content subject to removal in this area should be narrow to allow for critical discussion of these topics, Meta should also adopt the least intrusive means to address harmful but non-violating content posted by influential users. b. Content Monetization Policies As the content in these cases featured in-stream ads, the Content Monetization Policies apply. Meta only disclosed this aspect of the case after the Board asked about any financial incentives the company might have in relation to the case content. Before users can monetize content they post on Meta’s platforms, they must abide by the Partner Monetization Policies. This requires both an initial eligibility check on the entity and that each post by the entity complies with both the Community Standards and the Content Monetization Policies. The Content Monetization Policies are distinct from the Community Standards. The Community Standards apply to all content on Meta’s platforms, while the Content Monetization Policies apply only to content that users wish to monetize. Meta prohibits many types of content from being monetized even if the content is otherwise allowed on Meta’s platforms. Within these policies, certain categories of content receive reduced monetization or cannot be monetized altogether. Categories that may be either restricted or ineligible for monetization include content depicting or discussing “debated social issues,” content with “objectionable activity,” “strong language,” and “explicit content” such as “injury, gore, or bodily functions or conditions.” Content ineligible for monetization includes “content that contains medical claims that have been disproven by an expert organization,” with the specific example of anti-vaccination claims. Despite the generally broad scope of the monetization policies, content relating to eating practices, including extreme and harmful diet-related content, is not subject to reduced or restricted monetization. As such, the Board agrees with Meta’s assessment that neither video violates Meta’s Content Monetization Policies. However, the Board recommends below that Meta amend these policies to better meet its human rights responsibilities, given research showing that users, especially children, are vulnerable to harmful diet-related content. 
8.2 Compliance with Meta’s human rights responsibilities The Board finds that Meta’s decisions to allow these posts on Facebook are consistent with the company’s human rights responsibilities. However, the majority of the Board finds that extreme and harmful diet-related content is related to public health harms, particularly for some groups such as children, and that a rights-respecting approach means the company should not incentivize its production and spread by providing financial benefits to influential users who post such content. The Board notes that Meta already acknowledges in its Content Monetization Policies that some “content appropriate for Facebook in general is not necessarily appropriate for monetization.” This means that Meta has decided not to profit from certain types of content even if these do not violate the Community Standards. Examples of such content are “objectionable activity,” “debated social issues,” “strong language,” and “explicit content,” among others. Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression. This right includes the “freedom to seek, receive and impart information and ideas of all kinds.” Access to information is a key part of freedom of expression. Article 12 of the International Covenant on Economic, Social and Cultural Rights guarantees the right to health, including the right to access health-related education and information (ICESCR Art. 12; Committee on Economic, Social and Cultural Rights, General Comment No. 14 (2000), para. 3). Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, para. 41). I. Legality (clarity and accessibility of the rules) The principle of legality requires rules limiting expression to be accessible and clear, both to those enforcing the rules and those impacted by them (General Comment No. 34, para. 25). Rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (ibid.). Applied to platform rules that govern online speech, the UN Special Rapporteur on freedom of expression has said they should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance regarding their enforcement. The Board assessed two rules within Meta’s Suicide and Self-Injury policy: (i) content containing instructions for drastic and unhealthy weight loss when shared together with terms associated with eating disorders; and (ii) content promoting, encouraging, coordinating or providing instructions for eating disorders. 
Both require a reference to eating disorders, and Meta provides its reviewers with internal guidance to make that determination. Although Meta states that this list is “non-exhaustive” and does not focus on a particular format, the examples provided in the list are almost exclusively in hashtag format. Content moderators implementing this policy who refer to the internal guidelines may focus their enforcement and removal on this more explicit type of content. For the Board, this gives the impression that the categories of prohibited content that seem relatively broad in the public standards are, in practice, more limited in scope. Such apparent inconsistency raises serious legality concerns, though, as applied to these posts, the public rules provided sufficient notice to users and the Community Standards and internal rules provided sufficient guidance to content moderators. II. Legitimate aim Under Article 19, para. 3 of the ICCPR, expression may be restricted for a defined and limited list of reasons. In these cases, the Board finds that the Suicide and Self-Injury Community Standard that prohibits eating disorder content serves the legitimate aim of protecting public health and respecting the rights of others to physical and mental health, especially of children. III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” ( General Comment No. 34 , para. 34 ). The Board finds that the current approach in the Suicide and Self-Injury policy to require an eating disorder signal is a proportionate restriction on freedom of expression in order to safeguard public health and respect the right to health of others, while also enabling users to discuss and debate health-related matters on Meta’s platforms. Without this requirement, Meta believes, and the Board concurs, that the policy could be overbroad in scope and could unduly restrict freedom of expression and access to information on matters of public health, including information about unhealthy diets and eating disorders. The Board finds that Meta’s platforms should continue to be a space where users can share their positive and negative experiences regarding specific lifestyles and diets. Importantly, as noted by the experts consulted by the Board, eating disorders are a mental disorder and eating habits and diets can be symptomatic of an eating disorder but not a conclusive indicator of one. The Board therefore finds that removal of the case content would not be consistent with the right to freedom of expression and the right to access, seek and share health information. At the same time, the Board also recognizes that content that is permissible under this policy may contribute to harm, even if the content does not meet the threshold for harm that warrants removal. The harms in this case may be particularly severe for some users, including children. In their appeals to the Board, the reporting users stated that the content promotes an unhealthy lifestyle and may encourage others, especially teenagers, to do the same. They described the content as “inaccurate” and presenting anorexia “as a good thing,” which can pose health risks to people exposed to the content. 
As noted by the public comments and the experts consulted by the Board, adolescents are especially vulnerable to developing an eating disorder, as most eating disorders begin in adolescence. The UN Committee on the Rights of the Child has stated that children’s rights involve freedom from all forms of violence including self-harm, which includes eating disorders (General Comment No. 13 (2011), para. 28). The Committee further highlighted risks related to children being exposed to “actually or potentially harmful advertisements” online (General Comment No. 13 (2011), para. 31). The UNGPs establish that businesses should prevent and mitigate adverse human rights impacts directly linked to their products, services and business (Principle 13). Relatedly, Article 17 of the Convention on the Rights of the Child recognizes the importance of the media for the “social, spiritual and moral well-being and physical and mental health” of children. The UN Committee on the Rights of the Child has indicated that Article 17 “delineates the responsibility of mass media organizations. In the context of health, these can…include promoting health and healthy lifestyles among children… promoting access to information; not producing communication programs and material that are harmful to children and general health [among other responsibilities]” (General Comment No. 15, para. 84). The Board has consistently held that Meta should explore the least intrusive means of addressing harmful content on its platforms. The Board has specifically noted that “developing effective mechanisms to avoid amplifying speech that poses risks” is part of that responsibility (Former President Trump’s suspension case). In this case, the Board recognizes that monetization policies impact users’ freedom of expression as well as other human rights. For the majority of the Board, providing a financial benefit to influential users for producing content that promotes harmful diets incentivizes the creation and amplification of such content. Removing this incentive is within the company’s control and is consistent with Meta’s current approach to other types of content that do not violate the Community Standards but which the company restricts under the Content Monetization Policies. Communications scholars and health experts have emphasized that the ability of influencers to appeal and to persuade is found in their adoption of communication styles that give the perception of being just like regular individuals. As seen in this content, rather than directly call on users to do specific acts, influencers narrate their personal story or show through first-hand experience the supposed benefits of diet and lifestyle changes. This style helps influential users to produce content that will receive high engagement, which is particularly appealing when they seek to monetize content. Considering the ubiquity of wellness influencers on Meta’s platforms, as well as the broad set of content that Meta does not seek to profit from under its Content Monetization Policies, the omission of harmful diet content is conspicuous and concerning. The Board has previously recommended that Meta revise its Branded Content policies to “clarify the meaning of the ‘paid partnership’ label and ensure content reviewers are equipped to enforce Branded Content policies where applicable” (Promoting Ketamine for non-FDA approved treatments case). 
In a similar vein, the majority of the Board recommends in the present case that Meta include “extreme and harmful diet-related content” as a restricted category in its Content Monetization Policies. This decision has elicited varying minority opinions. For one group of the minority, demonetization may negatively impact freedom of expression. For this minority, even assuming that Meta has a responsibility to mitigate the risk of potential indirect harm to vulnerable users through demonetization, this approach could amount to a broad restriction of expression that would diminish the opportunity of users to seek and share information. Demonetization is therefore subject to the requirements of proportionality. Meta should explore whether demonetization is the least intrusive means of ensuring respect for the rights of vulnerable users. For a separate minority, demonetization is necessary but not sufficient; Meta should additionally restrict extreme and harmful diet-related content to adults over the age of 18, and explore other measures such as putting a label on the content with reliable information on the health risks of eating disorders. For these Board Members, given the growing body of research (outlined in section 2 above) showing that social media use and exposure to idealized bodies and to “thinspiration” and “fitspiration” trends lead to body dissatisfaction, disordered eating and a multitude of other negative mental health outcomes, particularly for adolescent women and girls, it is necessary and proportionate for Meta to amend the Suicide and Self-Injury Community Standard. The ubiquity of beauty, diet and fitness-related content on social media, together with recommender algorithms that group and further promote it, makes the risk of such content to young users’ mental and physical health real and severe. Ensuring Meta’s policies address harmful diet-related content is especially necessary given the reality that influential users often frame extreme dietary practices in “wellness” or “clean” eating terms, without ever explicitly referring to an eating disorder. For these Board Members, Meta’s current approach in the Suicide and Self-Injury Community Standard fails to address this reality. Restricting extreme and harmful diet-related content to adults and providing more information to users on potential health effects ensures that the impact on freedom of expression is the least intrusive possible while also addressing the risk of harm to children. 9. Oversight Board decision The Oversight Board upholds Meta’s decisions to leave up both posts on Facebook. 10. Recommendations Content policy 1. To avoid creating financial incentives for influential users to produce harmful content, Meta should restrict extreme and harmful diet-related content in its Content Monetization Policies. The Board will consider this implemented when Meta’s Content Monetization Policies have been updated to include a definition and examples of what constitutes extreme and harmful diet-related content, in the same way that they define and explain other restricted categories under the Content Monetization Policies. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. 
The Board was also assisted by Memetica, an organization that engages in open-source research on social media trends, which also provided analysis." fb-515jve4x,Communal Violence in Indian State of Odisha,https://www.oversightboard.com/decision/fb-515jve4x/,"November 28, 2023",2023,,"Freedom of expression, Religion, Violence","Violence and incitement",Upheld,India,The Oversight Board has upheld Meta’s decision to remove a Facebook post containing a video of communal violence in the Indian state of Odisha.,62208,9721,"Upheld November 28, 2023 The Oversight Board has upheld Meta’s decision to remove a Facebook post containing a video of communal violence in the Indian state of Odisha. Standard Topic Freedom of expression, Religion, Violence Community Standard Violence and incitement Location India Platform Facebook Communal Violence in Indian State of Odisha The Oversight Board has upheld Meta’s decision to remove a Facebook post containing a video of communal violence in the Indian state of Odisha. The Board found that the post violated Meta’s rules on violence and incitement. The majority of the Board also concludes that Meta’s decision to remove all identical videos across its platforms was justified in the specific context of heightened tensions and ongoing violence in the state of Odisha. While the content in this case was not covered by any policy exceptions, the Board urges Meta to ensure that its Violence and Incitement Community Standard allows content that “condemns or raises awareness of violent threats.” About the Case In April 2023, a Facebook user posted a video of an event from the previous day that depicts a religious procession in Sambalpur in the Indian state of Odisha related to the Hindu festival of Hanuman Jayanti. The video caption reads “Sambalpur,” which is a town in Odisha, where communal violence broke out between Hindus and Muslims during the festival. The video shows a procession crowd carrying saffron-colored flags, associated with Hindu nationalism, and chanting “Jai Shri Ram” - which can be translated literally as “Hail Lord Ram” (a Hindu god). In addition to religious contexts where the phrase is used to express devotion to Ram, the expression has been used in some circumstances to promote hostility against minority groups, especially Muslims. The video then zooms into a person standing on the balcony of a building along the route of the procession who is shown throwing a stone at the procession. The crowd then pelts stones towards the building amidst chants of “Jai Shri Ram,” “bhago” (which can be translated as “run”) and “maro maro” (which can be translated as “hit” or “beat”). The content was viewed about 2,000 times and received fewer than 100 comments and reactions. Following the violence that broke out during the religious procession shown in the video, the Odisha state government shut down internet services, blocked social media platforms, and imposed a curfew in several areas of Sambalpur. In the context of the violence that broke out during the procession, shops were reportedly set on fire and a person was killed. 
Shortly after the events depicted in the video, Meta received a request from Odisha law enforcement to remove an identical video, posted by another user with a different caption. Meta found that the post violated the spirit of its Violence and Incitement Community Standard and added the video to a Media Matching Service bank. This locates and flags for possible action content that is identical or nearly identical to previously flagged photos, videos, or text. Meta informed the Board that the Media Matching Service bank was set up to globally remove all instances of the video, regardless of the caption, given the safety risks posed by this content. This blanket removal applied to all identical videos, even if they fell within Meta’s exceptions for awareness raising, condemnation, and news reporting. The Board noted that, given the settings of the Media Matching Service bank, many pieces of content identical to this video have been removed in the months that followed the events in Sambalpur, Odisha. Through the Media Matching Service bank, Meta identified the content at issue in this case and removed it, citing its rules prohibiting “[c]alls for high-severity violence including […] where no target is specified but a symbol represents the target and/or includes a visual of an armament or method that represents violence.” Key Findings The Board finds that the post violated the Violence and Incitement Community Standard which prohibits “content that constitutes a credible threat to public or personal safety.” The majority of the Board finds that given the ongoing violence in Odisha at the time, and the fact that no policy exceptions applied, the content posed a serious and likely risk of furthering violence. A minority of the Board believes that the post could be properly removed under Meta’s Violence and Incitement Community Standard, but for a different reason. As the video depicted a past incident of incitement with no contextual clues indicating that a policy exception should apply, and similar content was being shared with the aim of inciting violence, Meta was justified in removing the content. The majority of the Board concludes that Meta’s decision to remove all identical videos across its platforms regardless of the accompanying caption, was justified in the context of ongoing violence at the time. The majority also finds, however, that such broad enforcement measures should be time-bound. After the situation in Odisha changes and the risk of violence associated with the content is reduced, Meta should reassess its enforcement measures for posts containing the video and apply its policy exceptions as usual. In the future, the Board would welcome approaches that limit such sweeping enforcement measures to a moment in time and to geographic areas which are at heightened risk. Such measures would better address the risk of harm without disproportionally impacting freedom of expression. The minority of the Board finds that Meta’s blanket removal of all posts that included the identical video depicting an incident of incitement, regardless of whether the posts qualified for its awareness raising or condemnation exceptions, was not a proportional response and constituted an undue restriction on expression. While the content in this case was not covered by any policy exceptions, the Board notes that the “awareness raising” exception under the Violence and Incitement Community Standard is still not available in the public-facing language of the policy. 
As such, users are still unaware that otherwise violating content is permitted if it is shared to condemn or raise awareness. This may prevent users from engaging in public interest discussions on Meta’s platforms. The Oversight Board’s Decision The Oversight Board upholds Meta’s decision to remove the content. The Board reiterates recommendations from previous cases that Meta: * Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board upholds Meta’s decision to take down a post of a video on Facebook depicting a scene of communal violence in the state of Odisha in India during the Hanuman Jayanti religious festival. The video shows a procession crowd carrying saffron-colored flags, associated with Hindu nationalism, and chanting “Jai Shri Ram” - which can be translated literally as “Hail Lord Ram” (a Hindu god), and which has been used in some circumstances to promote hostility against minority groups, especially Muslims. The video then zooms into a person standing on the balcony of a building along the route of the procession who is shown throwing a stone at the procession. The crowd then pelts stones towards the building amidst chants of “Jai Shri Ram,” “bhago” (which can be translated as “run”) and “maro maro” (which can be translated as “hit” or “beat”). Meta referred this case to the Board because it illustrates the tensions between Meta’s values of “Voice” and “Safety,” and it requires full analysis of contextual factors and assessment of risks of offline harm posed by the video. The Board finds that the post violated the Violence and Incitement Community Standard. Given the volatile context and ongoing violence in Odisha at the time the content was posted; both the nature of the religious procession and the calls for high-severity violence in the video; and the virality and widespread nature of similar content being posted on the platform, the majority of the Board finds that the content constituted a credible call for violence. The minority of the Board believed that the post could be properly removed under Meta’s Violence and Incitement Community Standard but for a different reason. They did not construe the post as a “credible call for violence” absent any contextual clues regarding the purpose of the posting. Rather, they viewed the post as a form of potential “depicted incitement” (i.e., content depicting a past scene of incitement). The minority concluded that the post could be removed under the Violence and Incitement Community Standard because it satisfied two conditions this minority believes must be met to warrant such a removal: 1) there was contextual evidence that postings of similar content were shared with the aim of inciting violence, and 2) the post contained no contextual clues indicating the applicability of a policy exception such as awareness raising or news reporting. The majority of the Board concludes that considering the challenges of moderating content at scale, Meta’s decision to remove all identical videos across its platforms regardless of the accompanying caption without applying strikes, was justified in the specific context of heightened tensions and ongoing violence in the state of Odisha, in which it was made. The majority also finds, however, that such broad enforcement measures should be time-bound. 
After the local situation at hand changes and the risk of harm associated with the piece of content under analysis by the Board is reduced, Meta should reassess its enforcement measures, and allow for the application of policy exceptions at scale. The minority of the Board finds that Meta’s blanket removal of all posts that included the identical video depicting an incident of incitement regardless of whether the posts qualified for its awareness raising or condemnation exceptions was not a proportional response; constituted an undue restriction on expression; and could place vulnerable persons at risk in the midst of a volatile context. The minority is of the view that the content in question is a depiction of incitement rather than incitement in itself. The minority believes a post depicting incitement should not be taken down where contextual clues point to the applicability of an exception to the Violence and Incitement policy. Such exceptions include content that is shared for purposes of spreading awareness or news reporting. The minority believes that where there are indications that the intent behind a posting of depicted incitement content is not to incite but rather to raise awareness, condemn or report, Meta’s human-rights commitments require that such content remain on the platform. The minority therefore believes that mass removal of posts containing the video in question is an impermissible infringement on users’ free expression. 2. Case Description and Background On April 13, 2023, a Facebook user posted a video of an event from the previous day, April 12, that depicts a religious procession in Sambalpur, in the Indian state of Odisha in the context of the Hindu festival of Hanuman Jayanti. The video caption reads “Sambalpur,” which is a town in Odisha, where communal violence broke out between Hindus and Muslims during the festival. The video shows a procession crowd carrying saffron-colored flags, associated with Hindu nationalism, and chanting “Jai Shri Ram” - which can be translated literally as “Hail Lord Ram” (a Hindu god). In addition to religious contexts where the phrase is used to express devotion to Ram, the expression has been used in some circumstances to promote hostility against minority groups, especially Muslims. Experts consulted by the Board reported that the chant has become “a cry of attack meant to intimidate and threaten those who worship differently.” The video then zooms into a person standing on the balcony of a building along the route of the procession who is shown throwing a stone at the procession. The crowd then pelted stones towards the building amidst chants of “Jai Shri Ram,” “bhago” (which can be translated as “run”) and “maro maro” (which can be translated as “hit” or “beat”). The content was viewed about 2,000 times, received fewer than 100 comments and reactions, and was not shared or reported by anyone. Communal violence, a form of collective violence that involves clashes between communal or ethnic groups defining themselves by their differences of religion, ethnicity, language or race, is reported to be widespread in India. In this context, violence is disproportionally targeting religious minorities, especially Muslims, and is reportedly met with impunity . Public comments received by the Board highlight the widespread nature of communal violence across India. As of 2022, over 2900 instances of communal violence were registered in the country (see also, PC-14046). 
Experts consulted by the Board explained that religious festivals and processions have reportedly been used to intimidate members of minority religious traditions and incite violence against them. In the wake of the violence that broke out during the religious procession and its aftermath, in which shops were set on fire and a person was killed, the Odisha state government shut down internet services, blocked social media platforms, and imposed a curfew in several areas in Sambalpur. The police reportedly made 85 arrests related to the violent events in question. On April 16, Meta received a request from Odisha law enforcement to remove an identical video, posted by another user with a different caption. Meta found that the post violated the spirit of the Violence and Incitement Community Standard and decided to remove it. Thereafter, on April 17, Meta added the video in the post to a Media Matching Service (MMS) bank, which locates and flags for possible further action content that is identical or nearly identical to previously flagged photos, videos or text. However, the user who posted that content deleted it on that same date before Meta could take action on it. Through the MMS bank, Meta then identified the content at issue in this case and removed it, also on April 17, citing its prohibition of “[c]alls for high-severity violence including […] where no target is specified but a symbol represents the target and/or includes a visual of an armament or method that represents violence.” On April 23, the Odisha state government lifted the curfew and restored access to internet services. In July 2023, the state government announced a ban on religious processions in Sambalpur for a year. According to reports, Bharatiya Janata Party (BJP) state leadership criticized the Odisha state government led by the Biju Janata Dal (BJD) party for its failure to maintain law and order and blamed members of minority groups, particularly Muslims, for attacking peaceful religious processions. The BJD, in turn, accused the BJP of trying to inflame religious tensions. Meta explained that the content did not fall under a policy exception as it “was not shared to condemn or raise awareness” since there was no academic or news report context, nor discussion of the author’s experience of being a target of violence. Additionally, Meta noted that the caption neither condemns nor expresses “any kind of negative perspective about the events depicted in the video.” The company highlighted, however, that even if the content had included an awareness raising or condemning caption, Meta would still have removed it “given the significant safety concerns and ongoing risk of Hindu and Muslim communal violence.” Meta also disclosed to the Board that it has configured the MMS bank to remove all instances of the video regardless of the caption accompanying it, even if such a caption made clear that the news reporting and/or awareness raising exceptions were implicated. Meta further explained that the company did not apply strikes to users whose content was removed by the MMS bank “to account for non-violating commentary and strike the right balance between voice and safety.” According to reports, social media platforms have been used to encourage deadly attacks on minority groups amidst rising communal tensions across India. Experts note that there have been coordinated campaigns on social media in India spreading anti-Muslim messages, hate speech or disinformation. 
They also observed that videos about communal violence had been spread in patterns that bore the earmarks of coordination. After violence broke out in Sambalpur, a video from Argus News, a local media outlet in Odisha, was posted on Facebook at least 34 times within 72 hours, often by pages and groups within minutes of each other, claiming that Muslims were behind the attack on the Hanuman Jayanti celebration in Sambalpur. Additionally, the Board notes that given the settings of the MMS bank, many pieces of content identical to this video have been removed in the months that followed the events in Sambalpur, Odisha. Meta referred this case to the Board, stating that it is difficult due to the tensions between Meta’s values of “Voice” and “Safety,” and because of the context required to fully assess and appreciate the risk of harm posed by the video. 3. Oversight Board Authority and Scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions: The previous decisions of the Oversight Board referenced in this decision include: II. Meta’s Content Policies: Violence and Incitement Community Standard The policy rationale for the Violence and Incitement Community Standard explains that it aims to “prevent potential offline harm that may be related to content” on Meta’s platforms and that while Meta “understand[s] that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, [the company] remove[s] language that incites or facilitates serious violence.” The policy prohibits “[t]hreats that could lead to death (and other forms of high-severity violence) ...targeting people or places where threat is defined as” “calls for high-severity violence including content where no target is specified but a symbol represents the target and/or includes a visual of an armament or method that represents violence.” Under this Community Standard Meta “remove[s] content, disable[s] accounts, and work[s] with law enforcement when [Meta] believe[s] there is a genuine risk of physical harm or direct threats to public safety.” Meta also considers the context “in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety.” In assessing whether a threat is credible, Meta considers additional information such as the “person’s public visibility and the risks to their physical safety.” Spirit of the policy allowance As the Board discussed in the “ Sri Lanka Pharmaceuticals ” case, Meta may apply a “spirit of the policy” allowance to content when the policy rationale (the text that introduces each Community Standard) and Meta’s values demand a different outcome than a strict reading of the rules (i.e., the rules set out in the “do not post” and in the list of prohibited content). 
The Board’s analysis of content policies was informed by Meta’s value of “Voice,” which the company describes as “paramount,” and its value of “Safety.” III. Meta’s Human-Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human-rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human-rights responsibilities in this case was informed by the following international standards: 5. User Submissions The author of the post was notified of the Board’s review and provided with an opportunity to submit a statement to the Board. The user did not submit a statement. 6. Meta’s Submissions When referring this case to the Board, Meta stated that it is difficult due to the tensions between Meta’s values of “Voice” and “Safety,” and because of the context required to fully assess and appreciate the risk of harm posed by the video. Meta stated that this case is significant because of the communal clashes between Hindu and Muslim communities during the Hanuman Jayanti religious festival in Odisha. Meta explained that the originally escalated content – a post with a video identical to the one under analysis by the Board, but with a different caption – violated the “spirit” of the Violence and Incitement policy, despite the fact that it contained an awareness raising caption, because: (1) it “raised significant safety concerns that were flagged by law enforcement,” which Meta confirmed through independent analysis; (2) it was going viral; and (3) it triggered a significant number of violating comments. Meta then configured an MMS bank to remove all instances of the video regardless of the caption, which included the video ultimately referred to the Board, given the safety risks posed by this content. In reaching this decision, Meta was able to independently corroborate the concerns raised by law enforcement based on feedback from Meta’s local public policy and safety teams, as well as local news coverage and feedback from other internal teams. In making its decision, the company considered: (1) the nature of the threat; (2) the history of violence between Hindu and Muslim communities in India; and (3) the risk of continuing violence in Odisha in the days leading up to the Hanuman Jayanti religious festival. Additionally, Meta stated that news coverage and local police reports reinforced the conclusion that this video could contribute to a risk of communal violence and retaliation. Meta also explained that the content in this case violated the Violence and Incitement policy, since it includes a call for high-severity violence, as the video shows stones or bricks being thrown into a crowd and the crowd calling on others to “hit” or “beat” someone in response. Moreover, while Meta acknowledges that the target is not expressly identified, viewers can clearly see stones being thrown towards the building and the individual on the balcony, which Meta considers a visual depiction of a method of violence directed towards a target. In response to the Board’s questions, Meta explained that under the letter of the Violence and Incitement policy, otherwise violating content may be allowed on the platform if the content is shared in a condemning or an awareness raising context. 
However, as one of the main purposes of the Violence and Incitement policy is to “prevent potential offline harm,” in this case, Meta determined that the safety concern that the originally escalated content could contribute to a risk of further Hindu and Muslim communal violence merited a spirit of the policy call to remove it (and all other instances of the video on their platforms), irrespective of the content’s caption. Meta also determined the content did not qualify for a newsworthiness allowance, as the risk of harm outweighed its public interest value. According to Meta, the risk of harm was high for several reasons. First, the content highlights ongoing religious and political tensions between Hindus and Muslims that regularly result in violence across India. Moreover, localized incidents of this type of communal and religious violence have the potential to trigger clashes elsewhere and spread quickly beyond the initial location. Meta’s local public policy and safety teams were also concerned about the risk of recurring violence in Odisha once the curfew and internet suspension were lifted and more people could view the video. Finally, local law enforcement’s identification of the content as likely to contribute to further imminent violence corroborated Meta’s concerns. Meta acknowledged there may be value associated with attempts to notify others of impending violence and current events. However, in this case, Meta found that the risk of harm outweighed the public interest. Meta noted that at the time the content was removed, the post was more than four days old and its value as a real-time warning had diminished. Meta underlined that the post had a neutral caption, which did not have a higher informational value. Meta also mentioned that the caption didn’t lessen the risk of the content inciting violence. According to Meta, there was widespread local and national news coverage of the underlying events in this case, which diminished the informative value of this particular post. Meta also informed the Board that “no action short of removing the content could adequately address the potential risks associated with sharing this content.” In response to the Board’s questions, Meta noted that “in general, strikes are applied at scale for all Violence and Incitement policy violations.” But, on escalation, Meta can decide not to apply strikes based on exceptional circumstances including where the content was posted in an awareness-raising context, or the content seeks to condemn an issue of public importance. Meta explained that the company did not apply a strike to content removed through the MMS bank mentioned above to “effectively balance voice and safety and to account for the fact that some content the bank removed would not have violated the letter of the policy.” As previously explained in this section, Meta’s decision to remove the originally reported content was based on the “spirit of the Violence and Incitement policy.” Meta added that as MMS banks were involved, there was no opportunity to review each piece of content individually as would be done at scale. Therefore, Meta did not apply strikes in order to not further penalize users who posted content that did not violate the letter of the policy. In response to the Board’s questions on government requests, Meta mentioned the information provided in its Transparency Center . 
Meta explained that, when a formal report based on a violation of local law is received from a government or local law enforcement, it is first reviewed against Meta’s Community Standards, even if it includes requests to remove or restrict content for violating local laws. If Meta determines that the content violates its policies, it is removed. However, if not, then Meta conducts a legal review to confirm whether the report is valid and performs human rights due diligence consistent with Meta’s Corporate Human Rights Policy. The Board asked Meta 16 questions in writing. Questions related to Meta’s processes for government requests for content review, Meta’s usage of MMS banks for at-scale enforcement, and account-level enforcement practices. Meta answered 15 questions and declined to provide a copy of the content review request received from the Odisha state law enforcement in this case. 7. Public Comments The Oversight Board received 88 public comments relevant to this case: 31 of the comments were submitted from Asia Pacific and Oceania, 42 from Central and South Asia, eight from Europe, one from Latin America and the Caribbean, one from the Middle East and North Africa, and five from the United States and Canada. This total includes 32 public comments that were either duplicates, were submitted without consent to publish or were submitted with consent to publish, but did not meet the Board’s conditions for publication. Public comments can be submitted to the Board with or without consent to publish, and with or without consent to attribute. The submissions covered the following themes: social and political context in India, particularly with regard to different ethnic and religious groups; relevant government policies and treatment of different ethnic and religious groups; the role of social media platforms, particularly Meta platforms, in India; whether content depicting communal violence in Odisha was likely to incite offline violence; how social media companies should treat government requests to review and/or remove content; importance of transparency reporting, especially with regard to government requests; the role of media and communications in the increase of violence and discrimination in India; importance of analyzing contextual cues and offline signals when assessing how likely a piece of content is to incite offline violence; concerns around coordinated online disinformation campaigns aimed at spreading hate against specific ethnic and religious groups. To read public comments submitted for this case, please click here. The Board also filed Right to Information requests with several State and Central Indian authorities. The responses received were limited to information about the local context at the time the content under review in this case was posted and prohibitory measures in Sambalpur, Odisha. 
Finally, the case allows the Board to examine Meta’s compliance with its human-rights responsibilities in crisis and conflict situations more generally. 8.1 Compliance With Meta’s Content Policies I. Content Rules Violence and Incitement The Board finds that the post violated the Violence and Incitement Community Standard, under which Meta removes “content that constitutes a credible threat to public or personal safety.” In particular, the policy prohibits “[t]hreats that could lead to death (and other forms of high-severity violence) ...targeting people or places where threat is defined as” “[c]alls for high-severity violence.” Under this policy, content containing calls to violence is considered to be violating when it contains a credible threat. In determining whether a threat is credible, Meta considers the language and context to distinguish threats from casual statements. The majority found the following factors relevant: the volatile context and ongoing violence in Odisha at the time the content was posted; the nature of the religious procession; the calls for high-severity violence in the video; and the virality and widespread nature of similar content being posted on the platform (as outlined in Section 2 above). Based on these factors, the majority of the Board finds that the content constituted a credible call for violence. The content in this case depicts a scene of violence in which a crowd in the religious procession calls for people to throw stones/bricks (“high-severity violence”) against an unidentified person standing on the balcony of the building seen in the background (“target”). Meta includes under the definition of “target” provided to content reviewers any “person,” including anonymous persons, defined as “a real person that is not identified by name or imagery.” Meta defines “high-severity violence” as “any violence that is likely to be lethal.” Meta instructs its content reviewers to “consider a threat as high severity” if they are unsure “whether a threat is high or mid severity.” Given that all the requirements are met, the majority of the Board finds that the content violates the relevant policy line of the Violence and Incitement Community Standard. Contextual factors are significant in this case. Stone pelting incidents have been widespread and organized during processions and have been observed to trigger Hindu-Muslim violence (see, e.g., PC-14070), especially when Hindu and Muslim religious festivals overlap. As noted in Section 2 above, these processions have been reported to display symbols associated with Hindu nationalism (e.g., saffron-colored flags) and to be accompanied by coded calls for violence (the chanting of “Jai Shri Ram”) against minority groups, particularly Muslims. Moreover, the Board is aware that social media platforms – and specifically the sharing of videos that depict acts of incitement – are used, in this context, to mobilize and incite more widespread violence, especially through “live” and video posts (Id.) akin to the one at issue in this case. The risk of high-severity violence was heightened in this case as the rally and the instigated violence resulted in a fatality, injuries, and property damage, as highlighted under Section 2 above. Thus, the content was likely to further high-severity violence. Despite the government-imposed internet shutdown in Odisha, the Board takes note of the fact that many postings of the same video have been removed from Meta’s platforms, given the MMS bank’s settings. 
Interestingly, Meta informed the Board that the originally escalated video flagged by Odisha law enforcement “was going viral” when it was reviewed and included “a significant number of violating comments.” As noted in Section 2 above, there have been reports of coordinated campaigns aimed at spreading anti-Muslim disinformation and hate speech. In the implementation guidelines to its content reviewers, Meta allows “violating content if it is shared in a condemning or raising awareness context.” Meta defines awareness raising context as “content that clearly seeks to inform and educate others about a specific topic or issue; or content that speaks to one’s experience of being a target of a threat or violence. This might include academic and media reports.” Meta told the Board that “these allowances are designed to limit the spread of content that incites violence and could have consequences offline while still allowing space for counter-speech that is not supportive but is intended to educate or warn people about threats made by third parties.” The Board notes that while the user shared the content shortly after violence broke out in Sambalpur, Odisha, it was accompanied by a neutral caption (“Sambalpur” – the name of the town where the violent events took place). Given the neutral caption and the lack of contextual cues pointing in a different direction, the Board concludes that the content did not “clearly seek to inform and educate others about a specific topic or issue” or “speak to one’s experience of being a target of a threat or violence.” The Board finds that the content as posted did not fall under the awareness-raising exception to the Violence and Incitement Community Standard. As discussed in Section 8.2 below, the Board considered that the risk of harm outweighed the public interest value of the post. Therefore, the newsworthiness allowance should not be applied in this case. The majority of the Board therefore concludes that, given the online and offline context surrounding the posting of the content, the heightened tensions and violence that were still ongoing in Sambalpur, Odisha, at the time of the posting, and the lack of any indication that a policy exception applied, the content posed a serious and likely risk of furthering violence, constituting a credible threat or call for violence against religious communities engaging in confrontation in Sambalpur. Thus, its removal is consistent with Meta’s Violence and Incitement policy. In contrast to the majority, the minority could not identify any contextual indications supporting the belief that the reposting of the video depicting a scene of a motorcycle procession in Odisha during the Hanuman Jayanti religious festival “constituted a credible call for violence.” The minority notes that there is no evidence to support the assertion that the user was issuing or endorsing such calls as voiced in the video. To interpret a post of this nature, without more, as a “credible call for violence” is a standard that could be applied to prohibit the reposting of virtually any scene depicting incitement, no matter the aim or purpose of such a post. However, the minority believes the post in this case could be properly removed under Meta’s Violence and Incitement Community Standard for a different reason. The minority notes that the Violence and Incitement Community Standard is silent on whether posts of “depicted incitement” are banned.
In the view of the minority, “depicted incitement,” which constitutes the repetition, replaying, recounting or other depiction of past expression (e.g., the posting of a video, news story, audio clip or other content), cannot properly be considered a form of incitement in itself. “Depicted incitement” differs materially from original incitement, namely expression conveyed with the intent and result of inciting harm (e.g., a video exhorting listeners to commit vandalism or a written post encouraging revenge attacks). Posts involving depicted incitement may be shared in order to raise awareness, debate recent events, condemn or analyze, and must not be construed to constitute incitement unless specific conditions are met. Whereas the Violence and Incitement Community Standard explicitly bans depictions of past acts of kidnapping, it does not address depictions of past acts of incitement. One could interpret the Standard as not covering depictions of past incitement. The minority, however, concludes that the Violence and Incitement policy may properly be applied to “depicted incitement” when either of the following conditions is met: 1) the posting of depicted incitement evinces a clear intent to incite; or 2) the posting a) contains no contextual clues indicating the applicability of a policy exception such as awareness raising or news reporting; and b) there is evidence that postings of similar content are shared with the aim of inciting violence or result in violence. The conditions spelled out in (2) were met in this case, thus rendering the content removal permissible. The minority of the Board believes it would be important for the Violence and Incitement Community Standard to be clarified to state that the policy applies not only to content posted to incite violence, but also to “depicted incitement,” namely posts merely sharing content depicting past incitement under the above-mentioned conditions. II. Enforcement Action Meta employs MMS banks to locate content that is identical or nearly identical to previously flagged photos, videos, and text. These banks are able to match users’ posts with content previously flagged as violating by Meta’s internal teams. Meta explained that MMS banks can be configured to take different actions once they identify banked content. In this case, Meta informed the Board that the MMS bank was set up to globally remove all instances of the video regardless of the caption, given the safety risks posed by this content. In other words, the blanket removal applied to all identical videos even if they fell within Meta’s exceptions for awareness raising, condemnation, and news reporting. Meta also mentioned that this particular MMS bank was “set up to take action without applying a strike.” The Board highlights the significant impact Meta’s enforcement action has on users posting identical content for awareness raising and condemnatory purposes. Meta informed the Board that the company’s decision to remove all instances of the video was not time-limited (nor limited to certain geographic locations), and that there are no current plans to roll back the enforcement. The Board addresses Meta’s enforcement action in more detail below, under Section 8.2 in the context of the “Necessity and Proportionality” analysis. 8.2 Compliance With Meta’s Human-Rights Responsibilities Freedom of Expression (Article 19 ICCPR) Article 19, para.
2, of the ICCPR provides for broad protection of “the expression and receipt of communications of every form of idea and opinion capable of transmission to others,” including about “political discourse,” “religious discourse” and “journalism,” as well as expression that people may find “deeply offensive” (General Comment No. 34 (2011), para. 11). The right to freedom of expression includes the right to access information (General Comment No. 34 (2011), paras. 18-19). Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” This three-part test has been proposed by the UN Special Rapporteur on freedom of expression as a framework to guide platforms’ content moderation practices (A/HRC/38/35). The Board uses this framework to interpret Meta’s voluntary human-rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, para. 41). In this case, the Board applied the three-part test not only to Meta’s decision to remove the content at issue, but also to the company’s decision to automatically remove videos identical to the one under analysis by the Board, regardless of the accompanying caption. I. Legality (Clarity and Accessibility of the Rules) The principle of legality under international human rights law requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, para. 25). Rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (Ibid.). Applied to rules that govern online speech, the UN Special Rapporteur on freedom of expression has said they should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance on their enforcement. The Board finds that, as applied to the facts of this case, Meta’s prohibition of content calling for high-severity violence against unspecified targets, as well as the conditions under which the prohibition is triggered, are sufficiently clear. The Board also notes that the “awareness raising” exception under the Violence and Incitement Community Standard is still not available in the public-facing language of the policy. In other words, users are still unaware that otherwise violating content is permitted if it is shared in a condemning or raising awareness context, which may prevent users from initiating or engaging in public interest discussions on Meta’s platforms. Therefore, the Board reiterates its recommendation no.
1 in the “Russian Poem” case, in which the Board urged Meta to add to the public-facing language of its Violence and Incitement Community Standard that the company interprets the policy to allow content containing statements with “neutral reference to a potential outcome of an action or an advisory warning,” and content that “condemns or raises awareness of violent threats.” Finally, the Board notes that Meta’s decision to remove all identical videos regardless of accompanying caption is based on the “spirit of the policy” allowance, which is not clear and accessible to users, thereby triggering serious concerns under the legality test. In this regard, the Board’s minority further views Meta’s own reference to mass removals being justified by the “spirit” of the Violence and Incitement policy as a tacit admission that the policy itself as written does not provide for such broad removals. The company’s decision not to apply strikes against users on the basis of the removed content further evinces recognition by Meta that the posting of such content cannot fairly be construed as a policy violation. These factors reinforce the minority’s conclusion that the company, virtually by its own admission, has failed to meet the legality test in relation to its broader enforcement action in this case. The Board reiterates its recommendation no. 1 in the “Sri Lanka pharmaceuticals” case, in which the Board urged Meta to provide more clarity to users and explain on the landing page of the Community Standards, in the same way the company does with the newsworthiness allowance, that allowances to the Community Standards may be made when their rationale, and Meta’s values, demand a different outcome than a strict reading of the rules. The Board also recommended that Meta include a link to a Transparency Center page that provides information about the “spirit of the policy” allowance. The Board believes that the implementation of this recommendation will address the issues of concern in relation to the clarity and accessibility of Meta’s broader enforcement approach in this case. While the Violence and Incitement policy does not specify whether “depicted incitement” is prohibited, the minority of the Board believes that such a prohibition – under limited conditions – may be inferred from the current policy but should be made explicit. The minority notes that the Violence and Incitement policy should clearly state the circumstances under which it applies to content merely depicting incitement (“depicted incitement”). The minority considers that the policy and its purposes, as applied to this case, are sufficiently clear to satisfy the legality requirement. II. Legitimate Aim Any restriction on freedom of expression should also pursue a “legitimate aim.” The Violence and Incitement policy aims to “prevent potential offline harm” and removes content that poses “a genuine risk of physical harm or direct threats to public safety.” Prohibiting calls for violence on the platform to ensure people’s safety constitutes a legitimate aim under Article 19, para. 3, as it protects the “rights of others” to life (Article 6, ICCPR) and freedom of religion or belief (Article 18, ICCPR). III.
Necessity and Proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). When analyzing the risks posed by violent content, the Board is guided by the six-factor test described in the Rabat Plan of Action, which addresses advocacy of national, racial or religious hatred that constitutes incitement to hostility, discrimination or violence. Based on an assessment of the relevant factors, especially the context, content and form, as well as the likelihood and imminence of harm, further described below, the Board finds that removing the post in question is consistent with Meta’s human-rights responsibilities, as the post posed an imminent and likely risk of harm. The video shows a scene of violence during a religious procession between a person standing on a nearby building and the people in the rally, with the latter chanting “Jai Shri Ram.” The Board takes note of the expert reports, discussed under Section 2 above, that “Jai Shri Ram” – which can be literally translated as “Hail Lord Ram” (a Hindu god) – has been used in religious processions such as those depicted in the video as a coded expression to promote hostility against minority groups, especially Muslims. The user posted the content one day after violence broke out in Sambalpur at a moment when the situation was still volatile. The Board also notes, as highlighted under Section 2 above, that this religious rally in Sambalpur, Odisha, led to violence and a fatality, and these events were followed by arrests and internet shutdowns. The Board is aware of the relationship between religious processions and communal violence and highlights that stone pelting during processions is reported to be widespread and organized and has been observed to trigger Hindu-Muslim violence (see e.g., PC-14070). Given the online and offline context surrounding the posting of the content, the heightened tensions and violence that were still ongoing in Odisha in the period when the video was posted, and the lack of any indication that a policy exception applied, the Board finds that removing the post under the Violence and Incitement Community Standard was necessary and proportionate. Considering the volatile context in Odisha at the time the post was created, the video posed a serious and likely risk of furthering violence. The Board therefore agrees with Meta’s removal of the video in this case, given the contextual factors and lack of a clear awareness raising purpose (as discussed in Section 8.1 above). The Board also notes that the company added the video from the originally escalated content to an MMS bank configured to remove all similar posts containing that same video regardless of the caption accompanying the posts. That includes posts with awareness raising, condemnation and/or reporting purposes – exceptions to the Violence and Incitement Community Standard. A majority of the Board believes that the challenges of moderating content at scale are very relevant to the assessment of this broader enforcement decision. Meta made this decision to remove content that posed a serious and likely risk of furthering violence in a moment of heightened tension and violence.
In such moments, the timeliness of Meta’s enforcement actions is of the essence. As the Board has previously emphasized, mistakes are inevitable among the hundreds of millions of posts that Meta moderates every month. While mistaken removals of non-violating content (false positives) negatively impact expression, mistakenly leaving up violent threats and incitement (false negatives) presents major safety risks and can suppress the participation of those targeted (see “United States Posts Discussing Abortion” cases). Given the scale of the risks to safety that surrounded the posting of this video, in a period of heightened tensions and ongoing violence in Odisha, Meta’s decision to take down identical videos regardless of any accompanying caption, without applying strikes to penalize users, was necessary and proportionate to address the potential risks of this content being widely shared. In addition to the contextual factors highlighted in Sections 2 and 8.1 above, the majority of the Board notes that many identical videos have been removed from Meta’s platforms due to the MMS bank settings, despite the government-imposed internet shutdown in Sambalpur. In particular, according to Meta’s decision rationale, the originally escalated video “was going viral” and included “a significant number of violating comments.” Under Section 2 above, the Board points out reports highlighting that there are coordinated campaigns in India spreading hate speech and disinformation against Muslims. In the same section, the Board also takes note of reports indicating that videos about communal violence had been spread in patterns that bore the earmarks of coordination. In this regard, the majority notes that, as stated by the Special Rapporteur on freedom of religion or belief, “[s]ocial media platforms are increasingly exploited as spaces for incitement to hatred and violence by civil, political and religious actors.” Relatedly, concerns “about the spread of real and constructed hate against religious minorities have been raised in India” (A/75/385, para. 35). The majority recognizes the history of frequent and widespread violence targeting Muslims, which is reportedly carried out with impunity. The majority acknowledges the challenges Meta faces in removing threats of violence at scale (see “Protest in India Against France” case). When analyzing the difficulties of enforcing Meta’s policies at scale, the Board has previously emphasized that dehumanizing discourse that consists of implicit or explicit discriminatory acts or speech may contribute to atrocities. To forestall such outcomes, Meta can legitimately remove posts from its platforms that encourage violence (see “Knin Cartoon” case). In interpreting the Hate Speech Community Standard, the Board has also considered that, in certain circumstances, moderating content with the objective of addressing cumulative harms caused by hate speech at scale may be consistent with Meta’s human-rights responsibilities. This position holds even when specific pieces of content, seen in isolation, do not appear to directly incite violence or discrimination (see “Depiction of Zwarte Piet” case). For the majority of the Board, the same can be said, given the specific context of this case, in relation to the Violence and Incitement policy. The majority of the Board, however, notes that broad enforcement measures such as Meta’s MMS bank approach should be time-bound.
After the situation in Odisha changes and the risk of violence associated with this piece of content is reduced, Meta should reassess the enforcement measures adopted to moderate posts containing the video added to the MMS bank, to ensure that policy exceptions are applied as usual. In the future, the Board would welcome approaches that limit such sweeping enforcement measures to a particular moment in time and to heightened-risk geographic areas so that such measures are better tailored to address the risk of harm without disproportionately impacting freedom of expression. The minority of the Board, however, does not believe that Meta’s mass blanket removal throughout the world of all identical videos depicting a past incident of incitement, regardless of whether the videos were shared for awareness raising (e.g., by a news outlet) or condemnation, was a proportionate response. That a city or population is experiencing communal violence cannot, in and of itself, constitute grounds for such sweeping restrictions on free expression in the name of avoiding furthering such violence. This is particularly so absent a showing, or even grounds to believe, that such restrictions will have the result of lessening violence. In situations of violent conflict, the imperative of awareness raising, sharing information and preparing communities to react to important events affecting them is paramount. Overly aggressive enforcement runs the risk of leaving vulnerable communities in the dark about unfolding events, creating the potential for rumors and disinformation to spread. Indeed, it is dangerous to assume that voice and safety are necessarily clashing goals and that one must be sacrificed for the other. Rather, they are frequently deeply intertwined: while the spread of content intended to incite may result in increased risks of offline violence, suppressing information can undermine safety, often that of vulnerable populations. Such mass blanket removals also run the risk of disproportionately affecting the speech of particular parties to a conflict, in ways that may heighten tensions and fuel the impetus to violence. Such omnibus removals can place individuals in a situation of being forcibly silenced at a time when they most urgently need to cry out for help or at least bear witness. In situations of violent conflict, there is an urgent need for readily accessible information and dialogue, for which Meta platforms offer a primary venue. A conclusion that situations of violent conflict can, in themselves, justify sweeping restrictions on free expression would be welcome news to authoritarian governments and powerful non-state actors who engage in such violence, and who have an incentive to prevent the world from knowing, or to delay awareness until they have achieved their purposes. The minority further believes that a broad policy of removing all content depicting incitement would interfere with the vital role of news organizations in covering global events, limiting the distribution of their news content on Meta platforms when the events depicted therein included past incitements to violence. The awareness raising potential of the timely dissemination of such information can play an essential role in tempering violence or rallying opposition to it. The minority is also concerned that the blanket takedown of posts depicting incitement could impair efforts to identify and hold accountable those responsible for real-world incitement to violence that occurs off the platform.
Given that the virality and reach of a Meta post occur mostly in the hours and days right after the post is shared, the minority does not believe that, even on a time-bound basis, the blanket prohibition of “depicted incitement” on the platform is compatible with Meta’s values and human-rights commitments. The aggressive policing of content, without regard to the motives and context in which it is posted, constitutes an abdication of Meta’s responsibility to uphold the company’s own foremost commitment to voice, and its international human-rights commitment to protect freedom of expression. The minority is concerned that the majority’s reasoning could be embraced by repressive governments to legitimize self-serving orders of internet shutdowns and other forms of information suppression in the name of preventing what might be termed depictions of incitement, but what amounts to timely, potentially life-saving information about violence toward civilians or minority groups. For the minority, the Board should not readily defer to Meta’s mere assertions of the challenge of “scale” in justifying such sweeping speech bans, particularly in a context where the Odisha state government has shut down the internet, reached out directly to Meta about its content moderation, and placed bans on freedom of assembly for a year. The minority believes that a social media company that operates at a particular scale must ensure the application of its policies at the same scale. In addition, as the UN Special Rapporteur has noted, social media companies have a “range of options short of deletion that may be available . . . in given situations” (A/74/486, para. 51) (noting options such as geoblocking, reducing amplification, warning labels, promoting counter-speech, etc.). The Special Rapporteur has also stated that “just as States should evaluate whether a limitation on speech is the least restrictive approach, so too should companies carry out this kind of evaluation. And, in carrying out the evaluation, companies should bear the burden of publicly demonstrating necessity and proportionality.” (Id., emphasis added.) The Board has previously called on Meta to explain the continuum of options it has at its disposal in achieving legitimate aims and to articulate why the selected one is the least intrusive means (see “Claimed COVID Cure” case). For the minority, information along the lines proposed by the Special Rapporteur would be helpful in assessing whether it is necessary and proportionate to institute a sweeping mass removal of key content during a crisis. Moreover, through such a public dialogue, Meta could explain in more detail to the Board and the public, particularly given its announced achievements with respect to artificial intelligence, its efforts to improve its automated technologies to detect posts that may fall within its own policy exceptions. 9. Oversight Board Decision The Oversight Board upholds Meta’s decision to take down the content. 10. Recommendations The Oversight Board decided not to issue new recommendations in this decision, given the relevance of previous recommendations issued in other cases. Therefore, the Board reiterates the following recommendations, for Meta to follow closely: *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members.
For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world." fb-57spp63y,Reporting on Pakistani Parliament Speech,https://www.oversightboard.com/decision/fb-57spp63y/,"April 4, 2024",2024,,"Elections,Freedom of expression,News events",Violence and incitement,Upheld,Pakistan,"The Oversight Board has upheld Meta’s decision to leave up a post shared by a news outlet in Pakistan that includes a video of a politician giving a speech to the country’s parliament. The Board considers that safeguarding such figurative speech, in the run-up to elections, is fundamental.",52379,7966,"Upheld April 4, 2024 The Oversight Board has upheld Meta’s decision to leave up a post shared by a news outlet in Pakistan that includes a video of a politician giving a speech to the country’s parliament. The Board considers that safeguarding such figurative speech, in the run-up to elections, is fundamental. Topic Elections, Freedom of expression, News events Community Standard Violence and incitement Location Pakistan Platform Facebook Reporting on Pakistani Parliament Speech To read the full decision in Urdu, click here. The Oversight Board has upheld Meta’s decision to leave up a post shared by a news outlet in Pakistan that includes a video of a politician giving a speech to the country’s parliament. The post does not violate the Violence and Incitement Community Standard because it falls under the exception for “awareness raising.” Additionally, the politician’s references to public officials being sacrificed or “hanged” are figurative (non-literal) when considering the whole speech, which seeks to draw attention to Pakistan’s political crisis and lack of accountability among the establishment. In a period of turmoil, ahead of national elections, the Board considers safeguarding such speech to be fundamental. About the Case In May 2023, an independent news outlet in Pakistan posted a video on its Facebook page of a Pakistani politician giving a speech in Urdu to the country’s parliament. The speech references what he describes as an ancient Egyptian “tradition” in which people were sacrificed to control flooding of the Nile River.
The politician uses this reference to express what he thinks should happen in present-day Pakistan, also recalling a previous speech in which he said the country could not heal itself until public officials, including the military, were “hanged.” The politician implicates himself and his colleagues among the officials that need to be sacrificed, saying they are all responsible for what is happening. His speech alludes to the ongoing political crisis, with criticism aimed at the government and military establishment. The post was shared about 20,000 times and had 40,000 reactions. The local news outlet posted the video ahead of national elections that were due to take place in 2023, but were delayed until February 2024. During this time of political turmoil, which saw escalating confrontation between former Prime Minister Imran Khan and the military establishment, the country experienced political protests and growing polarization. There were crackdowns on political opponents, and in Balochistan, the province where this politician’s party is based, state repression was particularly pronounced. Over a three-month period in 2023, Meta’s automated systems identified the post as potentially violating 45 times. Two human reviewers then came to different decisions on the post, one finding it to be non-violating, the other finding that it broke the rules of the Violence and Incitement policy. As the account that shared the content was part of Meta’s cross-check program, the post was marked for an additional level of review. Ultimately, Meta’s policy and subject matter experts found the post to be non-violating. Meta referred the case to the Board because it represents tensions in its values of voice and safety when applied to political speech. Key Findings The Board finds the post does not violate the Violence and Incitement Community Standard because it was shared by a media outlet seeking to inform others and therefore falls under the exception for “awareness raising.” Delivered before parliament in the run-up to elections, the politician’s speech undoubtedly covered matters of public interest, including events in the political and public domain. Shared during a period of national turmoil by a local news outlet, the speech demanded “particularly high” protection. Furthermore, the post’s caption did not endorse or support the politician’s speech; rather, it pointed to the strong reaction the speech generated in parliament. At the time the post was shared in May 2023, the “awareness raising” exception was only included in Meta’s internal guidelines to reviewers, not publicly, but it has since been included in the Community Standards in line with one of the Board’s previous recommendations. The Board also emphasizes the importance of assessing context when applying the Violence and Incitement policy to speech by politicians that could incite violence. In this case, the post, a news report of a politician using figurative speech to comment on the political crisis in Pakistan, contained no credible threat that could lead to death. The comparison between “hanging” officials and the ancient Egyptian myth of sacrifice is clearly metaphorical and political exaggeration, rather than an actual threat. Experts consulted by the Board confirmed that Pakistani politicians commonly use highly charged and provocative language to draw attention to issues they consider important. The politician names no specific targets in his speech; instead, he refers generally to public officials, including himself.
When considered in full, his speech urgently calls for action on accountability among public officials while drawing attention to broader issues, including human rights violations against the people of Balochistan. Therefore, the Board considers that safeguarding such speech, in the run-up to elections, is fundamental. The Oversight Board’s Decision The Oversight Board upholds Meta’s decision to leave up the content. The Board makes no new recommendations but reiterates recommendation no. 1 from the Brazilian General’s Speech decision to ensure that speech with high public interest value in the run-up to elections can be preserved on Meta’s platforms. Specifically, the Board urges Meta to speed up its implementation of a framework “for evaluating the company’s election efforts, including creating and sharing metrics.” This is particularly important given the large number of elections in 2024, including in Global Majority countries. *Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board upholds Meta’s decision to leave up a post shared by a news outlet that includes a video of a politician giving a speech to Pakistan’s parliament, ahead of national elections in the country. The post contains a caption noting the intense reaction the speech evoked in parliament. The speech references what the politician describes as an ancient Egyptian “tradition” of sacrificing people to control the flooding of the River Nile. The politician uses this reference to express what he thinks should happen in present-day Pakistan and as a reminder of when he previously said the country could not heal itself until public officials were “hanged.” His speech was made in the context of significant political turmoil in Pakistan, in the lead-up to elections, and is critical of the government and military establishment. The Board finds the post did not violate the Violence and Incitement policy because it was shared by a media outlet seeking to inform others and therefore falls under the exception for “awareness raising.” The politician’s speech shared by the news outlet covered matters of public interest and was delivered before parliament in the run-up to elections, during a period of national turmoil. The Board also finds that the post’s caption did not endorse or support the politician’s speech; rather, it pointed to the strong reaction the speech generated in parliament. In a period of turmoil, ahead of national elections, the Board considers safeguarding such speech to be fundamental. Additionally, given the context and considering the politician’s speech in full, the Board considers that the relevant statement is figurative, rather than literal. The comparison between “hanging” officials and the ancient Egyptian myth of sacrifice is clearly metaphorical and political exaggeration, rather than an actual threat that could lead to death. The politician names no specific targets in his speech, and he includes himself among those to be sacrificed. The Board concludes that his speech should be understood as an urgent call for action on accountability among public officials while drawing attention to broader social and political issues in Pakistan. 2. Case Description and Background On May 16, 2023, a small, private, Urdu-language local news outlet in Pakistan posted a video on its Facebook page of a Pakistani politician giving a speech to the country’s parliament a day earlier.
The politician’s speech, in Urdu, references what he describes as an ancient Egyptian “tradition” in which people were sacrificed to control flooding of the Nile River. The politician references the “tradition” as part of his opinion on what should happen in present-day Pakistan and says that, in a previous speech, he had stated that Pakistan will not heal itself until different types of public officials, including the military, are “hanged.” The politician then alludes to the ongoing political crisis in Pakistan, referring to issues affecting the country ahead of parliamentary elections, including missing persons in Balochistan and references that are critical of the government and the military establishment. He continues by saying that to end the “flood,” they need to make “sacrifices.” The politician clearly implicates himself and his colleagues among the public officials who need to be “hanged” as a form of sacrifice, saying they are all responsible for what is happening. The post includes a caption and text overlaying the video, also in Urdu, that repeat the politician’s statement about hanging public officials. The caption also mentions the strong reaction the speech generated in parliament. The content has been shared about 20,000 times, has about 3,000 comments and about 40,000 reactions, the majority of which are “likes.” Between June and September 2023, Meta’s automated systems identified the content in this case as potentially violating the Community Standards 45 times, creating reports that sent the content for review. Two of these reports were reviewed by at-scale human reviewers. The first review found the content to be non-violating, while the second determined it violated the Violence and Incitement policy. Because the account that posted the content was part of the cross-check program, the content was marked for secondary review and remained on the platform pending the completion of that process. The content was ultimately escalated to policy and subject matter experts, who determined it did not violate the Violence and Incitement policy. The content was left on the platform. Meta referred the case to the Board because it represents tension in its values of voice and safety when applied to political speech. The speech in this case was made in the context of significant political turmoil in Pakistan, a few days after the arrest of former Prime Minister Imran Khan. In April 2022, Mr. Khan was ousted in a no-confidence vote by Pakistan’s political opposition amid an allegedly escalating confrontation between Mr. Khan and the military establishment. Seeking to regain power, Mr. Khan and his party sought to bring forward parliamentary elections, as the National Assembly mandate was originally scheduled to conclude in August 2023. On May 9, 2023, Imran Khan was arrested on corruption charges, for which he was later convicted and sentenced to several years in prison – a move some saw as an attempt to block him from participating in the parliamentary elections. In August 2023, the president dissolved the National Assembly, setting the stage for upcoming general elections, constitutionally required to be held within 90 days of dissolution, in November. An interim caretaker government took over, and in November, Pakistan’s election oversight body postponed elections to February 8, 2024, citing the need for redrawn constituency maps. This fueled political uncertainty surrounding the elections and extended the tenure of the interim governments appointed since Mr. Khan’s ousting.
In December 2023, Meta also publicly reported that the Pakistan Telecommunication Authority had requested that access be restricted to a post criticizing the military establishment. Mr. Khan’s arrest galvanized massive political protests throughout the country and unprecedented attacks upon military buildings and public and private property, events that created the impetus for the politician’s speech. The UN reported that at least eight people died, around 1,000 people were arrested and hundreds were injured during clashes with security forces. The UN Secretary-General, António Guterres, called for an end to violence. Independent media has reported that thousands of the former prime minister’s supporters, party workers and members of his political party have been arrested since May 2023. Additionally, Pakistan’s telecommunications authorities reportedly shut off access to mobile internet and social media for days during the violent protests, with journalists attacked and detained by police, as well as attacked by protesters. The politician depicted in the video is the leader of a small, yet influential, political party in Balochistan (Pakistan’s largest province), which mostly focuses on addressing issues relevant to the region and has long decried the abuse of power deployed by the Pakistani state against the Baloch people. He served as a member of parliament until August 2023 and in the ruling coalitions of the last two governments. According to experts consulted by the Board, he has a reputation as a moderate politician and has previously condemned violence against civilians. He is very critical of the military establishment, although his party has been part of government coalitions that have aligned with the establishment. While the politician’s speech followed the immediate turmoil created by Mr. Khan’s arrest, the politician refers to broader social and political issues in Pakistan and Balochistan. Experts consulted by the Board stated that Pakistan is experiencing severe levels of political polarization, fueled by the longstanding confrontation between Mr. Khan, the government and the military establishment. The military establishment, initially supportive of Mr. Khan, holds significant political influence in Pakistan and is not accustomed to facing public criticism. However, following a harsh crackdown on Mr. Khan and his supporters, anti-military sentiment has been escalating. Experts further noted that, two days before the speech, attacks on security forces occurred in Balochistan, which could also have prompted the speech. Balochistan has historically had a vibrant political and civil society movement that advocated for more political autonomy and socioeconomic rights. However, increasingly harsh state repression, in an effort to maintain authority, has led to the birth of a more radical armed secessionist movement. Balochistan has suffered from political violence for decades, which has been exacerbated by military repression and massive violations of human rights such as forced disappearances and extrajudicial killings, common tactics deployed by security forces and state-sponsored private militias to weaken the separatist movement. Experts noted that Pakistan’s military forces have a large presence in Balochistan due to active separatist movements and frequent terrorist attacks. They also emphasized that the military had established violent militias, allegedly intended to target members of the Baloch population suspected of being connected to the separatist movement.
Some of these militias later turned against the military, further fueling separatist sentiment and violence. Pakistan’s political crisis has been exacerbated by economic issues, the ongoing consequences of devastating floods in 2022 and an increase in terrorist acts in Balochistan and elsewhere, which have been met with punitive counterterrorism measures, including forced disappearances and death squads in Balochistan. Terrorist attacks have been repeatedly condemned by the UN, while other human rights experts have reiterated their concerns about the adoption of abusive counterterrorism measures. In this context, the politician uses several inflammatory and illustrative terms in the speech that are relevant to Pakistan’s political history and current political landscape. These include references that are critical of government and military policies, as well as the lack of accountability among the establishment’s state officials. Simultaneously, the speech addresses violence against Baloch communities and their struggles to access justice. Linguistic and cultural experts consulted by the Board noted that Pakistani political culture involves the use of highly charged, provocative language to bring attention to issues deemed important. They stated that the “tradition” mentioned in the politician’s speech refers to a myth regarding sacrificial practices in ancient Egypt to control flooding of the Nile. In this context, the politician refers to the “sacrifice” of those who are responsible for the political crisis. Experts noted that the need to stop the “flood” could symbolize putting an end to the raft of political problems faced by the country, both nationally and in Balochistan, or addressing the unrest caused by societal inequalities. Additionally, the politician refers to “Frankenstein” and other “monsters” in his speech. Experts noted these references could describe how the Pakistani state has created violent actors, such as militant groups, that were meant to serve the country’s interests but ended up turning against it, endangering the state – an issue particularly affecting Balochistan. Pakistan held parliamentary elections on February 8, 2024. The politician who delivered the speech in this case was successfully re-elected, securing a seat in the National Assembly. At the time of the speech, political tensions in Pakistan were notably heightened following the ousting and arrest of former Prime Minister Khan. Currently serving several years in prison, Mr. Khan was barred, along with his party, from running in the parliamentary elections. His party’s candidates were forced to run as independents. According to experts consulted by the Board, there are observers who allege that the establishment has opposed Mr. Khan’s political party returning to power. Other observers also consider that, although grounded in law, the timing of the charges brought against him may be politically motivated. Pakistan has remained in a state of turmoil. In response to the inconclusive national elections that did not return a clear majority winner, two of the leading parties opposed to Mr. Khan reached a formal agreement to form a coalition government. The situation has been further complicated by allegations of vote rigging. In a broader human rights context, UN human rights experts and civil rights organizations have highlighted that Imran Khan’s previous government, the current regime and the military establishment have all curtailed media freedom in recent years.
Media outlets have faced interference, withdrawal of government advertising, bans on television presenters and on broadcasting content. The license of one of the country’s prominent private news channels was also suspended. Likewise, online activists, dissidents and journalists are often subjected to threats and harassment by the government and its supporters, including some cases of violence and enforced disappearances for criticizing the military establishment and the government. Women’s rights movements seeking to address gender equality issues have faced permit denials and court petitions attempting to ban their marches, citing objections from public and religious organizations and ostensible law and order risks. These organizations have also reported constraints on internet freedoms imposed by the Pakistani government. Authorities routinely use internet shutdowns, platform blocking and harsh convictions to suppress critical online speech. Independent media outlets have also documented how the Pakistani government makes requests for social media platforms to remove content, especially when that content questions human rights violations and the military establishment’s involvement in politics. Meta informed the Board that it restricted local access to thousands of pieces of content reported by Pakistan for allegedly violating local laws. This information was also reported in the company’s Transparency Center. 3. Oversight Board Authority and Scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision with respect to identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions II. Meta’s Content Policies The Board’s analysis was informed by Meta’s commitment to voice, which the company describes as “paramount,” and its value of safety. Meta has updated its Violence and Incitement Community Standard several times since the content was first posted in May 2023. The Board analyzed the content on the basis of the most recent version of the Violence and Incitement Community Standard, which came into effect on December 6, 2023.
The policy rationale of the Violence and Incitement Community Standard states that it aims “to prevent potential offline violence that may be related to content” appearing on Meta’s platforms, and that while Meta “understand[s] that people commonly express disdain or disagreement by threatening or calling for violence in non-serious and casual ways, [the company] remove[s] language that incites or facilitates violence and credible threats to public or personal safety.” The policy rationale explains that “context matters, so [Meta] consider[s] various factors such as condemnation or awareness raising of violent threats, […] or the public visibility and vulnerability of the target of the threats.” Meta “remove[s] content, disable[s] accounts, and also work[s] with law enforcement when [the company] believe[s] there is a genuine risk of physical harm or direct threats to public safety.” The policy specifically prohibits “Threats of violence that could lead to death (or other forms of high-severity violence).” The policy specifies that “threats of violence are statements or visuals representing an intention, aspiration, or call for violence against a target, and threats can be expressed in various types of statements such as statements of intent, calls for action, advocacy, aspirational statements and conditional statements.” Following the latest policy updates on December 6, 2023, the public-facing language of the Community Standard now also clarifies that Meta “does not prohibit threats when shared in awareness-raising or condemning context,” in line with the Board’s recommendation in the Russian Poem case. III. Meta’s Human Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. Meta’s Corporate Human Rights Policy, announced in 2021, reaffirmed the company’s commitment to respecting human rights in accordance with the UNGPs. The following international standards were relevant to the Board’s analysis of Meta’s human rights responsibilities in this case: 5. User Submissions Following Meta’s referral and the Board’s decision to accept the case, the user was sent a message notifying them of the Board’s review and providing them with an opportunity to submit a statement to the Board. The user did not submit a statement. 6. Meta’s Submissions When Meta reviewed the content, the company found it did not violate the Violence and Incitement Community Standard (based on the version of the policy that was in effect at the time of its review) because it was posted by a news outlet to raise awareness of a politician’s speech. Meta stated that the company removes “statements advocating for high-severity violence,” such as calling for individuals to be hanged publicly, but it allows such content when shared in an awareness raising context. The company said that, in this case, the content was shared by a media outlet in the context of raising awareness and thus fell under the exception of the Community Standard. Even when a statement constitutes a credible threat, Meta allows this content if it seeks to inform others. Referring to its previous internal definition, Meta explained that this exception “applies specifically to content that clearly seeks to inform and educate others about a specific topic or issue (...).
This might include academic and media reports.” This internal definition has been updated to reflect new definitions for “awareness raising” (as mentioned in Section 8.1 below). The company noted that when viewing the post “holistically,” it determined that the content was posted by the news agency to “raise awareness about statements made by a politician on issues of public importance.” Meta found that the post did more than reshare the specific portion of the politician’s speech that called for high-severity violence; it shared a ten-minute video of the speech, placing the statements in greater context. Meta also considered that the news agency’s caption to the post did not endorse or support any particular message, but instead editorialized the politician’s comments, suggesting that the speech was powerful and impactful. The company additionally noted that the news outlet is not affiliated with the politician in the video or the government, and does not have a history of posting content that incites violence. Meta further explained that, even if the content contained a credible threat and did not fall under the policy allowance for “raising awareness,” it would have allowed the content because it was newsworthy. Meta states that “in some cases, [the company] allow[s] content – which would otherwise go against [its] standards – if it’s newsworthy and in the public interest.” Meta argued that the public interest value was high because the speech was delivered in a public forum and called out relevant issues. The content had originally been broadcast publicly and was also carried by reputable news organizations. Meta considered that the risk of harm was low because, while the speech on the surface called for violent actions, these “appeared to be rhetorical” in light of the broader political context, and “there was no indication that the post was likely to result in violence or harm” since the post has remained on the platform “without any known incidents.” The company further noted that, although the awareness raising exception was applicable in this case, the speech itself did not contain an actual threat. The company stated that the politician’s statement in the video did not actually “advocate for high severity violence” as it did not contain a “credible threat.” Rather, it should be interpreted as a “rhetorical statement ... intended to make a political point.” Meta explained that it can be difficult to distinguish between credible and non-credible threats when reviewing content at scale. In this case, the assessment that the content did not include a credible threat but was instead “political rhetoric” was made after escalation, meaning it was made by Meta’s internal expert teams. These teams consider context in more detail to distinguish between “advocating for violence and heated rhetoric.” The threat was “rhetorical” because the politician made a comparison between an ancient myth of sacrifice in Egypt and advocating for the hanging of unnamed politicians, generals, bureaucrats and judges. According to Meta, this “suggests political hyperbole rather than an actual threat.” The politician’s speech also highlighted “broader issues of corruption, nepotism, alleged discrimination against the Baloch people” and concerns about “lack of accountability for members of the military establishment in Pakistan’s history.” According to Meta, his comments advocating high-severity violence “must be viewed with this larger purpose in mind.” The Board asked Meta 18 questions in writing.
Questions related to Meta’s automated and human enforcement; Meta’s escalation-only process; latest updates of the Violence and Incitement policy and the internal instructions for content moderators; processes for government requests for content to be reviewed; measures taken by Meta in light of the approaching election in 2024; and measures to protect politicians and candidates as well as channels of communications that Meta has established with the Pakistani government. Meta answered all the questions that the Board asked. Meta informed the Board that it was unable to provide complete information on requests it had received from the government of Pakistan to take down content over the past year because this would require data validation, which could not be completed in time. 7. Public Comments The Oversight Board received three public comments that met the terms for submission. One was submitted from the United States and Canada, one from Asia Pacific and Oceania, and one from Central and South Asia. To read the public comments submitted with consent to publish, click here . The submissions covered the following themes: the role of social media and digital platforms, and the increase in news reporting by entities other than journalists; the potential risks associated with permitting violent political speech on social media in Pakistan; the political and human rights situation in the country; freedom of expression, media freedoms and highlighting specific laws that pose serious threats to press freedom. 8. Oversight Board Analysis The Board examined Meta’s decision to leave up the content under the company’s content policies, human rights responsibilities and values. The Board selected this case because it offered the opportunity to explore Meta’s Violence and Incitement policy as well as the related enforcement process in the context of political speech. It raises relevant questions around how Meta should treat speech from politicians and any related news coverage of that speech on its platforms, particularly ahead of elections. This case provides an opportunity to directly explore issues around the protection of journalism and the importance of news outlets reporting on issues, events or subjects of public interest. Additionally, the case provides the Board with the opportunity to discuss Meta’s internal procedures for when threatening speech should be construed figuratively rather than literally. The case primarily falls into the Board’s Elections and Civic Space strategic priority. 8.1 Compliance with Meta’s Content Policies The Board finds the content in this case does not violate the Violence and Incitement Community Standard because, regardless of whether the underlying content would meet the threshold for incitement, it was shared by a media outlet seeking to inform others, and thus falls under the exception for raising awareness. At the time the content was posted, Meta’s “awareness raising” exception was contained only in its internal guidelines to reviewers, not the public facing Community Standard – it allowed “violating content if it is shared in a condemning or awareness raising context.” It defined awareness raising context as “content that clearly seeks to inform and educate others about a specific topic or issue,” which might include media reports. 
Following updates to the public-facing Community Standard on December 6, 2023, in line with the Board’s recommendation in the Russian Poem case, the policy now explicitly reflects this exception: “[Meta] do[es] not prohibit threats when shared in awareness-raising or condemning context.” Meta further updated its internal standards to define awareness raising in more detail, as “sharing, discussing or reporting new information ... for the purpose of improving the understanding of an issue or knowledge of a subject that has public interest value. Awareness raising … should not aim to incite violence or spread hate or misinformation. This includes, but is not limited to, citizen journalism and sharing of news reports by regular users.” Meta explained that “news reporting” falls in the broader category of content that is shared to raise awareness. In this case, there were several clear indicators that the content fell within the exception for raising awareness. It was posted by a news outlet and depicted a politician’s speech referring to the social and political situation in Pakistan, ahead of elections. The speech was undoubtedly referring to matters of public interest, concerning events and figures in the public and political domain. The video shows the politician’s call to “hang” officials within his wider speech, placing the statements in broader context and highlighting other issues of public interest. The post does not endorse or support the politician’s message, and the caption, which notes that the speech generated a strong reaction, makes clear that the content was shared to report on the politician’s speech and raise awareness. Although the content in this case benefits from the awareness-raising exception, the Board further notes that, taking into account the context, it does not contain a “credible threat” of “violence that could lead to death” that would violate the Violence and Incitement Community Standard. The Board highlights that certain elements may aid in distinguishing whether speech should be interpreted as figurative or non-literal rather than as a credible threat. This distinction holds particular significance when statements are of a political nature, especially in the lead-up to elections. The Board acknowledges the importance of removing speech by politicians that is likely to incite violence if this speech entails specific and credible threats and targets (see, for example, the Cambodian Prime Minister case), but reiterates the importance of contextual assessments when applying the policy. In the absence of credible threats, speech using threatening language figuratively, or not literally, should not constitute a violation of the Violence and Incitement policy (see Russian Poem, Iran Protest Slogan and Iranian Woman Confronted on Street decisions). In this case, the content is a news report depicting a politician addressing parliament to make points on the social and political situation in Pakistan. Based on the context and on input from the linguistic and cultural experts consulted, the Board considers that the politician is using figurative speech rather than making a literal, credible threat of violence. The politician uses illustrative speech and historical references to criticize the political crisis in Pakistan. The Board agrees with Meta that the metaphorical comparison between killing officials and the ancient myth of making a sacrifice to control the flooding of the Nile is an expression of political exaggeration rather than an actual threat.
Experts consulted by the Board explained that highly charged and provocative language is commonly used by Pakistani politicians to draw attention to issues they consider important, and that they tend to be purposefully hyperbolic in their speeches before parliament. The Board considers that safeguarding such speech, when figurative (non-literal), especially in the lead-up to elections, is fundamental. Additionally, the politician’s statement addresses broader issues such as corruption, perceived discrimination and human rights violations against the Baloch people, who have struggled to access justice, and the lack of accountability among state officials and the military establishment in the country’s history. Unlike in the Cambodian Prime Minister decision, the politician in this case does not name specific targets (he refers only to general categories of public officials), includes himself in those targeted categories and does not have a history of inciting violence. The relevant context is discussed in section 8.2 below. In context, the statements should therefore be understood as a call to action, an expression of alarm and an assignment of blame rather than as threats against individual people. Similar to the content in the Iran Protest Slogan and Russian Poem cases, they are best understood as figurative expressions used to convey a political message rather than a credible threat. The Board acknowledges that, while the content in this case clearly does not violate the policy, differentiating between statements that use threatening language figuratively (not literally) and credible threats requires context and can be difficult at-scale. As the Board has stated previously, it is therefore important that Meta provide precise guidance to reviewers on which factors to consider when moderating potentially figurative speech (see recommendation no. 1 in the Iran Protest Slogan case). Given the potential challenges for reviewers at-scale in differentiating figurative speech from credible threats, the awareness-raising exception provides additional protection for ensuring that figurative speech shared by news outlets for the purposes of reporting and raising awareness is not removed from the platform. 8.2 Compliance with Meta’s Human Rights Responsibilities The Board finds that leaving the content on the platform was consistent with Meta’s human rights responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other [means].” This protection is “particularly high” for “public debate in a democratic society concerning figures in the public and political domain,” (General Comment 34, paras. 34 and 38). Political speech and speech on other matters of public interest enjoy the “highest possible level of protection … including through the media and digital communication platforms, especially in the context of elections,” (Joint Declaration, 2021). The role of the media in reporting information across the digital ecosystem is critical (see Political Dispute Ahead of Turkish Elections decision).
International human rights law places particular value on the role of journalism and media in providing information that is of interest to the public (see Mention of the Taliban in News Reporting decision). The Human Rights Committee has stressed that a “free, uncensored and unhindered press or other media is essential,” with the press or other media being able to “comment on public issues without censorship or restraint and to inform public opinion,” (General Comment 34, para. 13). Social media platforms like Facebook have become a vehicle for distributing reporting around the world, and Meta has recognized its responsibilities to journalists in its corporate human rights policy. Digital platforms are important distribution and audience-engagement channels for many media outlets. As “digital gatekeepers,” social media platforms have a “profound impact” on public access to information, (A/HRC/50/29, para. 90). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” (A/74/486, para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires any restriction on freedom of expression to be pursuant to an established rule, which is accessible and clear to users. The rule must be “formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and it must be accessible to the public,” (General Comment No. 34, at para. 25). Additionally, the rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and should “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” (General Comment No. 34, at para. 25; A/HRC/38/35, at para. 46). Lack of clarity or precision can lead to inconsistent and arbitrary enforcement of the rules. Applied to Meta, this means users should be able to predict the consequences of posting content on Facebook and Instagram, and content reviewers should have clear guidance on their enforcement. The Board notes that the “awareness raising” exception described above was still not included in the public-facing language of the policy at the time this content was posted. In other words, at that time, users did not know that otherwise violating content was permitted if it was shared in a condemning or awareness-raising context, which may have deterred users from initiating or engaging in public interest discussions on Meta’s platforms (see Communal Violence in Indian State of Odisha decision). Following its latest policy update and considering recommendations from the Board in previous cases, Meta now explicitly includes the awareness-raising exception in the Community Standard.
The policy states that Meta does not prohibit threats when shared in awareness-raising or condemning context, thereby ensuring compliance with the legality requirement. The Board finds that while the policy rationale of the Violence and Incitement Community Standard suggests that “context” may be considered when evaluating a “credible threat,” the policy does not specify how figurative (or not literal) statements are to be distinguished from credible threats. The Board reiterates its findings from the Iran Protest Slogan and Iranian Woman Confronted on Street cases that Meta should include an explanation of how it moderates figurative (non-literal) threats. II. Legitimate Aim Restrictions on freedom of expression must pursue one of the legitimate aims listed under Article 19, para. 3 of the ICCPR, which include the “rights of others.” In seeking to “prevent potential offline violence” by removing content that poses “a genuine risk of physical harm or direct threats to public safety,” the Violence and Incitement Community Standard serves the legitimate aims of protecting the right to life (Article 6, ICCPR) and the right to security of person (Article 9, ICCPR; General Comment No. 35, para. 9). III. Necessity and Proportionality Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment 34, para. 34). Social media companies should consider a range of possible responses to problematic content beyond deletion to ensure restrictions are narrowly tailored (A/74/486, para. 51). Removal of the content from the platform in this case would not satisfy the principles of necessity and proportionality, as the post was shared by a media outlet to raise awareness and contains figurative political speech rather than a literal, credible threat that constitutes incitement to violence. The Board further notes that the expression at issue here deserves “particularly high” protection for its political nature (General Comment 34, para. 34) and because it was delivered before parliament in a debate focused on national political issues. This took place during a period of significant social and political turmoil leading up to Pakistan’s elections, which were delayed in 2023 and then held on February 8, 2024. Content shared by a media outlet to raise awareness on political issues in a pre-election context should not be restricted. The role of media reporting in this context becomes increasingly crucial and enjoys particular value and protection (see Mention of the Taliban in News Reporting decision and General Comment 34, para. 13). Digital media outlets play a key role in distributing information and statements. The removal of this content by Meta would be a disproportionate restriction on the contribution of the press to discussion of matters of public interest. The Board further notes that removal would not be a proportionate restriction given that the speech itself should have been interpreted in a figurative, non-literal manner and did not constitute actual incitement to violence. The six factors described in the Rabat Plan of Action (looking at the context, speaker, intent, content of the speech, extent of the speech and likelihood of imminent harm) provide valuable guidance in this assessment.
Although the Rabat factors were developed to assess advocacy of national, racial or religious hatred that constitutes incitement, rather than incitement generally, they offer a useful framework for assessing whether the content incites others to violence (see, for example, Iran Protest Slogan and Call for Women’s Protest in Cuba). The content was posted in the context of the upcoming elections and amid ongoing political turmoil. The ousting and arrest of former Prime Minister Imran Khan heightened existing tensions and polarization, prompting massive protests throughout the country, arrests, unprecedented attacks upon army buildings and violent repression by the police, directed especially at protesting civilians, Khan’s supporters and opposition figures. Experts consulted by the Board noted that the Pakistani government has a history of targeting those who speak critically of the government, military establishment and judiciary with arrest and legal action. Although the politician depicted in the post is a public figure, whose speech potentially carries a higher risk of harm due to his position of authority, he had no history of inciting violence. The content and form of the statement suggest that it was not meant literally and is figurative in nature, as the politician does not name specific targets (he refers only to categories of public officials) and includes himself in those targeted categories. Experts also noted that highly charged and provocative language is commonly used by Pakistani politicians. Furthermore, the politician’s whole speech, which was often illustrative in nature, also discussed broader issues of public interest ahead of the parliamentary elections in Pakistan, including human rights violations against the Baloch people. Thus, the contextual factors and the substance of the speech suggest that the politician’s intention was to issue an urgent call to action, urging that public officials be held accountable rather than literally hanged. While the content had a wide reach, it did not stand out compared to other events at the time. Additionally, as the politician did not name specific targets but generally referred to the governing regime, of which he was a member, the speech was not likely to trigger imminent harm. The speaker’s intent, notwithstanding his identity, the content of the speech and its reach, as well as the likelihood of imminent harm, all justified leaving the content on the platform. The Board believes that in complex political contexts such as those described in this case, evaluating the significance of the politician’s speech and its connection to the broader electoral landscape is crucial. The timing of the speech, considering the political circumstances at that moment, is fundamental, as described earlier (see section 2). Any speech of this nature, viewed in the context of upcoming elections, should be retained on the platform. In order to ensure that speech with a high public interest value, such as the content in this case, is preserved on the platform, the Board reiterates recommendation no. 1 from the Brazilian General’s Speech case, which was accepted by Meta. In that case, the Board recommended that Meta develop a framework for evaluating the company’s election efforts. This includes creating and sharing metrics for successful election efforts, particularly with a view to Meta’s enforcement of its content policies, allowing the company not only to identify and reverse errors, but also to keep track of how effective its measures are in the context of elections.
Implementing this recommendation requires publishing country-specific reports. In its response to this recommendation, Meta informed the Board that it has a variety of metrics to evaluate the success of its election efforts and increase transparency about their impact, but that it will seek to consolidate these into a specific set of election metrics that will allow the company to improve how it evaluates its efforts in the lead-up to, during, and after elections. Meta reported that it is currently conducting a pilot evaluation using different metrics across multiple elections in 2024 and informed the Board of its plan to publicly share a description of these metrics in early 2025. The Board urges Meta to undertake this process sooner, if possible, given the large number of countries holding elections this year, including in Global Majority countries. 9. Oversight Board Decision The Oversight Board upholds Meta’s decision to leave up the content. 10. Recommendations The Oversight Board decided not to issue new recommendations in this decision, given the relevance of a previous recommendation issued in the Brazilian General’s Speech case, which was accepted by Meta. In order to ensure that speech with a high public interest value, such as the content in this case, is preserved on the platform, the Board reiterates the following recommendation: Meta should develop a framework for evaluating the company’s election efforts. This includes creating and sharing metrics for successful election efforts, particularly with a view to Meta’s enforcement of its content policies. Implementing this recommendation requires publishing country-specific reports (Brazilian General’s Speech, recommendation no. 1). *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world.
Return to Case Decisions and Policy Advisory Opinions" fb-659eawi8,Brazilian general’s speech,https://www.oversightboard.com/decision/fb-659eawi8/,"June 22, 2023",2023,,"Elections, Governments, Protests","Coordinating harm and publicizing crime, Violence and incitement, Violent and graphic content",Overturned,Brazil,"The Oversight Board overturns Meta’s original decision to leave up a Facebook video featuring a Brazilian general calling people to ""go to the National Congress and the Supreme Court.""",53041,8125,"Overturned June 22, 2023 The Oversight Board overturns Meta’s original decision to leave up a Facebook video featuring a Brazilian general calling people to ""go to the National Congress and the Supreme Court."" Standard Topic Elections, Governments, Protests Community Standard Coordinating harm and publicizing crime, Violence and incitement, Violent and graphic content Location Brazil Platform Facebook Public comments appendix The Oversight Board has overturned Meta’s original decision to leave up a Facebook video which features a Brazilian general calling people to “hit the streets,” and “go to the National Congress and the Supreme Court.” Though the Board acknowledges that Meta set up several risk evaluation and mitigation measures during and after the elections, given the potential risk of its platforms being used to incite violence in the context of elections, Meta should continuously increase its efforts to prevent, mitigate and address adverse outcomes. The Board recommends that Meta develop a framework for evaluating its election integrity efforts to prevent its platforms from being used to promote political violence. About the case Brazil’s presidential elections in October 2022 were highly polarized, with widespread and coordinated online and offline claims questioning the legitimacy of elections. These included calls for military intervention, and for the invasion of government buildings to stop the transition to a new government. The heightened risk of political violence did not subside with the assumption of office by newly elected President Luiz Inácio Lula da Silva on January 1, 2023, as civil unrest, protests, and encampments in front of military bases were ongoing. Two days later, on January 3, 2023, a Facebook user posted a video related to the 2022 Brazilian elections. The caption in Portuguese includes a call to “besiege” Brazil’s Congress as “the last alternative.” The video also shows part of a speech given by a prominent Brazilian general who supports the re-election of former President Jair Bolsonaro. In the video, the uniformed general calls for people to “hit the streets” and “go to the National Congress … [and the] Supreme Court.” A sequence of images follows, including one of a fire raging in the Three Powers Plaza in Brasília, which houses Brazil’s presidential offices, Congress, and Supreme Court. Text overlaying the image reads, in Portuguese, “Come to Brasília! Let’s Storm it! Let’s besiege the three powers.” Text overlaying another image reads “we demand the source code,” a slogan that protestors have used to question the reliability of Brazil’s electronic voting machines. On the day the content was posted, a user reported it for violating Meta’s Violence and Incitement Community Standard, which prohibits calls for forcible entry into high-risk locations.
In total, four users reported the content seven times between January 3 and 4. Following the first report, the content was reviewed by a content reviewer and found not to violate Meta’s policies. The user appealed the decision, but it was upheld by a second content reviewer. The next day, the other six reports were reviewed by five different moderators, all of whom found that the content did not violate Meta’s policies. On January 8, supporters of former President Bolsonaro broke into the National Congress, Supreme Court, and presidential offices located in the “Three Powers Plaza” in Brasília, intimidating the police and destroying property. On January 9, Meta declared the January 8 rioting a “violating event” under its Dangerous Individuals and Organizations policy and said it would remove “content that supports or praises these actions.” The company also announced that it had “designated Brazil as a Temporary High-Risk Location” and had “been removing content calling for people to take up arms or forcibly invade Congress, the Presidential palace and other federal buildings.” As a result of the Board selecting this case, Meta determined that its repeated decisions to leave the content on Facebook were in error. On January 20, 2023, after the Board shortlisted this case, Meta removed the content. Key findings This case raises concerns around the effectiveness of Meta’s election integrity efforts in the context of Brazil’s 2022 General Election, and elsewhere. While challenging the integrity of elections is generally considered protected speech, in some circumstances widespread claims which attempt to undermine elections can lead to violence. In this case, the speaker’s intent, the content of the speech and its reach, as well as the likelihood of imminent harm in the political context of Brazil at the time, all justified removing the post. For a post to violate Meta’s rules on calling for forcible entry into high-risk locations, the location must be considered “high-risk,” and it must be situated in an area or vicinity that is separately designated as a “temporary high-risk location.” As the post was an unambiguous call to forcibly enter government buildings situated in the Three Powers Plaza in Brasília (“high-risk locations” situated in a “temporary high-risk location,” Brazil), Meta’s initial decisions to leave this content up during a time of heightened political violence represented a clear departure from its own rules. The Board is deeply concerned that despite the civil unrest in Brazil at the time the content was posted, and the widespread proliferation of similar content in the weeks and months ahead of the January 8 riots, Meta’s content moderators repeatedly assessed this content as non-violating and failed to escalate it for further review. In addition, when the Board asked Meta for information on specific election-related claims on its platforms before, during, and after the Brazilian elections, the company explained that it does not have data on the prevalence of such claims. The content in this case was finally removed more than two weeks later, by which point the violating event it called for had already occurred, and only after the Board brought the case to Meta’s attention. In response to a question from the Board, Meta said that it does not adopt any particular metrics for measuring the success of its election integrity efforts generally.
Therefore, the Board finds that Meta should develop a framework for evaluating the company’s election integrity efforts, and for public reporting on the subject. Such a framework would provide the company with relevant data to improve its content moderation system as a whole and to decide how best to employ its resources in electoral contexts. Without this kind of information, neither the Board nor the public can evaluate the effectiveness of Meta’s election integrity efforts more broadly. The Oversight Board’s decision The Oversight Board overturns Meta’s original decision to leave up the post. The Board also recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decision to leave up a Facebook video featuring a Brazilian general calling people to “hit the streets,” and “go to the National Congress and the Supreme Court.” These calls were followed by an image of the Three Powers Plaza in Brasília, where these government buildings are located, on fire, with overlay text which reads “Come to Brasília! Let’s storm it! Let’s besiege the three powers.” The Board finds these statements to be clear and unambiguous calls to invade and take control of these buildings in the context of Bolsonaro supporters disputing election results and calling for military intervention to stop the course of a government transition. After the Board shortlisted this post for review, Meta reversed its original decision and removed it from Facebook. The case raises broader concerns around the effectiveness of Meta’s election integrity efforts in the context of Brazil’s 2022 General Election, and elsewhere. Challenging the integrity of elections is generally considered protected speech, but in some circumstances, widespread online and offline claims attempting to undermine elections, such as the ones in this case, can lead to offline violence. In Brazil, every warning signal was present that such violence would result. Though the Board acknowledges that Meta set up several risk evaluation and mitigation measures during and after the elections, given the potential risk of its platforms being used to incite violence in the context of elections, Meta should continuously increase its efforts to prevent, mitigate and address adverse outcomes. The post-election phase should be covered by Meta’s election integrity efforts to address the risk of violence in the context of a transition of power. The Board therefore recommends that Meta develop a framework for evaluating the company’s election integrity efforts and for public reporting on the subject. Such a framework should include metrics of success on the most relevant aspects of Meta’s election integrity efforts, allowing the company not only to identify and reverse errors, but also to keep track of how effective its measures are in critical situations. The Board also recommends that Meta provide clarity regarding the different protocols and measures it has in place to prevent and address potential risks of harm arising in electoral contexts and other high-risk events. This includes naming and describing such protocols, their objectives, the points of contact between them and how they differ from each other. Such protocols need to be more effective, have a clear chain of command, and be adequately staffed, especially when operating in a context of elections with a heightened risk of political violence.
These recommendations would help improve the company’s content moderation system as a whole by placing Meta in a better position to prevent its platforms from being used to promote political violence and to enhance its responses to election-related violence more generally. 2. Case description and background On January 3, 2023, a Facebook user posted a video related to the 2022 Brazilian elections. The caption in Portuguese includes a call to “besiege” Brazil’s Congress as “the last alternative.” The one minute and 32-second video shows part of a speech given by a prominent Brazilian general and supporter of the reelection of former President Jair Bolsonaro. In the video, the uniformed general calls for people to “hit the streets” and “go to the National Congress … [and the] Supreme Court.” A sequence of images follows, including one of a fire raging in the Three Powers Plaza in Brasília, which houses Brazil’s presidential offices, Congress, and Supreme Court. Text overlaying the image reads, in Portuguese, “Come to Brasília! Let’s Storm it! Let’s besiege the three powers.” Text overlaying another image reads “we demand the source code,” a slogan that protestors have used to question the reliability of Brazil’s electronic voting machines. The video was played over 18,000 times and was not shared. Two days before the content was posted, Bolsonaro’s electoral opponent Luiz Inácio Lula da Silva had been sworn-in as Brazil’s president after winning the presidential run-off election on October 30, 2022 with 50.9 percent of the votes. The periods before, between, and after the two rounds of voting were marked by a heightened risk of political violence, spurred by claims about impending electoral fraud . This was premised on the alleged vulnerability of Brazil’s electronic voting machines to hacking. Ahead of the election, then-President Bolsonaro fueled distrust in the electoral system, alleging fraud without supporting evidence and claiming that the electronic voting machines are not reliable. Some military officials echoed similar claims of electoral fraud and spoke in favor of using the military as an arbiter in electoral disputes. Several instances of political ads attacking the legitimacy of the elections on Meta’s platforms were reported . These included posts and videos attacking judicial authorities and promoting a military coup. Further, Global Witness published a report on Brazil describing how political ads which violated the Community Standards were approved by the company and circulated on Meta’s platforms. The findings tracked similar reports from the organization concerning other countries such as Myanmar and Kenya. The post-election period was accompanied by civil unrest, including protests, roadblocks, and setting up encampments in front of military bases to call on the armed forces to overturn the election results. According to experts consulted by the Board, the video in this case first surfaced online in October 2022, soon after the electoral results were known; similar content remained on different social media platforms leading up to the January 8 riots. On December 12, 2022, the same day Lula’s victory was confirmed by the Superior Electoral Court, a group of pro-Bolsonaro protesters tried to break into the headquarters of the Federal Police in Brasília. Several acts of vandalism took place. On December 24, 2022, there was an attempted bombing near the country’s international airport in Brasília. 
The man responsible for the attack was arrested and confessed that his goal was to attract attention to the pro-coup cause. The heightened risk of political violence in Brazil did not subside with the newly elected president’s inauguration on January 1, 2023. Based on research commissioned by the Board, false claims about voting machines peaked on Meta’s platforms after the first and second rounds of voting, and again in the weeks following Lula’s victory. Additionally, in the days leading up to January 8, Bolsonaro supporters used several coded slogans to promote protests in Brasília that were specifically focused on government buildings. Most of the logistical organization appeared to be accomplished through communications channels other than Facebook. International election observation missions such as the Organization of American States and the Carter Center reported that there was no substantial evidence of fraud and that the election had been conducted in a free and fair manner despite the pressures of a highly polarized electorate. The Brazilian Ministry of Defense also formally observed the election and reported no evidence of irregularities or fraud, though it did subsequently release a conflicting statement that the armed forces “do not rule out the possibility of fraud.” In Brazil, the Ministry of Defense oversees the work of the armed forces. Tensions culminated on January 8, when supporters of former President Bolsonaro broke into the National Congress, Supreme Court, and presidential offices located in the “Three Powers Plaza” in Brasília referred to in the case content, intimidating the police and destroying property. Around 1,400 people were arrested for participating in the January 8 riots, with around 600 still in custody. In the wake of the events of January 8, the United Nations condemned the use of violence, saying that it was the “culmination of the sustained distortion of facts and incitement to violence and hatred by political, social and economic actors who have been fueling an atmosphere of distrust, division, and destruction by rejecting the result of democratic elections.” It reiterated its commitment to, and confidence in, Brazil’s democratic institutions. Public comments and experts consulted by the Board indicated the harmful effect that claims preemptively casting doubt on the integrity of Brazil’s electoral system had in driving political polarization and enabling offline political violence (See public comments from the Dangerous Speech Project [PC-11010], LARDEM - Clínica de Direitos Humanos da Pontifícia Universidade Católica do Paraná [PC-11011], Instituto Vero [PC-11015], ModeraLab [PC-11016], Campaign Legal Center [PC-11017], Center for Democracy & Technology [PC-11018], InternetLab [PC-11019], and Coalizão Direitos na Rede [PC-11020]). On January 9, 2023, Meta declared the January 8 rioting a “violating event” under the Dangerous Individuals and Organizations policy and said it would remove “content that supports or praises these actions.” The company also announced that “[i]n advance of the election” it had “designated Brazil as a Temporary High-Risk Location” and had “been removing content calling for people to take up arms or forcibly invade Congress, the Presidential palace and other federal buildings.” On January 3, the same day the content was posted, a user reported it for violating Meta’s Violence and Incitement Community Standard, which prohibits calls to “forcibly enter locations . . .
where there are temporary signals of a heightened risk of violence or offline harm.” In total, four users reported the content seven times between January 3 and 4. Following the first report, the content was reviewed by a human moderator and found not to violate Meta’s policies. The user appealed the decision, but it was upheld by a second human moderator. The next day, the other six reports were reviewed by five different moderators, all of whom found that the content did not violate Meta’s policies. The content was not escalated to policy or subject matter experts for additional review. In response to a question from the Board, Meta clarified that the seven people who reviewed the content were based in Europe. According to Meta, they were all fluent in Portuguese and had the language and cultural expertise to review Brazilian content. As a result of the Board selecting this case, Meta determined that its repeated decisions to leave the content on Facebook were in error. On January 20, 2023, after the Board shortlisted the case, Meta removed the content, issued a strike against the content creator’s account, and applied a 24-hour feature-limit, preventing them from creating new content within that period. Despite Meta’s action, civil society group Ekō’s public comment submission to the Board and other reports emphasized that similar content remained on Facebook even after this case was brought to Meta’s attention by the Board (PC-11000). 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the person who previously reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. When the Board selects cases like this one, where Meta subsequently acknowledges that it made a mistake, the Board reviews the original decision, to help increase understanding of the policy parameters and content moderation processes that contributed to the error. The Board then seeks to address issues it identifies with Meta’s underlying policies or processes. The Board also aims to issue recommendations for Meta to improve enforcement accuracy and treat users fairly moving forward. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions: The most relevant previous decisions of the Oversight Board include: II. 
Meta’s content policies: Violence and Incitement Community Standard Under the Violence and Incitement Community Standard , Meta does not permit “statements of intent or advocacy, calls to action, or aspirational or conditional statements to forcibly enter locations (including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election) where there are temporary signals of a heightened risk of violence or offline harm.” The policy rationale for this Community Standard is to “prevent potential offline harm that may be related to content” appearing on Meta’s platforms. At the same time, Meta recognizes that “people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways.” Meta therefore removes content when the company believes “there is a genuine risk of physical harm or direct threats to public safety.” In determining whether a threat is credible, Meta also considers “the language and the context.” The Board’s analysis was also informed by Meta’s commitment to “Voice,” which the company describes as “paramount,” and its value of “Safety.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of companies. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions In their appeal to the Board, the user who reported the content stated that they “have already reported this and countless other videos to Facebook and the answer is always the same, that it doesn’t violate the Community Standards.” The user further linked the content’s potential to incite violence to action taken by people in Brazil “who do not accept the results of elections.” 6. Meta’s submissions When the Board brought this case to Meta’s attention, the company determined that its original decision to leave the content up was incorrect. Meta provided the Board with a broad analysis of Brazil’s social and political context before, during, and after the presidential election to justify the – albeit belated – removal of the content in this case. It later provided the Board with probable factors that “may have contributed” to the persistent enforcement error. Meta stated its view that “the multiple references to ‘besieging’ high-risk locations in the caption and video do not independently rise to the level of ‘forcible entry’ under [the] [Violence and Incitement] policy.” However, “the combination of calling on people to ‘Come to Brasília! Let’s storm it! 
Let’s besiege the three powers’ with the background image of the Three Powers Plaza on fire makes the intent to forcibly enter these prominent locations clear.” According to Meta, the content did not qualify for a newsworthiness allowance even though it acknowledged that its platforms are “important places for political discourse, especially around elections.” In this case, the public interest value of the content did not outweigh the risk of harm given its “explicit call for violence” and the “heightened risk of offline harm following the Brazilian Presidential election and Lula’s inauguration.” Meta found no indication that the content was shared to condemn or raise awareness of the call for violence. The company maintains that its ultimate decision to remove the content is consistent with its values and with international human rights standards. To address elections and other crisis situations, Meta has set up several risk evaluation and mitigation measures that are run by different teams and can apply either simultaneously or independently. Each has different “tiers” or “levels” of intensity depending on the respective risk evaluation: The Election Operation Center covering the 2022 Brazilian general election ran at various points in time from September to November 2022, including during the first and second rounds of the election. However, there was no Election Operation Center (or IPOC) in place at the time the content was posted on January 3, 2023. Meta designated the “post-election unrest” as a crisis under the Crisis Policy Protocol to help the company assess how best to mitigate content risks. In response to a question from the Board regarding digital trends on Meta’s platforms before, during and after the Brazilian elections, the company stated that as part of its “election preparation and response work, a number of teams identified election-related content trends and incorporated them into [their] risk-mitigation strategy.” These included: “(i) risks associated with incitement or spread of threats of violence; (ii) misinformation; and (iii) business integrity, which include risks associated with potential abuse of advertisement with harmful content... or attempts to conduct campaigns in ways that manipulate or corrupt public debate.” Meta stated that the “results, among other factors, helped inform a number of product and policy mitigations.” However, Meta does not have “prevalence data” on specific claims (e.g. of electoral fraud, calls to go to Brasília or forcibly invade federal government buildings, calls for a military intervention), because in general, the company’s enforcement systems “are set up to monitor and track based on the policies they violate.” The Board asked Meta 15 questions in writing, including 5 in follow-up to an oral briefing on how Election Operation Centers work. Questions related to: policy levers available to address coordinated behavior on Meta’s platforms; risks identified ahead of the 2022 Brazilian elections; the relationship between the Election Operation Center for the Brazilian election and the Crisis Policy Protocol; how Meta draws the line when distinguishing between legitimate political organizing and harmful coordinated action; digital trends on Meta’s platforms in Brazil before, during, and after the elections; and the language capabilities of the content moderators who reviewed the case content. Meta answered 13 questions. 
Meta did not answer two questions, one concerning the relationship between political advertising and misinformation, and another concerning the number of removals of pages and accounts while the Election Operation Center for the 2022 Brazil elections was in place. Meta also informed the Board that the company did not have more general data on content moderation in the context of Brazil’s 2022 election readily available to share with the Board, in addition to the number of content takedowns which was already shared publicly. Meta further explained that the company does not assess its performance in the context of elections against a given set of metrics of success and benchmarks. Meta raised the need to prioritize resources when responding to the Board’s questions, and said that providing the requested data within the timeframe for deciding the case would not be possible. 7. Public comments The Oversight Board received 18 public comments relevant to this case. Eleven of the comments originated from Latin America and the Caribbean, three from the United States and Canada, two from the Middle East and North Africa, one from Asia Pacific and Oceania, and one from Central and South Asia. Additionally, in February 2023, the Board organized a roundtable with stakeholders from Brazil and Latin America on the topic of “Content Moderation and Political Transitions.” The submissions covered the following themes: the accumulation of harmful claims about election fraud and calls for a military coup on social media platforms before, during, and after the 2022 Brazil elections; election-related disinformation; Meta’s election integrity efforts; Meta’s responsibility to protect users’ rights in the context of a democratic transition of power; the relationship between election denialism and political violence; and the importance of content reviewers’ familiarity with the local political context. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board examined whether this content should be removed by analyzing Meta's content policies, human rights responsibilities, and values. This case was selected because it allows the Board to assess how Meta distinguishes peaceful organizing on its platforms from incitement or coordination of violent action, especially in a context of a transition of power. Additionally, the case allows the Board to examine Meta’s election integrity efforts more generally, and in Brazil more specifically, considering that post-election periods are crucial moments both to contest the integrity of an election and to guarantee that legitimate electoral results are respected. Therefore, the Board finds that Meta’s election integrity efforts should cover both the electoral process itself and the post-electoral period, for the latter is also vulnerable to manipulation, election-related misinformation, and threats of violence. The case falls within the Board’s “elections and civic space” strategic priority. 8.1 Compliance with Meta’s content policies I. Content rules Violence and Incitement The Board finds that the content in this case violates the Violence and Incitement Community Standard’s prohibition of content calling for forcible entry into certain high-risk locations. 
The Board finds that while Meta’s value of “Voice” is particularly relevant in electoral processes, including the post-electoral period, removing the content is necessary in this case to advance Meta’s value of “Safety.” In order to violate the policy line against calls for forcible entry into high-risk locations, two “high-risk” designations are required. Firstly, the location must be considered “high-risk,” and, secondly, it must be situated in an area or vicinity that is separately designated as a Temporary High-Risk Location. Meta’s specific instructions to content reviewers are to “[r]emove calls to action, statements of intent, statements advocating, and aspirational statements to forcibly enter high-risk locations within a Temporary High-Risk Location.” Meta defines a “high-risk location” as a “location, permanent or temporary, that is deemed high-risk due to its likelihood of being the target of violence.” Permanent high-risk locations include “places of work or residence of high-risk persons or their families (for example, the headquarters for a news organization, medical centers, laboratories, police stations, government offices, etc.); facilities used during local, regional, and national elections as a voter registration center, polling location, vote counting site (for example local library, government building, community or civic center, etc.) or a site used in the administration of an election.” According to Meta, the Brazilian Congress, Supreme Court and Presidential offices are all permanent “high-risk locations” by virtue of being “places of work or residence of high-risk persons or their families.” The additional “Temporary High-Risk Location” designation of the broader area or vicinity covers any “location temporarily designated by [Meta as such] for a time-bound period.” A place is designated as a Temporary High-Risk Location based on many factors, including “whether high-severity violence occurred at a protest in the location in the last 7 days;” “evidence of an increased risk of violence associated with civil unrest or a contentious court decision at the location;” “an assessment from law enforcement, internal security reports, or a trusted partner that imminent violence is likely to occur at the location;” “evidence of planned or active protest at the location or a planned or active protest at the location where the organizer has called for armaments to be used or brought to the location of the protest;” and “an assessment by internal teams that the safety concerns outweigh the potential impact on the expression of self-defense and self-determination.” Once a Temporary High-Risk Location is designated, the designation is shared with Meta’s internal teams. Though such designations are time-limited, the company occasionally grants extensions. According to Meta, a Temporary High-Risk Location designation leads to the proactive review of content “before users report [it].” For the 2022 elections, Meta designated the entire country of Brazil as a Temporary High-Risk Location. The designation was initially established on September 1, 2022, based on Meta’s assessment of increased risk of violence associated with ongoing civil and election-related unrest. The designation was extended to cover the October 2022 election and its aftermath, until February 22, 2023. The designation was in place at the time the case content was posted. According to Meta, both designations must be present for a piece of content to violate the policy, which was the case for the post under analysis.
According to Meta, the two-fold requirement helps ensure that calls for protests are not broadly suppressed and that only content likely to result in violence will be removed. Given the above, the Board regards Meta’s initial decisions that the content should remain on the platform during a time of heightened risk of political violence as a clear departure from its own standard, because the post constituted an unambiguous call to forcibly enter government buildings situated in the Three Powers Plaza in Brasília, which are “high-risk locations” situated in a “temporary high-risk location,” Brazil. II. Enforcement Action According to Meta, seven human moderators who possessed the necessary linguistic and cultural expertise reviewed the content. Meta does not instruct at-scale reviewers to record their reasons for making decisions. When the Board selected this case, Meta’s internal teams conducted an analysis which concluded that three probable factors “may have contributed” to the persistent enforcement error: (1) reviewers may have misunderstood the user’s intent (a call to action), possibly due to a lack of punctuation that led them to misinterpret the content as a neutral comment about the event; or (2) reviewers made a wrong decision despite the correct guidelines being in place, due to multiple updates around the handling of content related to high-risk events from various sources; or (3) reviewers may not have seen the violation in the video. Factors 1 and 3 suggest that moderators did not review this content carefully or watch the video fully, as the potential violation of Meta’s policies it contained was clear. However, Meta does not provide any explanation as to why the content was not escalated to subject matter and policy experts for further analysis. The content was not escalated despite the fact that it came from a country that, at the time the content was posted and reported, was designated as a “Temporary High-Risk Location,” a designation tied to a policy line that is only activated when it is in place. The content was also not escalated despite the overall online and offline context in Brazil (See Section 2). Meta has previously informed the Board that content reviewers are not always able to watch videos in full. Nonetheless, in situations of heightened risk of violence, where specific policy levers have been triggered, the Board would expect content reviewers to be instructed to watch videos in full, as well as to escalate potentially violating content. In relation to factor 2, while Meta stated it informs at-scale reviewers of Temporary High-Risk Location designations, the company acknowledges possible shortcomings in its socialization of this and other election-specific risk mitigation measures. The socialization of this kind of information enables content reviewers to detect, remove, or escalate problematic content such as the video in this case. The fact that several different evaluation and mitigation measures were in place in Brazil at the time indicates that they likely need to be better coordinated, with a clearer chain of command, to make the company’s election integrity efforts more effective.
Despite Meta’s ultimate decision to take down the content, the Board is deeply concerned that even with the civil unrest in Brazil at the time the content was posted, and the widespread proliferation of similar online content in the months and weeks before the January 8 riots, Meta’s content moderators repeatedly assessed this content as non-violating and failed to escalate it for further review despite the contextual cues it contained. These concerns are compounded by the fact that when the Board asked Meta for information on specific election-related claims on its platforms before, during, and after the Brazilian election, the company explained it does not have such prevalence data (see Section 6). The content in this case was finally removed more than two weeks later, after the violating event it had called for had already occurred, and only after the Board brought the case to Meta’s attention. Meta acknowledged the heightened risk of violence in Brazil, first by adopting various risk evaluation measures before, during, and after the content was posted, and later directly to the Board when the company finally decided to remove the content. Yet, the company’s reviewers persistently failed to adequately enforce its Community Standards, particularly the very policy line of the Violence and Incitement Community Standard triggered by a Temporary High-Risk Location designation. The fact that the content was not escalated prior to Board selection, despite the clarity of the potential violation, and that similar content was circulating on Facebook at the time (See Sections 2 and 8.2), indicates that escalation channels are likely insufficiently clear and effective (See Knin cartoon case). It also demonstrates the need for Meta to improve its safeguards around elections. As the Board has noted in previous decisions, it is indispensable that at-scale reviewers possess adequate linguistic and contextual knowledge and are equipped with the necessary tools and channels to escalate potentially violating content. III. Transparency The Board recognizes that Meta made important efforts to safeguard the integrity of the 2022 Brazil elections. In August 2022, when the campaign period formally began, Meta publicly announced its election-related initiatives in the country. The company worked with Brazil’s Superior Electoral Court to add a label to posts about elections on Facebook and Instagram, “directing people to reliable information on the Electoral Justice website.” According to Meta, this led to a “10-fold increase” in visits to the website. The partnership also allowed the Superior Electoral Court to report potentially violating content directly to Meta. Meta hosted training sessions for electoral officials throughout Brazil to explain the company’s Community Standards and how misinformation on Facebook and Instagram is addressed. Meta also prohibited paid advertising “calling into question the legitimacy of the upcoming election.” Further, the company implemented a WhatsApp forwarding limit so that a message can only be forwarded to one WhatsApp group at a time. Finally, Meta reported the number of pieces of content removed under various Community Standards, such as the Violence and Incitement, Hate Speech, and Bullying and Harassment policies, and the total number of click-throughs on election labels that directed users to authoritative information about the Brazil elections.
Nonetheless, when asked by the Board about its election integrity efforts in the context of the 2022 Brazil elections, Meta stated that the company does not adopt any particular metrics for measuring the success of its election integrity efforts generally, beyond reporting data on content takedowns, views, and click-throughs on election labels. The Board also notes that, from Meta’s disclosures in its Transparency Center and exchanges with the Board, it is not entirely clear how the company’s different risk evaluation measures and protocols run (See Section 6 above), independently or in parallel. Meta should clarify the points of contact between these different protocols, better explain how they differ from each other and how exactly the enforcement of content policies is affected by them. A number of public comments (Ekō [PC-11000], Dangerous Speech Project [PC-11010], ModeraLab [PC-11016], Campaign Legal Center [PC-11017], InternetLab [PC-11019], and Coalizão Direitos na Rede [PC-11020]) received by the Board stated that the company’s efforts to safeguard the elections in Brazil were not sufficient. While the Board acknowledges the challenges inherent to moderating content at scale, Meta’s responsibility to prevent, mitigate and address adverse human rights impacts is heightened in electoral and other high-risk contexts, and requires the company to establish effective guardrails against them. The enforcement error in this case does not appear to be an isolated incident. According to Ekō (PC-11000), similar content remained on Facebook even after the January 8 riots. More transparency is needed to assess whether Meta’s measures are adequate and sufficient throughout election contexts. The lack of data available for the Board to review undermined the Board’s ability to adequately assess whether the enforcement errors in this case, and concerns raised by different stakeholders, are symptomatic of a systemic issue in the company’s policies and enforcement practices. It also compromised the Board’s ability to issue more specific recommendations for Meta on how to further improve its election integrity efforts globally. Meta’s current data disclosures, predominantly on content takedowns, do not give a complete picture of the outcome of the election integrity measures it puts in place in a given market. For instance, they do not include enforcement accuracy in relation to important policies in electoral contexts, such as the Violence and Incitement Community Standard, nor the percentage of political ads initially approved by Meta but then found to violate its policies. Performing statistical auditing with metrics like these would allow Meta not only to reverse errors, but also to keep track of how effective its measures are when getting it right is of the utmost importance. Without this kind of information, neither the Board nor the public can evaluate the effectiveness of Meta’s election integrity efforts more broadly. This is important considering that many incidents of political violence often result from or are intensified by election-related disputes, where harmful content remained online to precede or accompany offline violence (See “Myanmar bot” (2021-007-FB-UA) , “ Tigray Communication Affairs Bureau” (2022-006-FB-MR ), and “ Former President Trump’s suspension” (2021-001-FB-FBR )). Therefore, the Board finds that Meta should develop a framework for evaluating the company’s election integrity efforts, and for public reporting on the subject. 
This aims to provide the company with relevant data to improve its content moderation system as a whole and decide how to best employ its resources in electoral contexts. It should also help Meta to effectively draw on local knowledge and to identify and evaluate coordinated online and offline campaigns aimed at disrupting democratic processes. Additionally, this framework should be useful for Meta to set up permanent feedback channels, and to determine measures to be adopted when political violence persists after the formal conclusion of electoral processes. Finally, the Board notes that, as explained above, the articulation between Meta’s different risk evaluation measures and protocols, such as the IPOCs, the Integrity Country Prioritization policy, and the Crisis Policy Protocol (See Section 6 above) in election-related contexts needs to be reviewed and better explained to the public. 8.2 Compliance with Meta’s human rights responsibilities Freedom of expression (Article 19 ICCPR) The right to freedom of opinion and expression is a “central pillar of democratic societies, and a guarantor of free and fair electoral processes, and meaningful and representative public and political discourse” (UN Special Rapporteur on freedom of expression, Research Paper 1/2019 , p. 2). Article 19 of the ICCPR provides for broad protection of expression, especially for political speech. Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimacy, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments. I. Legality (clarity and accessibility of the rules) The principle of legality under international human rights law requires rules that limit expression to be clear and publicly accessible (General Comment No.34, at para. 25). Applied to the rules of social media companies, the UN Special Rapporteur on freedom of expression has said they should be clear and specific ( A/HRC/38/35 , para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance on their enforcement. The Board finds that, as applied to the facts of this case, Meta’s prohibition of content calling for forcible entry into certain high-risk locations is clearly stated, and the exact conditions under which the prohibition is triggered are likewise clear. The case content could be easily understood as violating both by the user and content reviewers, especially in Brazil’s context of civil unrest. Therefore, the Board considers the legality requirement to be satisfied. II. Legitimate aim Restrictions on freedom of expression (Article 19, ICCPR) must pursue a legitimate aim. The Violence and Incitement policy aims to “prevent potential offline harm” by removing content that poses “a genuine risk of physical harm or direct threats to public safety.” This policy serves the legitimate aim of protecting the rights of others, such as the right to life (Article 6, ICCPR), as well as public order and national security (Article 19, para. 3, ICCPR). In electoral contexts, this policy may also pursue the legitimate aim of protecting others’ right to vote and participate in public affairs (Article 25, ICCPR). III. 
Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment No. 34, paras. 33 and 34). As in prior cases involving incitement to violence, the Board finds the six UN Rabat Plan of Action factors relevant to determining the necessity and proportionality of the restriction (see, for example, the Former President Trump’s suspension case). The Board recognizes that in many political environments, challenging the integrity of elections or the electoral system is a legitimate exercise of people’s rights to freedom of expression and protest, even if there are isolated incidents of violence. Due to their political message, such challenges enjoy a heightened level of protection (General Comment No. 37, paras. 19 and 32). The Board notes, however, that this is not the case here. There is a crucial line distinguishing between protected political speech and incitement to violence to overturn the results of a lawful popular election. Based on the factors outlined in the Rabat Plan of Action, the threshold for speech restriction was clearly met in this case. The Board finds that several elements in the case content are relevant to its analysis: the calls to “besiege” Brazil’s Congress as “the last alternative” and to “storm” the “three powers”; the video with a call from a prominent Brazilian general to “hit the streets” and “go to the National Congress … [and the] Supreme Court”; the image of the federal government buildings burning in the background; and the demand for “the source code.” In the wider Brazilian context of Bolsonaro supporters disputing the election results and calling for a military coup, these elements together amount to an unambiguous call to invade and take control of government buildings. The intent of the speaker, the content of the speech and its reach, as well as the likelihood of imminent harm in the political context of Brazil at that time, all justified removing the post. The content was posted in a context of heightened risk of political violence, with widespread ongoing calls on the armed forces to overturn the election results. At the same time, coded slogans were being used to promote protests specifically focused on government buildings in Brasília (See Section 2). In this regard, information the Board received through several public comments, including from ITS Rio – Modera Lab (PC-11016), Coalizão Direitos na Rede (PC-11020), InternetLab (PC-11019), and Ekō (PC-11000), which supported research commissioned by the Board, shows that similar content was circulating widely on social media in the lead-up to the January 8 events. These submissions also underscore the imminent risk of Bolsonaro supporters storming buildings at the Three Powers Plaza and pushing the military to intervene, including through a military coup. Given the above, the Board finds that the removal of the content is consistent with its human rights responsibilities. Removing the content is a necessary and proportionate response to protect the right to life of people, including public officials, and public order in Brazil.
The removal of this and similar pieces of content is also necessary and proportionate to protect Brazilians’ right to vote and participate in public affairs, in a context where attempts to undermine a democratic transition of power were underway. The persistent failure of Meta’s review systems to properly identify the violation in the video, to escalate it for further review, and to remove the case content is a serious concern, which the Board believes Meta will be in a better position to address if the company implements the recommendations below. While Meta took positive steps to improve its election integrity efforts in Brazil, it has not done enough to address the potential misuse of its platforms through coordinated campaigns of the kind seen in Brazil. In this case, the content that was left up and widely shared appeared to be typical of the kind of misinformation and incitement reported to be circulating on Meta’s platforms in Brazil at the time. It further substantiates claims that influential accounts with significant powers of mobilization on Meta’s platforms had played a role in promoting violence. As asserted in public comments the Board received (See Instituto Vero [PC-11015], ModeraLab [PC-11016], InternetLab [PC-11019], Instituto de Referência em Internet e Sociedade [PC-11021]), the review and potential removal of individual pieces of content from Meta’s platforms is insufficient and relatively ineffective when such content is part of an organized and coordinated action aimed at disrupting democratic processes. Election integrity efforts and crisis protocols need to address these broader digital trends. 8.3 Identical content with parallel context The Board expresses concern about the proliferation of content similar to the post under analysis in the months preceding the January 8 riots in Brazil. Given Meta’s repeated failure to identify this piece of content as violating, the Board will pay special attention to Meta's application of its decision to identical content with parallel context that has remained on the company's platforms, except when shared to condemn or raise awareness around the general’s speech and the calls for storming the Three Powers Plaza buildings in Brasília. 9. Oversight Board decision The Oversight Board overturns Meta's original decision to leave up the content. 10. Recommendations A. Enforcement B. Transparency * Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world.
Return to Case Decisions and Policy Advisory Opinions" fb-691qamhj,Former President Trump’s suspension,https://www.oversightboard.com/decision/fb-691qamhj/,"May 5, 2021",2021,,"TopicFreedom of expression, Politics, SafetyCommunity StandardDangerous individuals and organizations","Policies and TopicsTopicFreedom of expression, Politics, SafetyCommunity StandardDangerous individuals and organizations",Upheld,United States,"The Board has upheld Facebook's decision, on 7 January 2021, to restrict then-President Donald Trump's access to posting content on his Facebook Page and Instagram account.",68947,10681,"Upheld May 5, 2021 The Board has upheld Facebook's decision, on 7 January 2021, to restrict then-President Donald Trump's access to posting content on his Facebook Page and Instagram account. Standard Topic Freedom of expression, Politics, Safety Community Standard Dangerous individuals and organizations Location United States Platform Facebook To read this decision as a PDF, click here . The Board has upheld Facebook’s decision on January 7, 2021, to restrict then-President Donald Trump’s access to posting content on his Facebook page and Instagram account. However, it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account. The Board insists that Facebook review this matter to determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform. Facebook must complete its review of this matter within six months of the date of this decision. The Board also made policy recommendations for Facebook to implement in developing clear, necessary, and proportionate policies that promote public safety and respect freedom of expression. About the case Elections are a crucial part of democracy. On January 6, 2021, during the counting of the 2020 electoral votes, a mob forcibly entered the Capitol Building in Washington, D.C. This violence threatened the constitutional process. Five people died and many more were injured during the violence. During these events, then-President Donald Trump posted two pieces of content. At 4:21 pm Eastern Standard Time, as the riot continued, Mr. Trump posted a video on Facebook and Instagram: I know your pain. I know you’re hurt. We had an election that was stolen from us. It was a landslide election, and everyone knows it, especially the other side, but you have to go home now. We have to have peace. We have to have law and order. We have to respect our great people in law and order. We don’t want anybody hurt. It’s a very tough period of time. There’s never been a time like this where such a thing happened, where they could take it away from all of us, from me, from you, from our country. This was a fraudulent election, but we can't play into the hands of these people. We have to have peace. So go home. We love you. You're very special. You've seen what happens. You see the way others are treated that are so bad and so evil. I know how you feel. But go home and go home in peace. At 5:41 pm Eastern Standard Time, Facebook removed this post for violating its Community Standard on Dangerous Individuals and Organizations. At 6:07 pm Eastern Standard Time, as police were securing the Capitol, Mr. 
Trump posted a written statement on Facebook: These are the things and events that happen when a sacred landslide election victory is so unceremoniously viciously stripped away from great patriots who have been badly unfairly treated for so long. Go home with love in peace. Remember this day forever! At 6:15 pm Eastern Standard Time, Facebook removed this post for violating its Community Standard on Dangerous Individuals and Organizations. It also blocked Mr. Trump from posting on Facebook or Instagram for 24 hours. On January 7, after further reviewing Mr. Trump’s posts, his recent communications off Facebook, and additional information about the severity of the violence at the Capitol, Facebook extended the block “indefinitely and for at least the next two weeks until the peaceful transition of power is complete.” On January 20, with the inauguration of President Joe Biden, Mr. Trump ceased to be the president of the United States. On January 21, Facebook announced it had referred this case to the Board. Facebook asked whether it correctly decided on January 7 to prohibit Mr. Trump’s access to posting content on Facebook and Instagram for an indefinite amount of time. The company also requested recommendations about suspensions when the user is a political leader. In addition to the two posts on January 6, Facebook previously found five violations of its Community Standards in organic content posted on the Donald J. Trump Facebook page, three of which were within the last year. While the five violating posts were removed, no account-level sanctions were applied. Key findings The Board found that the two posts by Mr. Trump on January 6 severely violated Facebook’s Community Standards and Instagram’s Community Guidelines. “We love you. You’re very special” in the first post and “great patriots” and “remember this day forever” in the second post violated Facebook’s rules prohibiting praise or support of people engaged in violence. The Board found that, in maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible. At the time of Mr. Trump’s posts, there was a clear, immediate risk of harm and his words of support for those involved in the riots legitimized their violent actions. As president, Mr. Trump had a high level of influence. The reach of his posts was large, with 35 million followers on Facebook and 24 million on Instagram. Given the seriousness of the violations and the ongoing risk of violence, Facebook was justified in suspending Mr. Trump’s accounts on January 6 and extending that suspension on January 7. However, it was not appropriate for Facebook to impose an ‘indefinite’ suspension. It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored. In applying this penalty, Facebook did not follow a clear, published procedure. ‘Indefinite’ suspensions are not described in the company’s content policies. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account. It is Facebook’s role to create necessary and proportionate penalties that respond to severe violations of its content policies. The Board’s role is to ensure that Facebook’s rules and processes are consistent with its content policies, its values and its human rights commitments. 
In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty. The Oversight Board’s decision The Oversight Board has upheld Facebook’s decision to suspend Mr. Trump’s access to post content on Facebook and Instagram on January 7, 2021. However, as Facebook suspended Mr. Trump’s accounts ‘indefinitely,’ the company must reassess this penalty. Within six months of this decision, Facebook must reexamine the arbitrary penalty it imposed on January 7 and decide the appropriate penalty. This penalty must be based on the gravity of the violation and the prospect of future harm. It must also be consistent with Facebook’s rules for severe violations, which must, in turn, be clear, necessary and proportionate. If Facebook decides to restore Mr. Trump’s accounts, the company should apply its rules to that decision, including any changes made in response to the Board’s policy recommendations below. In this scenario, Facebook must address any further violations promptly and in accordance with its established content policies. A minority of the Board emphasized that Facebook should take steps to prevent the repetition of adverse human rights impacts and ensure that users who seek reinstatement after suspension recognize their wrongdoing and commit to observing the rules in the future. When it referred this case to the Board, Facebook specifically requested “observations or recommendations from the Board about suspensions when the user is a political leader.” In a policy advisory statement, the Board made a number of recommendations to guide Facebook’s policies in regard to serious risks of harm posed by political leaders and other influential figures. The Board stated that it is not always useful to draw a firm distinction between political leaders and other influential users, recognizing that other users with large audiences can also contribute to serious risks of harm. While the same rules should apply to all users, context matters when assessing the probability and imminence of harm. When posts by influential users pose a high probability of imminent harm, Facebook should act quickly to enforce its rules. Although Facebook explained that it did not apply its ‘newsworthiness’ allowance in this case, the Board called on Facebook to address widespread confusion about how decisions relating to influential users are made. The Board stressed that considerations of newsworthiness should not take priority when urgent action is needed to prevent significant harm. Facebook should publicly explain the rules that it uses when it imposes account-level sanctions against influential users. These rules should ensure that when Facebook imposes a time-limited suspension on the account of an influential user to reduce the risk of significant harm, it will assess whether the risk has receded before the suspension ends. If Facebook identifies that the user poses a serious risk of inciting imminent violence, discrimination or other lawless action at that time, another time-bound suspension should be imposed when such measures are necessary to protect public safety and proportionate to the risk. The Board noted that heads of state and other high officials of government can have a greater power to cause harm than other people. 
If a head of state or high government official has repeatedly posted messages that pose a risk of harm under international human rights norms, Facebook should suspend the account for a period sufficient to protect against imminent harm. Suspension periods should be long enough to deter misconduct and may, in appropriate cases, include account or page deletion. In other recommendations, the Board proposed that Facebook: *Case summaries provide an overview of the case and do not have precedential value. In this case, Facebook asked the Board to answer two questions: Considering Facebook’s values, specifically its commitment to voice and safety, did it correctly decide on January 7, 2021, to prohibit Donald J. Trump’s access to posting content on Facebook and Instagram for an indefinite amount of time? In addition to the board’s determination on whether to uphold or overturn the indefinite suspension, Facebook welcomes observations or recommendations from the board about suspensions when the user is a political leader. 1. Decision summary The Board upholds Facebook’s decision on January 7, 2021, to restrict then-President Donald Trump’s access to posting content on his Facebook page and Instagram account. However, it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account. The Board insists that Facebook review this matter to determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform. Facebook must complete its review of this matter within six months of the date of this decision. The Board also makes policy recommendations for Facebook to implement in developing clear, necessary, and proportionate policies that promote public safety and respect freedom of expression. 2. Case description Elections are a crucial part of democracy. They allow people throughout the world to govern and to resolve social conflicts peacefully. In the United States of America, the Constitution says the president is selected by counting electoral college votes. On January 6, 2021, during the counting of the 2020 electoral votes, a mob forcibly entered the Capitol where the electoral votes were being counted and threatened the constitutional process. Five people died and many more were injured during the violence. Prior to January 6, then-President Donald Trump had asserted without evidence that the November 2020 presidential election had been stolen. Legal claims brought by Mr. Trump and others of election fraud were rejected in over 70 cases , and the then-Attorney General, after investigation, stated that there had been no fraud “on a scale that could have effected a different outcome in the election.” Nevertheless, Mr. Trump continued to make these unfounded claims, including through using Facebook, and referred to a rally planned for January 6: On the morning of January 6, 2021, Mr. Trump attended a rally near the White House and gave a speech. He continued to make unfounded claims that he won the election and suggested that Vice President Mike Pence should overturn President-elect Joe Biden’s victory, a power Mr. Pence did not have. He also stated, “we will stop the steal,” and “we’re going to the Capitol.” Many of those attending the rally then marched to the U.S. Capitol Building, where they joined other protestors already gathered. 
Many of the protestors attacked Capitol security, violently entered the building, and rioted through the Capitol. Mr. Pence and other Members of Congress were placed at serious risk of targeted violence. Five people died and many were injured. During these events, Mr. Trump posted a video and a statement to his Facebook page (which had at least 35 million followers), and the video was also shared to his Instagram account (which had at least 24 million followers). The posts stated the 2020 election was “stolen” and “stripped away.” The posts also praised and supported those who were at the time rioting inside the Capitol, while also calling on them to remain peaceful. Both the Facebook page and the Instagram account show a blue tick next to the page or account name, meaning that Facebook has confirmed that the account is the “authentic presence of the public figure” it represents. In the one-minute video, posted at 4:21 pm Eastern Standard Time (EST), as the riot continued, Mr. Trump said: I know your pain. I know you’re hurt. We had an election that was stolen from us. It was a landslide election, and everyone knows it, especially the other side, but you have to go home now. We have to have peace. We have to have law and order. We have to respect our great people in law and order. We don’t want anybody hurt. It’s a very tough period of time. There’s never been a time like this where such a thing happened, where they could take it away from all of us, from me, from you, from our country. This was a fraudulent election, but we can't play into the hands of these people. We have to have peace. So go home. We love you. You're very special. You've seen what happens. You see the way others are treated that are so bad and so evil. I know how you feel. But go home and go home in peace. At 5:41 pm EST, Facebook removed this post for violating its Community Standard on Dangerous Individuals and Organizations. Mr. Trump posted the following written statement at 6:07 pm EST, as police were securing the Capitol: These are the things and events that happen when a sacred landslide election victory is so unceremoniously viciously stripped away from great patriots who have been badly unfairly treated for so long. Go home with love in peace. Remember this day forever! At 6:15 pm EST, Facebook removed this post for violating its Community Standard on Dangerous Individuals and Organizations and imposed a 24-hour block on Mr. Trump’s ability to post on Facebook or Instagram. On January 7, 2021 , after further reviewing Mr. Trump's posts, his recent communications off Facebook, and additional information about the severity of the violence at the Capitol, Facebook extended the block “indefinitely and for at least the next two weeks until the peaceful transition of power is complete. ” Facebook cited Mr. Trump's “use of our platform to incite violent insurrection against a democratically elected government."" In the days following January 6, some of the participants in the riot stated publicly that they did so at the behest of the president. One participant was quoted in the Washington Post (January 16, 2021): “I thought I was following my president. . . . He asked us to fly there. He asked us to be there. So I was doing what he asked us to do.” A video captured a rioter on the steps of the Capitol screaming at a police officer, “We were invited here! 
We were invited by the president of the United States!” The District of Columbia declared a public emergency on January 6 and extended it until January 21 that same day. On January 27, the Department of Homeland Security (DHS) issued a National Terrorism Advisory System Bulletin warning of a “heightened threat environment across the United States, which DHS believes will persist in the weeks following the successful Presidential Inauguration.” It stated that “drivers to violence will remain through early 2021 and some [Domestic Violent Extremists] may be emboldened by the January 6, 2021 breach of the U.S. Capitol Building in Washington, D.C. to target elected officials and government facilities.” While the posts that Facebook found to violate its content policies were removed, Mr. Trump’s Facebook page and Instagram account remain publicly accessible on Facebook and Instagram. There is no notice on the page or account of the restrictions that Facebook imposed. On January 21, 2021, Facebook announced that it had referred the case to the Oversight Board. In addition to the two posts on January 6, 2021, Facebook previously found five violations of its Community Standards in organic content posted on the Donald J. Trump Facebook page, three within the last year. The five violating posts were removed, but no account-level sanctions were applied. In response to the Board’s question on whether any strikes had been applied, Facebook said that the page received one strike for a post in August 2020 which violated its COVID-19 Misinformation and Harm policy. Facebook did not explain why other violating content it had removed did not result in strikes. Facebook has a “newsworthiness allowance” which allows content that violates its policies to remain on the platform, if Facebook considers the content “newsworthy and in the public interest.” Facebook asserted that it “has never applied the newsworthiness allowance to content posted by the Trump Facebook page or Instagram account.” Responding to the Board’s questions, Facebook disclosed that “there were 20 pieces of content from Trump’s Facebook Page and Instagram Account that content reviewers or automation initially marked as violating Facebook’s Community Standards but were ultimately determined to not be violations.” Facebook told the Board it applies a “cross check” system to some “high profile” accounts to “minimize the risk of errors in enforcement.” For these accounts, Facebook sends content found to violate its Community Standards for additional internal review. After this escalation, Facebook decides if the content is violating. Facebook told the Board that “it has never had a general rule that is more permissive for content posted by political leaders.” While the same general rules apply, the “cross check” system means that decision-making processes are different for some “high profile” users. 3. Authority and scope The Oversight Board has the power to review a broad set of questions referred by Facebook (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). Decisions on these questions are binding and may include policy advisory statements with recommendations. These recommendations are non-binding but Facebook must respond to them (Charter Article 3, Section 4). The Board is an independent grievance mechanism to address disputes in a transparent and principled manner. 4. Relevant standards Under the Oversight Board’s Charter, it must consider all cases in light of the following standards: I. 
Facebook’s content policies: Facebook has Community Standards that describe what users are not allowed to post on Facebook, and Instagram has Community Guidelines that describe what users are not allowed to post on Instagram. Facebook’s Community Standard on Dangerous Individuals and Organizations prohibits “content that praises, supports, or represents events that Facebook designates as terrorist attacks, hate events, mass murders or attempted mass murders, serial murders, hate crimes and violating events.” It also prohibits “content that praises any of the above organizations or individuals or any acts committed by them,"" referring to hate organizations and criminal organizations, among others. Instagram’s Community Guidelines state that “Instagram is not a place to support or praise terrorism, organized crime, or hate groups,” and provide a link to the Dangerous Individuals and Organizations Community Standard. Facebook’s Community Standard on Violence and Incitement states it “remove[s] content, disable[s] accounts, and work[s] with law enforcement when [it] believe[s] there is a genuine risk of physical harm or direct threats to public safety.” The Standard specifically prohibits: “Statements advocating for high-severity violence” and “Any content containing statements of intent, calls for action, conditional or aspirational statements, or advocating for violence due to voting, voter registration or the administration or outcome of an election.” It also prohibits “Misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm.” Instagram’s Community Guidelines state that Facebook removes “content that contains credible threats” and “serious threats of harm to public and personal safety aren’t allowed.” Both sections include links to the Violence and Incitement Community Standard. Facebook’s Terms of Service state that Facebook “may suspend or permanently disable access” to an account if it determines that a user has “clearly, seriously, or repeatedly” breached its terms or policies. The introduction to the Community Standards notes that “consequences for violating our Community Standards vary depending on the severity of the violation and the person's history on the platform.” Instagram’s Terms of Use state that Facebook “can refuse to provide or stop providing all or part of the Service to you (including terminating or disabling your access to the Facebook Products and Facebook Company Products) immediately to protect our community or services, or if you create risk or legal exposure for us, violate these Terms of Use or our policies (including our Instagram Community Guidelines).” Instagram’s Community Guidelines state “Overstepping these boundaries may result in deleted content, disabled accounts, or other restrictions.” II. Facebook’s values: Facebook has five values outlined in the introduction to the Community Standards which it claims guide what is allowed on its platforms. Three of these values are “Voice,” “Safety,” and “Dignity.” Facebook describes “Voice” as wanting “people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. 
[…] Our commitment to expression is paramount, but we recognize that the Internet creates new and increased opportunities for abuse.” Facebook describes “Safety” as Facebook’s commitment to “mak[e] Facebook a safe place” and states that “Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.” Facebook describes “Dignity” as its belief that “all people are equal in dignity and rights” and states that it “expect[s] that people will respect the dignity of others and not harass or degrade others.” III. Human rights standards: On March 16, 2021, Facebook announced its corporate human rights policy , where it commemorated its commitment to respecting rights in accordance with the UN Guiding Principles on Business and Human Rights (UNGPs). The UNGPs, endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. As a global corporation committed to the UNGPs, Facebook must respect international human rights standards wherever it operates. The Oversight Board is called to evaluate Facebook’s decision in view of international human rights standards as applicable to Facebook. The Board analyzed Facebook’s human rights responsibilities in this case by considering human rights standards including: 5. Content creator’s statement When Facebook refers a case to the Board, the Board gives the person responsible for the content the opportunity to submit a statement. In this case, a statement to the Board was submitted on Mr. Trump’s behalf through the American Center for Law and Justice and a page administrator. This statement requests that the Board “reverse Facebook’s indefinite suspension of the Facebook account of former U.S. President Donald Trump.” The statement discusses the posts removed from Facebook and Instagram on January 6, 2021, as well as Mr. Trump’s speech earlier that day. It states that the posts “called for those present at and around the Capitol that day to be peaceful and law abiding, and to respect the police” and that it is “inconceivable that either of those two posts can be viewed as a threat to public safety, or an incitement to violence.” It also states that “It is stunningly clear that in his speech there was no call to insurrection, no incitement to violence, and no threat to public safety in any manner,” and describes a “total absence of any serious linkage between the Trump speech and the Capitol building incursion.” The statement also discusses Facebook’s reasons for imposing the restrictions. It states that as “nothing Mr. Trump said to the rally attendees could reasonably be interpreted as a threat to public safety,” Facebook’s basis for imposing restrictions cannot be safety-related. 
It also states that “any content suspected of impacting safety must have a direct and obvious link to actual risk of violence.” The statement further describes that the terms “fight” or “fighting” used during the rally speech “were linked to a call for lawful political and civic engagement,” and concludes “those words were neither intended, nor would be believed by any reasonable observer or listener to be a call for violent insurrection or lawlessness.” The statement also addresses the ""Capitol incursion.” It states that ""all genuine Trump political supporters were law-abiding"" and that the incursion was “certainly influenced, and most probably ignited by outside forces.” It describes a federal complaint against members of the Oath Keepers, and states the group was “in no way associated with Mr. Trump or his political organization.” It then states the Oath Keepers were “parasitically using the Trump rally and co-opting the issue of the Electoral College debate for their own purposes.” The statement argues that Mr. Trump’s rally speech did not violate the Dangerous Organizations and Individuals Community Standard because “none of those categories fit this case” and “Mr. Trump’s political speech on January 6th never ‘proclaim[ed] a violent mission,’ a risk that lies at the very center of the Facebook policy.” It also states the Violence and Incitement Community Standard “fail[s] to support the suspension of the Trump Facebook account” because the two posts “merely called for peace and safety” and “none of the words in Mr. Trump’s speech, when considered in their true context, could reasonably be construed as incitement to violence or lawlessness.” It also cites Facebook’s referral to the Board mentioning the “peaceful transfer of power” and states this “new ad hoc rule on insuring [sic] peaceful governmental transitions is not just overly vague, it was non-existent until after the events that Facebook used to justify it.” The statement also argues that the Board should ""defer to American law in this appeal” and discusses the international law standards for restricting the right to freedom of expression, of legality, legitimate aim, and necessity and proportionality, with each element interpreted by reference to United States constitutional law. On legality, the statement cites protection of hyperbole and false statements of fact and Facebook’s importance to public discourse. It states that “employing content decisions based on what seems ‘reasonable,’ or how a ‘reasonable person’ would react to that content is not enough” and Facebook should ""consider a much higher bar.” It states that the Supreme Court requires strict scrutiny for laws that burden political speech and that Facebook has market dominance. It also discusses constitutional standards for incitement to violence. On legitimate aim, it states that preserving public safety is a legitimate aim, but Mr. Trump’s speech did not present safety concerns. On necessity and proportionality, it denies the validity of the restrictions and states the penalty was disproportionate. The statement concludes with suggestions for the Board’s policy recommendations on suspensions when the user is a political leader. 
It argues that the Board should “defer to the legal principles of the nation state in which the leader is, or was governing.” It then described multiple exceptions to this deference based on assessments of rule of law, guarantees of rights, processes for law making, processes of judicial review, and the existence of relevant legal principles in particular countries. 6. Facebook’s explanation of its decision For each case, Facebook provides an explanation of its actions to the Board, and the Board asks Facebook questions to clarify further information it requires to make its decision. In this case, Facebook states that it removed the two pieces of content posted on January 6, 2021, for violating the Dangerous Individuals and Organizations Community Standard. Specifically, the content was removed for violating “its policy prohibiting praise, support, and representation of designated Violent Events.” Facebook also stated it contained “a violation of its Dangerous Individuals and Organizations policy prohibiting praise of individuals who have engaged in acts of organized violence.” The company notes that its Community Standards clearly prohibit “content that expresses support or praise for groups, leaders or individuals involved in” activities such as terrorism, organized violence or criminal activity, and that this includes organized assault as well as planned acts of violence attempting to cause injury to a person with the intent to intimidate a government in order to achieve a political aim. Facebook notes that its assessment reflected both the letter of its policy and the surrounding context in which the statement was made, including the ongoing violence at the Capitol. It says that while Mr. Trump did ask people in his video to “go home in peace,” he also reiterated allegations that the election was fraudulent and suggested a common purpose in saying, “I know how you feel.” Given the ongoing instability at the time of his comments and the overall tenor of his words, Facebook concludes that “We love you. You’re very special” was intended as praise of people who were breaking the law by storming the Capitol. It also believes the second post to contain praise of the event, as Mr. Trump referred to those who stormed the Capitol as “great patriots,” and urged people to “[r]emember this day forever.” Facebook notes that it regularly limits the functionality of Facebook pages and profiles and Instagram accounts which repeatedly or severely violate its policies. Where it concludes that there is an “urgent and serious safety risk,” Facebook “goes beyond its standard enforcement protocols to take stronger actions against users and pages engaged in violating behavior.” In such cases, Facebook states that its enforcement actions remain grounded in its Community Standards and Instagram’s Community Guidelines. It states that it “evaluates all available enforcement tools, including permanent bans, before deciding which is the most appropriate to employ in the unique circumstance. In cases where Facebook must make an emergency decision that has widespread interest, it endeavors to share its decision and its reasoning with the public, often through a post in its Newsroom.” Facebook states that it usually does not block the ability of pages to post or interact with content, but removes pages which severely or repeatedly violate Facebook’s policies. 
However, Facebook notes that its enforcement protocols for profiles, including feature blocks, may also be applied to Facebook pages when they are used in a person’s singular voice, as with the Donald J. Trump page. In this case, Facebook states that, in line with its standard enforcement protocols, it initially imposed a 24-hour block on the ability to post from the Facebook page and Instagram account. After further assessing the evolving situation and emerging details of the violence at the Capitol, Facebook concluded that the 24-hour ban was not sufficient to address “the risk that Trump would use his Facebook and Instagram presence to contribute to a risk of further violence.” Facebook notes that it maintained the indefinite suspension after Mr. Biden’s inauguration partly due to analysis that violence connected to Mr. Trump had not passed. It cites National Terrorism Advisory System Bulletin issued on January 27 by the Department of Homeland Security (DHS) that described a “heightened threat environment across the United States, which DHS believes will persist in the weeks following the successful Presidential Inauguration” and that “drivers to violence will remain through early 2021 and some [Domestic Violent Extremists] may be emboldened by the January 6, 2021, breach of the U.S. Capitol Building in Washington, D.C. to target elected officials and government facilities.” Facebook notes that even when the risk of violence has diminished, it may be appropriate to permanently block Mr. Trump’s ability to post based on the seriousness of his violations on January 6, his continued insistence that Mr. Biden’s election was fraudulent, his sharing of other misinformation, and the fact that he is no longer president. Facebook states that its decision was “informed by Article 19 of the ICCPR, and U.N. General Comment No. 34 on freedom of expression, which permits necessary and proportionate restrictions of freedom of expression in situations of public emergency that threatens the life of the nation. In this case, the District of Columbia was operating under a state of emergency that had been declared to protect the U.S. Capitol complex.” Facebook notes that it also took into account the six contextual factors from the Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred. The Rabat Plan of Action was developed by experts with the support of the United Nations to guide states in addressing when advocacy of racial, religious or national hatred that incites discrimination, hostility or violence is so serious that resort to state-imposed criminal sanctions is appropriate, while protecting freedom of expression, in line with states’ obligations under Article 19 and Article 20, para. 2 of the ICCPR. Facebook argues that the events of January 6 represented an unprecedented threat to the democratic processes and constitutional system of the United States. While Facebook asserts that it strives to act proportionately and accountably in curtailing public speech, given the unprecedented and volatile circumstances, Facebook believes it should retain operational flexibility to take further action including a permanent ban. In this case, the Board asked Facebook 46 questions, and Facebook declined to answer seven entirely, and two partially. The questions that Facebook did not answer included questions about how Facebook’s news feed and other features impacted the visibility of Mr. 
Trump’s content; whether Facebook has researched, or plans to research, those design decisions in relation to the events of January 6, 2021; and information about violating content from followers of Mr. Trump’s accounts. The Board also asked questions related to the suspension of other political figures and removal of other content; whether Facebook had been contacted by political officeholders or their staff about the suspension of Mr. Trump’s accounts; and whether account suspension or deletion impacts the ability of advertisers to target the accounts of followers. Facebook stated that this information was not reasonably required for decision-making in accordance with the intent of the Charter; was not technically feasible to provide; was covered by attorney/client privilege; and/or could not or should not be provided because of legal, privacy, safety, or data protection concerns. 7. Third-party submissions The Oversight Board received 9,666 public comments related to this case. Eighty of the comments were submitted from Asia Pacific and Oceania, seven from Central and South Asia, 136 from Europe, 23 from Latin America and the Caribbean, 13 from the Middle East and North Africa, 19 from Sub-Saharan Africa, and 9,388 from the United States and Canada. The submissions cover the following themes, which include issues that the Board specifically asked about in its call for public comments: To read public comments submitted for this case, please click here . 8. Oversight Board analysis 8.1 Compliance with content policies The Board agrees with Facebook’s decision that the two posts by Mr. Trump on January 6 violated Facebook’s Community Standards and Instagram’s Community Guidelines. Facebook’s Community Standard on Dangerous Individuals and Organizations says that users should not post content “expressing support or praise for groups, leaders, or individuals involved in” violating events. Facebook designated the storming of the Capitol as a “violating event” and noted that it interprets violating events to include designated “violent” events. At the time the posts were made, the violence at the Capitol was underway. Both posts praised or supported people who were engaged in violence. The words “We love you. You’re very special” in the first post and “great patriots” and “remember this day forever” in the second post amounted to praise or support of the individuals involved in the violence and the events at the Capitol that day. The Board notes that other Community Standards may have been violated in this case, including the Standard on Violence and Incitement. Because Facebook’s decision was not based on this Standard and an additional finding of violation would not affect the outcome of this proceeding, a majority of the Board refrains from reaching any judgment on this alternative ground. The decision upholding Facebook’s imposition of restrictions on Mr. Trump’s accounts is based on the violation of the Dangerous Individuals and Organizations Community Standard. A minority of the Board would consider the additional ground and find that the Violence and Incitement Standard was violated. 
The minority would hold that, read in context, the posts stating the election was being “stolen from us” and “so unceremoniously viciously stripped,” coupled with praise of the rioters, qualify as “calls for actions,” “advocating for violence” and “misinformation and unverifiable rumors that contribute[d] to the risk of imminent violence or physical harm” prohibited by the Violence and Incitement Community Standard. The Board finds that the two posts severely violated Facebook policies and concludes that Facebook was justified in restricting the account and page on January 6 and 7. The user praised and supported people involved in a continuing riot where people died, lawmakers were put at serious risk of harm, and a key democratic process was disrupted. Moreover, at the time when these restrictions were extended on January 7, the situation was fluid and serious safety concerns remained. Given the circumstances, restricting Mr. Trump’s access to Facebook and Instagram past January 6 and 7 struck an appropriate balance in light of the continuing risk of violence and disruption. As discussed more fully below, however, Facebook’s decision to make those restrictions “indefinite” finds no support in the Community Standards and violates principles of freedom of expression. The Board notes that there is limited detailed public information on the cross check system and newsworthiness allowance. Although Facebook states that the same rules apply to high-profile accounts and regular accounts, different processes may lead to different substantive outcomes. Facebook told the Board that it did not apply the newsworthiness allowance to the posts at issue in this case. Unfortunately, the lack of transparency regarding these decision-making processes appears to contribute to perceptions that the company may be unduly influenced by political or commercial considerations. 8.2 Compliance with Facebook’s values The analysis above is consistent with Facebook's stated values of ""Voice"" and ""Safety."" For the reasons stated in this opinion, in this case the protection of public order justified limiting freedom of expression. A minority believes it is particularly important to emphasize that “Dignity” was also relevant. Facebook relates “Dignity” to equality and to the expectation that people should not “harass or degrade” others. The minority considers below that previous posts on the platform by Mr. Trump contributed to racial tension and exclusion and that this context was key to understanding the impact of Mr. Trump’s content. Having dealt with this case on other grounds, the majority does not comment on these posts. 8.3 Compliance with Facebook’s human rights responsibilities The Board’s decisions do not concern the human rights obligations of states or application of national laws, but focus on Facebook’s content policies, its values and its human rights responsibilities as a business. The UN Guiding Principles on Business and Human Rights, which Facebook has endorsed (See Section 4), establish what businesses should do on a voluntary basis to meet these responsibilities. This includes avoiding causing or contributing to human rights harms, in part through identifying possible and actual harms and working to prevent or address them (UNGP Principles 11, 13, 15, 18). These responsibilities extend to harms caused by third parties (UNGP Principle 19). Facebook has become a virtually indispensable medium for political discourse, and especially so in election periods.
It has a responsibility both to allow political expression and to avoid serious risks to other human rights. Facebook, like other digital platforms and media companies, has been heavily criticized for distributing misinformation and amplifying controversial and inflammatory material. Facebook’s human rights responsibilities must be understood in the light of those sometimes competing considerations. The Board analyzes Facebook’s human rights responsibilities through international standards on freedom of expression and the rights to life, security, and political participation. Article 19 of the ICCPR sets out the right to freedom of expression. Article 19 states that “everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.” The Board does not apply the First Amendment of the U.S. Constitution, which does not govern the conduct of private companies. However, the Board notes that in many relevant respects the principles of freedom of expression reflected in the First Amendment are similar or analogous to the principles of freedom of expression in ICCPR Article 19. Political speech receives high protection under human rights law because of its importance to democratic debate. The UN Human Rights Committee provided authoritative guidance on Article 19 ICCPR in General Comment No. 34, in which it states that “free communication of information and ideas about public and political issues between citizens, candidates and elected representatives is essential” (para. 20). Facebook’s decision to suspend Mr. Trump’s Facebook page and Instagram account has freedom of expression implications not only for Mr. Trump but also for the rights of people to hear from political leaders, whether they support them or not. Although political figures do not have a greater right to freedom of expression than other people, restricting their speech can harm the rights of other people to be informed and participate in political affairs. However, international human rights standards expect state actors to condemn violence (Rabat Plan of Action), and to provide accurate information to the public on matters of public interest, while also correcting misinformation (2020 Joint Statement of international freedom of expression monitors on COVID-19). International law allows for expression to be limited when certain conditions are met. Any restrictions must meet three requirements – rules must be clear and accessible, they must be designed for a legitimate aim, and they must be necessary and proportionate to the risk of harm. The Board uses this three-part test to analyze Facebook’s actions when it restricts content or accounts. First Amendment principles under U.S. law also insist that restrictions on freedom of speech imposed through state action may not be vague, must be for important governmental reasons and must be narrowly tailored to the risk of harm. I. Legality (clarity and accessibility of the rules) In international law on freedom of expression, the principle of legality requires that any rule used to limit expression is clear and accessible. People must be able to understand what is allowed and what is not allowed. 
Equally important, rules must be sufficiently clear to provide guidance to those who make decisions on limiting expression, so that these rules do not confer unfettered discretion, which can result in selective application of the rules. In this case, these rules are Facebook’s Community Standards and Instagram’s Community Guidelines, which set out what people cannot post, together with Facebook’s policies on when it can restrict access to Facebook and Instagram accounts. The clarity of the Standard against praise and support of Dangerous Individuals and Organizations leaves much to be desired, as the Board noted in a prior decision (case 2020-005-FB-UA). The UN Special Rapporteur on Freedom of Expression has also raised concerns about the vagueness of the Dangerous Individuals and Organizations Standard (A/HRC/38/35, para 26, footnote 67). As the Board has noted previously in case 2020-003-FB-UA, there may be times in which certain wording may raise legality concerns, but as applied to a particular case those concerns are not warranted. Any vagueness under the terms of the Standard does not render its application to the circumstances of this case doubtful. The January 6 riot at the Capitol fell squarely within the types of harmful events set out in Facebook’s policy, and Mr. Trump’s posts praised and supported those involved at the very time the violence was going on, and while Members of Congress were calling on him for help. In relation to these facts, Facebook’s policies gave adequate notice to the user and guidance to those enforcing the rule. With regard to penalties for violations, the Community Standards and related information about account restrictions are published in various sources, including the Terms of Service, the introduction to the Community Standards, the Community Standard on Account Integrity and Authentic Identity, the Facebook Newsroom, and the Facebook Help Center. As noted in case 2020-006-FB-FBR, the Board reiterates that the patchwork of applicable rules makes it difficult for users to understand why and when Facebook restricts accounts, and raises legality concerns. While the Board is satisfied that the Dangerous Individuals and Organizations Standard is sufficiently clear under the circumstances of this case to satisfy clarity and vagueness norms of freedom of speech, Facebook’s imposition of an “indefinite” restriction is vague and uncertain. “Indefinite” restrictions are not described in the Community Standards and it is unclear what standards would trigger this penalty or what standards will be employed to maintain or remove it. Facebook provided no information about any prior imposition of indefinite suspensions in any other cases. The Board recognizes the necessity of some discretion on Facebook’s part to suspend accounts in urgent situations like that of January 6, but users cannot be left in a state of uncertainty for an indefinite time. The Board rejects Facebook’s request for it to endorse indefinite restrictions, imposed and lifted without clear criteria. Appropriate limits on discretionary powers are crucial to distinguish the legitimate use of discretion from possible scenarios around the world in which Facebook may unduly silence speech not linked to harm or delay action critical to protecting people. II. Legitimate aim The requirement of legitimate aim means that any measure restricting expression must be for a purpose listed in Article 19, para. 3 of the ICCPR, and this list of aims is exhaustive.
Legitimate aims include the protection of public order, as well as respect for the rights of others, including the rights to life, security, and to participate in elections and to have the outcome respected and implemented. An aim would not be legitimate where used as a pretext for suppressing expression, for example, to cite the aims of protecting security or the rights of others to censor speech simply because it is disagreeable or offensive (General Comment No. 34, paras. 11, 30, 46, 48). Facebook’s policy on praising and supporting individuals involved in “violating events,” violence or criminal activity was in accordance with the aims above. III. Necessity and proportionality The requirement of necessity and proportionality means that any restriction on expression must, among other things, be the least intrusive way to achieve a legitimate aim (General Comment No. 34, para. 34). The Board believes that, where possible, Facebook should use less restrictive measures to address potentially harmful speech and protect the rights of others before resorting to content removal and account restriction. At a minimum, this would mean developing effective mechanisms to avoid amplifying speech that poses risks of imminent violence, discrimination, or other lawless action, where possible and proportionate, rather than banning the speech outright. Facebook stated to the Board that it considered Mr. Trump’s “repeated use of Facebook and other platforms to undermine confidence in the integrity of the election (necessitating repeated application by Facebook of authoritative labels correcting the misinformation) represented an extraordinary abuse of the platform.” The Board sought clarification from Facebook about the extent to which the platform’s design decisions, including algorithms, policies, procedures and technical features, amplified Mr. Trump’s posts after the election and whether Facebook had conducted any internal analysis of whether such design decisions may have contributed to the events of January 6. Facebook declined to answer these questions. This makes it difficult for the Board to assess whether less severe measures, taken earlier, may have been sufficient to protect the rights of others. The crucial question is whether Facebook’s decision to restrict access to Mr. Trump’s accounts on January 6 and 7 was necessary and proportionate to protect the rights of others. To understand the risk posed by the January 6 posts, the Board assessed Mr. Trump’s Facebook and Instagram posts and off-platform comments since the November election. In maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible. On January 6, Mr. Trump’s words of support to those involved in the riot legitimized their violent actions. Although the messages included a seemingly perfunctory call for people to act peacefully, this was insufficient to defuse the tensions and remove the risk of harm that his supporting statements contributed to. It was appropriate for Facebook to interpret Mr. Trump’s posts on January 6 in the context of escalating tensions in the United States and Mr. Trump’s statements in other media and at public events. 
As part of its analysis, the Board drew upon the six factors from the Rabat Plan of Action (context, speaker, intent, content and form, extent of dissemination, and likelihood of harm, including imminence) to assess the capacity of speech to create a serious risk of inciting discrimination, violence, or other lawless action. Analyzing these factors, the Board concludes that the violation in this case was severe in terms of its human rights harms. Facebook’s imposition of account-level restrictions on January 6 and the extension of those restrictions on January 7 was necessary and proportionate. For the minority of the Board, while a suspension of an extended duration or permanent disablement could be justified on the basis of the January 6 events alone, the proportionality analysis should also be informed by Mr. Trump’s use of Facebook’s platforms prior to the November 2020 presidential election. In particular, the minority noted the May 28, 2020, post “when the looting starts, the shooting starts,” made in the context of protests for racial justice, as well as multiple posts referencing the “China Virus.” Facebook has made commitments to respect the right to non-discrimination (Article 2, para. 1 ICCPR, Article 2 ICERD) and, in line with the requirements for restrictions on the right to freedom of expression (Article 19, para. 3 ICCPR), to prevent the use of its platforms for advocacy of racial or national hatred constituting incitement to hostility, discrimination or violence (Article 20 ICCPR, Article 4 ICERD). The frequency, quantity and extent of harmful communications should inform the Rabat incitement analysis (Rabat Plan of Action, para. 29), in particular the factors on context and intent. For the minority, this broader analysis would be crucial to inform Facebook’s assessment of a proportionate penalty on January 7, which should serve as both a deterrent to other political leaders and, where appropriate, an opportunity for rehabilitation. Further, if Facebook opted to impose a time-limited suspension, the risk-analysis required prior to reinstatement should also take into account these factors. Having dealt with this case on other grounds, the majority does not comment on these matters. 9. Oversight Board decision Facebook’s decision on January 6 to impose restrictions on Mr. Trump’s accounts was justified. The posts in question violated the rules of Facebook and Instagram that prohibit support or praise of violating events, including the riot that was then underway at the U.S. Capitol. Given the seriousness of the violations and the ongoing risk of violence, Facebook was justified in imposing account-level restrictions and extending those restrictions on January 7. However, it was not appropriate for Facebook to impose an indefinite suspension. Facebook did not follow a clear published procedure in this case. Facebook’s normal account-level penalties for violations of its rules are either to impose a time-limited suspension or to permanently disable the user’s account. The Board finds that it is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored. It is Facebook’s role to create and communicate necessary and proportionate penalties that it applies in response to severe violations of its content policies. The Board’s role is to ensure that Facebook’s rules and processes are consistent with its content policies, its values, and its commitment to respect human rights.
In applying an indeterminate and standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty. Facebook must, within six months of this decision, reexamine the arbitrary penalty it imposed on January 7 and decide the appropriate penalty. This penalty must be based on the gravity of the violation and the prospect of future harm. It must also be consistent with Facebook’s rules for severe violations which must in turn be clear, necessary, and proportionate. If Facebook determines that Mr. Trump’s accounts should be restored, Facebook should apply its rules to that decision, including any modifications made pursuant to the policy recommendations below. Also, if Facebook determines to return him to the platform, it must address any further violations promptly and in accordance with its established content policies. A minority believes that it is important to outline some minimum criteria that reflect the Board’s assessment of Facebook’s human rights responsibilities. The majority prefers instead to provide this guidance as a policy recommendation. The minority explicitly notes that Facebook’s responsibilities to respect human rights include facilitating the remediation of adverse human rights impacts it has contributed to (UNGPs, Principle 22). Remedy is a fundamental component of the UNGP ‘Protect, Respect, Remedy’ framework, reflecting international human rights law more broadly (Article 2, para. 3, ICCPR, as interpreted by the Human Rights Committee in General Comment No. 31, paras. 15 - 18). To fulfil its responsibility to guarantee that the adverse impacts are not repeated, Facebook must assess whether reinstating Mr. Trump’s accounts would pose a serious risk of inciting imminent discrimination, violence or other lawless action. This assessment of risk should be based on the considerations the Board detailed in the analysis of necessity and proportionality in Section 8.3.III above, including context and conditions on and off Facebook and Instagram. Facebook should, for example, be satisfied that Mr. Trump has ceased making unfounded claims about election fraud in the manner that justified suspension on January 6. Facebook’s enforcement procedures aim to be rehabilitative, and the minority believes that this aim accords well with the principle of satisfaction in human rights law. A minority of the Board emphasizes that Facebook’s rules should ensure that users who seek reinstatement after suspension recognize their wrongdoing and commit to observing the rules in the future. In this case, the minority suggests that, before Mr. Trump’s account can be restored, Facebook must also aim to ensure the withdrawal of praise or support for those involved in the riots. 10. Policy advisory statement The Board acknowledges the difficult issues raised by this case and is grateful for the many thoughtful and engaged public comments that it received. 
In its referral of this matter to the Oversight Board, Facebook specifically requested “observations or recommendations from the board about suspensions when the user is a political leader.” The Board asked Facebook to clarify its understanding of the term “political leader”; Facebook explained that it sought to cover “elected or appointed government officials and people who are actively running for office in an upcoming election, including a short period of time after the election if the candidate is not elected” but not all state actors. Based on the Board’s analysis of this case, it confines its guidance to issues of public safety. The Board believes that it is not always useful to draw a firm distinction between political leaders and other influential users. It is important to recognize that other users with large audiences can also contribute to serious risks of harm. The same rules should apply to all users of the platform, but context matters when assessing issues of causality and the probability and imminence of harm. What is important is the degree of influence that a user has over other users. When posts by influential users pose a high probability of imminent harm, as assessed under international human rights standards, Facebook should take action to enforce its rules quickly. Facebook must assess posts by influential users in context according to the way they are likely to be understood, even if their incendiary message is couched in language designed to avoid responsibility, such as superficial encouragement to act peacefully or lawfully. Facebook used the six contextual factors in the Rabat Plan of Action in this case, and the Board thinks this is a useful way to assess the contextual risks of potentially harmful speech. The Board stresses that time is of the essence in such situations; taking action before influential users can cause significant harm should take priority over newsworthiness and other values of political communication. While all users should be held to the same content policies, there are unique factors that must be considered in assessing the speech of political leaders. Heads of state and other high officials of government can have a greater power to cause harm than other people. Facebook should recognize that posts by heads of state and other high officials of government can carry a heightened risk of encouraging, legitimizing, or inciting violence, either because their high position of trust imbues their words with greater force and credibility or because their followers may infer they can act with impunity. At the same time, it is important to protect the rights of people to hear political speech. Nonetheless, if a head of state or high government official has repeatedly posted messages that pose a risk of harm under international human rights norms, Facebook should suspend the account for a determinate period sufficient to protect against imminent harm. Periods of suspension should be long enough to deter misconduct and may, in appropriate cases, include account or page deletion. Restrictions on speech are often imposed by or at the behest of powerful state actors against dissenting voices and members of political oppositions. Facebook must resist pressure from governments to silence their political opposition. When assessing potential risks, Facebook should be particularly careful to consider the relevant political context.
In evaluating political speech from highly influential users, Facebook should rapidly escalate the content moderation process to specialized staff who are familiar with the linguistic and political context and insulated from political and economic interference and undue influence. This analysis should examine the conduct of highly influential users off the Facebook and Instagram platforms to adequately assess the full relevant context of potentially harmful speech. Further, Facebook should ensure that it dedicates adequate resourcing and expertise to assess risks of harm from influential accounts globally. Facebook should publicly explain the rules that it uses when it imposes account-level sanctions against influential users. These rules should ensure that when Facebook imposes a time-limited suspension on the account of an influential user to reduce the risk of significant harm, it will assess whether the risk has receded before the suspension term expires. If Facebook identifies that the user poses a serious risk of inciting imminent violence, discrimination, or other lawless action at that time, another time-bound suspension should be imposed when such measures are necessary to protect public safety and proportionate to the risk. When Facebook implements special procedures that apply to influential users, these should be well documented. It was unclear whether Facebook applied different standards in this case, and the Board heard many concerns about the potential application of the newsworthiness allowance. It is important that Facebook address this lack of transparency and the confusion it has caused. Facebook should produce more information to help users understand and evaluate the process and criteria for applying the newsworthiness allowance. Facebook should clearly explain how the newsworthiness allowance applies to influential accounts, including political leaders and other public figures. In regard to cross check review, Facebook should clearly explain the rationale, standards, and processes of review, including the criteria to determine which pages and accounts are selected for inclusion. Facebook should report on the relative error rates and thematic consistency of determinations made through the cross check process compared with ordinary enforcement procedures. When Facebook’s platform has been abused by influential users in a way that results in serious adverse human rights impacts, it should conduct a thorough investigation into the incident. Facebook should assess what influence it had and assess what changes it could enact to identify, prevent, mitigate, and account for adverse impacts in future. In relation to this case, Facebook should undertake a comprehensive review of its potential contribution to the narrative of electoral fraud and the exacerbated tensions that culminated in the violence in the United States on January 6, 2021. This should be an open reflection on the design and policy choices that Facebook has made that may enable its platform to be abused. Facebook should carry out this due diligence, implement a plan to act upon its findings, and communicate openly about how it addresses adverse human rights impacts it was involved with. 
In cases where Facebook or Instagram users may have engaged in atrocity crimes or grave human rights violations, as well as incitement under Article 20 of the ICCPR, the removal of content and disabling of accounts, while potentially reducing the risk of harm, may also undermine accountability efforts, including by removing evidence. Facebook has a responsibility to collect, preserve and, where appropriate, share information to assist in the investigation and potential prosecution of grave violations of international criminal, human rights and humanitarian law by competent authorities and accountability mechanisms. Facebook’s corporate human rights policy should make clear the protocols the company has in place in this regard. The policy should also make clear how information previously public on the platform can be made available to researchers conducting investigations that conform with international standards and applicable data protection laws. This case highlights further deficiencies in Facebook’s policies that it should address. In particular, the Board finds that Facebook’s penalty system is not sufficiently clear to users and does not provide adequate guidance to regulate Facebook’s exercise of discretion. Facebook should explain in its Community Standards and Guidelines its strikes and penalties process for restricting profiles, pages, groups and accounts on Facebook and Instagram in a clear, comprehensive, and accessible manner. These policies should provide users with sufficient information to understand when strikes are imposed (including any applicable exceptions or allowances) and how penalties are calculated. Facebook should also provide users with accessible information on how many violations, strikes, and penalties have been assessed against them, as well as the consequences that will follow future violations. In its transparency reporting, Facebook should include numbers of profile, page, and account restrictions, including the reason and manner in which enforcement action was taken, with information broken down by region and country. Finally, the Board urges Facebook to develop and publish a policy that governs its response to crises or novel situations where its regular processes would not prevent or avoid imminent harm. While these situations cannot always be anticipated, Facebook’s guidance should set appropriate parameters for such actions, including a requirement to review its decision within a fixed time. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members." fb-6okjpns3,Cambodian prime minister,https://www.oversightboard.com/decision/fb-6okjpns3/,"June 29, 2023",2023,,"TopicElections, Politics, ProtestsCommunity StandardCoordinating harm and publicizing crime, Violence and incitement","Policies and TopicsTopicElections, Politics, ProtestsCommunity StandardCoordinating harm and publicizing crime, Violence and incitement",Overturned,Cambodia,The Oversight Board has overturned Meta’s decision to leave up a video on Facebook in which Cambodian Prime Minister Hun Sen threatens his political opponents with violence.,59711,9305,"Overturned June 29, 2023 The Oversight Board has overturned Meta’s decision to leave up a video on Facebook in which Cambodian Prime Minister Hun Sen threatens his political opponents with violence.
Topic Elections, Politics, Protests Community Standard Coordinating harm and publicizing crime, Violence and incitement Location Cambodia Platform Facebook To read this decision in Khmer, click here. The Oversight Board has overturned Meta’s decision to leave up a video on Facebook in which Cambodian Prime Minister Hun Sen threatens his political opponents with violence. Given the severity of the violation, Hun Sen’s history of committing human rights violations and intimidating political opponents, as well as his strategic use of social media to amplify such threats, the Board calls on Meta to immediately suspend Hun Sen’s Facebook page and Instagram account for six months. About the case On January 9, 2023, a live video was streamed from the official Facebook page of Cambodia’s Prime Minister, Hun Sen. The video shows a one hour 41-minute speech delivered by Hun Sen in Khmer, Cambodia’s official language. In the speech, he responds to allegations that his ruling Cambodia People’s Party (CPP) stole votes during the country’s local elections in 2022. He calls on his political opponents who made the allegations to choose between the “legal system” and “a bat,” and says that they can choose the legal system, or he “will gather CPP people to protest and beat you up.” He also mentions “sending gangsters to [your] house,” and says that he may “arrest a traitor with sufficient evidence at midnight.” Later in the speech, however, he says “we don’t incite people and encourage people to use force.” After the live broadcast, the video was automatically uploaded onto Hun Sen’s Facebook page, where it has been viewed around 600,000 times. Three users reported the video five times between January 9 and January 26, 2023, for violating Meta’s Violence and Incitement Community Standard. This prohibits “threats that could lead to death” (high-severity violence) and “threats that lead to serious injury” (mid-severity violence), including “[s]tatements of intent to commit violence.” After the users who reported the content appealed, it was reviewed by two human reviewers who found it did not violate Meta’s policies. At the same time, the content was escalated to policy and subject matter experts within Meta. They determined that it violated the Violence and Incitement Community Standard but applied a newsworthiness allowance. This permits otherwise violating content where the public interest value outweighs the risk of it causing harm. One of the users who reported the content appealed Meta’s decision to the Board. Separately, Meta referred the case to the Board. In its referral, Meta stated that the case involves a challenging balance between its values of “Safety” and “Voice” in determining when to allow speech by a political leader that violates its Violence and Incitement policy to remain on its platforms. Key findings The Board finds that the video in this case included unequivocal statements of intent to commit violence against Hun Sen’s political opponents, which clearly violate the Violence and Incitement policy. The use of terms such as “bat” and “sending gangsters to [your] house” or “legal action,” including midnight arrests, amounts to incitement of violence and legal intimidation. The Board finds that Meta was wrong to apply a newsworthiness allowance in this case, as the harm caused by allowing the content on the platform outweighs the post’s public interest value.
Given Hun Sen’s reach on social media, allowing this kind of expression on Facebook enables his threats to spread more broadly. It also results in Meta’s platforms contributing to these harms by amplifying the threats and resulting intimidation. The Board is also concerned that a political leader’s sustained campaign of harassment and intimidation against independent media and the political opposition can become a factor in a newsworthiness assessment that leads to violating content not being removed and the account avoiding penalties. Such behavior should not be rewarded. Meta should more heavily weigh press freedom when considering newsworthiness so that the allowance is not applied to government speech in situations where that government has made its own content more newsworthy by limiting free press. The Board urges Meta to clarify that its policy on restricting the accounts of public figures is not limited solely to single incidents of violence and civil unrest, but also applies to contexts in which citizens are under continuing threat of retaliatory violence from their governments. In this case, given the severity of the violation, Hun Sen’s history of committing human rights violations and intimidating political opponents, and his strategic use of social media to amplify such threats, the Board calls on Meta to immediately suspend Hun Sen’s Facebook page and Instagram account for six months. The Oversight Board’s decision The Oversight Board overturns Meta’s decision to leave up the content, requiring the post to be removed. The Board recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s decision to leave a video on Facebook by granting a newsworthiness allowance to content in which Cambodian Prime Minister Hun Sen threatened his political opponents with violence. Meta referred this case to the Board because it raises difficult questions about balancing the need to allow people to hear from their political leaders with the need to prevent those leaders from using the platform to threaten their opponents with violence or intimidate others from becoming politically engaged. The Board finds that Hun Sen’s remarks violated the Violence and Incitement Community Standard. It also finds that Meta’s decision that the content was sufficiently newsworthy to leave it on the platform despite that violation was incorrect. The Board concludes that the content should be removed from the platform. Further, given the severity of the violation, the political context in Cambodia, the government’s history of human rights violations, Hun Sen’s history of inciting violence against his opponents, and his strategic use of social media to amplify such threats, the Board holds that Meta should immediately suspend Hun Sen’s official Facebook page and Instagram account for six months. 2. Case description and background On January 9, 2023, a live video was streamed on the official Facebook page of Cambodia’s Prime Minister, Hun Sen. The video shows a one hour 41-minute speech delivered by Hun Sen in Khmer, Cambodia’s official language, during a ceremony marking the opening of a national road expansion project in Kampong Cham. In the speech, he responds to allegations that his ruling Cambodia People’s Party (CPP) stole votes during the country’s local elections in 2022. 
He calls on his political opponents who made the allegations to choose between the “legal system” and “a bat,” and says they can choose the legal system, or he “will gather CPP people to protest and beat you up.” He adds, “if you say that’s freedom of expression, I will also express my freedom by sending people to your place and home” and mentions sending “gangsters to [your] house.” He names individuals, warning that they “need to behave,” and says he may “arrest a traitor with sufficient evidence at midnight.” However, approximately 22 minutes later in the speech, he says “we don’t incite people and encourage people to use force.” After the live broadcast, the video was automatically uploaded onto Hun Sen’s Facebook page, which has approximately 14 million followers, where it has been viewed approximately 600,000 times. The video was shared almost 4,000 times by almost 3,000 other people. Three users reported the video five times between January 9 and January 26, 2023, for violating Meta’s Violence and Incitement Community Standard. This policy prohibits “[t]hreats that could lead to death” (high-severity violence) and “threats that lead to serious injury” (mid-severity violence), including “[s]tatements of intent to commit violence.” Meta generally prioritizes content for human review based on its severity, virality and likelihood of violating content policies. In this case, Meta’s automated systems did not prioritize the content and closed the user reports without human review. After the users who reported the content appealed, two human reviewers found that it did not violate Meta’s policies. At the same time, the content was escalated to policy and subject matter experts within Meta. On January 18, 2023, those policy and subject matter experts determined that the video contravened the Violence and Incitement Community Standard, but applied a newsworthiness allowance for it to remain on the platform. A newsworthiness allowance permits otherwise violating content to remain on Meta’s platforms where its public interest value outweighs the risk of it causing harm. One user who reported the content appealed Meta’s decision to the Board. Separately, Meta referred the case to the Board. The political and social context of Cambodia is particularly relevant to assessing the content in this case. Hun Sen, now 70 years old, was formerly a Khmer Rouge commander and has been in power since 1985. He is currently running for re-election, with Cambodia’s general election scheduled for July 23, 2023, though there are reports that he may then hand power to his son. Critics of his government have long faced targeted political violence, with over 30 opposition activists attacked between 2017 and 2022. Opposition members and political activists have been killed under deeply suspicious circumstances, such as the killing of prominent political commentator Kem Ley in 2016. In 2015, Hun Sen warned of attacks against his opposition, the Cambodian National Rescue Party (CNRP), if anyone protested his diplomatic visit to France. Shortly after protests broke out, two opposition members of parliament were beaten by a mob and hospitalized with serious injuries. In November 2021, the UN Office of the High Commissioner for Human Rights expressed concern over the killing of a CNRP affiliate who had received threats several months prior.
The attack came weeks after Hun Sen threatened to “do what it takes to crack down [on] protests during Cambodia’s ASEAN chairmanship.” One independent media outlet in Cambodia reported that, between 2017 and 2022, more than 30 opposition activists were “violently attacked,” usually by “unknown assailants on public streets.” In a public comment (PC-11044), the Dangerous Speech Project warned that Hun Sen’s inflammatory language increases his audience’s willingness to commit and condone violence against his opponents. That prediction has been borne out recently, with Human Rights Watch linking multiple acts of violence against opposition members directly to the January 9 speech at issue in this case. The Board is grateful to stakeholders and public commenters for highlighting the range and severity of human rights violations perpetrated or tolerated by the Cambodian government. Independent experts consulted by the Board report that, over the last 12 months, Hun Sen has used Facebook and Instagram to convey multiple implied threats to his political opponents. He recently posted what appears to be a threat to Cambodians living outside of the country, warning them not to “oppose the election.” In May 2017, shortly before the local elections, Hun Sen stated in a speech streamed on Facebook that he was “willing to eliminate 100 or 200 people” if necessary to ensure peace in the country, and threatened civil war should he lose power, a threat he has made numerous times over his tenure as Prime Minister. Shortly afterwards, in another speech, which the Board was not able to confirm was posted to Meta’s platforms, he warned that critics and political opponents should “prepare their coffins” if they continued to accuse him of threatening civil war if he lost the election. Hun Sen has also claimed that he regretted not killing opposition leaders who organized protests calling for him to resign after the 2013 national elections. After the Board selected this case, in a speech livestreamed on Facebook, Hun Sen threatened to shoot opposition leader Sam Rainsy with a rocket launcher. Hun Sen’s most recent electoral victory came in 2018, when the CPP won all 125 seats in the National Assembly. In advance of those elections, the Cambodian Supreme Court ruled to dissolve the opposition Cambodian National Rescue Party (CNRP) and 118 of that party’s senior officials were banned from politics for five years. Bans and associated legal actions have quickly followed threats and public directives from Hun Sen himself. In a 2017 report, the UN Special Rapporteur on the situation of human rights in Cambodia noted that multiple opposition leaders had been charged with crimes, including two senators convicted based on Facebook posts. In the lead-up to the 2023 election, Hun Sen’s government has intensified pressure on opposition party members, independent press outlets, and civil society groups, employing politically motivated prosecutions and other forms of intimidation. In a public comment, the International Commission of Jurists (ICJ) (PC-11038) noted that Hun Sen and the Cambodian authorities have “systematically restricted human rights and fundamental freedoms” through actions like mass convictions of opposition party leaders on spurious charges, often in absentia.
The ICJ also raised serious concerns over the “‘weaponization’ of laws that are not compliant with human rights law and standards.” The UN Special Rapporteur’s 2022 report noted that the independence and transparency of the judiciary is a “long-standing issue,” but there is a “more recent turn . . . in that some judicial and related personnel have close links with the political party in power.” Beyond the judiciary, the same report also found an undue level of influence over the media and electoral system. With respect to local elections held in June 2022, the Special Rapporteur questioned whether members of the Cambodian National Election Committee (NEC) had “too close ties with the ruling party,” and documented the pre-election delisting of a “large number of candidates, especially of the Candlelight Party,” the main opposition party, under questionable circumstances. In late 2022, Hun Sen threatened to use national courts once again to dissolve his primary opposition in advance of the 2023 elections. Shortly afterwards, in May 2023, the NEC refused to register the Candlelight Party, disqualifying it from the July elections and removing Hun Sen’s only credible challenge. Following this decision, through a Facebook post, Hun Sen threatened anyone protesting against the disqualification with “arrest and legal action.” When discussing his threats to crack down on the protests, he later stated that “when Hun Sen speaks, he acts.” Hun Sen’s government has also clamped down on independent media, with the UN Special Rapporteur on the situation of human rights in Cambodia stating that there are “virtually no free media outlets operating in the country” ahead of the July elections. According to experts who asked to remain anonymous, the combination of these media closures, the weaponization of Cambodia’s court system against opponents, and targeted political violence has produced an “intentionally cultivated climate of fear.” The Cambodian Journalists Alliance Association recorded 35 cases of harassment against journalists in 2022. According to public comments and experts, this culture of intimidation has significantly chilled accurate reporting, with media outlets reluctant to cover sensitive issues or controversial speeches by Hun Sen for fear of government retribution. These media outlets have also been intimidated into reproducing government propaganda without critical commentary. Following a narrow victory in the 2013 general election, Hun Sen’s government recognized the power of social media and intensified Cambodia’s turn to what Freedom House later described as “digital authoritarianism,” where government use and monitoring of social media is leveraged to suppress and threaten political opposition. While social media, and Facebook in particular, can be an important platform for political discussions and news, independent experts consulted by the Board reported that ""there is minimal content in the Khmer-language Facebook ecosystem that is not supportive of the government."" Intimidation and threats of violence and arrest for activity critical of Hun Sen and the government have become a feature of online life.
Additionally, the government has proposed taking control of the internet’s technical infrastructure in Cambodia through a “National Internet Gateway.” According to Cambodian civil society groups, this system would route internet traffic through government servers and enable the government to more easily initiate social media and internet shutdowns, force internet service providers to block or restrict content, increase the government’s ability to conduct surveillance of users’ online activity, and require operators to collect and store bulk data. In February 2022, the Ministry of Posts and Telecommunications announced that the implementation of the National Internet Gateway would be postponed due to the COVID-19 pandemic; however, there is no indication that the project has been permanently abandoned. In 2020, Meta published its summary of, and response to, a Human Rights Impact Assessment of the company’s activities in Cambodia that it commissioned from Business for Social Responsibility (BSR). BSR found that Facebook was “essential to freedom of information and expression in the country, where FM radio stations have been shut down and almost all print, radio, and TV media are now controlled by the government.” While considering this case, the Board was given access to the full report by BSR, but Meta continues to classify it as confidential. In response to questions from the Board, Meta stated that it has not carried out a full assessment of Hun Sen’s pages and accounts, but that the page in question had a piece of content removed for breaching the Violence and Incitement policy in December 2022. Meta referred the case to the Board, stating that it involves a challenging balance between the company’s values of “Safety” and “Voice” in determining when to allow speech by a political leader that violates the Violence and Incitement policy to remain on its platforms. Meta has asked the Board for guidance on how to evaluate such content, particularly in the context of an authoritarian regime where the right to access information is at stake. 3. Oversight Board authority and scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board also has authority to review Meta’s decision following an appeal from the person who previously reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions: The most relevant previous decisions of the Oversight Board include: II.
Meta’s content policies: The policy rationale for the Facebook Violence and Incitement Community Standard explains that it ""aim[s] to prevent potential offline harm that may be related to content on Facebook"" and that while Meta ""understand[s] that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, [the company] remove[s] language that incites or facilitates serious violence."" It further provides that Meta removes content, disables accounts and works with law enforcement ""when [it] believe[s] there is a genuine risk of physical harm or direct threats to public safety."" Meta states that it tries ""to consider the language and context in order to distinguish casual statements from content that constitutes a credible threat."" The policy specifically prohibits “threats that could lead to death” (high-severity violence) and “threats that lead to serious injury” (mid-severity violence) toward private individuals, unnamed specified persons, or minor public figures, and defines a threat as including “statements of intent to commit violence,” “statements advocating for violence,” or “aspirational or conditional statements to commit violence.” Internal guidelines on how to apply the policy also explain that violating content is permitted ""if it is shared in a condemning or raising awareness context."" The Board's analysis of the content policies was informed by Meta's commitment to Voice, which the company describes as ""paramount"": The goal of our Community Standards is to create a place for expression and give people a voice. Meta wants people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits ""Voice"" in the service of four values, ""Safety"" being the most relevant in this case: We're committed to making Facebook a safe place. We remove content that could contribute to a risk of harm to the physical security of persons. Content that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. In explaining its commitment to ""Voice,"" Meta states that ""in some cases, we allow content – which would otherwise go against our standards – if it's newsworthy and in the public interest."" This is known as the newsworthiness allowance. It is a general policy exception applicable to all Community Standards. To employ the allowance, Meta conducts a balancing test, assessing the public interest in the content against the risk of harm. Meta says that it assesses whether content ""surfaces an imminent threat to public health or safety, or gives voice to perspectives currently being debated as part of a political process."" Both the assessment of public interest and the assessment of harm take into account country circumstances such as whether an election or conflict is under way and whether there is a free press. Meta states that there is no presumption that content is inherently in the public interest solely on the basis of the speaker's identity, for example their identity as a politician. Meta says that it removes content ""even if it has some degree of newsworthiness, when leaving it up presents a risk of harm, such as physical, emotional and financial harm, or a direct threat to public safety."" In response to the “Former President Trump’s suspension” case, Meta created a policy on restricting accounts of public figures during civil unrest.
This policy acknowledges that the “standard restrictions may not be proportionate to the violation, or sufficient to reduce the risk of further harm, in the case of public figures posting content during ongoing violence or civil unrest.” The Board notes that neither ongoing violence nor civil unrest is defined in the policy. This policy acknowledges that threats from public figures pose a greater risk of harm when they violate Meta’s policies and sets out some of the criteria used by the company to assess whether and how to restrict their accounts. III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions In addition to Meta referring the case, a user also appealed Meta’s decision to keep the content on Facebook to the Board. In that appeal, the user explained that Hun Sen had made such threats on previous occasions. Specifically, the user noted that in the lead-up to the July 2023 general election, Hun Sen often used Facebook to threaten others with violence and to suppress opposition activity. 6. Meta’s submissions Meta explained that while human reviewers initially assessed the case content as non-violating, after it was escalated to policy and subject matter experts for additional review, the company determined that it violated the Violence and Incitement policy, but should remain on the platform under the newsworthiness allowance. On escalation, Meta determined that two extracts from Hun Sen’s speech violated the Violence and Incitement policy: namely, the choice offered to his political opponents between the “legal system” and “a bat,” and his threat to “gather CPP people to protest and beat you up.” Meta stated that, based on the overall context of the speech, including information provided by the company’s regional team, the references to “you” in these statements are to Hun Sen’s political opponents in the Candlelight Party and potentially the now-dissolved CNRP. In weighing the risk of harm against the potential benefits of allowing the content on Facebook with a newsworthiness allowance, Meta noted that the majority of the hour and 41-minute speech related to governance or politics, such as Cambodia’s relationship with China and the COVID-19 pandemic. Meta said that political speech by a country’s leader has high public interest value, particularly in an election year. By contrast, according to the company’s assessment, the violating parts of the speech lasted for only a few minutes and fall within the mid-severity tier of the Violence and Incitement policy. Meta stated that the public has an interest in hearing warnings about potential violence by their government, particularly when those threats are not reported by local media. Meta learned through the company’s regional teams that, although regional media, which is not necessarily accessible to people in Cambodia, reported on the threats, local media did not.
In support of this assessment, Meta cited two media reports on the violent elements of Hun Sen’s speech: one from the Bangkok Post, and one from Voice of Democracy, an independent news outlet based in Cambodia that was recently shut down by the government. Meta believes that, under these circumstances, Facebook can “play a key role in spreading awareness about potential safety risks.” With respect to this context, Meta noted that the content in this case does not involve ongoing violence or armed conflict like the content considered in the “Former President Trump’s suspension” and “Tigray Communication Affairs Bureau” cases. Nonetheless, Meta acknowledged that there is an upcoming election and that Hun Sen and the CPP have cracked down on opposition political figures and the media. Meta explained that the company cannot ascertain Hun Sen’s intent at the time he made these remarks. However, Meta noted that “given the CPP’s use of court proceedings to undermine political opponents, it appears he has chosen to use the courts rather than force, though this does not rule out the possibility of future violence.” In response to a question from the Board, Meta stated that it was aware of the human rights situation in Cambodia, “including a pattern of Prime Minister Hun Sen engaging in speech that threatens either violence or use of the judicial system against political opponents.” Meta believes its decision is consistent with its values as well as with international human rights principles. Meta said the key factors in determining that this content did not require removal were the context and lack of imminent harm. The threat in this case was “not connected to an ongoing armed conflict or violent event” and “non-specific.” However, Meta recognized the “challenge in handling threats that lack a nexus to imminent violence, but nevertheless may contribute to a climate of fear when issued by an authoritarian government.” The Board asked Meta 15 questions in writing. Questions related to: past violations by Hun Sen’s pages and accounts; contextual factors considered when applying a newsworthiness allowance; contextual factors considered when enforcing the Violence and Incitement policy; Meta’s communications with the government authorities in Cambodia; the Early Response Secondary Review cross-check list; and Meta’s allocation of resources for operational and product work related to Khmer-language content in Cambodia. Meta answered all questions. 7. Public comments The Oversight Board received 18 public comments relevant to this case. Five of the comments were submitted from Asia Pacific and Oceania, one from Central and South Asia, one from Latin America and the Caribbean, and 11 from the United States and Canada. The submissions covered the following themes: the context of political oppression and disregard for human rights in Cambodia; the impunity with which Cambodian government figures act on Facebook; and the declining state of civil liberties in Cambodia. The Board also heard directly from civil society representatives who stressed that threats and incitement from Hun Sen are part of a systematic effort to create a climate of fear amongst political opponents and to dissuade Cambodians from questioning the government. To read public comments submitted for this case, please click here. 8.
Oversight Board analysis The Board selected this case because it gives the Board the opportunity to examine whether political leaders are using Meta’s platforms to incite violence and shut down political opposition, and, if so, what the consequences should be. This case falls into the Board’s strategic priorities of government use of Meta’s platforms as well as elections and civic space. The Board examined whether this content should be removed by analyzing Meta’s content policies, values and human rights responsibilities. 8.1 Compliance with Meta’s content policies I. Content rules a. Violence and Incitement The Board finds that the content in this case violates the Violence and Incitement Community Standard and must be removed from the platform. The Board finds that the posted video included unequivocal statements of intent to incite not only mid-severity violence (serious injury), but also high-severity violence (risk of death and other forms of high-severity violence) towards Hun Sen’s political opponents, which clearly violate the Violence and Incitement policy. The broader political context reinforces that conclusion: Hun Sen and members of his party have repeatedly both threatened and carried out violence against their opposition and its supporters, often using social media to communicate their threats. This history of violence and repression makes those threats more credible, and in this context such statements amount to a severe violation of this policy. In the Board’s view, Hun Sen’s perfunctory assurance that “we don’t incite people and encourage people to use force” contradicts the clear message of the speech and is not credible. The Board is concerned and perplexed that the initial reviewers concluded otherwise, but notes that Meta’s country experts, on review, recognized that the post violated the Violence and Incitement Standard. In response to questions from the Board, Meta stated that “threats to sue or use the legal system against opposition figures, standing alone, would not violate [the Violence and Incitement] policy, as they do not involve a physical threat of violence.” Meta justified that position by explaining that “as a social media platform, we are not in a position to independently determine whether a threat by the government to use legal process is undue.” While that approach may be appropriate when threats do indeed “stand alone,” that was not the case here. Where regimes with a history of following through on threats of violence against their opposition use Meta’s platforms, the company must rely on its regional teams and expertise to assess whether threats to use the legal system against political opponents amount to threatening or intimidating with violence. In the context of Cambodia, where the courts are controlled by the leading party and regularly used to suppress opposition, the Prime Minister threatening to pursue his opposition through the legal system is tantamount to a threat of violence. Threats to arrest the opposition “at midnight” are not consistent with due process. The Board also notes the history, documented above, of targets of intimidation through Hun Sen’s misuse of the courts subsequently becoming targets of physical violence. b. Newsworthiness allowance The Board concludes that Meta was wrong to apply a newsworthiness allowance in this case, as the harms inherent in having the content on the platform outweigh the public interest in publicizing the speech.
According to Meta’s approach to newsworthy content, there is no presumption that content is inherently newsworthy solely on the basis of the speaker. In the decision rationale, Meta reported that in this case the company weighed several factors, outside of the content itself, in deciding to apply the newsworthiness exception. Meta considered both “the country-specific circumstances and political structure in Cambodia, including the lack of an independent free press, Hun Sen’s reported suppression of political opposition, and reports from human rights organizations.” In response to the Board’s questions, Meta said that the lack of local press coverage of the threats at issue related directly to the content’s public interest value as a warning to the Cambodian people. This was based on the company’s assessment that, while regional media reported on the threats, local coverage of the speech did not mention them. The Board notes that one of the media outlets cited by Meta in support of this assessment, the Cambodia-based Voice of Democracy, reported on the violent threats in Hun Sen’s speech and also represented itself as a “local independent media outlet” prior to its closure in February 2023. One report provided by experts found that 82.6% of the “eligible” audience (i.e. people aged 13 and above) in Cambodia used Facebook in 2023. Discussing the reasons for social media usage, Freedom House reports that, following the 2018 general election, the internet has “become one of the main sources of news and information for Cambodians, and social media has allowed the proliferation of more diverse content that is free from government influence.” Meta also noted that the “somewhat equivocal nature of the threats” in the speech factored into the determination “that the high public interest value in allowing people to hear political speech . . . outweighed the risk of harm” and warranted a newsworthiness allowance. The Board recognizes that a delicate balance must be struck when assessing violating speech made by political leaders. In addition to the high level of reliance on social media in Cambodia, the government has shut down almost all independent traditional media in the country, making it difficult for the population to receive independent and impartial news through other channels. Further, there is a strong transparency argument that the Cambodian people should be able to see that their leader is making threats against his opposition, though the Board notes that most people in Cambodia would know that members of Hun Sen’s regime routinely engage in such speech. However, given Hun Sen’s reach on social media, allowing such speech on the platform enables his threats to spread more broadly. It also results in Meta’s platforms being exploited to that effect, contributing to those harms by amplifying the threats and resulting intimidation. This was not a post by third parties reporting on Hun Sen’s threats, but a post on Hun Sen’s official Facebook account conveying those threats. The Board is concerned that a political leader’s sustained campaign of harassment and intimidation against independent media and the political opposition can become a factor within a newsworthiness assessment that leads to violating content not being removed and the account avoiding penalties. Such behavior should not be rewarded.
Meta should more heavily weigh press freedom when considering newsworthiness so that the allowance is not applied to government speech in situations where that government has made its own content more newsworthy by limiting the free press. Meta’s position also seems to assume that people viewing this violating content will see it for the incitement it is and disapprove of it. However, there are limited opportunities for expressing such disapproval in Cambodia, and allowing this violating content to remain on the platform risks further normalizing violent speech from political leaders. Rather than informing debate, applying the newsworthiness allowance in this case would further chill the public discourse in favor of Hun Sen’s domination of the media landscape. Meta’s approach to newsworthy content balances public interest against the risk of harm. However, the Board finds that this balancing test cannot be satisfied in instances where public figures use Meta’s platforms to directly incite violence. If there is sufficient public interest in the inciting speech, then it will be reported on by some form of third-party journalism. While content that reports on, raises awareness of, condemns, or comments on incitement to violence by a public figure without endorsing it should not be prohibited, Meta cannot continue to allow direct incitement on its platforms on the grounds of newsworthiness. II. Enforcement action The Board holds that the newsworthiness allowance in this case should be revoked and that the content should be removed for violating the Violence and Incitement policy. It is vital that Meta’s platforms not be used as an instrument to amplify threats of violence and retaliation aimed at suppressing political opposition, especially during an election, as in this case. In addition, given the severity of the violation, the political context in Cambodia, the government’s history of human rights violations, Hun Sen’s history of inciting violence against his opponents, and the way he uses social media to amplify such threats, the Board concludes that Meta should immediately suspend the official Facebook page and Instagram account of the Cambodian Prime Minister. While it is not the Board’s role to determine the duration of the suspension in the first instance, the Board holds that the page and account should be suspended for at least a six-month period, to give Meta time to review the situation and set a determinate period. Further, ahead of the termination of the suspension, Meta should carry out an assessment to determine whether the risk to public safety has receded, inviting local stakeholders to share relevant information. As part of its response to the Board’s recommendations in the “Former President Trump’s suspension” case, Meta created a policy on restricting the accounts of public figures (see Section 4 above). This policy applies to “public figures posting content during ongoing violence or civil unrest.” Against a background of widespread political repression and repeated acts of violence against political opponents, the Board disagrees with Meta and finds that the build-up to the 2023 election in Cambodia constitutes a situation of ongoing violence.
The Board notes that, while the policy was created in the aftermath of the January 6, 2021 attack on the US Capitol building, it was developed to provide a framework for when Meta’s “standard restrictions may not be proportionate to the violation, or sufficient to reduce the risk of further harm, in the case of public figures posting content during ongoing violence or civil unrest.” Though the policy does not define “ongoing violence” and “civil unrest,” this case is clearly in line with the spirit of the policy. Violence is ongoing not only when a single continuous violent incident or period of civil unrest is present, but also in periods of civil “peace” where political leaders use the threat of state-backed violence to pre-emptively suppress dissent and civil unrest through widespread repression and repeated acts of violence. Although the Board considers it necessary for Meta to publicly clarify the extent of the situations in which the policy should apply to public figures posting content on its platforms, it finds that the policy applies to this case. The criteria for imposing a restriction under the policy are threefold. The first is the severity of the violation and the public figure’s history on Meta’s platforms. The Board finds that incitement to send violent mobs to people’s homes is at the highest level of severity. This is reinforced by Hun Sen’s history of successfully inciting violence against his opponents both on and off the platforms and by the removal of content from his page in December 2022 for violating the Violence and Incitement policy. The second criterion is the public figure’s potential influence over, and relationship to, the individuals engaged in violence. Again, this is at the highest level. Hun Sen is a Prime Minister with complete control over his party, the military, law enforcement, and the judiciary of Cambodia, as well as a high degree of loyalty from a section of the population. His influence is clearly demonstrated by the fact that both this speech and previous incitements have resulted in violence being committed against his targets. The final criterion, the severity of the violence and related physical harm, is also met. The speech incited armed attacks, and previous incitements have resulted in killings. The Board also notes that, contrary to Meta’s conclusion that the threats in Hun Sen’s speech were “non-specific,” he referred to at least one member of the political opposition by name. In considering whether to suspend a political leader from its platforms, and the duration of such a suspension, Meta should take into account, in addition to the factors listed under the policy, the political context and human rights situation of the country in question when assessing behavior on the platform. Viewing content like that under review in this case as a single violation of Meta’s policies, divorced from its context, ignores the reality that this speech and others like it are part of an ongoing and calculated effort to intimidate that incorporates offline violence. Moreover, actual violence confirms the seriousness of threats made over social media, giving these off-platform acts significance on the platform. As noted earlier in this decision, Hun Sen habitually uses social media to amplify implicit and explicit threats against his opposition as well as his intimidation of anyone he sees as a threat to his continued control.
From information made available to the Board, it seems clear that Hun Sen uses social media to amplify threats against his opponents, spreading them more widely and causing more harm than he would be able to do without access to Meta’s platforms. Hun Sen’s use of the platforms to incite violence against his political opposition, taken in the context of his history, his government’s human rights abuses and the upcoming election, requires immediate action. The Board finds that the content in this case should be seen as a serious breach warranting an immediate suspension from Facebook and Instagram. The Board notes that the company does not currently inform the public when a government official or their official page or account is suspended or has content removed. Meta should announce when a government official’s page or account is suspended and the company’s reasoning for doing so. Meta should also consider preserving removed content for research and legal purposes, as well as for journalistic access and discussion. 8.2 Compliance with Meta’s human rights responsibilities As the Board found above, Meta’s own policies required that Hun Sen’s post be taken down. The Board also concluded that Meta’s policy on restricting accounts of public figures during civil unrest warranted Hun Sen’s suspension from Meta’s platforms. Allowing this content to remain on Facebook, as well as Hun Sen’s continuous use of Meta’s platforms to incite violence, is at odds with the company’s human rights responsibilities. This is especially pertinent given the risk it represents to the rights to vote and participate in public affairs (ICCPR, Article 25), to peaceful assembly (ICCPR, Article 21), to physical security (ICCPR, Article 9) and to life (ICCPR, Article 6) in Cambodia. In the analysis below, the Board assesses this speech restriction in light of Meta’s responsibility to protect freedom of expression (ICCPR, Article 19). Freedom of expression (Article 19 ICCPR) Article 19, para. 2, of the International Covenant on Civil and Political Rights (ICCPR) protects “the expression and receipt of communications of every form of idea and opinion capable of transmission to others,” including about politics, public affairs, and human rights (General Comment No. 34 (2011), Human Rights Committee, paras. 11-12). Moreover, the UN Human Rights Committee stated that “free communication of information and ideas about public and political issues between citizens, candidates and elected representatives is essential” (General Comment No. 34, para. 20). Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, para. 41).
In this case, the Board applied the three-part test to assess whether both the content’s removal and Hun Sen’s suspension, while warranted under Meta’s policies, are consistent with the company’s responsibilities to protect freedom of expression. I. Legality (clarity and accessibility of the rules) The principle of legality under international human rights law requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, para. 25). Rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (Ibid.). Applied to rules that govern online speech, the UN Special Rapporteur on freedom of expression has said they should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance on their enforcement. The Board finds that Hun Sen and those maintaining his social media presence would easily have been able to determine that the content violated the Violence and Incitement Community Standard’s prohibition on threatening speech, especially in the context of an upcoming election. To threaten critics with the “bat” and with being beaten up by partisans is unambiguously contrary to the rule. Similarly, Meta’s policy on restricting accounts of public figures makes it clear that severe violations from public figures leading to violence and physical harm, in a broader context of ongoing violence, warrant suspension. As noted above, the Board finds that, as currently drafted, the policy applies to this case. However, Meta should publicly clarify the extent of the policy. II. Legitimate aim The Violence and Incitement Community Standard aims to “prevent potential offline harm” and removes content that poses “a genuine risk of physical harm or direct threats to public safety.” Additionally, Meta’s policy on restricting accounts of public figures applies when “standard restrictions may not be proportionate to the violation, or sufficient to reduce the risk of further harm.” Prohibiting calls for violence and threats of arbitrary arrest on the platform to ensure people’s safety constitutes a legitimate aim under Article 19, para. 3, as it protects “the rights of others” to life (ICCPR, Article 6), and to physical security against arbitrary arrest and detention (ICCPR, Article 9, para. 1). Particularly in the run-up to elections, both policies may also pursue the legitimate aim of protecting others’ right to peaceful assembly (ICCPR, Article 21) and to vote and participate in public affairs (ICCPR, Article 25). III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). When analyzing the risks posed by violent content, the Board is typically guided by the six-factor test described in the Rabat Plan of Action, which addresses advocacy of national, racial or religious hatred that constitutes incitement to hostility, discrimination or violence.
Based on an assessment of the relevant factors, especially the speaker, the context and the extent of the speech act, further described below, the Board finds that removing Hun Sen’s inciting content is in compliance with Meta’s human rights responsibilities, as the content poses an imminent and likely risk of harm. Removing the content is a necessary and proportionate limitation on expression in order to protect the rights to life and physical security of people, including opposition members, from potential violence and persecution. The speech presented in the posted video was delivered by the head of the government in Cambodia, a public figure who has been in power since 1985 and has significant reach and authority. In this sense, the speech amounts to state action. As reflected in the case background section, Hun Sen’s government has been reported to have used both physical violence and the Cambodian court system to silence and persecute dissenters and opposition members. As mentioned in the “Former President Trump’s suspension” case decision (2021-001-FB-FBR), these factors increase both the level of the risk of harm associated with his statements and the public interest in his remarks. The speech was made just over six months prior to the July 2023 parliamentary elections in Cambodia, and addressed issues of public interest, including further discussion of the election and national infrastructure. The Board notes that people in Cambodia have access to information on these issues through other means, including other social media accounts and reporting of the speech that did not mention the threats. However, the use of such terms as “bat,” which the context makes clear is a reference to a weapon, and “sending gangsters to [your] house,” or “legal action” including midnight arrests, when directly addressing opposition leaders, amounts to incitement of violence and threats of arbitrary arrests to stifle political dissent and weaken the opposition. In its decision rationale, Meta maintained that “the threat in this case was non-specific and not connected to an ongoing armed conflict or violent event.” The Board does not accept Meta’s designation of the threats as non-specific. In context, oblique references can still be understood to have specific meanings. Here, the threat was thrown into stark relief by the backdrop of an impending election and the identification of Hun Sen’s political opponents as its targets. Additionally, given the history of violence by Hun Sen’s supporters and the intimidation of opposition figures, the Board finds that any call for violence made by the Prime Minister will be credible and have a chilling effect. This is the case especially given the Cambodian government’s total control over the means of violence, in addition to its soft power. Elections are a crucial part of democracy, and the Board is mindful of the upcoming parliamentary elections in Cambodia. Public comments emphasized that Hun Sen’s speech should be assessed “within the overall context of the poor human rights situation and democratic deficit in Cambodia in the lead-up to the July 2023 election, and the ongoing violence and crackdown against perceived political opponents,” which leads to “a real risk of human rights abuses and other harm to concerned persons” (ICJ comment, PC-11038; see also HRF comment, PC-11041).
The UN Special Rapporteur’s 2022 report on the situation of human rights in Cambodia cautioned that the large number of political parties that participated in the 2022 local elections was “more of form than of substance,” and that, since the 2017 elections, “the playing field for democratic pluralism has been heavily undermined and the imposition of one-party rule has ridden roughshod over the political lawn.” In the Board’s view, this speech by a public official with a history of political oppression, violence and intimidation, delivered in the lead-up to an election, contributes to a broader campaign to incite violence as well as to intimidate and silence dissenters and opposition. Therefore, the Board finds that removing the content under the Violence and Incitement policy is necessary, in the sense that no other measure less restrictive of freedom of expression would be appropriate to protect people’s rights. The Board also concludes that such removal is proportionate, given the likelihood and imminence of harm to the human rights impacted in this case. Given the context of Hun Sen’s history of human rights violations, his intimidation and suppression of political opponents, and his use of social media to amplify his threats, the Board finds that simply removing the content is not sufficient to respect the rights of others in this case, and that his suspension is necessary. Simply removing the content does nothing to prevent future violations and incitement to violence, which are particularly dangerous given the recent context and the upcoming elections. The Board therefore also finds that the suspension of Hun Sen’s official Facebook page and Instagram account is proportionate. 9. Oversight Board decision The Oversight Board overturns Meta’s decision to leave up the content, requiring the post to be removed. 10. Recommendations A. Content policy 1. Meta should clarify that its policy for restricting accounts of public figures applies to contexts in which citizens are under continuing threat of retaliatory violence from their governments. The policy should make it clear that it is not restricted solely to single incidents of civil unrest or violence and that it applies where political expression is pre-emptively suppressed or responded to with violence or threats of violence from the state. The Board will consider this recommendation implemented when Meta’s public framework for restricting accounts of public figures is updated to reflect these clarifications. 2. Meta should update its newsworthiness allowance policy to state that content that directly incites violence is not eligible for a newsworthiness allowance, subject to existing policy exceptions. The Board will consider this recommendation implemented when Meta publishes an updated policy on newsworthy content explicitly setting out this limitation on the allowance. B. Enforcement 3. Meta should immediately suspend the official Facebook page and Instagram account of Cambodian Prime Minister Hun Sen for a period of at least six months under Meta’s policy on restricting accounts of public figures during civil unrest. The Board will consider this recommendation implemented when Meta suspends the accounts and publicly announces that it has done so. 4. Meta should update its review prioritization systems to ensure that content from heads of state and senior members of government that potentially violates the Violence and Incitement policy is consistently prioritized for immediate human review.
The Board will consider this recommendation implemented when Meta discloses details on the changes to its review ranking systems and demonstrates how those changes would have ensured review for this and similar content from heads of state and senior members of government. 5. Meta should implement product and/or operational guideline changes that allow more accurate review of long-form video (e.g., use of algorithms for predicting the timestamp of violation, ensuring review time proportional to the length of the video, allowing videos to run at 1.5x or 2x speed, etc.). The Board will consider this implemented when Meta shares its new long-form video moderation procedures with the Board, including metrics for showing improvements in review accuracy for long-form videos. C. Transparency 6. In the case of Prime Minister Hun Sen, and in all account-level actions against heads of state and senior members of government, Meta should publicly reveal the extent of the action and the reasoning behind its decision. The Board will consider this recommendation implemented when Meta discloses this information for Hun Sen, and commits to doing so for future enforcements against all heads of state and senior members of government. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world." fb-6yhrxhzr,Pro-Navalny protests in Russia,https://www.oversightboard.com/decision/fb-6yhrxhzr/,"May 26, 2021",2021,,"Freedom of expression, News events, Politics","Bullying and harassment",Overturned,Russia,The Oversight Board has overturned Facebook’s decision to remove a comment in which a supporter of imprisoned Russian opposition leader Alexei Navalny called another user a ‘cowardly bot’.,33896,5218,"Overturned May 26, 2021 The Oversight Board has overturned Facebook’s decision to remove a comment in which a supporter of imprisoned Russian opposition leader Alexei Navalny called another user a “cowardly bot.” Facebook removed the comment for using the word “cowardly,” which was construed as a negative character claim.
The Board found that while the removal was in line with the Bullying and Harassment Community Standard, the current Standard was an unnecessary and disproportionate restriction on free expression under international human rights standards. It was also not in line with Facebook’s values. About the case On January 24, a user in Russia made a post consisting of several pictures, a video, and text (root post) about the protests in support of opposition leader Alexei Navalny held in Saint Petersburg and across Russia on January 23. Another user (the Protest Critic) responded to the root post and wrote that while they did not know what happened in Saint Petersburg, the protesters in Moscow were all school children, mentally “slow,” and were “shamelessly used.” Other users then challenged the Protest Critic in subsequent comments to the root post. A user who was at the protest (the Protester) appeared to be the last to respond to the Protest Critic. They claimed to be elderly and to have participated in the protest in Saint Petersburg. The Protester ended the comment by calling the Protest Critic a “cowardly bot.” The Protest Critic then reported the Protester’s comment to Facebook for bullying and harassment. Facebook determined that the term “cowardly” was a negative character claim against a “private adult” and, since the “target” of the attack reported the content, Facebook removed it. The Protester appealed against this decision to Facebook. Facebook determined that the comment violated the Bullying and Harassment policy, under which a private individual can get Facebook to take down posts containing a negative comment on their character. Key findings This case highlights the tension between policies protecting people against bullying and harassment and the need to protect freedom of expression. This is especially relevant in the context of political protest in a country where there are credible complaints about the absence of effective mechanisms to protect human rights. The Board found that, while Facebook’s removal of the content may have been consistent with a strict application of the Community Standards, the Community Standards fail to consider the wider context and disproportionately restrict freedom of expression. The Community Standard on Bullying and Harassment states that Facebook removes negative character claims about a private individual when the target reports the content. The Board does not challenge Facebook’s conclusion that the Protest Critic is a private individual and that the term “cowardly” was a negative character claim. However, the Community Standard did not require Facebook to consider the political context, the public character, or the heated tone of the conversation. Accordingly, Facebook did not consider the Protester’s intent to refute false claims about the protests or attempt to balance that concern against the reported negative character claim. The decision to remove this content failed to balance Facebook’s values of “Dignity” and “Safety” against “Voice.” Political speech is central to the value of “Voice” and should only be limited where there are clear “Safety” or “Dignity” concerns. “Voice” is also particularly important in countries where freedom of expression is routinely suppressed, as in Russia. In this case, the Board found that Facebook was aware of the wider context of pro-Navalny protests in Russia, and heightened caution should have led to a more careful assessment of content.
The Board found that Facebook’s Community Standard on Bullying and Harassment has a legitimate aim in protecting the rights of others. However, in this case, combining the distinct concepts of bullying and harassment into a single set of rules, which were not clearly defined, led to the unnecessary removal of legitimate speech. The Oversight Board’s decision The Oversight Board overturns Facebook’s decision to remove the content, requiring the post to be restored. In a policy advisory statement, the Board recommends that, to comply with international human rights standards, Facebook should amend and redraft its Bullying and Harassment Community Standard to: · Explain the relationship between its Bullying and Harassment policy rationale and the “Do nots” as well as the other rules restricting content that follow it. · Differentiate between bullying and harassment and provide definitions that distinguish the two acts. The Community Standard should also clearly explain to users how bullying and harassment differ from speech that only causes offense and may be protected under international human rights law. · Clearly define its approach to different target user categories and provide illustrative examples of each target category (i.e. who qualifies as a public figure). Format the Community Standard on Bullying and Harassment by user categories currently listed in the policy. · Include illustrative examples of violating and non-violating content in the Bullying and Harassment Community Standard to clarify the policy lines drawn and how these distinctions can rest on the identity status of the target. · When assessing content including a ‘negative character claim’ against a private adult, Facebook should amend the Community Standard to require an assessment of the social and political context of the content. Facebook should reconsider the enforcement of this rule in political or public debates where the removal of the content would stifle debate. · Whenever Facebook removes content because of a negative character claim that is only a single word or phrase in a larger post, it should promptly notify the user of that fact, so that the user can repost the material without the negative character claim. *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board has overturned Facebook’s decision to remove a comment where a supporter of imprisoned Russian opposition leader Alexei Navalny called another user a “cowardly bot.” Facebook clarified that the comment was removed for using the word “cowardly” which was construed as a negative character claim. The Board found that while the removal was in line with the Bullying and Harassment Community Standard, this Standard was an unnecessary and disproportionate restriction on freedom of expression under international human rights standards. It was also not in accordance with Facebook’s values. 2. Case Description On January 24, a user in Russia made a post consisting of several pictures, a video, and text (root post) about the protests in support of opposition leader Alexei Navalny held in Saint Petersburg and across Russia on January 23. 
Another user (the Protest Critic) responded to the root post and wrote that while they did not know what happened in Saint Petersburg, the protesters in Moscow were all school children, mentally “slow,” and were “shamelessly used.” The Protest Critic added that the protesters were not the voice of the people but a “theatre show.” Other users then challenged the Protest Critic in subsequent comments to the root post. These other users defended the protesters and stated that the Protest Critic was spreading nonsense and misunderstood the Navalny movement. The Protest Critic responded in several comments, repeatedly dismissing these challenges and referring to Navalny as a “pocket clown” and “rotten,” claiming that people supporting him have no self-respect. They also called people who brought their grandparents to the protests “morons.” A user who was at the protest (the Protester) appeared to be the last to respond to the Protest Critic. They self-identified as elderly and as having participated in the protest in Saint Petersburg. They noted that there were many people at the protests, including disabled and elderly people, and that they were proud to see young people protesting. They said that the Protest Critic was deeply mistaken in thinking that young protesters had been manipulated. The Protester ended the comment by calling the Protest Critic a “cowardly bot.” The Protest Critic then reported the Protester’s comment to Facebook for bullying and harassment. Facebook determined that the term “cowardly” was a negative character claim against a “private adult” (i.e. not a public figure) and, since the “target” of the attack reported the content, Facebook removed it. Facebook did not find the term “bot” to be a negative character claim. The Protester appealed against this decision to Facebook. Facebook reviewed the appeal and determined that the comment violated the Bullying and Harassment policy. The content was reviewed within four minutes of the Protester requesting an appeal, which according to Facebook “falls within the standard timeframe” for reviewing content on appeal. 3. Authority and scope The Board has the power to review Facebook’s decision following an appeal from the user whose post was removed (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). The Board may uphold or reverse that decision (Charter Article 3, Section 5). The Board’s decisions are binding and may include policy advisory statements with recommendations. These recommendations are nonbinding, but Facebook must respond to them (Charter Article 3, Section 4). The Board is an independent grievance mechanism to address disputes in a transparent and principled manner. 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards The Community Standard on Bullying and Harassment is broken into two parts. It includes a policy rationale followed by a list of “Do nots,” which are specific rules around what content should not be posted and when it may be removed. The policy rationale begins by stating that bullying and harassment can take many forms, including threatening messages and unwanted malicious contact. It then declares that Facebook does not tolerate this kind of behavior because it prevents people from feeling safe and respected. The rationale also explains that Facebook approaches bullying and harassment of public and private individuals differently to allow open discussion of current events.
The policy rationale adds that to protect private individuals, Facebook removes any content “that is meant to degrade or shame” them. One of the “Do not” rules that follows the rationale declares that it is not permitted to “target private adults (who must self-report)” with “negative character or ability claims, except in the context of criminal allegations against adults.” The Community Standards do not define the meaning of a “negative character claim.” Further, Facebook explained to the Board that it “does not maintain an exhaustive list of which terms qualify as negative character claims,” although “several of Facebook’s regionally focused operational teams maintain dynamic, non-exhaustive lists of terms in the relevant market language in order to provide guidance for terms which may be difficult to classify, such as terms that are new or used in a variety of ways.” Facebook also has longer documents detailing the Internal Implementation Standards on Bullying and Harassment and how to apply the policy. These non-public guidelines define key terms and offer guidance and illustrative examples to moderators on what content may be removed under the policy. In an excerpt provided to the Board, a “negative character claim” was defined as “specific terms or descriptions that attack an individual’s mental or moral qualities. This encompasses: disposition, temperament, personality, mentality, etc. Claims solely about an individual’s actions are not encompassed, nor are criminal allegations.” II. Facebook’s values Facebook’s values are outlined in the introduction to the Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits “Voice” in service of four values, and two are relevant here: “Safety”: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. “Dignity”: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. III. Human rights standards The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In March 2021, Facebook announced its Corporate Human Rights Policy, where it recommitted to respecting rights in accordance with the UNGPs. The Board’s analysis in this case was informed by the following human rights standards: 5. User statement In their appeal to the Board, the Protester explained that their comment was not offensive and simply refuted the false claims of the Protest Critic. The Protester claimed that the Protest Critic sought to prevent people from seeing contradictory opinions and was “imposing their opinions in many publications,” which made them think they were a paid bot with no actual first-hand experience of the protests. 6.
Explanation of Facebook’s decision Facebook stated that it removed the Protester’s comment for violating its Bullying and Harassment policy in line with its values of “Dignity” and “Safety.” It noted that the Community Standards require the removal of content that targets private adults with negative character claims whenever it is reported by the targeted person. A user is deemed to be targeted when they are referenced by name in the content. In this case, Facebook stated that “cowardly” is “easily discerned to be a negative character claim” targeting the Protest Critic. Facebook explained that it removes any content meant to degrade or shame private individuals if the targets report it themselves. The requirement for the targeted person to report the content was put in place to help Facebook better understand when people feel bullied or harassed. Facebook justified prohibiting attacks on a user’s character on the ground that such attacks prevent people from feeling safe and respected on the platform, which decreases their likelihood of engaging in debate or discussion. Citing an article from the anti-bullying charity Ditch the Label, Facebook reiterated that bullying “undermines the right to freedom of expression . . . and creates an environment in which the self-expression of others—often marginalized groups—is suppressed.” Facebook also cited other research suggesting that users who have experienced harassment are likely to self-censor. Facebook stated that by limiting content removals to cases where the target is a private adult who reports that they find the content harmful, the company ensures everyone’s “Voice” is heard. According to Facebook, this is reinforced by an appeals system that lets users request a review of content removed for violating the Bullying and Harassment policy to help prevent enforcement errors. Facebook also stated that its decision was consistent with international human rights standards. Facebook stated that (a) its policy was publicly accessible, (b) the decision to remove the content was legitimate to protect the freedom of expression of others, and (c) the removal of the content was necessary to eliminate unwanted harassment. In Facebook’s view, its decision was proportionate as lesser measures would still expose the Protest Critic to harassment and potentially impact others who may see it. 7. Third-party submissions The Board received 23 public comments on this case. Eight came from Europe, 13 from the US and Canada, one from Asia, and one from Latin America and the Caribbean. The submissions covered issues including whether Facebook is contributing to silencing dissent in Russia and thereby supporting Russian President Vladimir Putin, the context of state-sponsored domestic social-media manipulation in Russia, and whether the content was serious enough to constitute bullying or harassment. A range of organizations and individuals submitted comments, including activists, journalists, anti-bullying groups, and members of the Russian opposition. To read public comments submitted for this case, click here. 8. Oversight Board analysis This case highlights the tension between policies protecting people against bullying and harassment and the need to protect freedom of expression. This is especially relevant in the context of a political protest in a country where there are credible complaints about the absence of effective and independent mechanisms for the protection of human rights.
The Board seeks to evaluate whether this content should be restored to Facebook through three lenses: Facebook’s Community Standards; the company’s values; and its human rights responsibilities. 8.1 Compliance with Community Standards The Board found that Facebook’s removal of the content is consistent with the “Do not” rule prohibiting targeting private individuals with negative character claims. The Community Standard on Bullying and Harassment states that Facebook removes negative character claims aimed at a private individual when the target reports the content. If the same content is reported by a person who is not targeted, it will not be removed. To the Board, the term “cowardly” does not appear to be a serious or harmful term in the context of this case because of the tone of the discussion. Nevertheless, the Board does not challenge Facebook’s conclusion that the Protest Critic is a private individual and that the term “cowardly” may be construed as a negative character claim. The Board recognizes the importance of the Bullying and Harassment policy. According to the National Anti-Bullying Research and Resource Centre, bullying and harassment are two distinct concepts. While there is no widely agreed definition of either bullying or harassment, common elements of academic definitions include willful and repeated attacks as well as power imbalances. These elements are not reflected in Facebook’s Community Standards. Kate Klonick wrote that, given the lack of a clear definition and the highly context-specific and subjective nature of harm, Facebook claimed that it had two choices: to keep up potentially harmful content in the interests of free expression, or to err on the side of removing all potentially harmful speech (even if some of that content turned out to be benign). Encouraged by some advocacy groups and media debate on cyberbullying, Facebook chose the latter option. The requirement that private individuals report content that targets them appears to be an attempt to limit the amount of benign content removed. The Board appreciates the difficulties involved in setting policy in this area as well as the importance of protecting users’ safety. This particularly applies to women and vulnerable groups who are at higher risk of online bullying and harassment. However, the Board found that, in this case, the negative character claim was used in a heightened exchange on a public issue and was no worse than the language used by the Protest Critic. The Protest Critic had voluntarily engaged in a debate on a matter of public interest. This case illustrates that Facebook’s blunt and decontextualized approach can disproportionately restrict freedom of expression. Enforcing the Community Standard appears limited to determining whether a single term is a negative character claim and whether it has been reported by the user targeted by the claim. There is no assessment of the wider context or conversation. In this case, Facebook did not consider the Protest Critic’s derogatory language about pro-Navalny protesters. Facebook also did not consider the Protester’s intent to refute false claims about the protests spread by the Protest Critic, nor did it make any attempt to balance that concern against the reported bullying. Instead, the company stated that this balancing exercise is undertaken when the Community Standards are drafted, so that moderation decisions are based solely on the individual piece of content that has been reported.
Ultimately, decisions to remove content seem to be made based on a single word if that word is deemed to be a negative character claim, regardless of the context of any exchange the content may be part of. 8.2 Compliance with Facebook’s values The Board found that Facebook’s decision to remove this content did not comply with Facebook’s values. Further, the company failed to balance the values of “Dignity” and “Safety” against “Voice.” The Board found that political speech is central to the value of “Voice.” As such, it should only be limited where there are clear concerns around “Safety” or “Dignity.” In the context of an online political discussion, a certain level of disagreement should be expected. The Protest Critic vigorously exercised their voice, but was challenged and called a “cowardly bot.” While the Protester’s use of “cowardly” and “bot” could be seen as a negative character claim, it formed part of a broader exchange on an issue of public interest. In relation to political matters, “Voice” is particularly important in countries where freedom of expression is routinely suppressed. The Board considered well-documented instances of pro-government actors in Russia engaging in anti-opposition expression in online spaces. While there is no evidence of government involvement in this case, the general efforts of the Russian authorities to manipulate online discourse and drown out opposition voices provide crucial context for assessing Facebook’s decision to limit “Voice” in this instance. The values of “Safety” and “Dignity” protect users from feeling threatened, silenced or excluded. Bullying and harassment are always highly context-specific and can have severe impacts on the safety and dignity of those targeted. The Board notes that “the consequences of and harm caused by different manifestations of online violence are specifically gendered, given that women and girls suffer from particular stigma in the context of structural inequality, discrimination and patriarchy” (A/HRC/38/47, para. 25). As the Protest Critic was not invited to provide a statement, the impact of this post on them is unknown. However, analysis of the comment thread shows the user actively engaged in a contentious political discussion and felt safe to attack and insult Navalny, his supporters, and the January 23 protesters. The term “cowardly bot” may generally be considered insulting and may offend the “Dignity” of the user who reported the content. However, the Board finds that the risk of likely harm to the Protest Critic was minor, considering the tone of the overall exchange. 8.3 Compliance with Facebook’s human rights responsibilities The Board found that the removal of the Protester’s content under the Bullying and Harassment Community Standard was not consistent with Facebook’s human rights responsibilities. Freedom of expression (Article 19 ICCPR) Article 19, para. 2, of the ICCPR provides broad protection for expression of “all kinds,” including political discourse, and the “free communication of information and ideas about public and political issues between citizens…is essential” (General Comment No. 34, para. 13). The UN Human Rights Committee has made clear that the protection of Article 19 extends to expression that may be considered “deeply offensive” (General Comment No. 34, paras. 11, 12). While the right to freedom of expression is fundamental, it is not absolute. It may be restricted, but restrictions should meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para.
3, ICCPR). Facebook should seek to align its policies on bullying and harassment with these principles (UN Special Rapporteur on freedom of expression, report A/74/486, para. 58(b)). I. Legality The principle of legality under international human rights law requires rules used to limit expression to be clear and accessible (General Comment No. 34, para. 25). People need to understand what is and what is not allowed. Additionally, precision in rulemaking ensures expression is not limited selectively. Here, however, the Board found Facebook’s Bullying and Harassment Community Standard to be unclear and overly complicated. Overall, the Community Standard is organized in a way that makes it difficult to understand and follow. The policy rationale offers a broad understanding of what the Standard aims to achieve, which includes making users feel safe as well as preventing speech that degrades or shames. The rationale is then followed by a number of “Do nots” and additional rules under two yellow warning signs. These rules list prohibited content, when and how Facebook takes action, and the degrees of protection enjoyed by distinct user groups. It is not made clear in the Community Standards if the aims of the rationale serve simply as guidance for the specific rules that follow, or if they must be interpreted conjunctively with the rules. Furthermore, the information is organized in a seemingly random order. For example, rules applicable to private individuals precede, follow and are sometimes mixed in with rules related to public figures. The Community Standard fails to differentiate between bullying and harassment. As previously noted, experts on the subject agree that these are distinct behaviors. Further, as argued by the civil society organization Article 19, the Community Standard falls below international standards on freedom of expression due to its lack of guidance on how bullying and harassment differ from threats or otherwise offensive speech. The Board finds that combining the distinct concepts of bullying and harassment into a single definition and corresponding set of rules has resulted in the removal of legitimate speech. Furthermore, while the Bullying and Harassment policy applies differently to various categories of individuals and groups, it fails to define these categories. Other key terms, such as “negative character claim,” also lack clear definitions. Accordingly, the Board concludes that the Community Standard failed the test of legality. II. Legitimate aim Under international human rights law, any measure restricting expression must be for a purpose listed in Article 19, para. 3, of the ICCPR. Legitimate aims include the protection of the rights or reputations of others, as well as the protection of national security, public order, or public health or morals (General Comment No. 34, para. 28). The Board accepts that the Bullying and Harassment Community Standard aims to protect the rights of others. Users' freedom of expression may be undermined if they are forced off the platform due to bullying and harassment. The policy also seeks to deter behavior that can cause significant emotional distress and psychological harm, implicating users’ right to health. However, the Board notes that any restrictions on freedom of expression must be drafted with care, and a rule’s mere connection to a legitimate aim is not enough to satisfy human rights standards on freedom of expression (General Comment No. 34, paras. 28, 30, 31, 32). III. 
Necessity and proportionality Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). Facebook properly distinguishes between public and private individuals, but it does not recognize the context in which discussions may take place. For instance, in some circumstances private persons engaged in public debate over matters of public concern may open themselves up to criticism pertaining to their statements. The company narrowed the potential reach of its rule on negative character claims against private adults by requiring the targeted user to report content. The Board further notes that in addition to reporting abusive content, Facebook allows users to block or mute each other. This is a useful, albeit limited, tool against abuse. Because these options may be viewed as less restrictive means of limiting expression than removal, the removal of the content in this case was disproportionate. Context is key for assessing necessity and proportionality. The UN Special Rapporteur on freedom of expression has stated in relation to hate speech that the “evaluation of context may lead to a decision to make an exception in some instances, when the content must be protected as, for example, political speech” (A/74/486, at para. 47(d)). This approach may be extended to assessments of bullying and harassment. In this case, Facebook should have considered the environment for freedom of expression in Russia generally, and specifically government campaigns of disinformation against opponents and their supporters, including in the context of the January protests. The Protest Critic’s engagement with the Protester in this case repeated the false claim that Navalny protesters were manipulated children. The accusation of “cowardly bot” in the context of a heated discussion on these issues was unlikely to cause harm, in particular given the equally hostile allegations and accusations from the Protest Critic. Facebook notified the Board that in January 2021 it determined that potential mass nationwide protests in support of Navalny constituted a high-risk event and asked its moderators to flag trends and content where it was unclear if Community Standards had been violated. In March 2021, Facebook reported that it removed 530 Instagram accounts involved in coordinated inauthentic activities targeting pro-Navalny Russian users. Facebook was thus aware of the wider context of the content in this case, and heightened caution should have led to a more careful assessment of content related to the protests. Additionally, the removed content appears to have lacked elements that often constitute bullying and harassment, such as repeated attacks or an indication of a power imbalance. While calling someone cowardly can be a negative character claim, the content was the culmination of a heated political exchange on current events in Russia. Considering the factors above, the Board concludes that Facebook’s decision to remove the content under its Bullying and Harassment Community Standard was unnecessary and disproportionate. 9. Oversight Board Decision The Oversight Board overturns Facebook’s decision to remove the content, requiring the post to be restored. 10. 
Policy advisory statement To comply with international human rights standards, Facebook should amend and redraft its Bullying and Harassment Community Standard to: 1. Explain the relationship between the policy rationale and the “Do nots” as well as the other rules restricting content that follow it. 2. Differentiate between bullying and harassment and provide definitions that distinguish the two acts. Further, the Community Standard should clearly explain to users how bullying and harassment differ from speech that only causes offense and may be protected under international human rights law. 3. Clearly define its approach to different target user categories and provide illustrative examples of each target category (e.g., who qualifies as a public figure). Organize the Community Standard on Bullying and Harassment by the user categories currently listed in the policy. 4. Include illustrative examples of violating and non-violating content in the Bullying and Harassment Community Standard to clarify the policy lines drawn and how these distinctions can rest on the identity status of the target. 5. Require an assessment of the social and political context of content that includes a ‘negative character claim’ against a private adult. Facebook should reconsider the enforcement of this rule in political or public debates where the removal of the content would stifle debate. 6. Whenever Facebook removes content because of a negative character claim that is only a single word or phrase in a larger post, it should promptly notify the user of that fact, so that the user can repost the material without the negative character claim. *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. Return to Case Decisions and Policy Advisory Opinions" fb-79khz1p5,Supreme Court in White Hoods,https://www.oversightboard.com/decision/fb-79khz1p5/,"December 18, 2023",2023,December,"TopicFreedom of expression, Humor, PoliticsCommunity StandardDangerous individuals and organizations",Dangerous individuals and organizations,Overturned,United States,"A user appealed Meta’s decision to remove a Facebook post that contains an edited image of the Supreme Court of the United States, depicting six of the nine members wearing the robes of the Ku Klux Klan.",5042,764,"Overturned December 18, 2023 A user appealed Meta’s decision to remove a Facebook post that contains an edited image of the Supreme Court of the United States, depicting six of the nine members wearing the robes of the Ku Klux Klan. Summary Topic Freedom of expression, Humor, Politics Community Standard Dangerous individuals and organizations Location United States Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. 
These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not involve a public comments process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post that contains an edited image of the Supreme Court of the United States, depicting six of the nine members wearing the robes of the Ku Klux Klan. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In July 2023, a user posted an edited image on Facebook that depicts six justices of the Supreme Court of the United States as members of the Ku Klux Klan while three justices, considered to be more liberal, appear unaltered. The post contained no caption and received fewer than 200 views. The post was removed for violating Meta’s Dangerous Organizations and Individuals policy . This policy prohibits content that contains praise, substantive support or representation of organizations or individuals that Meta deems as dangerous. In their appeal to the Board, the user emphasized that the post was intended to be a political critique rather than an endorsement of the Ku Klux Klan. The user stated that the content highlights what the user regards as the six justices’ “prejudicial, hateful, and destructive attitudes toward women, women’s rights to choose abortions, the gay, lesbian, transgender and queer communities, and the welfare of other vulnerable groups.” After the Board brought this case to Meta’s attention, the company determined the content did not violate Meta’s Dangerous Organizations and Individuals policy and its removal was incorrect. The company then restored the content to Facebook. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance The case highlights an error in Meta’s enforcement of its Dangerous Organizations and Individuals policy, specifically relating to content shared as political critique. Continued similar errors could significantly limit important free expression by users, and the company should make reducing such errors a high priority. The Dangerous Organizations and Individuals Community Standard is the source of many erroneous takedowns and has been addressed in a number of prior Board decisions. 
In one earlier decision, the Board asked Meta to “explain in the Community Standards how users can make the intent behind their posts clear to Facebook.” To the same end, the Board also recommended that the company publicly disclose its list of designated individuals and organizations, and that “Facebook should also provide illustrative examples to demonstrate the line between permitted and prohibited content, including in relation to application of the rule clarifying what 'support' excludes,” ( Ocalan’s Isolation decision, recommendation no. 6). Meta committed to partial implementation of this recommendation. Additionally, the Board urged Meta to “include more comprehensive information on error rates for enforcing rules on 'praise' and 'support' of dangerous individuals and organizations,” ( Ocalan’s Isolation decision, recommendation no. 12). Meta declined to implement this recommendation following a feasibility assessment. The Board emphasizes that full implementation of these recommendations could reduce the number of enforcement errors under Meta’s Dangerous Organizations and Individuals policy. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-7p5w797i,Hateful Memes Video Montage,https://www.oversightboard.com/decision/fb-7p5w797i/,"March 7, 2024",2024,,"TopicRace and ethnicity, Religion, Sex and gender equalityCommunity StandardHate speech",Hate speech,Overturned,United States,"A user appealed Meta’s decision to leave up a Facebook post in which a video montage, set to German music, contains a series of antisemitic, racist, homophobic and transphobic memes. This case highlights errors in Meta’s enforcement of its Hate Speech policy.",5691,892,"Overturned March 7, 2024 A user appealed Meta’s decision to leave up a Facebook post in which a video montage, set to German music, contains a series of antisemitic, racist, homophobic and transphobic memes. This case highlights errors in Meta’s enforcement of its Hate Speech policy. Summary Topic Race and ethnicity, Religion, Sex and gender equality Community Standard Hate speech Location United States Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not consider public comments and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to leave up a Facebook post in which a video montage, set to German music, contains a series of antisemitic, racist, homophobic and transphobic memes. This case highlights errors in Meta’s enforcement of its Hate Speech policy. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the post. Case Description and Background In August 2023, a Facebook user posted a three-minute video clip containing a series of antisemitic, racist, homophobic and transphobic memes. 
The memes, among other things, allege Jewish control of media institutions, praise the Nazi military, express contempt toward interracial relationships, compare Black people to gorillas, display anti-Black and anti-LGBTQIA+ slurs, and advocate for violence against these communities. The accompanying caption, in English, claims that the post would get the user’s Facebook page suspended but that it would be “worth it,” and calls on those interacting with the content to “show these degenerates your utter contempt” and to download the video. The post was viewed approximately 4,000 times and reported fewer than 50 times. This content violates several elements of Meta’s Hate Speech policy, which prohibits content that references “harmful stereotypes historically linked to intimidation,” such as “claims that Jewish people control financial, political or media institutions.” Furthermore, the policy forbids dehumanizing imagery, such as content that equates “Black people and apes or ape-like creatures.” Additionally, the policy forbids the use of racialized slurs. The memes in this content violate the above elements by alleging Jewish control of the media (one meme shows a Kippah (Jewish cap) with “facebook” written on it); comparing Black people to gorillas; and using racialized slurs by displaying the n-word on a sword wielded by a cartoon character. Meta initially left the content on Facebook. After the Board brought this case to Meta’s attention, the company determined that the content did violate the Hate Speech Community Standard and removed the content. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights errors in how Meta enforces its Hate Speech policy. The Board has previously examined Meta’s Hate Speech policy where user content used slurs, such as in the South Africa Slurs case in which the Board determined the use of a racial slur was degrading and exclusionary, and for content referring to a group of people as subhuman, such as in the Knin Cartoon case. The content in this case contained multiple instances of violating content, from the use of slurs to target Black people to accusing Jewish people of controlling the media. While the caption indicates the user is acutely aware their content is likely to be violating and removed for hateful speech, the content was not removed until the Board identified the case for review based on another user’s appeal. The Board has also published summary decisions illustrating that Meta continues to have difficulty enforcing its Hate Speech policy, as shown in the Planet of the Apes Racism and Media Conspiracy Cartoon cases with regard to speech against Black and Jewish people respectively. 
Previously, the Board has noted in its Post in Polish Targeting Trans People case that Meta’s failures to take the correct enforcement action, despite multiple signals about a post’s harmful content, led the Board to conclude the company is not living up to the ideals it has articulated on the safety of LGBTQIA+ and other marginalized communities. The Board urges Meta to close enforcement gaps under the Hate Speech Community Standard. Decision The Board overturns Meta's original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-7uk5f6vg,Karachi Mayoral Election Comment,https://www.oversightboard.com/decision/fb-7uk5f6vg/,"December 18, 2023",2023,December,"TopicElections, Freedom of expression, PoliticsCommunity StandardDangerous individuals and organizations",Dangerous individuals and organizations,Overturned,Pakistan,"A Facebook user appealed Meta’s decision to remove their comment showing the 2023 Karachi mayoral election results and containing the name of Tehreek-e-Labbaik Pakistan (TLP), a party designated under Meta’s Dangerous Organizations and Individuals policy.",5838,859,"Overturned December 18, 2023 A Facebook user appealed Meta’s decision to remove their comment showing the 2023 Karachi mayoral election results and containing the name of Tehreek-e-Labbaik Pakistan (TLP), a party designated under Meta’s Dangerous Organizations and Individuals policy. Summary Topic Elections, Freedom of expression, Politics Community Standard Dangerous individuals and organizations Location Pakistan Platform Facebook This is a summary decision . Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not involve a public comments process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A Facebook user appealed Meta’s decision to remove their comment showing the 2023 Karachi mayoral election results and containing the name of Tehreek-e-Labbaik Pakistan (TLP), a far-right Islamist political party designated under Meta’s Dangerous Organizations and Individuals policy. This case highlights the over-enforcement of this policy and its impact on users’ ability to share political commentary and news reporting. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the comment. Case Description and Background In June 2023, a Facebook user commented on a post of a photograph of Karachi politician Hafiz Naeem ur Rehman with former Pakistani Prime Minister Imran Khan and Secretary General of the Jamaat-e-Islami political party, Liaqat Baloch. The comment is an image of a graph taken from a television program, which shows the number of seats won by the various parties in the Karachi mayoral election. One of the parties included in the list is Tehreek-e-Labbaik Pakistan (TLP) , a far-right Islamist political party in Pakistan. 
The 2023 Karachi mayoral election was a contested race, with one losing party alleging that the vote was unfairly rigged and ensuing violent protests taking place between supporters of different parties. Meta originally removed the comment from Facebook, citing its Dangerous Organizations and Individuals policy , under which the company removes content that ""praises,” “substantively supports” or “represents” individuals and organizations it designates as dangerous. However, the policy recognizes that “users may share content that includes references to designated dangerous organizations and individuals in the context of social and political discourse. This includes content reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities.” In the appeal to the Board, the user identified themselves as a journalist and stated that the comment was about the Karachi mayoral election results. The user clarified that the intention of the comment was to inform the public and discuss the democratic process. After the Board brought this case to Meta’s attention, the company determined the content did not violate its policies. Meta’s policy allows for neutral discussion of a designated entity in the context of social and political discourse, in this case, reporting on the outcome of an election. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights over-enforcement of Meta’s Dangerous Organizations and Individuals policy. The Board’s cases suggest that errors of this sort are all too frequent. They impede users’ – especially journalists’ – abilities to report factual information about organizations labeled as dangerous. The company should make reducing such errors a high priority. The Board has issued several recommendations regarding Meta’s Dangerous Organizations and Individuals policy. These included a recommendation to “evaluate automated moderation processes for enforcement of the DOI policy,” which Meta declined to implement ( Öcalan’s Isolation decision, recommendation no. 2). The Board has also recommended that Meta “assess the accuracy of reviewers enforcing the reporting allowance under the DOI policy to identify systemic issues causing enforcement errors,” ( Mention of the Taliban in News Reporting decision, recommendation no. 5). Meta is in the process of implementing an update to its Dangerous Organizations and Individuals policy, which will include details about how Meta approaches news reporting as well as neutral and condemning discussion. Furthermore, the Board has recommended Meta “provide a public list of the organizations and individuals designated ‘dangerous’ under the Dangerous Individuals and Organizations Community Standard,” which Meta declined to implement after a feasibility assessment, ( Nazi Quote decision, recommendation no. 3). Decision The Board overturns Meta’s original decision to remove the content. 
The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-8rtrzy6q,Washington Post Article on Israel-Palestine,https://www.oversightboard.com/decision/fb-8rtrzy6q/,"April 4, 2024",2024,,"TopicJournalism, News events, War and conflictCommunity StandardDangerous individuals and organizations",Dangerous individuals and organizations,Overturned,"Israel, Palestinian Territories, United States","A user appealed Meta’s decision to remove a Facebook post with a link to a Washington Post article that addressed the chronology of the Israeli-Palestinian conflict. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.",5424,797,"Overturned April 4, 2024 A user appealed Meta’s decision to remove a Facebook post with a link to a Washington Post article that addressed the chronology of the Israeli-Palestinian conflict. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Summary Topic Journalism, News events, War and conflict Community Standard Dangerous individuals and organizations Location Israel, Palestinian Territories, United States Platform Facebook This is a summary decision . Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post with a link to a Washington Post article that addressed the chronology of the Israeli-Palestinian conflict. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In October 2023, a Facebook user posted a link to a Washington Post article covering the chronology of the Israeli-Palestinian conflict. The article preview, which was automatically included with the link, mentions Hamas. The user did not add a caption to accompany the post or provide any further context. This Facebook post was removed under Meta’s Dangerous Organizations and Individuals policy , which prohibits representation of and certain speech about the groups and people the company judges as linked to significant real-world harm. In their appeal to the Board, the user emphasized that the post was intended to report on the current Israel-Hamas conflict and was not meant to provide support for Hamas, or any other dangerous organization. After the Board brought this case to Meta’s attention, the company determined the content did not violate the Dangerous Organizations and Individuals policy as the post references Hamas in a news-reporting context, which is allowed under the policy. The company then restored the content to Facebook. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). 
When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights an instance of Meta over-enforcing its Dangerous Organizations and Individuals policy, specifically news reporting on entities the company designates as dangerous. This is a recurring problem, which has been particularly frequent during the current Israel-Hamas conflict, in which one of the parties is a designated organization. The Board has issued numerous recommendations relating to the news reporting allowance under the Dangerous Organizations and Individuals policy. Continued errors in applying this important allowance can significantly limit users’ free expression, the public’s access to information, and impair public discourse. In a previous decision, the Board recommended that Meta “assess the accuracy of reviewers enforcing the reporting allowance under the Dangerous Organizations and Individuals policy in order to identify systemic issues causing enforcement errors,” ( Mention of the Taliban in News Reporting , recommendation no. 5). Meta reported implementation as work it already does, without publishing information to prove so. The Board also recommended that Meta “add criteria and illustrative examples to its Dangerous Organizations and Individuals policy to increase understanding of the exceptions for neutral discussion condemnation and news reporting,” ( Shared Al Jazeera Post , recommendation no. 1). The implementation of this recommendation was demonstrated through published information. Furthermore, the Board recommended that Meta “include more comprehensive information on error rates for enforcing rules on ‘praise’ and ‘support’ of dangerous individuals and organizations” in transparency reporting, ( Ocalan’s Isolation , recommendation no. 12). Meta declined to implement this recommendation after conducting a feasibility assessment. In an update to its policy dated December 29, 2023, Meta now uses the term “glorification” instead of “praise” in its Community Standard. The Board believes that full implementation of these recommendations could reduce the number of enforcement errors under Meta’s Dangerous Organizations and Individuals policy. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-ajtd9p90,Planet of the Apes racism,https://www.oversightboard.com/decision/fb-ajtd9p90/,"November 16, 2023",2023,,"TopicDiscrimination, Marginalized communities, Race and ethnicityCommunity StandardHate speech",Hate speech,Overturned,France,A user appealed Meta’s decision to leave up a Facebook post that likens a group of Black individuals involved in a riot in France to the “Planet of the Apes.”,5392,846,"Overturned November 16, 2023 A user appealed Meta’s decision to leave up a Facebook post that likens a group of Black individuals involved in a riot in France to the “Planet of the Apes.” Summary Topic Discrimination, Marginalized communities, Race and ethnicity Community Standard Hate speech Location France Platform Facebook This is a summary decision. 
Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case summary A user appealed Meta’s decision to leave up a Facebook post that likens a group of Black individuals involved in a riot in France to the “Planet of the Apes.” After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the post. Case description and background In January 2023, a Facebook user posted a video that appears to have been taken from a car driving at night. The video shows the car driving through neighborhoods until a group of Black men appear and are seen chasing the car towards the end of the footage. The caption states in English that, “France has fell like planet of the friggin apes over there rioting in the streets running amok savages” and writes about how “the ones” that make it to ""our shores” are given housing for what the user believes to be at a significant cost. The post had under 500 views. A Facebook user reported the content. Under Meta’s Hate Speech policy , the company removes content that dehumanizes people belonging to a designated protected characteristic group by comparing them to “insects” or “animals in general or specific types of animals that are culturally perceived as intellectually or physically inferior (including but not limited to: Black people and apes or ape-like creatures; Jewish people and rats; Muslim people and pigs; Mexican people and worms).” Meta initially left the content on Facebook. After the Board brought this case to Meta’s attention, the company determined that the content violated the Hate Speech Community Standard and its original decision to leave up the content was incorrect. Meta explained to the Board that the caption for the video violated its Hate Speech policy by comparing the men to apes and should have been removed. The company then removed the content from Facebook. Board authority and scope The Board has authority to review Meta's decision following an appeal from the user who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Case significance This case highlights difficulties in Meta’s consistent enforcement of its content policies. Meta has a specific provision within its Hate Speech policy prohibiting comparisons of Black people to apes, and yet it still failed to remove the content in this case. Other cases taken by the Board have examined Meta’s Hate Speech policies and contextual factors in determining whether that speech involved qualified or unqualified behavioral statements about a protected group for legitimate social commentary. 
The content in this case, however, unequivocally uses dehumanizing hate speech for the express purpose of denigrating a group of individuals based on their race and should have been removed. The case also underscores how problems with enforcement may result in content remaining on Meta’s platforms, which discriminates against a group of people based on their race or ethnicity. The Board notes that this type of content, at scale, contributes to the further marginalizing of visible minority groups and even potentially leads to offline harm, particularly in regions where there is existing animosity towards immigrants. Previously, the Board has issued a recommendation that emphasized the importance of moderators recognizing nuance in Meta’s Hate Speech policy. Specifically, the Board recommended that “Meta should clarify the Hate Speech Community Standard and the guidance provided to reviewers, explaining that even implicit references to protected groups are prohibited by the policy when the reference would reasonably be understood” ( Knin cartoon decision, recommendation no. 1). Partial implementation of this recommendation by Meta has been demonstrated through published information. The Board underlines the need for Meta to address these concerns to reduce the error rate in moderating hate speech content. Decision The Board overturns Meta's original decision to leave up the content. The Board acknowledges Meta's correction of its initial error once the Board brought the case to Meta's attention. Return to Case Decisions and Policy Advisory Opinions" fb-ap0nsbvc,Sudan graphic video,https://www.oversightboard.com/decision/fb-ap0nsbvc/,"June 13, 2022",2022,,"TopicNews events, SafetyCommunity StandardViolent and graphic content","Policies and TopicsTopicNews events, SafetyCommunity StandardViolent and graphic content",Upheld,Sudan,The Oversight Board has upheld Meta’s decision to restore a Facebook post depicting violence against a civilian in Sudan.,33215,5237,"Upheld June 13, 2022 The Oversight Board has upheld Meta’s decision to restore a Facebook post depicting violence against a civilian in Sudan. Standard Topic News events, Safety Community Standard Violent and graphic content Location Sudan Platform Facebook Sudan graphic video public comments The Oversight Board has upheld Meta’s decision to restore a Facebook post depicting violence against a civilian in Sudan. The content raised awareness of human rights abuses and had significant public interest value. The Board recommended that Meta add a specific exception on raising awareness of or documenting human rights abuses to the Violent and Graphic Content Community Standard. About the case On December 21, 2021, Meta referred a case to the Board concerning a graphic video which appeared to depict a civilian victim of violence in Sudan. The content was posted to the user’s Facebook profile page following the military coup in the country on October 25, 2021. The video shows a person lying next to a car with a significant head wound and a visibly detached eye. Voices can be heard in the background saying in Arabic that someone has been beaten and left in the street. A caption, also in Arabic, calls on people to stand together and not to trust the military, with hashtags referencing documenting military abuses and civil disobedience. After being identified by Meta’s automated systems and reviewed by a human moderator, the post was removed for violating Facebook’s Violent and Graphic Content Community Standard. 
After the user appealed, however, Meta issued a newsworthiness allowance exempting the post from removal on October 29, 2021. Due to an internal miscommunication, Meta did not restore the content until nearly five weeks later. When Meta restored the post, it placed a warning screen on the video. Key findings The Board agrees with Meta’s decision to restore this content to Facebook with a warning screen. However, Meta’s Violent and Graphic Content policy is unclear on how users can share graphic content to raise awareness of or document abuses. The rationale for the Community Standard, which sets out the aims of the policy, does not align with the rules of the policy. While the policy rationale states that Meta allows users to post graphic content “to help people raise awareness” about human rights abuses, the policy itself prohibits all videos (whether shared to raise awareness or not) “of people or dead bodies in non-medical settings if they depict dismemberment.” The Board also concludes that, while it was used in this case, the newsworthiness allowance is not an effective means of allowing this kind of content on Facebook at scale. Meta told the Board that it “documented 17 newsworthy allowances in connection with the Violent Graphic Content policy over the past 12 months (12 months prior to March 8, 2022). The content in this case represents one of those 17 allowances.” By comparison, Meta removed 90.7 million pieces of content under this Community Standard in the first three quarters of 2021. The Board finds it unlikely that, over one year, only 17 pieces of content related to this policy should have been allowed to remain on the platform as newsworthy and in the public interest. To ensure such content is allowed on Facebook, the Board recommends that Meta amends the Violent and Graphic Content Community Standard to allow videos of people or dead bodies when shared to raise awareness or document abuses. Meta must also be prepared to respond quickly and systematically to conflicts and crisis situations around the world. The Board’s decision on “Former President Trump’s Suspension” recommended that Meta “develop and publish a policy that governs Facebook’s response to crises.” While the Board welcomes the development of this protocol, which Meta says it has adopted, the company must implement the protocol more quickly and provide as much detail as possible on how it will operate. The Oversight Board’s decision The Oversight Board upholds Meta’s decision to restore the post with a warning screen that prevents minors from seeing the content. As a policy advisory opinion, the Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board upholds Meta’s decision to restore to Facebook a post containing a video, with a caption, that depicts violence against a civilian in Sudan. The post was restored under the newsworthiness allowance with a warning screen marking the content as sensitive, making it generally inaccessible to minors, and requiring all other users to click through to see the content. The Board finds that this content, which sought to raise awareness of or document human rights abuses, had significant public interest value. While the initial removal of the content was in line with the rules in the Violent and Graphic Content Community Standard, Meta’s decision to restore the content with a sensitivity screen is consistent with its policies, values, and human rights responsibilities. 
The Board notes, however, that Meta’s use of the newsworthiness allowance is not an effective means to keep up or restore content such as this at scale. The Board therefore recommends that Meta add a specific exception on raising awareness of or documenting abuses to the Violent and Graphic Content Community Standard. The Board further urges Meta to prioritize implementation of its earlier recommendation to introduce a policy on collection, preservation, and sharing of content that may evidence violations of international law. 2. Case description and background On December 21, 2021, Meta referred a case to the Board concerning a graphic video which appeared to depict a civilian victim of violence in Sudan. The content was posted to the user's Facebook profile page on October 26, 2021, following a military coup in the country on October 25, 2021 and the start of protests against the military takeover of the government. The video shows a person with a significant head wound and a visibly detached eye lying next to a car. Voices can be heard in the background saying in Arabic that someone has been beaten and left in the street. The post includes a caption, also in Arabic, calling on the people to stand together and not to trust the military, with hashtags referencing documenting military abuses and civil disobedience. Meta explained that its technology identified the content as potentially violating its Violent and Graphic Content Community Standard on the same day that it was posted, October 26, 2021. Following human review, Meta determined that it violated Facebook’s Violent and Graphic Content policy and removed it. The content creator subsequently disagreed with the decision. On October 28, 2021, the content was escalated to policy and subject matter experts for their additional review. Following the review, Meta issued a newsworthiness allowance exempting the post from removal under the Violent and Graphic Content policy on October 29, 2021. However, due to an internal miscommunication, Meta did not actually restore the content until December 2, 2021, nearly five weeks later. When it restored the content, it also placed a warning screen on the video marking it as sensitive and requiring users to click through to view the content. The warning screen prohibits users under the age of 18 from viewing the video. The post was viewed fewer than 1,000 times and no users reported the content. The following factual background is relevant to the Board’s decision. Following the military takeover of the civilian government in Sudan in October 2021 and the start of civilian protests, security forces in the country fired live ammunition, used tear gas, and arbitrarily arrested and detained protesters, according to the UN High Commissioner for Human Rights . Security forces have also targeted journalists and activists, searching their homes and offices. Journalists have been attacked, arrested, and detained. According to experts consulted by the Board, with the military takeover of state media and crackdown on Sudanese papers and broadcasters, social media became a crucial source of information and venue to document the violence carried out by the military. The military shut down the internet simultaneously with the arrest of civilian leadership on October 25, 2021, and consistent access to the internet since then has been regularly disrupted across the country. 3. 
Oversight Board Authority and Scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). 4. Sources of authority The Oversight Board considered the following sources of authority: I. Oversight Board decisions: In previous decisions, the Board has considered and provided recommendations on Meta’s policies and processes. The most relevant include: II. Meta’s content policies: Facebook’s Community Standards: Under the rationale for its Violent and Graphic Content policy, Meta states that it removes any content that “glorifies violence or celebrates suffering” but allows graphic content “to help people raise awareness.” The rules of the policy prohibit posting “videos of people or dead bodies in non-medical settings if they depict dismemberment.” According to its newsworthiness allowance, Meta allows violating content on its platforms if it is newsworthy and “if keeping it visible is in the public interest.” III. Meta’s values: Meta's values are outlined in the introduction to Facebook's Community Standards. The values relevant to this case are those of “Voice,” “Safety,” “Privacy,” and “Dignity.” The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. [We want] people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits “Voice” in service of four other values, and three are relevant here: “Safety”: Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. “Privacy”: We’re committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, choose how and when to share on Facebook and connect more easily. “Dignity”: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. IV. International human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards: 5. User submissions Following Meta's referral and the Board's decision to accept the case, the user was sent a message notifying them of the Board's review and providing them with an opportunity to submit a statement to the Board. The user did not submit a statement. 6. 
Meta’s submissions In its referral, Meta stated that the decision on this content was difficult because it highlights the tension between the public interest value of documenting human rights violations and the risk of harm associated with sharing such graphic content. Meta also highlighted the importance of allowing users to document human rights violations during a coup and the shutdown of internet access in the country. Meta informed the Board that immediately after the military coup occurred in Sudan, Meta created a crisis response cross-functional team to monitor the situation and communicate emerging trends and risks. According to Meta, this team observed “spikes in relation to reports of content depicting Graphic Violence and Violence and Incitement at times when protests were most active. [The team] was instructed to escalate requests to allow instances of graphic violence that would otherwise violate the Graphic and Violent Content policy, including content depicting state-backed human rights abuses.” Meta noted that this video was taken in the context of widespread protests and real concerns regarding press freedom in Sudan. Meta also noted that this type of content could “warn users in the area of a threat to their safety and is particularly important during an internet blackout where journalists’ access to the location may be limited.” Meta also stated that the decision to restore the content was in line with its values, especially the value of “Voice,” which is paramount. Meta cited prior Board decisions stating that political speech is central to the value of “Voice”: case decisions 2021-010-FB-UA (“Colombia Protests”); 2021-003-FB-UA (“Punjabi concern over the RSS in India”); 2021-007-FB-UA (“Myanmar Bot”); and 2021-009-FB-UA (“Shared Al Jazeera post”). Meta told the Board that it determined that its initial decision to remove the content was inconsistent with Article 19 of the ICCPR, specifically with the principle of necessity. Therefore, it restored the content pursuant to the newsworthiness allowance. To mitigate any potential risk of harm involved in allowing the graphic content to remain on the platform once restored, Meta restricted access to it to people over the age of 18 and applied a warning screen. Meta also noted in its case rationale that the decision to reinstate the content was consistent with the report of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, particularly “the right to access information on human rights violations.” Meta also noted that because it applied a warning screen that does not permit users under the age of 18 to see the content, it also considered the impact of the decision on the rights of the child. Meta told the Board when making its decision that it considered Article 13 of the Convention on the Rights of the Child and General Comment No. 25 On Children’s Rights in Relation to the Digital Environment, in protecting the child’s right to freedom of expression, including the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers.” Meta explained in its case rationale that its decision to restrict the visibility of the content to adults served the legitimate aim of protecting the safety of minors and was proportional to that aim. The Board asked Meta 21 questions. Meta responded to 17 fully and 4 partially. 
The partial responses were to do with questions on measuring the impact of Meta’s automated system on content on the platform and why the Violent and Graphic Content Community Standard does not contain a raising awareness exception. 7. Public comments The Board received five public comments for this case. Two comments were from Europe, one from Sub-Saharan Africa, and two from the United States and Canada. The submissions covered the following themes: the need to adopt a more context-sensitive approach that would set a higher threshold for removal of content in regions subject to armed conflicts, so that less content is removed; the need to preserve materials for potential future investigations or to hold violators of human rights accountable; and that the newsworthiness allowance is likely to be applied in an ad hoc and contestable manner and that this practice should be reconsidered. In March 2022, as part of ongoing stakeholder engagement, the Board spoke with approximately 50 advocacy organization representatives and individuals working on reporting and documenting human rights abuses, academics researching ethics, human rights, and documentation, and stakeholders interested in engaging with the Board on issues arising from the Violent and Graphic Content Community Standard and its enforcement in crisis or protest contexts. These ongoing engagements are held under the Chatham House Rule in order to ensure frank discussion and to protect the participants. The discussion touched on a number of themes including the vital role of social media within countries controlled by repressive regimes for documenting human rights violations and bringing international media and public attention to state-sanctioned violence; shared concerns that a universal standard on violent and graphic content is in practice a US-focused standard; and observed that the use of warning screens is useful to address the real problem of trauma, though some organizations reported that warning screens may limit the reach of their content. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board looked at the question of whether this content should remain on the platform through three lenses: Meta's content policies, the company's values, and its human rights responsibilities. 8.1. Compliance with Meta’s content policies I. Content rules The Board agrees with Meta’s decision to restore this content to the platform with a warning screen and age restriction, but notes that there is a lack of clarity in Meta’s content policies and no effective means of implementing this response to similar content at scale. Meta’s initial decision to remove the content was consistent with the rules within its Violent and Graphic Content Community Standard – the content violated the policy by depicting human dismemberment in a non-medical setting (a person with a significant head wound and a visibly detached eye). However, the policy rationale of the Violent and Graphic Content Community Standard states that “[Meta] allow[s] graphic content (with some limitations) to help people raise awareness about issues. [Meta] know[s] that people value the ability to discuss important issues such as human rights abuses or acts of terrorism.” Despite this reference in the policy rationale, the specific rules within the Community Standard do not include a “raising awareness” exception. 
Meta’s internal standards for its reviewers also do not include an exception for content seeking to raise awareness of or document human rights abuses. In the absence of a specific exception within the Community Standard, the Board agrees with Meta’s decision to restore the content using the newsworthiness allowance. Meta states that it allows violating content to remain on the platform under the newsworthiness allowance if it determines that it is newsworthy and “keeping it visible is in the public interest [and] after conducting a balancing test that weighs the public interest against the risk of harm.” II. Enforcement action The Board notes that although Meta made the decision to issue a newsworthiness allowance and restore the post with a warning screen on October 29, 2021, the post was not actually restored to the platform until nearly five weeks later, on December 2, 2021. Meta said that communication about the final decision on the content occurred outside of its normal escalation management tools, “leading to the delay in taking appropriate action on the content.” The Board finds this explanation and the delay extremely troubling and emphasizes the importance of Meta taking timely action in relation to decisions such as this one, in the context of a public crisis and when the freedom of the press has been severely restricted. When Meta initially removed this content, it applied a 30-day feature limit preventing the user from creating new content, during a period when protestors in the streets and journalists reporting on the coup and the military crackdown were being met with severe violence and repression. 8.2 Compliance with Meta’s values The Board concludes that keeping this content on the platform with a warning screen is consistent with Meta’s values of “Voice” and “Safety.” The Board recognizes the importance of “Dignity” and “Privacy” in the context of protecting victims of human rights abuses. The content affects the dignity and privacy of the injured person in the video and their family; the person depicted is identifiable and they, or their family or loved ones, may not have wished for this type of footage of them to be broadcast. The Board also notes the relevance of “Safety” in this context, which aims to protect users from content that poses a “risk of harm to the physical security of persons.” On one hand, the user sought to raise awareness of the ongoing coup, which could contribute to improving the safety of persons in that region. On the other hand, the content may also create risks for the person shown in the video and/or their family. The Board concludes that in a context where civic space and media freedom are curtailed by the state, the value of “Voice” becomes even more important. Here, “Voice” also serves to enhance the value of “Safety” by ensuring people have access to information and state violence is exposed. 8.3 Compliance with Meta’s human rights responsibilities The Board finds that keeping the content on the platform with a warning screen is consistent with Meta’s human rights responsibilities. However, the Board concludes that Meta’s policies should be amended to better respect the right to freedom of expression for users seeking to raise awareness of or document abuses. Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides broad protection for freedom of expression, including the right to seek and receive information. 
However, the right may be restricted under certain specific conditions, known as the three-part test of legality (clarity), legitimacy, and necessity and proportionality. Although the ICCPR does not create obligations for Meta as it does for states, Meta has committed to respecting human rights as set out in the UNGPs. This commitment encompasses internationally recognized human rights as defined, among other instruments, by the ICCPR. I. Legality (clarity and accessibility of the rules) Any restriction on freedom of expression should be accessible and clear enough to provide guidance as to what is permitted and what is not. The Board concludes that the Violent and Graphic Content policy does not make clear how Meta permits users to share graphic content to raise awareness of or document abuses. The rationale for the Community Standard, which sets out the aims of the policy, does not align with the rules of the policy. The policy rationale states that Meta allows users to post graphic content “to help people raise awareness about” human rights abuses but the policy prohibits all videos (whether they are shared to raise awareness or not) “of people or dead bodies in non-medical settings if they depict dismemberment.” While Meta correctly relied on the broader newsworthiness allowance to restore this content, the Violent and Graphic Content Community Standard does not make clear whether this type of content will be allowed on the platform. The Board also concludes that the newsworthiness allowance does not make clear when content documenting human rights abuses or atrocities will benefit from the allowance. While we agree with Meta that determining newsworthiness can be “highly subjective,” the rule in question does not even define the term. The policy states that the company assigns “special value to content that surfaces imminent threats to public health or safety or that gives voice to perspectives currently being debated as part of a political process.” Emblematic examples and clear principles should guide the exercise of discretion in applying this allowance. Absent those, its use is likely to be inconsistent and arbitrary. Furthermore, the newsworthiness allowance makes no reference to the use of warning screens (or interstitials) for content that otherwise violates Meta’s policies. Lastly, the Board, in a previous case (“Colombia Protests”), recommended that Meta “develop and publicize clear criteria for content reviewers to escalate for additional review public interest content.” Meta responded that it has already publicized the criteria for escalation through the Transparency Center article on newsworthiness. However, this article focuses on factors Meta considers in applying the newsworthiness allowance, and not criteria provided to moderators for when to escalate content (i.e., send it for additional review). If the newsworthiness allowance is intended to be part of the company's scaled content moderation system, processes for escalation and use ought to facilitate that aim. The Board notes that the lack of clarity surrounding when, and how, the newsworthiness allowance is applied is likely to invite arbitrary application of this policy. II. Legitimate aim Restrictions on freedom of expression should pursue a legitimate aim, which includes the protection of the rights of others, such as the right to privacy of the depicted individual (General Comment 34, para. 28) and the right to physical integrity. 
Meta also notes in the rationale for the policy that “content that glorifies violence or celebrates the suffering or humiliation of others...may create an environment that discourages participation.” The Board agrees that the Violent and Graphic Content policy pursues several legitimate aims. III. Necessity and proportionality Restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interests to be protected” (General Comment 34, para. 34). In this case, the Board concludes that placing a warning label on the content was a necessary and proportionate restriction on freedom of expression. The warning screen does not place an undue burden on those who wish to see the content while informing others about the nature of the content and allowing them to decide whether to see it or not. The warning screen also adequately protects the dignity of the individual depicted and their family. The Board also notes that, as discussed in Section 8.1, Meta’s restoration of the post was delayed by nearly five weeks. This delay had a disproportionate impact on freedom of expression in the context of ongoing violence and the restricted media environment in Sudan. A delay of this length undermines the benefits of this speech, which is to provide a warning to civilians and to raise awareness. The Board also concludes that because the newsworthiness allowance is used infrequently, it is not an effective mechanism through which to allow content documenting abuses or seeking to raise awareness on the platform at scale. Meta told the Board that it “documented 17 newsworthy allowances in connection with the Violent Graphic Content policy over the past 12 months (12 months prior to March 8, 2022). The content in this case represents one of those 17 allowances.” By comparison, Meta removed 90.7 million pieces of content under this Community Standard in the first three quarters of 2021. The Board finds it unlikely that only 17 pieces of content related to this policy, globally, over a year, should have been allowed to remain on the platform as newsworthy and in the public interest. The newsworthiness allowance does not provide an adequate mechanism for preserving content of this nature on the platform. In order to avoid censoring protected expressions, Meta should amend the Violent and Graphic Content policy itself to allow such content to remain on the platform. In contexts of war or political unrest, there will be more graphic and violent content captured by users and shared on the platform for the purpose of raising awareness of or documenting abuses. This content is important for promoting accountability. The Board, in the ""Former President Trump’s Suspension"" case, noted that Meta has a responsibility to “collect, preserve and, where appropriate, share information to assist in the investigation and potential prosecution of grave violations of international criminal, human rights and humanitarian law by competent authorities and accountability mechanisms.” The Board also recommended that Meta clarify and state in its Corporate Human Rights Policy protocols for how to make previously public content available to researchers while respecting international standards and data protection laws. In response, Meta committed to briefing the Board on ongoing efforts to address the issue. 
Since the Board published this recommendation on May 5, 2021, Meta has not reported any progress on this issue. The Board finds a delay of this length and the lack of progress concerning, given the role the platform plays in situations of violent conflict (e.g. the current war in Ukraine where users are documenting abuses through social media) and political unrest around the globe. Finally, the Board recalls its recommendation from the ""Former President Trump’s Suspension"" case for Meta to “develop and publish a policy that governs Facebook’s response to crises or novel situations where its regular processes would not prevent or avoid imminent harm.” Meta reported in the Q4 2021 Update on the Oversight Board that the company has prepared a proposal for a new Crisis Protocol in response to the Board’s recommendation and it was adopted. Meta also stated that it will soon provide information on this protocol on the Transparency Center . Meta informed the Board that this protocol was not in place at the time of the coup in Sudan, nor was it operational during the review of this case. The company plans to launch the protocol later in 2022. A well-designed protocol should guide Meta in developing and implementing necessary and proportional responses in crisis situations. Meta should move more quickly to implement this protocol and provide as much detail as possible on how this protocol will operate and interact with existing Meta processes. Meta’s platforms play a prominent role in conflicts and crisis situations around the world and the company must be prepared to respond quickly and systematically to prevent mistakes. 9. Oversight Board decision The Oversight Board upholds Meta's decision to leave up the content with a screen that restricts access to those over 18. 10. Policy advisory statement Content policy 1. Meta should amend the Violent and Graphic Content Community Standard to allow videos of people or dead bodies when shared for the purpose of raising awareness of or documenting human rights abuses. This content should be allowed with a warning screen so that people are aware that content may be disturbing. The Board will consider this recommendation implemented when Meta updates the Community Standard. 2. Meta should undertake a policy development process that develops criteria to identify videos of people or dead bodies when shared for the purpose of raising awareness of or documenting human rights abuses. The Board will consider this recommendation implemented when Meta publishes the findings of the policy development process, including information on the process and criteria for identifying this content at scale. 3. Meta should make explicit in its description of the newsworthiness allowance all the actions it may take (for example, restoration with a warning screen) based on this policy. The Board will consider this recommendation implemented when Meta updates the policy. Enforcement 4. To ensure users understand the rules, Meta should notify users when it takes action on their content based on the newsworthiness allowance including the restoration of content or application of a warning screen. The user notification may link to the Transparency Center explanation of the newsworthiness allowance. The Board will consider this implemented when Meta rolls out this updated notification to users in all markets and demonstrates that users are receiving this notification through enforcement data. 
*Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of more than 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Return to Case Decisions and Policy Advisory Opinions" fb-b6ngyrek,COVID lockdowns in Brazil,https://www.oversightboard.com/decision/fb-b6ngyrek/,"August 19, 2021",2021,,"TopicGovernments, HealthCommunity StandardViolence and incitement","Policies and TopicsTopicGovernments, HealthCommunity StandardViolence and incitement",Upheld,Brazil,"The Oversight Board has upheld Facebook's decision to leave up a post by a state-level medical council in Brazil, which claimed that lockdowns are ineffective and had been condemned by the World Health Organization (WHO).",38871,5980,"Upheld August 19, 2021 The Oversight Board has upheld Facebook's decision to leave up a post by a state-level medical council in Brazil, which claimed that lockdowns are ineffective and had been condemned by the World Health Organization (WHO). Standard Topic Governments, Health Community Standard Violence and incitement Location Brazil Platform Facebook 2021-008-FB-FBR Public Comments The Oversight Board has upheld Facebook’s decision to leave up a post by a state-level medical council in Brazil which claimed that lockdowns are ineffective and had been condemned by the World Health Organization (WHO). The Board found that Facebook’s decision to keep the content on the platform was consistent with its content policies. The Board found that the content contained some inaccurate information which raises concerns considering the severity of the pandemic in Brazil and the council’s status as a public institution. However, the Board found that the content did not create a risk of imminent harm and should, therefore, stay on the platform. Finally, the Board emphasized the importance of adopting measures other than removal to counter the spread of COVID-19 misinformation under certain circumstances, such as those in this case. About the case In March 2021, the Facebook page of a state-level medical council in Brazil posted a picture of a written notice on measures to reduce the spread of COVID-19, entitled “Public note against lockdown.” The notice claims that lockdowns are ineffective, against fundamental rights in the Constitution and condemned by the WHO. It includes an alleged quote from Dr. David Nabarro, a WHO special envoy for COVID-19, stating that ""the lockdown does not save lives and makes poor people much poorer."" The notice claims that the Brazilian state of Amazonas had an increase in deaths and hospital admissions after lockdown as evidence of the failure of lockdown restrictions. The notice claims that lockdowns would lead to an increase in mental disorders, alcohol and drug abuse, and economic damage, amongst other things. It concludes that effective preventative measures against COVID-19 include education campaigns about hygiene, masks, social distancing, vaccination and government monitoring – but never lockdowns. The page has over 10,000 followers. 
The content was viewed around 32,000 times and shared around 270 times. No users reported the content. Facebook took no action against the content and referred the case to the Board. The content remains on the platform. Key findings The Board concluded that Facebook’s decision to keep the content on the platform was consistent with its content policies. The Violence and Incitement Community Standard prohibits content which contains misinformation that contributes to the risk of imminent violence or physical harm. The Help Center article linked from the Standard states that Facebook determines if information is false based on the opinion of public health authorities. The Board found that the content contained some inaccurate information which raises concerns considering the severity of the pandemic in Brazil and the council’s status as a public institution. However, the Board found that the content did not create a risk of imminent harm. The statement that the WHO condemned lockdowns and the quote attributed to Dr. David Nabarro are not fully accurate. Dr. Nabarro did not say that “lockdown does not save lives,” but instead noted that the WHO did “not advocate lockdowns as a primary means of control of this virus” and that they have the consequence of “making poor people an awful lot poorer.” The WHO has said that “lockdowns are not sustainable solutions because of their significant economic, social broader health impacts. However, during the #COVID19 pandemic there’ve been times when restrictions were necessary and there may be other times in the future.” The Board notes Facebook’s argument that the threshold of “imminent harm” was not met because the WHO and other health experts advised the company to “remove claims advocating against specific health practices, such as social distancing,” but not claims advocating against lockdowns. Despite confirming that it has been in communication with Brazil’s national public health authority, Facebook said it does not take into account local context when defining the threshold of imminent harm for enforcement of the policy on misinformation and harm. The Board believes that Facebook should take into consideration local context when assessing the risk of imminent physical harm and the fact that the content was shared by a public institution, which has a duty to provide reliable information. However, the Board still finds that the post does not meet the threshold of imminent harm in this case, despite the severity of the pandemic in Brazil, because the post emphasized the importance of other measures to counter the spread of COVID-19 – including social distancing. Facebook disclosed that the post was eligible for fact-checking, but that fact-checking partners did not assess this content. The Board notes that Facebook’s approach failed to provide additional context to content that may endanger people’s trust in public information about COVID-19, and that Facebook should prioritize sending potential health misinformation from public authorities to fact-checking partners. The Board notes that Facebook has previously stated that content from politicians is not eligible for fact-checking, but its policies do not make clear eligibility criteria for other users, such as pages or accounts administered by public institutions. The Oversight Board’s decision The Oversight Board upholds Facebook's decision to keep the content on the platform. 
In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board has upheld Facebook’s decision to leave up a post by a state-level medical council in Brazil which claimed that lockdowns are ineffective and had been condemned by the World Health Organization (WHO). As such, the content will remain on Facebook. 2. Case description In March 2021, the Facebook page of a state-level medical council in Brazil posted a picture of a written notice with messaging in Portuguese on measures to reduce the spread of COVID-19, entitled “Public note against lockdown.” The notice claims that lockdowns are ineffective, against the fundamental rights in the Constitution and condemned by the World Health Organization (WHO). It includes an alleged quote from Dr. David Nabarro, one of the WHO’s special envoys for COVID-19, stating that ""the lockdown does not save lives and makes poor people much poorer."" The notice also claims that the Brazilian state of Amazonas had an increase in the number of deaths and hospital admissions after lockdown as evidence of the failure of lockdown restrictions. The notice claims that lockdowns would lead to an increase in mental disorders, alcohol and drug abuse, and economic damage, amongst other things. It concludes that effective preventative measures against COVID-19 include education campaigns about hygiene measures, the use of masks, social distancing, vaccination and extensive monitoring by the government – but never the decision to adopt lockdowns. The page has more than 10,000 followers. The content was viewed around 32,000 times and shared around 270 times. No users reported the content. Facebook took no action against the content and referred the case to the Board. The content remains on the platform. The following factual background is relevant to the Board’s decision. Article 1 of Brazil’s Federal Law No. 3268/1957 outlines that medical councils are part of the government administration of each of the 26 states, endowed with legal personality under public law as well as having administrative and financial autonomy. The councils are responsible for the professional registration of medical doctors and their titles. Article 2 notes that they are supervisory bodies of professional ethics and have sanctioning powers over physicians. Medical councils do not have authority to impose measures such as lockdowns under Federal Law No. 3268/1957. The claims made in the post that the WHO condemned lockdowns and Dr. David Nabarro said that “lockdown does not save lives” are not fully accurate. Dr. Nabarro noted that lockdowns have the consequence of “making poor people an awful lot poorer” but he did not say that they “do not save lives.” The WHO has not condemned lockdowns, it has said that lockdowns are not a sustainable solution due to their significant economic, social and broader health impacts, but there may be times when such restrictions are necessary, and are best used to prepare for longer-term public health measures. The lockdown in Amazonas referred to in the notice shared by the medical council was adopted between January 25 and January 31, 2021, by Decree No. 43,303 of January 23, 2021, and extended by Decree No. 43,348 of January 31, 2021, until February 7, 2021. 
The Decrees established temporary restrictions on the movement of people in public venues and suspended the operation of all commercial activities and services with a few exceptions – including the transportation of essential goods, the operation of markets, bakeries, drug stores, gas stations, banks and health care units, among others. The lockdown measures were enforced by the police and other authorities. Those not abiding by the Decrees could face a number of sanctions. 3. Authority and scope The Oversight Board has the power to review a broad set of questions referred by Facebook (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). Decisions on these questions are binding and may include policy advisory statements with recommendations. These recommendations are non-binding but Facebook must respond to them (Charter Article 3, Section 4). 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards: The introduction to the Community Standards contains a section titled “COVID-19: Community Standards Updates and Protections.” The full text states: As people around the world confront this unprecedented public health emergency, we want to make sure that our Community Standards protect people from harmful content and new types of abuse related to COVID-19. We're working to remove content that has the potential to contribute to real-world harm, including through our policies prohibiting the coordination of harm , the sale of medical masks and related goods, hate speech, bullying and harassment, and misinformation that contributes to the risk of imminent violence or physical harm . As the situation evolves, we are continuing to look at content on the platform, assess speech trends and engage with experts, and will provide additional policy guidance when appropriate to keep the members of our community safe during this crisis. [emphasis added] The Violence and Incitement Community Standard states that Facebook prohibits content containing ""Misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm.” It then states: “Additionally, we have specific rules and guidance regarding content related to COVID-19 and vaccines. To see these specific rules, please click here . "" According to the article provided in the link above, under this policy Facebook removes content discouraging good health practices that “public health authorities advise people take to protect themselves from getting or spreading COVID-19,” including “wearing a face mask, social distancing, getting tested for COVID-19 and […] getting vaccinated against COVID-19.” The policy rationale for Facebook’s False News Community Standard states that: Reducing the spread of false news on Facebook is a responsibility that we take seriously. We also recognize that this is a challenging and sensitive issue. We want to help people stay informed without stifling productive public discourse. There is also a fine line between false news and satire or opinion. For these reasons, we don't remove false news from Facebook, but instead significantly reduce its distribution by showing it lower in the News Feed. The False News Standard provides information on the range of enforcement options used by Facebook besides content removal: We are working to build a more informed community and reduce the spread of false news in a number of different ways, namely by: II. 
Facebook’s values: Facebook’s values are described in the introduction to the Community Standards. “Voice” is described as Facebook’s paramount value: The goal of our Community Standards has always been to create a place for expression and give people a voice. This has not and will not change. Building community and bringing the world closer together depends on people’s ability to share diverse views, experiences, ideas and information. We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook notes that “Voice” may be limited in service of four other values – the relevant one in this case is “Safety”: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. III. Human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In March 2021, Facebook announced its Corporate Human Rights Policy, where it recommitted to respecting human rights in accordance with the UNGPs. The Board's analysis in this case was informed by the following human rights standards: 5. User statement Facebook referred this case to the Oversight Board. Facebook confirmed to the Oversight Board that it sent the user a notification that the case had been referred to the Board and provided the user the opportunity to submit information on this case, but the user did not submit a statement. The Board notes that the notification sent by Facebook provides the user with the opportunity to submit information. The Board is concerned, however, that Facebook does not provide the user with sufficient information to be able to properly provide a statement. The notification shown by Facebook to the user states the general topics that the case relates to, but does not provide a detailed explanation of why the content was referred to the Board and the relevant policies the content might be enforced against. 6. Explanation of Facebook’s decision Facebook took no action against the content and stated in its referral to the Board that the case is ""difficult because this content does not violate Facebook's policies, but can still be read by some people as advocacy for taking certain safety measures during the pandemic."" It explained that “an internal team at Facebook familiar with the region noted reports from the press about the case content and flagged the case for review. The reviewers determined that the content did not violate Facebook’s policies.” Facebook says that it prohibits misinformation that may “contribute to the risk of imminent violence or physical harm,” and that it consults with the WHO, the U.S. Centers for Disease Control and Prevention, and other leading public health authorities in order to determine whether a particular false claim about COVID-19 may contribute to the risk of imminent physical harm. Facebook says that the content in this case does not meet that standard. 
It says that “the WHO does not state that criticizing lockdown measures may contribute to the risk of imminent physical harm” and that ""while the World Health Organization and other health experts have advised Facebook to remove claims advocating against specific health practices, such as social distancing, they have not advised Facebook to remove claims advocating against lockdowns."" In response to a question from the Board on how Facebook defines the line between lockdowns and social distancing measures, Facebook stated that “the WHO defines “lockdowns” as large scale physical distancing measures and movement restrictions put in place by the government. Social distancing, on the other hand, is the practice of an individual keeping a certain amount of physical distance from another person. A lockdown can, in theory, include social distancing as a requirement.” Facebook also noted that “in this case, the post was eligible to be rated by our third party fact-checkers, but the fact checkers did not rate this content. [sic] and it was not downranked or labeled as false news.” Facebook stated that its fact-checking partners are independent and it “does not speculate on why they rate or do not rate eligible posts, including this one.” Facebook says that it does not take a different approach to the threshold for health misinformation depending on the context in different countries – its policies are global in scope. It states that it consults with leading public health authorities in developing its policies, and confirmed in its responses to the Board’s questions that it has been in communication with the national public health authority in Brazil. 7. Third-party submissions The Oversight Board received 30 public comments on this case. Three comments were submitted from Asia Pacific and Oceania, one from Central and South Asia, nine from Latin America and the Caribbean, and 17 from the US and Canada. A range of organizations and individuals submitted comments, including a number of researchers and organizations in Brazil. The submissions covered the following themes: the importance of considering the Brazilian context, including the impact of COVID-19 and the political context; discussion and analysis of the impact of alternative enforcement measures such as labeling and downranking; and the influential nature of the user as a medical authority. Comments providing more context on the situation in Brazil noted the politicization of the health emergency in Brazil (PC-10105), that adherence to evidence-based public policy measures combatting COVID-19 had been affected by political forces in Brazil contesting such measures (PC-10100) and that due to a context in which “lockdown” had become a political buzzword, claims advocating against lockdowns could also encourage defiance of other safety measures (PC-10106). Researchers focused on disinformation in Brazil also found that public authorities have a much higher impact when sharing disinformation (PC-10104). To read public comments submitted for this case, please click here . 8. Oversight Board analysis 8.1 Compliance with Community Standards The Board concludes that Facebook’s decision to keep the content on the platform was consistent with its content policies. The Violence and Incitement Community Standard prohibits content which contains misinformation that contributes to the risk of imminent violence or physical harm. 
The Help Center article linked from the Violence and Incitement Community Standard states that Facebook removes false content under this policy based on previous guidance from public health authorities. Although the Board finds that the content contained some misinformation (see below), the content did not create a risk of imminent harm. The post claims that lockdowns are ineffective and condemned by the WHO, and includes an alleged quote from WHO official Dr. David Nabarro saying that ""the lockdown does not save lives and makes poor people much poorer."" This information is not fully accurate. The part of the quote from WHO official Dr. David Nabarro stating that “lockdown does not save lives” is inaccurate – Dr. Nabarro stated that the WHO did “not advocate lockdowns as a primary means of control of this virus” and that they have the consequence of “making poor people an awful lot poorer,” but he did not say that “lockdown does not save lives.” The WHO has said that “lockdowns are not sustainable solutions because of their significant economic, social broader health impacts. However, during the #COVID19 pandemic there’ve been times when restrictions were necessary and there may be other times in the future. ... Because of their severe economic, social broader health impacts, lockdowns need to be limited in duration. They’re best used to prepare for longer-term public health measures. During these periods, countries are encouraged to lay the groundwork for more sustainable solutions.” The Board notes Facebook’s argument that the threshold of “imminent harm” was not met because the World Health Organization and “other health experts” advised the company to “remove claims advocating against specific health practices, such as social distancing,” but not claims advocating against lockdowns. Despite confirming that it has been in communication with “the national public health authority in Brazil,” Facebook highlighted that it does not take into account local context when defining the threshold of “imminent harm” for the enforcement of the policy on misinformation and harm. The Board believes, however, that Facebook should take into consideration local context and consider the current situation in Brazil when assessing the risk of imminent physical harm. As highlighted by the experts consulted by the Board, as well as several public comments submitted by organizations and researchers in Brazil, the COVID-19 pandemic has already resulted in more than 500,000 deaths in the country, one of the worst rates of deaths per million inhabitants of any country. The experts consulted and some public comments also emphasized the politicization of measures to counter the spread of COVID-19 in the country. In light of the situation and context in Brazil, the Board is concerned that the spread of COVID-19 misinformation in the country can endanger people’s trust in public information about appropriate measures to counter the pandemic, which could increase the risk of users adopting risky behaviors. The Board understands that this would justify a more nuanced approach by Facebook in the country, intensifying its efforts to counter misinformation there, as the Board advocates under Recommendation 2 below. However, the Board still finds that the post does not meet the threshold of imminent harm, because it discusses a measure that is not suggested unconditionally by the public health authorities and emphasizes the importance of other measures to counter the spread of COVID-19 – including social distancing. 
In its responses to questions from the Board in this case, Facebook disclosed that the post was eligible for fact-checking under the False News Community Standard, but that fact-checking partners did not assess this content. The Board understands these partners may not be able to analyze all content flagged as misinformation by Facebook’s automated systems, internal teams or users. However, the Board notes that Facebook’s approach to misinformation failed to provide additional context to a piece of content that may endanger people’s trust in public information about COVID-19 and may undermine the effectiveness of measures that in certain cases can be essential. Facebook should prioritize sending content which comes to its attention and that appears to contain health misinformation shared by public authorities to fact-checking partners, especially during the pandemic. The Board has issued a recommendation in this regard in section 10. The Board also notes that Facebook has previously stated that “opinion and speech” from politicians are not eligible for fact-checking, but its policies do not make clear eligibility criteria for other users, such as pages or accounts administered by state and public institutions. The Board notes that content shared by state and public institutions should be eligible for fact-checking. 8.2 Compliance with Facebook’s values The Board found that Facebook’s decision to take no action against this content was consistent with its value of “Voice.” Although Facebook’s value of “Safety” is important, particularly in the context of the pandemic, this content did not pose an imminent danger to the value of “Safety” to justify displacing “Voice.” 8.3 Compliance with Facebook’s human rights responsibilities Freedom of expression (Article 19 ICCPR) Article 19 para. 2 of the ICCPR provides broad protection for expression of ""all kinds."" The UN Human Rights Committee has highlighted that the value of expression is particularly high when it involves public institutions or discusses matters of public concern (General comment No. 34, paras. 13, 20 and 38). As an institution established by law, the medical council is a public institution which has human rights duties, including the duty to ensure that it disseminates reliable and trustworthy information about matters of public interest (A/HRC/44/49, para. 44). The Board notes that even though the medical councils do not have authority to impose measures such as lockdowns, it is relevant that they are part of the state government administration and may exert influence over the authorities deciding on the adoption of measures to counter the spread of COVID-19. The Board notes that the post engages with a wider and important discussion in Brazil about appropriate measures to counter the spread of COVID-19 in the country. Moreover, because the post was shared by the Facebook page of a medical council in Brazil there is general increased interest in its views as an institution on public health issues. The Board recognizes the importance of professional experts to state their views in matters of forming public health policies. The right to freedom of expression is fundamental and includes the right to receive information, including from governmental entities – however, this right is not absolute. Where restrictions are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). 
Facebook has recognized its responsibilities to respect international human rights standards under the UNGPs. Relying on the UNGPs framework, the UN Special Rapporteur on freedom of opinion and expression has called on social media companies to ensure their content rules are guided by the requirements of Article 19, para. 3, ICCPR (on content rules addressing disinformation, see: A/HRC/47/25, at para. 96; on content rules more broadly, see: A/HRC/38/35, paras 45 and 70). The Board examined whether the removal of the post would be justified under this three-part test in accordance with Facebook’s human rights responsibilities. I. Legality (clarity and accessibility of the rules) Article 19, para. 3, ICCPR requires any rules a state imposes to restrict expression to be clear, precise and publicly accessible (General comment 34, para. 25). People should have enough information to determine if and how their access to information may be limited. To protect these rights, it is also important that public bodies are able to clearly understand the rules that apply to their communications on the platform and adjust their behavior accordingly. General Comment 34 also highlights that the rules imposed “may not confer unfettered discretion for the restriction of freedom of expression on those charged with its execution” (para. 25). Facebook also has a responsibility to ensure its rules comply with the principle of legality (A/HRC/38/35, para. 46). In case decision 2020-006-FB-FBR, the Board found that it was “difficult for users to understand what content relating to health misinformation is prohibited” under Facebook’s Community Standards considering the “patchwork” of relevant rules (including misinformation that contributes to a risk of imminent harm under “Violence and Incitement”). The Board also noted the lack of public definitions of key terms such as “misinformation,” concluding this made the Violence and Incitement Community Standard “inappropriately vague” as it applied to misinformation. In this regard, the UN Rapporteur on freedom of expression has stated that the principle of legality should be applied “to any approach” to misinformation because it is an “extraordinarily elusive concept to define in law, susceptible to providing executive authorities with excessive discretion” (A/HRC/44/49, para. 42). To address these issues, the Board recommended that Facebook “set out a clear and accessible Community Standard on health misinformation, consolidating and clarifying existing rules in one place.” In response to the Board’s recommendation, Facebook published the Help Center article “COVID-19 and Vaccine Policy Updates Protections,” which is linked to the misinformation and harm policy under the Violence and Incitement Community Standard. In this article, Facebook lists all relevant COVID-19 and vaccine policies from various Community Standards and provides examples of content types that are violating. This article is also available in Portuguese. While the Help Center article provides useful information for users to understand how the policy is enforced, it also adds to the number of sources of rules outside the Community Standards. Additionally, the article is not sufficiently “made accessible to the public” (General Comment 34, para. 25), considering it is only accessible to people with a Facebook log-in. 
Moreover, it is only linked from the Community Standard on Violence and Incitement, and not from other applicable Community Standards or the announcement on COVID-19 in the introduction to the Community Standards. The Board also reiterates the point made in section 5 above that Facebook does not provide users with sufficient information to submit a statement to the Board. II. Legitimate aim Any restriction on freedom of expression should also pursue a ""legitimate aim."" Facebook has a responsibility to ensure its rules comply with the principle of legitimacy (A/HRC/38/35, para. 45). The ICCPR lists legitimate aims in Article 19, para. 3, which include the protection of the rights of others as well as protection of public health. III. Necessity and proportionality Any restrictions on freedom of expression ""must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected"" (General Comment 34, para. 34). Facebook has a responsibility to ensure its rules respect the principles of necessity and proportionality (A/HRC/38/35, para. 47). The Board assessed whether the content removal was necessary to protect public health and the right to health, in line with Facebook’s human rights responsibilities. The content was shared by the page of a medical council, a part of the state government administration that may, through the information it shares, influence decisions of other public authorities and the behavior of the general public. The Board notes that it is relevant for Facebook to consider whether a page or account is administered by a public institution, as it is in this case, because those institutions should “not make, sponsor, encourage or further disseminate statements which they know or reasonably should know to be false” or which “demonstrate a reckless disregard for verifiable information” (UN Special Rapporteur on freedom of expression, report A/HRC/44/49, para. 44; Joint Declaration on Freedom of Expression and “Fake News”, Disinformation and Propaganda, FOM.GAL/3/17, para. 2(c)). Further, state actors should, “in accordance with their domestic and international legal obligations and their public duties, take care to ensure that they disseminate reliable and trustworthy information, including about matters of public interest, such as the economy, public health, security and the environment” (ibid., para. 2(d)). This duty is particularly strong when the information is related to the right to health, especially during a global pandemic. A minority is of the view that the standard quoted from the Joint Declaration is not applicable in the present case and the definition used in the Joint Declaration is contradicted by other authorities of international human rights law. The standard of the Joint Declaration refers to disinformation by public institutions, while in the present case the Decision expressly qualifies the impugned statement to be misinformation. As emphasized by the Special Rapporteur, the interchangeable use of the two concepts endangers the right to freedom of expression (A/HRC/47/25, para. 14) – and “disinformation is understood as false information that is disseminated intentionally to cause serious social harm and misinformation as the dissemination of false information unknowingly. The terms are not used interchangeably.” (para. 15). 
In the present case, it has not been shown that the user, a medical council, reasonably should have known that the disseminated statement is false. The minority believes that while the statement contains some inaccurate information, as a whole it is a fact-related opinion which is legitimate in public discussion. The efficacy of lockdowns, while widely accepted among experts and public health agencies in most of the world, is subject to reasonable debate. Moreover, while the council is part of the public administration, it cannot be held in the present context to be a state actor as its powers are limited to its members and it is not a public authority having the legal power to influence or determine a lockdown decision. The majority understands the minority’s view but respectfully disagrees with it. According to the standards above, public authorities have a duty to verify information they provide to the public. This duty is not lost when the false information disseminated is not directly related to their statutory duties. Facebook argued that the threshold of imminent physical harm was not reached in this case because health authorities such as the World Health Organization and other experts have recommended that the company remove misinformation on practices such as social distancing, but they have not done the same with respect to lockdowns. Additionally, the Board notes that the content in this case was not used as a basis by the council for the adoption of public health measures that could create risks, since the council does not have authority to decide on these matters. For these reasons and following the Board’s analysis in case decision 2020-006-FB-FBR, the Board considers Facebook’s decision to keep the content on the platform to be justified, given that the threshold of imminent physical harm was not met. However, as already mentioned, the Board notes that the dissemination of misinformation on public health can affect trust in public information and the effectiveness of certain measures that, in the words of the World Health Organization, may be essential in certain contexts. In these cases, as the UN Special Rapporteur on Freedom of Expression suggested, the damage caused by false or misleading information can be mitigated by the sharing of reliable information (A/HRC/44/49, para. 6). Those alternative or less intrusive measures can provide the public with greater context and promote their right to access accurate health-related information. In this particular case, Facebook should provide the public with more context about the statements of Dr. Nabarro and the World Health Organization’s stance on lockdowns mentioned above. The Board recalls that in case decision 2020-006-FB-FBR it recommended that Facebook should consider less intrusive measures than removals for misinformation that may lead to forms of physical harm that are not imminent. These measures are provided for in the False News Community Standard – as noted above in section 8.1. The Board recommends that Facebook should prioritize referring content that comes to its attention to its fact-checking partners where a public position on debated health policy issues (in particular in the context of a pandemic) is presented by a part of state government administration normally capable of influencing public opinion and individual health-related conduct. 
The Board recognizes that Facebook’s approach to fact-checking has been criticized, but because fact-checkers did not review this post, this case is not a proper occasion to consider those issues. 9. Oversight Board decision The Oversight Board upholds Facebook's decision to keep the content on the platform. 10.Policy advisory statement Implementing the Board’s recommendation from case decision 2020-006-FB-FBR 1. Facebook should conduct a proportionality analysis to identify a range of less intrusive measures than removing the content. When necessary, the least intrusive measures should be used where content related to COVID-19 distorts the advice of international health authorities and where a potential for physical harm is identified but is not imminent. Recommended measures include: (a) labeling content to alert users to the disputed nature of a post's content and to provide links to the views of the World Health Organization and national health authorities; (b) introducing friction to posts to prevent interactions or sharing; and (c) down-ranking, to reduce visibility in other users’ News Feeds. All these enforcement measures should be clearly communicated to all users, and subject to appeal. Prioritizing the fact-checking of content flagged as health misinformation 2. Given the context of the COVID-19 pandemic, Facebook should make technical arrangements to prioritize fact-checking of potential health misinformation shared by public authorities which comes to the company’s attention, taking into consideration the local context. Clarity on eligibility for fact-checking 3. Facebook should provide more transparency within the False News Community Standard regarding when content is eligible for fact-checking, including whether public institutions' accounts are subject to fact-checking. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. Return to Case Decisions and Policy Advisory Opinions" fb-blkz1zi8,Niger Coup Cartoon,https://www.oversightboard.com/decision/fb-blkz1zi8/,"December 8, 2023",2023,December,"TopicNews events, Politics, War and conflictCommunity StandardHate speech",Hate speech,Overturned,"France, Niger",A user appealed Meta’s decision to remove a Facebook post on the military coup in Niger.,3627,554,"Overturned December 8, 2023 A user appealed Meta’s decision to remove a Facebook post on the military coup in Niger. Summary Topic News events, Politics, War and conflict Community Standard Hate speech Location France, Niger Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not involve a public comment process and do not have precedential value for the Board. 
Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post on the military coup in Niger. This case highlights errors in Meta’s content moderation, including its automated systems for detecting hate speech. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In July 2023, a Facebook user in France posted a cartoon image showing a military boot labeled “Niger,” kicking a person wearing a red hat and dress. On the dress is the geographical outline of Africa. Earlier in the same month, there was a military takeover in Niger when General Abdourahamane Tchiani, with the help of the presidential guard of which he was head, ousted President Mohamed Bazoum, and declared himself leader of the country. Meta originally removed the post from Facebook, citing its Hate Speech policy, under which the company removes content containing attacks against people on the basis of a protected characteristic, including some depictions of violence against these groups. After the Board brought this case to Meta’s attention, the company determined that the content did not violate the Hate Speech policy and its removal was incorrect. The company then restored the content to Facebook. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights inaccuracies in Meta’s moderation systems that detect hate speech. The Board has issued recommendations on improving automation and transparency, including urging Meta to ""implement an internal audit procedure to continually analyze a statistically representative sample of automated removal decisions to reverse and learn from enforcement mistakes,"" ( Breast Cancer Symptoms and Nudity decision, recommendation no. 5). Meta has reported that it is implementing this recommendation but has not yet published information to demonstrate implementation. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors. 
Return to Case Decisions and Policy Advisory Opinions" fb-czhy85jc,Sri Lanka pharmaceuticals,https://www.oversightboard.com/decision/fb-czhy85jc/,"March 9, 2023",2023,,"Governments, Health, Safety",Regulated goods,Upheld,Sri Lanka,The Oversight Board has upheld Meta’s decision to leave up a Facebook post asking for donations of pharmaceutical drugs to Sri Lanka during the country’s financial crisis.,39831,6189,"Upheld March 9, 2023 The Oversight Board has upheld Meta’s decision to leave up a Facebook post asking for donations of pharmaceutical drugs to Sri Lanka during the country’s financial crisis. Standard Topic Governments, Health, Safety Community Standard Regulated goods Location Sri Lanka Platform Facebook Sri Lanka pharmaceuticals public comments Tamil translation - Sri Lanka pharmaceuticals Sinhala translation - Sri Lanka pharmaceuticals This decision is also available in Sinhala and Tamil. To read this decision in Sinhala, click here. To read this decision in Tamil, click here. The Oversight Board has upheld Meta’s decision to leave up a Facebook post asking for donations of pharmaceutical drugs to Sri Lanka during the country’s financial crisis. However, the Board has found that secret, discretionary policy exemptions are incompatible with Meta’s human rights responsibilities, and has made recommendations to increase transparency and consistency around the “spirit of the policy” allowance. This allowance permits content where a strict reading of a policy produces an outcome that is at odds with that policy’s intent. About the case In April 2022, an image was posted on the Facebook page of a medical trade union in Sri Lanka, asking for people to donate drugs and medical products to the country, and providing a link for them to do so. At the time, Sri Lanka was in the midst of a severe political and financial crisis, which emptied the country’s foreign currency reserves. As a result, Sri Lanka, which imports 85% of its medical supplies, did not have the funds to import drugs. Doctors reported that hospitals were running out of medicine and essential supplies, and said they feared an imminent health catastrophe. The Meta teams responsible for monitoring risk during the Sri Lanka crisis identified the content in this case. The company found that the post violated its Restricted Goods and Services Community Standard, which prohibits content that asks for pharmaceutical drugs, but applied a scaled “spirit of the policy” allowance. “Spirit of the policy” allowances permit content where the policy rationale, and Meta’s values, demand a different outcome to a strict reading of the rules. Scaled allowances apply to entire categories of content, rather than just individual posts. The rationale for the Restricted Goods and Services policy includes “encouraging safety.” Meta referred this case to the Board. Key findings The Oversight Board finds that the post violates the Restricted Goods and Services Community Standard. However, it finds that applying a scaled “spirit of the policy” allowance to permit this and similar content was appropriate, and in line with Meta’s values and human rights responsibilities. In the context of the Sri Lankan crisis, where people’s health and safety were in grave danger, the allowance pursued the Community Standard’s aim of “encouraging safety,” and the human right to health. 
Though allowing drug donations can present risks, the acute need in Sri Lanka justified Meta’s actions. However, the Board is concerned that Meta has said that the “spirit of the policy” allowance “may” apply to content posted in Sinhala outside Sri Lanka, in addition to the Sri Lanka market. Meta should be clear about where its allowances apply. It should also ensure that at-scale allowances are sensitive to the ethnic and linguistic diversity of the people they may impact in order to avoid inadvertent discrimination. Sri Lanka has two official languages, Sinhala and Tamil, the latter largely spoken by Tamil and Muslim minorities. The Board also finds that, to meet its human rights responsibilities, Meta should take action to increase users’ understanding of the “spirit of the policy” allowance, and to ensure it is applied consistently. Users who report content are not notified when it benefits from a “spirit of the policy” allowance, nor do users have any way of knowing that the exception exists. The “spirit of the policy” allowance is not mentioned in the Community Standards, and Meta has not published information on it in the Transparency Center, as it has on the newsworthiness exception, partly thanks to recommendations from the Board. Secret, discretionary exemptions to Meta’s policies are incompatible with Meta’s human rights responsibilities. There appear to be no clear criteria in place to govern when “spirit of the policy” allowances are issued and terminated. The Board emphasizes the importance of such criteria in ensuring decisions are made consistently, and recommends Meta make them public. It also finds that where Meta regularly uses an allowance for the same purpose, it should assess whether a standalone exception to the relevant policy is needed. The Oversight Board’s decision The Oversight Board upholds Meta’s decision to leave the post on Facebook. The Board also recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Board upholds Meta’s decision to leave a post on Facebook which asks for donations of pharmaceutical drugs in Sri Lanka. Despite violating Meta’s Restricted Goods and Services Community Standard, the content was left on Facebook as a result of an at-scale “spirit of the policy” allowance, issued by Meta. This allowance permitted content that was seeking to donate, gift, or ask for pharmaceutical drugs in Sri Lanka between April 27 and November 10, 2022. The Board finds that this allowance was appropriate in light of Sri Lanka’s severe and compounding political, economic, and healthcare crises but urges Meta to provide more information to users on how the “spirit of the policy” allowance is applied, especially in times of crisis. 2. Case description and background In April 2022, a Facebook user posted an image on the Facebook page of a medical trade union in Sri Lanka. The image includes a button which reads “donate” and a caption in English stating that people can now donate drugs and medical products to Sri Lanka by clicking on the link provided. The link in the caption leads to a page on the trade union’s external website that describes a crisis in the Sri Lankan healthcare sector and states that there is a need for people to donate pharmaceutical drugs to support the healthcare system. 
The webpage also provides instructions for donors, including obtaining: 1) a letter from the recipient of the donated drugs; 2) a commercial invoice specifying the type, quantity, and value of the drugs; and 3) a scanned image of the drugs’ label. The post has been viewed over 80,000 times, shared fewer than 1,000 times, and has not been reported by anyone. At the time the content was posted, Sri Lanka was in the midst of a severe financial crisis , which emptied the country’s foreign currency reserves. Many Sri Lankans were engaged in protests against members of the government for their role in the country's economic crisis. In June 2022, the United Nations reported that about three-quarters of the population had reduced their food intake due to the country’s severe food shortages. Eighty-five per cent of Sri Lanka’s medical supplies are imported from other countries, particularly from India. The currency crisis meant that Sri Lanka no longer had the funds to import these drugs. In April 2022, doctors across Sri Lanka reported that hospitals were running out of medicines and essential supplies, and said they feared an imminent health catastrophe. Routine medical procedures were cancelled and doctors feared mortality would increase exponentially. In September 2022, the United Nations Development Program (UNDP) in Sri Lanka came forward to procure and deliver vital and essential medicines and medical supplies for the country, together with the World Health Organization (WHO) in Sri Lanka, with the financial support of the United Nations’ Central Emergency Response Fund (CERF). Meta’s Global Operations team identified the content at issue in this case during a risk-monitoring effort related to the ongoing crisis in Sri Lanka. The company stated that this type of monitoring effort is typically carried out during high-risk events, prompted by the team’s expertise and its assessment of off-platform situations. The case content was escalated for additional review twice before reaching Meta’s Content Policy team. Meta issued a time-bound and scaled “spirit of the policy” allowance to permit this post as well as other content attempting to donate, gift, or ask for pharmaceutical drugs in Sri Lanka. Meta makes “spirit of the policy” exceptions when a strict application of the relevant Community Standard is producing results that are inconsistent with its rationale and objectives. Scaled policy allowances are general allowances that apply to all content that fulfils certain criteria. They can only be issued by Meta’s internal teams on escalation. Once issued, scaled policy allowances are enforced by at-scale reviewers. The “spirit of the policy” allowance was issued on April 27, 2022, for a period of two weeks (effective from April 27, 2022, to May 10, 2022). The allowance was extended multiple times, after being periodically reviewed and renewed. Since November 10, 2022, when the allowance ended, Meta reviews any content attempting to donate, gift, or ask for pharmaceutical drugs in Sri Lanka against the Restricted Goods and Services policy and enforces the policy without the allowance. Meta referred the case to the Board, stating that it is difficult, as it involves balancing the competing values of “Safety” and “Voice,” and significant, as it concerns the Sri Lankan financial crisis, which could lead to preventable deaths due to a lack of medical drugs. 
Meta has asked the Board to evaluate how the company makes temporary, region-specific “spirit of the policy” allowances to its Restricted Goods and Services policy, particularly during crisis or conflict situations. 3. Oversight Board authority and scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may request that Meta refer decisions to it. The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying the Board’s decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions: The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies: Restricted Goods and Services Community Standard Under the Restricted Goods and Services Community Standard, Meta “prohibits attempts by individuals, manufacturers, and retailers to purchase, sell, raffle, gift, transfer or trade certain goods and services.” This includes “attempts to donate or gift pharmaceutical drugs,” as well as requests “for pharmaceutical drugs except when content discusses the affordability, accessibility or efficacy of pharmaceutical drugs in a medical context.” The policy rationale for this Community Standard encourages safety while deterring potentially harmful activity. Under this Community Standard, “pharmaceutical drugs” are described as “drugs that require a prescription or medical professionals to administer.” Spirit of the Policy Allowance Meta may apply a “spirit of the policy” allowance to content when the policy rationale (the text that introduces each Community Standard) and Meta’s values demand a different outcome than a strict reading of the rules (the rules set out in the “do not post” section and in the list of prohibited content). In this case, Meta applied a “spirit of the policy” allowance to allow content that is seeking to donate, gift, or ask for pharmaceutical drugs in Sri Lanka. It did so due to the economic crisis and the acute need for medicine. In Meta’s answers to the Board, Meta said that the allowance applies to content posted in Sri Lanka and that “it may also include content posted in the Sinhalese language outside of Sri Lanka” due to their “market routing.” Meta did not mention the allowance was applied outside of Sri Lanka to content posted in Tamil, another official language of the country. The Board’s analysis was also informed by Meta’s value of “Voice,” which the company describes as “paramount,” and its value of “Safety.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. 
The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions The author of the post was notified of the Board’s review and provided with an opportunity to submit a statement to the Board. The user did not submit a statement. 6. Meta’s submissions Meta explained to the Board that, while its Community Standards do not allow content that asks for pharmaceutical drugs, the company determined that the need for “pharmaceutical drugs precipitated by an economic crisis in Sri Lanka justified an allowance under the policy rationale of the Restricted Goods and Services policy.” This rationale provides that the goal of this policy is “[t]o encourage safety and deter potentially harmful activities.” Meta also stated that the decision to issue a scaled allowance was “particularly challenging” because it required Meta to “balance the needs of Sri Lankans during a crisis against the dangers of allowing people to share and exchange potentially harmful drugs” on the company's platforms. Meta also stated that most countries, including Sri Lanka, have “strict drug distribution laws that broadly criminalize the sale, transport, or transfer of controlled substances.” However, the company claims its decision to issue the “spirit of the policy” allowance in this case also furthered the legitimate aim of user safety and the protection of public health because of the safety risks associated with the shortage of medical drugs in Sri Lanka. Meta believes its decision is consistent with its values as well as with international human rights principles on protecting public health. Meta provided examples of other allowances made for pharmaceutical donations, including: (a) a three-month allowance in Cuba in 2022 based on an acute shortage of medication linked to an economic crisis; (b) a nine-month allowance in Lebanon in 2021 based on an acute shortage and unaffordability of medication during an economic crisis; and (c) an ongoing allowance in Ukraine since February 27, 2022, based on supply disruptions caused by Russia’s invasion. Meta said it had also issued “spirit of the policy” allowances limited to COVID-19 related medicines and medical goods, “allowing content donating and soliciting donations for medical-grade oxygen in Afghanistan (1 month), Indonesia (1 month) and Myanmar (5 months) as well as content offering to donate or soliciting donations for Remdesivir, Fabiflu, and Tocilizumab in India (1 month) and Nepal (2 weeks).” Meta also stated that in each situation, the company relied on “independent reporting to verify the crisis.” In its responses to the Board’s questions, Meta explained that the criteria used to issue and terminate an at-scale “spirit of the policy” allowance in crisis situations vary depending on the nature of the policy and the context of the crisis. The company added that its decisions were typically based on input from internal teams and, in some cases, external stakeholders. In this case, Meta stated that the company’s decision was influenced by the existence of a “well-demonstrated crisis” including “news coverage and economic analysis” on the shortages of medical drugs. 
Meta took into account that “the need for medical drugs is mentioned by reputable medical authorities.” Meta also stated in response to the Board’s questions that in the past three years, only a minority of the allowances issued by the company were scaled, and a low proportion of these related to the Restricted Goods and Services policy. Policy allowances can only be introduced by Meta’s internal teams “on escalation.” Scaled policy allowances are general allowances that apply to all content that fulfills certain criteria when first reviewed by at-scale reviewers. Allowances that are not scaled are specific to individual posts. The Board asked Meta nine questions in writing. Questions related to the “spirit of the policy” allowance in Sri Lanka, Meta’s Crisis Protocol, and Meta’s general approach to initiating and terminating spirit of the policy allowances. Meta answered all questions fully. 7. Public comments The Oversight Board received three public comments relevant to this case. Two of the comments were submitted from the United States and Canada and one was from Latin America and the Caribbean. The submissions covered the following themes: the risks of accepting donations of drugs; the harms caused by Meta not allowing the coordination of pharmaceutical donation drives on its platforms; and the need for clear, human rights-respecting criteria when Meta creates exceptions to its policies. To read public comments submitted for this case, please click here. 8. Oversight Board analysis The Board chose to take this case because Meta’s decisions on whether to issue allowances for pharmaceutical donations in crisis situations will have a critical impact on people’s access to health and information about health crises in the countries affected. The case also allows the Board to assess Meta’s approach to “spirit of the policy” allowances and issue recommendations in this regard, as well as the company’s approach to country-specific applications of its rules. The Board examined whether this content should be removed by analyzing Meta's content policies, human rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance. 8.1 Compliance with Meta’s content policies The Board finds that the content in this case violates the Restricted Goods and Services Community Standard's prohibition of content that “asks for pharmaceutical drugs.” However, the Board finds that an at-scale “spirit of the policy” allowance was appropriately issued to allow this post, and similar content, to remain on Facebook at a time of pressing need in Sri Lanka. I. Content rules Meta’s Restricted Goods and Services policy prohibits “attempts to donate or gift pharmaceutical drugs” as well as posts that ask for pharmaceutical drugs “except when content discusses the affordability, accessibility or efficacy of pharmaceutical drugs in a medical context.” The Board notes that the content in this case was part of a donation coordination effort. However, it does not itself “attempt to donate or gift pharmaceutical drugs,” rather it urges people to donate medical supplies. Therefore, the Board finds that the content is “asking for pharmaceutical drugs.” Meta’s internal guidelines to content moderators further clarify that discussing “affordability” means mentioning discounts or offers (for example, “$5 off prescription,”) comparing the value of generic and brand-name versions of pharmaceutical drugs, or listing the price of vaccines. 
Additionally, content discussing accessibility in a medical context may indicate suggestions on how to address a medical condition (for example, “If you’re having trouble with allergies, go to ABC Pharmacy to buy methylprednisolone.”) The content was posted in a context where the affordability and the accessibility of pharmaceutical drugs in Sri Lanka were at risk. However, the Board finds that the content is not in line with the examples of affordability and accessibility discussions, set out in Meta’s internal guidelines. It therefore violates the Community Standard. The Board notes that these examples are not part of the public-facing language of Meta’s Restricted Goods and Services policy. According to Meta, a ""spirit of the policy” allowance can be issued when the policy rationale of the relevant Community Standard, and Meta’s values, demand a different outcome than a literal reading of the rules. The Restricted Goods and Services policy aims at “encouraging safety and deterring potentially harmful activities.” Similarly, under the value of “Safety,” Meta “removes content that could contribute to a risk of harm to the physical security of persons.” Safety requires different considerations depending on the context. During this period in Sri Lanka, there was an acute crisis arising out of a shortage of medicines and this posed a serious safety risk. However, there are also serious safety concerns in allowing pharmaceutical donations, especially in times of crisis. The WHO cautions that donated pharmaceutical drugs may be expired, improperly stored, or cause costly vetting and storage burdens for countries already in crisis (World Health Organization, Guidelines for medicine donations , page 6). Meta should also keep this in mind when issuing allowances to the Restricted Goods and Services policy. Additionally, allowing users to share and exchange potentially harmful drugs on Meta’s platforms may result in their misuse for illicit or dangerous purposes. Despite these valid concerns, the Board finds that Meta’s decision to issue an at-scale “spirit of the policy” allowance to permit content that is seeking to donate, gift, or ask for pharmaceutical drugs in Sri Lanka is justified, given the country’s economic crisis and the shortage of medical supplies. The more acute need in a time of severe economic crisis should prevail so that people’s access to healthcare is minimally preserved. Concerns around the storage of drugs, as well as their misuse, can be mitigated by other responsible parties, such as local authorities and organizations engaged in the distribution of medicine, if properly notified by Meta of the policy allowance. II. Enforcement action In a response to one of the Board’s questions, Meta explained that various internal teams might be involved in a decision to grant a “spirit of the policy” allowance, and in a decision to terminate such an allowance. These include teams with safety, human rights and region-specific expertise. When an allowance is time-bound, Meta assesses it periodically and decides whether to renew or terminate it. 
Meta explained that the allowance in this case was terminated on November 10, 2022, after the company’s internal teams communicated that the “medical crisis in Sri Lanka had abated to an extent that the risk of potential abuse from unfettered calls for donations of medical drugs on Facebook outweighed the remaining benefits.” After the Board asked a follow-up question, Meta explained that: Two things occurred that appeared to ease the crisis: (i) new donations of medicine from multilateral donor agencies, NGOs and governments which eased the shortages and (ii) a new caretaker government reprioritized spending on obtaining essential medicine. We also saw other positive developments including local hospitals setting up a centralized system for coordinating medical supplies and new credit lines from international agencies and India, some of which was specifically directed to purchasing medicine. In response to another of the Board’s questions, Meta explained that: The policy allowance applied in the Sri Lanka market only. We did not extend it to posts outside of that market. While the Sri Lanka market includes content posted in Sri Lanka, due to our market routing, it may also include content posted in the Sinhalese language outside of Sri Lanka. The Board notes Meta’s uncertainty about whether the allowance was actually applied outside of Sri Lanka, and urges the company to review its enforcement systems and practices to ensure Meta is in a better position to anticipate the allowance’s impact. The Board further noted Meta seemingly restricted the application of this allowance to the Sri Lanka market and the Sinhala language. Sri Lanka has two official languages, Sinhala and Tamil, the latter also being largely spoken in the country and diaspora, primarily by Tamil and Muslim ethnic minorities. At-scale allowances should be sensitive to the ethnic and linguistic diversity of people they may impact in order to avoid inadvertent discrimination. III. Transparency The Board notes that Meta has not published information on the “spirit of the policy” allowance in its Transparency Center , nor in the Community Standards. Users would benefit from a page that presents the criteria Meta uses to decide whether to issue “spirit of the policy” allowances and when to scale them. Additionally, Meta should publicize examples of content which benefited from this allowance. Finally, the company should include, in its Transparency Center, a list of all the “spirit of the policy” allowances it has issued at scale, with explanations of why they were issued and terminated. This page in the Transparency Center should also include aggregated data about the “spirit of the policy” allowances issued, including the number of instances in which they were issued, and the regions and/or languages impacted. This would be similar to Meta’s current approach towards the newsworthiness allowance , which has evolved and substantially improved following action the company has taken in response to recommendations issued by the Board. This is especially important because the “spirit of the policy” allowance, like the newsworthiness allowance, is a general exception applicable to all content policies across Facebook and Instagram. 8.2 Compliance with Meta’s human rights responsibilities The Board finds that keeping the content on the platform is consistent with Meta’s human rights responsibilities. Meta has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights ( UNGPs ). 
Its Corporate Human Rights Policy states that this commitment includes respecting the International Covenant on Civil and Political Rights (ICCPR). Meta’s decision to issue an allowance was also guided by a concern with people’s right to health and life. Additionally, the Board notes that access to health-related information is particularly important from a freedom of expression perspective (A/HRC/44/49, para. 6). Such rights were endangered in Sri Lanka given the severe political and economic crisis, which greatly hindered access to medical supplies. Freedom of expression (Article 19 ICCPR) The scope of the right to freedom of expression is broad. Article 19, para. 2, of the ICCPR gives heightened protection to expression, including on public affairs ( General Comment No. 34 , para. 11). Expression can be particularly important during a health crisis as it relates to matters of great public importance. The UN Special Rapporteur on freedom of expression highlighted that “the free flow of information, unhindered by threats and intimidation and penalties, protects life and health and enables and promotes critical social, economic, political and other policy discussions and decision-making” ( A/HRC/44/49 ). In this case the content coordinates action aiming to mitigate risks to people’s right to health and life in Sri Lanka resulting from a severe economic crisis. Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users' right to freedom of expression” ( A/74/486 , para. 41). In this case, the Board applied the “three-part test” to Meta’s relevant rules under the Restricted Goods and Services Community Standard and its overarching “spirit of the policy” allowance. I. Legality (clarity and accessibility of the rules) The principle of legality requires rules used by states to limit expression to be clear and accessible (General Comment 34, para. 25). Lack of specificity can lead to subjective interpretation of rules and their arbitrary enforcement. Individuals must have enough information to determine if and how their expression may be limited, so that they can adjust their behavior accordingly. In a 2018 report addressing content moderation and ICCPR Article 19’s legality standards, the UN Special Rapporteur on freedom of expression highlighted the need for “clarity and specificity” in rules that govern online speech ( A/HRC/38/35 , para. 46). Applied to Meta’s content rules for Facebook, users should be able to understand what is allowed and what is prohibited. The Board concluded that, although Meta’s prohibition of content “asking for pharmaceutical drugs” is intelligible to users, exceptions to it, including the rules governing “spirit of the policy” allowances, are not sufficiently clear and accessible to users. 
As restrictions on rights must be clear, any exceptions to those restrictions should also be clear enough for users to understand what they can and cannot post. However, while the failure to properly articulate an allowance which permits more speech falls short of the standard of legality, it does not undermine the application of that allowance in this case, given the context in Sri Lanka when it was applied. The Board notes that in the public-facing language of the Restricted Goods and Services policy, Meta does not provide sufficient information on how the exception to allow content discussing “the affordability, accessibility or efficacy of pharmaceutical drugs in a medical context” is interpreted. Meta’s internal guidelines to content moderators provide examples in this regard. The Board is concerned with the lack of clarity around these exceptions because users need to understand what they are allowed to post without breaching the rules. Meta should provide users with clearer guidance on how the company interprets its content policies by providing examples which align with its internal guidelines. The Board also notes that the “spirit of the policy” allowance is not mentioned anywhere in the Community Standards. The Facebook Community Standards do not explain that the company occasionally introduces at-scale short-term “spirit of the policy” allowances to its rules in certain regions or countries. Users currently have no way of knowing about the “spirit of the policy” allowance, or its application across all Community Standards, since no public explanation of it exists. Secret, discretionary exemptions to Meta’s policies are incompatible with the legality standard. In one of its responses to the Board’s questions, Meta explained that at the time the crisis in Sri Lanka began, the company had not yet launched its Crisis Policy Protocol but that for future crises, “the allowance for content soliciting, donating, or gifting pharmaceuticals in times of conflict is one of the policy levers […] documented as part of the protocol.” However, the Board notes, in its exchanges with Meta, the lack of clear criteria and protocols (e.g., consultation with local authorities and external stakeholders) for the application of such exceptions. The Board emphasizes the importance of ensuring that the application of allowances is guided by objective criteria, resulting in consistent decisions to issue and terminate them. Therefore, the Board urges Meta to publicly disclose information on the “spirit of the policy” allowance, and the criteria used by the company to apply it across all Community Standards. The Board accepts that when moderating vast amounts of content on a global scale, it is necessary to have a “catch-all” allowance that can be applied to prevent clear injustices. The criteria used to assess when such an allowance is warranted should, however, be set out publicly. Further, where such an allowance is repeatedly used in the same way, as Meta has occasionally done for pharmaceutical donations in times of crisis, the company should carefully assess whether or not this should be specifically provided for as an exception to the relevant policy. Finally, users reporting content are not notified when content reported by them benefits from a “spirit of the policy” allowance. 
Meta confirmed in its answers to the Board that the company “does not directly notify users of at-scale policy allowances.” In its “Colombia protests” decision (2021-010-FB-UA), the Board recommended that Meta notify users who reported content as violating when the content was left on the company’s platforms because it benefitted from the newsworthiness allowance. Meta is still assessing whether to implement this recommendation. Similarly, notifying users who reported content benefiting from a “spirit of the policy” allowance would increase users’ understanding of what such an allowance entails and why content that appears to contravene a policy might still be available on the platform. II. Legitimate aim ICCPR Article 19 provides that when states restrict expression, they may only do so in furtherance of legitimate aims, which are set forth as: “respect for the rights or reputations of others . . . [and] the protection of national security or of public order (ordre public), or of public health or morals.” Meta’s general prohibition on “attempts to donate or gift” and “asks for” pharmaceutical drugs seeks to protect public safety and public health (Art. 19, para. 3, ICCPR), and the right of others to health (Art. 12, ICESCR) and to life (Art. 6, ICCPR), which are all legitimate aims. Meta stated in its decision rationale that “this content could facilitate the illicit transfer of controlled substances or trade of pharmaceutical drugs to users who do not have a prescription or instructions from a medical professional.” This aligns with Meta’s rationale in the preamble to the Restricted Goods and Services policy, which indicates that it was designed to “encourage safety and deter potentially harmful activities.” III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment 34, para. 34). The application of the Restricted Goods and Services policy in this case would not have been a proportional restriction on speech given the severe political and economic crisis in Sri Lanka, which hindered Sri Lankans’ access to medicines, endangering their right to health and life. However, Meta’s decision to issue a “spirit of the policy” allowance protected Sri Lankans’ right to life and health during this crisis. 9. Oversight Board decision The Oversight Board upholds Meta's decision to leave up the content. 10. Policy advisory statement A. Content Policy 1. To provide more clarity to users, Meta should explain in the landing page of the Community Standards, in the same way the company does with the newsworthiness allowance, that allowances to the Community Standards may be made when their rationale, and Meta’s values, demand a different outcome than a strict reading of the rules. The company should include a link to a Transparency Center page which provides information about the “spirit of the policy” allowance. The Board will consider this recommendation implemented when an explanation is added to the Community Standards. B. Enforcement 2. To provide more certainty to users, Meta should communicate when reported content benefits from a “spirit of the policy” allowance. 
In line with Meta’s recent work to audit its user notification systems as stated in its response to the Board’s recommendation in the “Colombia protests” case (2021-010-FB-UA), Meta should notify all users who reported content which was assessed as violating but left on the platform because a “spirit of the policy” allowance was applied to the post. The notice should include a link to a Transparency Center page which provides information about the “spirit of the policy” allowance. The Board will consider this recommendation implemented when Meta introduces the notification protocol described in this recommendation. C. Transparency 3. In line with the Board’s recommendations five and six in the “Iran protest slogan” case (2022-013-FB-UA) the Board specifies that Meta should publish information about the “spirit of the policy” allowance in its Transparency Center, similar to the information it has published on the newsworthiness allowance. In the Transparency Center, Meta should: (i) explain that “spirit of the policy” allowances can be either scaled or narrow; (ii) publicize examples of content which benefited from this allowance; (iii) provide criteria Meta uses to determine when to scale “spirit of the policy” allowances; and (iv) include a list of all “spirit of the policy” allowances Meta has issued at scale in the past three years with explanations of why Meta decided to issue and terminate each of them. Meta should keep this list updated as new allowances are issued. The Board will consider this recommendation implemented when Meta makes this information publicly available in the Transparency Center. 4. In line with the Board’s recommendations five and six in the “Iran protest slogan” case (2022-013-FB-UA) the Board specifies that Meta should publicly share aggregated data, in its Transparency Center, about the “spirit of the policy” allowances issued, including the number of instances in which they were issued, and the regions and/or languages impacted. Meta should keep this information updated as new “spirit of the policy” allowances are issued. The Board will consider this recommendation implemented when Meta makes this information publicly available in the Transparency Center. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. 
Return to Case Decisions and Policy Advisory Opinions" fb-e1154yly,Tigray Communication Affairs Bureau,https://www.oversightboard.com/decision/fb-e1154yly/,"October 4, 2022",2022,,"Governments, Violence, War and conflict",Violence and incitement,Upheld,Ethiopia,The Oversight Board has upheld Meta’s decision to remove a post threatening violence in the conflict in Ethiopia.,32633,5078,"Upheld October 4, 2022 The Oversight Board has upheld Meta’s decision to remove a post threatening violence in the conflict in Ethiopia Standard Topic Governments, Violence, War and conflict Community Standard Violence and incitement Location Ethiopia Platform Facebook Amharic translation (2022-006-FB-MR) Tigrinya translation (2022-006-FB-MR) Oromo translation (2022-006-FB-MR) Tigray Communication Affairs Bureau public comments This decision is also available in Amharic, Oromo and Tigrinya. To read the full decision in Amharic, click here. To read the full decision in Oromo, click here. To read the full decision in Tigrinya, click here. The Oversight Board has upheld Meta’s decision to remove a post threatening violence in the conflict in Ethiopia. The content violated Meta's Violence and Incitement Community Standard and removing it is in line with the company's human rights responsibilities. Overall, the Board found that Meta must do more to meet its human rights responsibilities in conflict situations and makes policy recommendations to address this. About the case On February 4, 2022, Meta referred a case to the Board concerning content posted on Facebook during a period of escalating violence in the conflict in Ethiopia, where Tigrayan and government forces have been fighting since November 2020. The post appeared on the official page of the Tigray Regional State’s Communication Affairs Bureau and was viewed more than 300,000 times. It discusses the losses suffered by federal forces and encourages the national army to “turn its gun” towards the “Abiy Ahmed group.” Abiy Ahmed is Ethiopia’s Prime Minister. The post also urges government forces to surrender and says they will die if they refuse. After being reported by users and identified by Meta’s automated systems, the content was assessed by two Amharic-speaking reviewers. They determined that the post did not violate Meta’s policies and left it on the platform. At the time, Meta was operating an Integrity Product Operations Centre (IPOC) for Ethiopia. IPOCs are used by Meta to improve moderation in high-risk situations. They operate for a short time (days or weeks) and bring together experts to monitor Meta's platforms and address any abuse. Through the IPOC, the post was sent for expert review, found to violate Meta’s Violence and Incitement policy, and removed two days later. Key findings The Board agrees with Meta’s decision to remove the post from Facebook. The conflict in Ethiopia has been marked by sectarian violence and violations of international law. In this context, and given the profile and reach of the page, there is a high risk the post could have led to further violence. 
As a result, the Board agrees that removing the post is required by Meta’s Violence and Incitement Community Standard, which prohibits “statements of intent to commit high-severity violence.” The removal also aligns with Meta’s values; given the circumstances, the values of “Safety” and “Dignity” prevail over “Voice.” The Board also finds that removal of the post aligns with Meta’s human rights responsibilities and is a justifiable restriction on freedom of expression. Meta has long been aware that its platforms have been used to spread hate speech and fuel violence in conflict. The company has taken positive steps to improve content moderation in some conflict zones. Overall however, the Board finds that Meta has a human rights responsibility to establish a principled, transparent system for moderating content in conflict zones to reduce the risk of its platforms being used to incite violence or violations of international law. It must do more to meet that responsibility. For example, Meta provides insufficient information on how it implements its Violence and Incitement policy in armed conflict situations, what policy exceptions are available or how they are used. Its current approach to content moderation in conflict zones suggests inconsistency; observers have accused the company of treating the Russia-Ukraine conflict differently to others. While Meta says it compiles a register of “at-risk” countries, which guides its allocation of resources, it does not provide enough information for the Board to evaluate the fairness or efficacy of this process. The IPOC in this case led to the content being removed. However, it remained on the platform for two days. This suggests that the “at-risk” system and IPOCs are inadequate to deal with conflict situations. According to Meta, IPOCs are ""not intended to be a sustainable, long-term solution to dealing with a years-long conflict.” The Board finds Meta may need to invest in a more sustained mechanism. The Oversight Board’s decision The Oversight Board upholds Meta’s decision to remove the post. The Board also makes the following recommendations: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board upholds Meta’s decision to remove the content from Facebook for violating the Violence and Incitement Community Standard. The Board finds that removing the content in this case is consistent with Meta’s human rights responsibilities in an armed conflict. The Board also finds that Meta has a responsibility to establish a principled and transparent system for moderating content in conflict zones to mitigate the risks of its platforms being used to incite violence or commit violations of international human rights and humanitarian law. The Board reiterates the need for Meta to adopt all measures aimed at complying with its responsibility to carry out heightened human rights due diligence in this context. 2. Case description and background On February 4, 2022, Meta referred a case to the Board concerning content posted on Facebook on November 5, 2021. The content was posted by the Tigray Communication Affairs Bureau page, which states that it is the official page of the Tigray Regional State Communication Affairs Bureau (TCAB). The content was posted in Amharic, the Federal Government’s official working language. The TCAB is a ministry within the Tigray regional government. 
Since November 2020, the Tigray People’s Liberation Front (TPLF) and the Federal Democratic Republic of Ethiopia (“Federal Government”) have been engaged in an armed conflict. The TPLF is the ruling party in Tigray, while the Tigray Defense Forces is the TPLF’s armed wing. The post discusses the losses suffered by the Federal National Defense Forces under the leadership of Prime Minister Abiy Ahmed in the armed conflict with the TPLF. The post encourages the national army to “turn its gun towards the fascist Abiy Ahmed group” to make amends to the people it has harmed. It goes on to urge the armed forces to surrender to the TPLF if they hope to save their lives, adding: “If it refuses, everyone should know that, eventually, the fate of the armed forces will be death.” Tensions between the Federal Government and the TPLF reached their peak when the Federal Government postponed the elections in 2020, citing the coronavirus pandemic as the reason for the delay. Opposition leaders accused the Prime Minister of using the pandemic as an excuse to extend his term. Despite the Federal Government announcement, the Tigray regional government proceeded to conduct elections within the region, where the TPLF won by a landslide. Prime Minister Abiy Ahmed announced a military operation against Tigrayan forces in November 2020 in response to an attack on a federal military base in Tigray. Federal forces pushed through to take Tigray’s capital, Mekelle. After eight months of fighting, federal forces and their allies withdrew from Mekelle and the TPLF retook control. In May 2021, the Federal Government designated the TPLF a terrorist organization. On November 2, 2021, days before the content was posted, the Prime Minister imposed a nationwide state of emergency after the TPLF took over certain parts of the Amhara and Afar regions, beyond Tigray. The Federal Government also called on citizens to take up arms as the TPLF made its way towards the capital, Addis Ababa. On the day the content was posted on November 5, nine opposition groups, including the TPLF, created an alliance to put pressure on the Federal Government and oust the Prime Minister. The TCAB page has about 260,000 followers and is set to public, meaning it can be viewed by any Facebook user. It is verified by a blue checkmark badge, which confirms that the page or profile is the authentic presence of a person or entity. The content was viewed more than 300,000 times and shared fewer than 1,000 times. Since November 5, the content was reported by 10 users for violating the Violence and Incitement, Dangerous Individuals and Organizations, and Hate Speech policies. Additionally, Meta’s automated systems identified the content as potentially violating, and sent it for review. Following review by two human reviewers, both of whom were Amharic speakers, Meta determined that the content did not violate its policies and did not remove it from the platform. On November 4, a day before the content was posted, Meta convened an Integrity Product Operations Center (IPOC) to monitor and respond in real time to the rapidly unfolding situation in Ethiopia. According to Meta, an IPOC is a group of subject matter experts within the company brought together for a short period to provide real-time monitoring and address potential abuse flowing across Meta’s platforms. Through the IPOC, the content was escalated for additional review by policy and subject matter experts. 
Following this review, Meta determined the content violated the Violence and Incitement policy, which prohibits “statements of intent to commit high-severity violence.” The content remained on the platform for approximately two days before it was removed. Since the beginning of the conflict in November 2020, there have been credible reports of violations of international human rights and humanitarian law by all parties to the conflict. The Report of the joint investigation of the Ethiopia Human Rights Commission and Office of the United Nations High Commissioner for Human Rights found documented instances of torture and other forms of cruel, inhuman, or degrading treatment, extrajudicial executions of civilians and captured combatants, kidnappings, forced disappearances, and sexual and gender-based violence, among other international crimes (see also Ethiopia Peace Observatory ). The joint investigation team found that persons taking no direct part in the hostilities were killed by both sides to the conflict. This included ethnic-based and retaliatory killings. Both federal forces and Tigrayan forces “committed acts of torture and ill-treatment against civilians and captured combatants in various locations in Tigray, including in military camps, detention facilities, victims’ homes, as well as secret and unidentified locations.” Individuals perceived to be affiliated with the TPLF were forcefully disappeared or arbitrarily detained, and the wives of disappeared or detained men were subjected to sexual violence by federal armed forces. Similarly, wives of members of the federal armed forces were sexually assaulted or raped by Tigrayan combatants. Many people were gang-raped. Federal armed forces also refused to facilitate access to humanitarian relief in conflict-affected areas. Other armed groups and militias have also been involved, and Eritrea has supported Ethiopia’s national army in the conflict. Although the joint investigation covered events occurring between November 3, 2020, and June 28, 2021, the findings provide significant context for this case and the later escalation in hostilities in November 2021, when the TPLF seized territory outside of Tigray. 3. Oversight Board authority and scope The Board has authority to review decisions that Meta refers for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations to which Meta must respond (Charter Article 3, Section 4; Article 4). 4. Sources of authority The Oversight Board considered the following sources of authority: I. Oversight Board decisions: The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies: Facebook’s Community Standards: Under its Violence and Incitement policy, Meta states that it will remove any content that “incites or facilitates serious violence.” The policy prohibits “threats that could lead to death (and other forms of high-severity violence) … targeting people or places.” It also prohibits “statements of intent to commit high-severity violence.” III. Meta’s values: Meta’s values are outlined in the introduction to Facebook’s Community Standards. 
The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. [We want] people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits “Voice” in service of four other values, two of which are relevant here: “Safety”: We remove content that could contribute to a risk of harm to the physical security of persons. “Dignity” : We expect that people will respect the dignity of others and not harass or degrade others. IV. International human rights standards The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. Significantly, the UNGPs impose a heightened responsibility on businesses operating in a conflict setting (“Business, human rights and conflict-affected regions: towards heightened action,” A/75/212 ). The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards: 5. User submissions Following Meta’s referral and the Board’s decision to accept the case, the user was sent a message notifying them of the Board’s review and providing them with an opportunity to submit a statement to the Board. The user did not submit a statement. 6. Meta’s submissions In its referral of the case to the Board, Meta stated that the decision regarding the content was difficult because it involved removing “official government speech that could be considered newsworthy,” but may pose a risk of inciting violence during an ongoing conflict. Meta stated that it did not consider granting the newsworthiness allowance because that allowance does not apply to content that presents a risk of contributing to physical harm. Since late 2020, Meta stated that it has treated Ethiopia as a Tier 1 at-risk country, the highest risk level. According to Meta, classifying countries as at-risk is part of its process for prioritizing investment in product resources over the long-term. For example, in response to the high risk in Ethiopia, Meta developed language classifiers (machine learning tools trained to automatically detect potential violations of the Community Standards) in Amharic and Oromo, two of the most widely used languages in Ethiopia. According to the company, the initial Amharic and Oromo classifiers were launched in October 2020. In June 2021, Meta launched what it refers to as the “Hostile Speech” classifiers in Amharic and Oromo (machine learning tools trained to identify content subject to Hate Speech, Violence and Incitement, and Bullying and Harassment policies). The company also created an IPOC for Ethiopia on November 4, 2021 in response to the escalation in the conflict. IPOCs typically operate for several days or weeks. An IPOC is convened either for planned events, such as certain elections, or in response to risky unplanned events. An IPOC can be requested by any Meta employee. The request is reviewed by a multi-level, multi-stakeholder group within the company that includes representatives from its Operations, Policy, and Product teams. There are different levels of IPOC, providing escalating levels of coordination and communication in monitoring content on the platform. 
The IPOC convened in November 2021 for Ethiopia was Level 3, which “involves the greatest level of coordination and communication within Meta.” As Meta explained, IPOCs are a “short-term solution” meant to “understand a large set of issues and how to address them across a crisis or high-risk situation. It is not intended to be a sustainable, long-term solution to dealing with a years-long conflict.” Meta referred to the Board’s analysis in the “Alleged crimes in Raya Kobo” case in support of the proposition that resolving the tension between protecting freedom of expression and reducing the threat of sectarian conflict requires careful consideration of the specifics of the conflict. Meta also noted the documented atrocity crimes committed by all sides of the conflict. Meta told the Board that, given the nature of the threat, the influential status of the speaker, and the rapidly escalating situation in Ethiopia at the time the content was posted, the value of “Safety” outweighed other considerations and would be better served by removing the post than by leaving it on the platform, despite the potential value of the content to warn individuals in Ethiopia of future violence. The Board asked Meta 20 questions. Meta answered 14 questions fully and six questions partially. The partial responses related to the company’s approach to content moderation in armed conflict situations, imposing account restrictions for violations of content policies, and the cross-check process. 7. Public comments The Oversight Board received and considered seven public comments related to this case. One of the comments was submitted from Asia Pacific and Oceania, three from Europe, one from Sub-Saharan Africa and two from the United States and Canada. The submissions covered the following themes: the inconsistency of Meta’s approach in the context of different armed conflicts; the heightened risk accompanying credible threats of violence between parties during an armed conflict; the problems with Meta’s content moderation in Ethiopia and the role of social media in closed information environments; factual background to the conflict in Ethiopia, including the harm suffered by Tigrayan people and the role of hate speech against Tigrayans on Facebook in spreading violence; and the need to consider laws of armed conflict in devising policies for moderating speech during an armed conflict. To read public comments submitted for this case, please click here. In April 2022, as part of ongoing stakeholder engagement, the Board consulted representatives of advocacy organizations, academics, inter-governmental organizations and other experts on the issue of content moderation in the context of armed conflict. Discussions included the treatment of speech by parties to a conflict and the application of the Violence and Incitement policy in conflict situations. 8. Oversight Board analysis The Board examined the question of whether this content should be restored, and the broader implications for Meta’s approach to content governance, through three lenses: Meta's content policies, the company's values and its human rights responsibilities. 8.1 Compliance with Meta’s content policies The Board finds that removing the content from the platform is consistent with the Violence and Incitement Community Standard. 
The policy prohibits “threats that could lead to death (and other forms of high-severity violence) … targeting people or places,” including “statements of intent to commit high-severity violence.” The Board finds that the content can be reasonably interpreted by others as a call that could incite or encourage acts of actual violence in the already violent context of an armed conflict. As such, the content violates Meta’s prohibition on “statements of intent to commit high-severity violence.” 8.2 Compliance with Meta’s values The Board concludes that removing this content from the platform is consistent with Meta’s values of “Safety” and “Dignity.” The Board recognizes the importance of “Voice,” especially in a country with a poor record of press and civic freedoms and where social media platforms serve as a key means of imparting information about the ongoing armed conflict. However, in the context of an armed conflict, marked by a history of sectarian violence and violations of international law, the values of “Safety” and “Dignity” prevail in this case to protect users from content that poses a heightened risk of violence. The content in this case can be interpreted as a call to kill “Abiy Ahmed’s group.” It can further be interpreted as a warning of punishment to those who will not surrender to the TPLF, and as such poses a risk to the life and physical integrity of Ethiopian federal forces and political leaders. While the content was shared by the governing regional body, the post itself does not contain information with sufficiently strong public interest value to outweigh the risk of harm. 8.3 Compliance with Meta’s human rights responsibilities The Board finds that removing the content in this case is consistent with Meta’s human rights responsibilities. During an armed conflict, the company also has a responsibility to establish a principled and transparent system for moderating content where there is a reasonable probability that the content would succeed in inciting violence. The Board notes the heightened risk of content directly contributing to harm during an armed conflict. The Board finds that Meta currently lacks a principled and transparent framework for content moderation in conflict zones. Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides broad protection for freedom of expression, including the right to seek and receive information about possible violence. However, the right may be restricted under certain specific conditions that satisfy the three-part test of legality (clarity), legitimacy, and necessity and proportionality. Meta has committed to respect human rights under the UNGPs and to look to authorities such as the ICCPR when making content decisions, including in situations of armed conflict. The Rabat Plan of Action also provides useful guidance on this matter. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, para. 41). I. Legality (clarity and accessibility of the rules) Any restriction on freedom of expression should be accessible and clear enough to provide guidance to users and content reviewers as to what content is permitted on the platform and what is not. Lack of clarity or precision can lead to inconsistent and arbitrary enforcement of the rules. 
The Violence and Incitement policy prohibits “threats that could lead to death” and, in particular, “statements of intent to commit high-severity violence.” The Board finds that the applicable policy in this case is clear. However, in the course of deciding this case, the Board finds that Meta provides insufficient information on how it implements the Violence and Incitement policy in situations of armed conflict, what policy exceptions are available and how they are used, or any specialized enforcement processes the company uses for this kind of situation. II. Legitimate aim Restrictions on freedom of expression should pursue a legitimate aim, such as respect for the rights of others or the protection of national security or public order. The Facebook Community Standard on Violence and Incitement exists to prevent offline harm that may be related to content on Facebook. As previously concluded by the Board in the “Alleged crimes in Raya Kobo” case decision, restrictions based on this policy serve the legitimate aim of protecting the rights to life and bodily integrity. III. Necessity and proportionality Necessity and proportionality require Meta to show that its restriction on speech was necessary to address the threat, in this case the threat to the rights of others, and that it was not overly broad (General Comment 34, para. 34). In making this assessment, the Board also considered the factors in the Rabat Plan of Action on what constitutes incitement to violence (The Rabat Plan of Action, OHCHR, A/HRC/22/17/Add.4, 2013), while accounting for differences between the international law obligations of states and the human rights responsibilities of businesses. In this case, the Board finds that removing this content from the platform was a necessary and proportionate restriction on freedom of expression under international human rights law. Using the Rabat Plan of Action’s six-part test to inform its analysis, the Board finds support for the removal of this post. The context in Ethiopia; the status and intent of the speaker; the content of the speech as well as its reach; and the likelihood of offline harm all contribute to a heightened risk of offline violence. (1) Context: The content was posted in the context of an ongoing and escalating civil war. Since its beginning, the conflict has been marked by violations of international human rights and humanitarian law committed by all parties to the conflict. (2) Speaker: The speaker is a regional government ministry affiliated with one of the parties to the conflict, with significant reach and influence, including the authority to direct the Tigrayan armed forces. (3) Intent: Given the language and context, there is, at the very least, an explicit call to kill soldiers who do not surrender, and further intent to commit harm can reasonably be inferred. (4) Content: The post can be read to advocate targeting combatants and political leaders, regardless of their participation in the hostilities. (5) Extent of dissemination: The content was posted on the public page of a body connected to one of the parties to the conflict, with about 260,000 followers, and remained on the platform for two days before being removed. (6) Likelihood and imminence: The content was posted around the time that TPLF forces were advancing towards other parts of Ethiopia beyond Tigray and the Prime Minister was declaring a nationwide state of emergency, calling on civilians to take up arms and fight. 
While the Board found that removing the content in this case was necessary and proportionate, it also became clear to the Board in reviewing the case that more transparency is needed to assess whether Meta’s measures are consistently proportionate throughout a conflict and across all armed conflict contexts. The company has long been aware of how its platforms have been used to spread hate speech and fuel ethnic violence. While Meta has taken positive steps to improve its moderation system in some conflicts (for instance, commissioning an independent assessment of bias in content moderation in the Israeli-Palestinian conflict in response to the Board’s recommendation), it has not done enough to evaluate its existing policies and processes and to develop a principled and transparent framework for content moderation in conflict zones. Some Board Members have expressed the view that Meta’s content moderation in conflict zones should also be informed by international humanitarian law. In Ethiopia, Meta has outlined the steps it has taken to remove content that incites others to violence. The company refers to two general processes for countries at risk of or experiencing violent conflict, which were used in Ethiopia: the “at-risk countries” tiering system and IPOCs. Ethiopia has been designated as a tier 1 at-risk country (highest risk) since late 2020 and had a level 3 IPOC (the highest level) at the time the content was posted. Despite this, the content was not removed until two days later, notwithstanding the clear policy line it violated. The Board notes that two days, in the context of an armed conflict, is a considerable time span given the Rabat assessment outlined above. This also suggests the inadequacy of the at-risk tiering system and IPOCs as a solution to deal with events posing heightened human rights risks. Meta does not provide enough public information on the general method or criteria used for the “at-risk countries” assessment and the product investments the company has made as a result, in Ethiopia and other conflict situations. Without this information, neither the Board nor the public can evaluate the effectiveness and fairness of these processes, whether the company’s product investments are equitable, or whether they are implemented with similar speed and diligence across regions and conflict situations. IPOCs are, in the words of Meta, “short-term solutions” and convened on an ad hoc basis. This suggests to the Board that there may be a need for the company to invest greater resources in a sustained internal mechanism that provides the expertise, capacity and coordination necessary to review and respond to content effectively for the entirety of a conflict. Such an assessment should be informed by policy and country expertise. Meta’s current approach to content moderation in conflict zones could lead to the appearance of inconsistency. There are currently some 27 armed conflicts in the world, according to the Council on Foreign Relations. In at least one conflict (Russia-Ukraine), Meta has, to some observers, appeared to take prompt action and create policy exceptions to allow content that would otherwise be prohibited under the Violence and Incitement policy, while taking too long to respond in other conflict situations. One public comment (PC-10433), submitted by Dr. 
Samson Esayas, associate professor at BI Norwegian Business School, noted Meta’s “swift measures” in moderating content in the context of the Russia-Ukraine conflict and highlighted the “differential treatment between this conflict and conflicts in other regions, particularly Ethiopia and Myanmar.” This suggests an inconsistent approach, which is problematic for a company of Meta’s reach and resources, especially in the context of armed conflict. 9. Oversight Board decision The Oversight Board upholds Meta's decision to remove the content for violating the Violence and Incitement Community Standard. 10. Policy advisory statement Transparency 1. In line with the Board’s recommendation in the “Former President Trump’s Suspension,” as reiterated in the “Sudan Graphic Video,” Meta should publish information on its Crisis Policy Protocol. The Board will consider this recommendation implemented when information on the Crisis Policy Protocol is available in the Transparency Center as a separate policy, in addition to the Public Policy Forum slide deck, within six months of this decision being published. Enforcement 2. To improve enforcement of its content policies during periods of armed conflict, Meta should assess the feasibility of establishing a sustained internal mechanism that provides the expertise, capacity and coordination required to review and respond to content effectively for the duration of a conflict. The Board will consider this recommendation implemented when Meta provides an overview of the feasibility of a sustained internal mechanism to the Board. * Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Return to Case Decisions and Policy Advisory Opinions" fb-e5m6qzga,Colombia protests,https://www.oversightboard.com/decision/fb-e5m6qzga/,"September 27, 2021",2021,,"TopicCommunity organizations, Freedom of expression, ProtestsCommunity StandardHate speech","Policies and TopicsTopicCommunity organizations, Freedom of expression, ProtestsCommunity StandardHate speech",Overturned,Colombia,"The Oversight Board has overturned Facebook's decision to remove a post showing a video of protesters in Colombia criticising the country's president, Ivan Duque.",30043,4650,"Overturned September 27, 2021 The Oversight Board has overturned Facebook's decision to remove a post showing a video of protesters in Colombia criticising the country's president, Ivan Duque. Standard Topic Community organizations, Freedom of expression, Protests Community Standard Hate speech Location Colombia Platform Facebook Public Comments 2021-010-FB-UA The Oversight Board has overturned Facebook’s decision to remove a post showing a video of protesters in Colombia criticizing the country’s president, Ivan Duque. In the video, the protesters use a word designated as a slur under Facebook’s Hate Speech Community Standard. 
Assessing the public interest value of this content, the Board found that Facebook should have applied the newsworthiness allowance in this case. About the case In May 2021, the Facebook page of a regional news outlet in Colombia shared a post by another Facebook page without adding any additional caption. This shared post is the content at issue in this case. The original root post contains a short video showing a protest in Colombia with people marching behind a banner that says “SOS COLOMBIA.” The protesters are singing in Spanish and address the Colombian president, mentioning the tax reform recently proposed by the Colombian government. As part of their chant, the protesters call the president “hijo de puta” once and say “deja de hacerte el marica en la tv” once. Facebook translated these phrases as “son of a bitch” and “stop being the fag on tv.” The video is accompanied by text in Spanish expressing admiration for the protesters. The shared post was viewed around 19,000 times, with fewer than five users reporting it to Facebook. Key findings Facebook removed this content as it contained the word “marica” (from here on redacted as “m**ica”). This violated Facebook’s Hate Speech Community Standard, which does not allow content that “describes or negatively targets people with slurs” based on protected characteristics such as sexual orientation. Facebook noted that while, in theory, the newsworthiness allowance could apply to such content, the allowance can only be applied if the content moderators who initially review the content decide to escalate it for additional review by Facebook’s content policy team. This did not happen in this case. The Board also notes that Facebook does not make its criteria for escalation publicly available. The word “m**ica” has been designated as a slur by Facebook on the basis that it is inherently offensive and used as an insulting and discriminatory label primarily against gay men. While the Board agrees that none of the exceptions currently listed in Facebook’s Hate Speech Community Standard permit the slur’s use, which can contribute to an environment of intimidation and exclusion for LGBT people, it finds that the company should have applied the newsworthiness allowance in this case. The newsworthiness allowance requires Facebook to assess the public interest of allowing certain expression against the risk of harm from allowing violating content. As part of this, Facebook considers the nature of the speech as well as country-specific context, such as the political structure of the country and whether it has a free press. Assessing the public interest value of this content, the Board notes that it was posted during widespread protests against the Colombian government at a significant moment in the country’s political history. While participants appear to use the slur term deliberately, it is used once among numerous other utterances, and the chant primarily focuses on criticism towards the country’s president. The Board also notes that, in an environment where outlets for political expression are limited, social media has provided a platform for all people, including journalists, to share information about the protests. Applying the newsworthiness allowance in this case means that only exceptional and limited harmful content would be permitted. The Oversight Board’s decision The Oversight Board overturns Facebook’s decision to remove the content, requiring the post to be restored. 
In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board has overturned Facebook’s decision to remove a Facebook post showing a video of protesters in Colombia criticizing the Colombian president, Ivan Duque. In the video, protesters used a word which Facebook has designated as a slur that violates its Hate Speech Community Standard for being a direct attack against people based on their sexual orientation. The Board found that, while the removal was prima facie in line with the Hate Speech Community Standard (meaning that on its face the content appeared to violate the Standard), the newsworthiness allowance should have been applied in this case to keep the content on the platform. 2. Case description In May 2021, the Facebook page of a regional news outlet in Colombia shared a post by another Facebook page, without adding any additional caption – this shared post is the content at issue in this case. The original root post contains a short video (originally shared on TikTok), which shows a protest in Colombia, with people marching behind a banner that says “SOS COLOMBIA.” The protesters are singing in Spanish and address the Colombian president, mentioning the tax reform recently proposed by the Colombian government. As part of their chant, the protesters call the president an “hijo de puta” once and say “deja de hacerte el marica en la tv” once. Facebook translated these phrases as “son of a bitch” and “stop being the fag on tv.” The video, which is 22 seconds long, is accompanied by text in Spanish expressing admiration for the protesters. The shared post was viewed around 19,000 times and shared over 70 times. Fewer than five users reported the content. Following human review, Facebook removed the shared post under its Hate Speech policy. Under its Hate Speech Community Standard, Facebook takes down content that “describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels” on the basis of protected characteristics including sexual orientation. The word “marica” (hereafter redacted as “m**ica”) is on Facebook's list of prohibited slur words. The user who posted the shared post appealed Facebook’s decision. Following further human review, Facebook upheld its original decision to remove the content. Facebook also removed the original root post from the platform. 3. Authority and scope The Board has the power to review Facebook's decision following an appeal from the user whose post was removed (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). The Board may uphold or reverse that decision (Charter Article 3, Section 5). The Board's decisions are binding and may include policy advisory statements with recommendations. These recommendations are non-binding, but Facebook must respond to them (Charter Article 3, Section 4). 4. Relevant standards The Oversight Board considered the following standards in its decision: I. 
Facebook’s Community Standards: In the policy rationale for the Hate Speech Community Standard, Facebook states that hate speech is not allowed on the platform “because it creates an environment of intimidation and exclusion and, in some cases, may promote real-world violence.” The Community Standard defines hate speech as “a direct attack against people — rather than concepts or institutions — on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.” It prohibits content that “describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels for the above-listed characteristics.” II. Facebook’s values: Facebook's values are outlined in the introduction to the Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits “Voice” in service of four values, the relevant one in this case being “Dignity”: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade them. III. Human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. The Board's analysis of Facebook’s human rights responsibilities in this case was informed by the following human rights standards: 5. User statement The user, who is the administrator of the page on which the content was posted, submitted their appeal to the Board in Spanish. In the appeal, the user states that they are a journalist reporting on local news from their province. The user claims that the content was posted by another person who took their phone, but that, nevertheless, the content was not intended to cause harm and showed protests in a time of crisis. The user states that they aim to follow Facebook's policies, and claims that this removal led to account penalties. The user further states that the content shows young people protesting within the framework of freedom of expression and peaceful protest, and that the young people are expressing themselves without violence and demanding rights using typical language. The user also expresses concern about government repression of protest. 6. Explanation of Facebook’s decision Facebook removed this content on the basis that it contained the word “m**ica” and therefore violated Facebook’s Hate Speech Community Standard, which prohibits “[c]ontent that describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels for the above characteristics [i.e., a protected characteristic].” The word “m**ica” is on Facebook’s list of prohibited slur words, on the grounds that it targets people based on their sexual orientation. Facebook states that there is no exception for using slurs against political leaders or public figures. Furthermore, Facebook notes that “it does not matter if the speaker or the target are members of the protected characteristic group being attacked. 
Since slurs are inherently offensive terms for a group defined by their protected characteristic, the use of slurs [is] not allowed, unless the user has clearly demonstrated that [the slur] was shared to condemn, to discuss, to raise awareness of the slur, or the slur is used self-referentially or in an empowering way.” With regards to whether the newsworthiness allowance could be applied to this content, Facebook explained that the newsworthiness allowance can only be applied if the content moderators who initially review the content decide to escalate it for additional review by Facebook’s content policy team – in this case, the content was not escalated for further review. The Board notes that Facebook does not make its criteria for escalation publicly available. It stated that “the newsworthiness allowance, in theory, could apply to such content. In this case, however, the public interest value does not outweigh the risk of harm from allowing content containing an inherently offensive and insulting label to remain on Facebook’s platform.” 7. Third-party submissions The Oversight Board received 18 public comments related to this case. Five of the comments were submitted from Asia Pacific and Oceania, one from Europe, seven from Latin America and the Caribbean, one from Middle East and North Africa, and four from the United States and Canada. The submissions covered the following themes: the various meanings and uses of the word “m**ica” in Colombia; concern that Facebook removes journalistic content; censorship of media outlets in Colombia; and analysis of whether the content complied with the Community Standards. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board looked at the question of whether this content should be restored through three lenses: Facebook’s Community Standards; the company’s values; and its human rights responsibilities. 8.1 Compliance with Community Standards The Board finds that, although Facebook’s decision to remove the content was prima facie in line with its Hate Speech Community Standard (meaning that on its face the content appeared to violate the Standard), the newsworthiness allowance should have been applied in this case to allow the content to remain on the platform. The word “m**ica” has been designated as a slur by Facebook on the basis that it is inherently offensive and used as an insulting and discriminatory label primarily against gay men. As noted in section 6, Facebook explained to the Board that neither the sexual orientation nor the public figure status of the target is relevant to the enforcement of this policy. Since discriminatory slurs are inherently offensive, the use of slurs is not allowed unless a policy exception applies. Those exceptions allow the sharing of slurs to condemn, to discuss, or raise awareness of hate speech, or when used self-referentially or in an empowering way. The Board sought expert input and public comments that confirmed that the word “m**ica” has multiple meanings and can be used without discriminatory intent. 
However, there is agreement that its origins are homophobic, principally against gay men, even though its use has evolved to reportedly common usage in Colombia to refer to a person as “friend” or “dude,” and as an insult equivalent to “stupid,” “dumb” or “idiot.” The Board notes that this evolution or normalization does not necessarily mean the term’s usage is less harmful for gay men, as this casual use may continue to marginalize lesbian, gay, bisexual and transgender (LGBT) people and communities by implicitly associating them with negative characteristics. The Board understands why Facebook designated this word as a slur, and agrees none of the exceptions currently listed in the Hate Speech Community Standard explicitly applies to permit its use on the platform. Nevertheless, the Board finds that the newsworthiness allowance should have been applied to allow this content to remain on the platform. Facebook has provided more public information about the newsworthiness allowance in response to the Board’s recommendations in case 2021-001-FB-FBR . This allowance requires the company to assess the public interest of expression against the risk of harm from allowing violating content on the platform. Facebook states that it takes into account country-specific circumstances, the nature of the speech, including whether it relates to governance or politics, and the political structure of the country, including whether it has a free press. The allowance is not applied on the basis of the identity of the speaker as a journalist or media outlet, or simply because the subject matter is in the news. Several contextual factors are relevant to assessing the public interest in this content. It was posted during widespread protests against the Colombian government. The chant in the video was primarily focused on criticism towards the president. While participants appear to use the slur term deliberately, the protest was not discriminatory in its objectives. The slur term is used once, among numerous other utterances. Where it appears that a user shares footage to raise awareness of the protests and to express support for their cause, and not to insult people on the basis of protected characteristics or to incite discrimination or violence, the newsworthiness exception is particularly applicable. The Board emphasizes that the application of the newsworthiness allowance in this case should not be understood as endorsement of the language the protesters used. The Board acknowledges that the term used by protesters in this video is offensive to gay men, including in Colombia, and its usage could create a risk of harm. Allowing such slurs on the platform can contribute to an environment of intimidation and exclusion for LGBT people and, in some cases, promote real-world violence. This language is not inherently of public interest value. Rather, the public interest is in allowing expression on the platform that relates to a significant moment in Colombia’s political history. The Board also notes that social media has played an important role in providing a platform for all people, including journalists, to share information about the protests in an environment where public comments and expert reports suggest the media landscape would benefit from greater pluralism. Allowing the content through the application of the newsworthiness allowance means that only exceptional and limited harmful content would be permitted. 
The newsworthiness exception should not be construed as a broad permission for hate speech to remain up. 8.2 Compliance with Facebook’s values The Board finds that restoring this content is consistent with Facebook’s values. Facebook lists “Dignity” as one of its values. The Board shares Facebook’s concern that permitting hateful slurs to proliferate on the platform can cause dignitary harm to members of communities targeted by such slurs. The Board also acknowledges that the use of the slur in this specific case may be demeaning and harmful to members of the LGBT community. At the same time, Facebook has indicated that “Voice” is not just one of its values, but its “paramount” value. The sharing of content that shows widespread protests against a political leader represents the value of “Voice” at its apex, particularly in an environment in which outlets for political expression are limited. Application of the newsworthiness allowance to the slur policy in this setting—the sharing of information about political protests against a national leader—permits Facebook to honor its paramount commitment to “Voice” without sacrificing its legitimate commitment to “Dignity.” 8.3 Compliance with Facebook’s human rights responsibilities The Board finds that restoring the content is consistent with Facebook’s human rights responsibilities as a business. Facebook has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights (UNGPs). Its Corporate Human Rights Policy states that this includes the International Covenant on Civil and Political Rights (ICCPR). Freedom of expression and freedom of peaceful assembly (Articles 19 and 21 ICCPR) Article 19 of the ICCPR provides for broad protection of expression. This protection is “particularly high” for “public debate in a democratic society concerning figures in the public and political domain” ( General Comment 34 , para. 34). Article 21 of the ICCPR provides similar protection for freedom of peaceful assembly - assemblies with a political message are accorded heightened protection ( General Comment No. 37 , paras 32 and 49), and Article 21 extends to protect associated activities that take place online ( Ibid ., paras 6, and 34). The Human Rights Committee has further emphasized the role of journalists, human rights defenders and election monitors and others monitoring or reporting on assemblies, including in respect of the conduct of law enforcement officials ( Ibid ., paras 30 and 94). Interference with online communications about assemblies has been interpreted to impede the right to freedom of peaceful assembly ( Ibid. , para. 10). Article 19 requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). Facebook has recognized its responsibilities to respect international human rights standards under the UNGPs. Relying on the UNGPs framework, the UN Special Rapporteur on freedom of opinion and expression has called on social media companies to ensure their content rules are guided by the requirements of Article 19, para. 3, ICCPR (see A/HRC/38/35, paras. 45 and 70). The Board examined whether the removal of the post would be justified under the three-part test for restrictions on freedom of expression under Article 19 in accordance with Facebook’s human rights commitments. I. 
Legality (clarity and accessibility of the rules) The principle of legality under international human rights law requires rules used to limit expression to be clear, precise, publicly accessible and non-discriminatory ( General Comment 34 , para. 25 and para. 26). The Human Rights Committee has further noted that rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” (General Comment 34, para. 25). Although Facebook’s Hate Speech Community Standard specifies that slurs related to protected characteristics are prohibited, the specific list of words which Facebook has designated as slurs in different contexts is not publicly available. Given that the word “m**ica” can be used in different ways, it may not be clear to users that this word contravenes Facebook’s prohibition against slurs. Facebook should provide the public with more information on its list of slurs to enable users to regulate their conduct accordingly. The Board has made a policy recommendation below in this regard. The Board recommended in case 2021-001-FB-FBR that Facebook should produce more information to help users understand and evaluate the process and criteria for applying the newsworthiness allowance. In response, Facebook published more information in its Transparency Center and said that from 2022 it would begin providing regular updates about the number of times it applied this allowance in the Community Standards Enforcement Reports. However, the Transparency Center resource is not linked from the more limited explanation of the newsworthiness allowance in the introduction to the Community Standards. While the Board notes the commitment to provide more information in Enforcement Reports, this will not provide information to users who post or view content which is given an allowance. The Board recommended in case 2020-003-FB-UA that Facebook should give users more detail on the specific parts of the Hate Speech policy that their content violated, so that users can regulate their behavior accordingly. The Board notes that there is a distinction to be made here. Case 2020-003-FB-UA concerned content originally created by the user themselves that could be easily edited upon notification, whereas the present case concerns content depicting public events. Nevertheless, the Board understands that it is important for users to receive clear information about why their content is removed as a general rule. The Board appreciates the update Facebook provided in July 2021 on the company’s efforts to implement this recommendation, which when rolled out in all languages should provide more information to users whose content is removed for using slurs. The Board encourages Facebook to provide clearer timelines for implementing this recommendation in non-English languages. II. Legitimate aim Any restriction on expression should pursue one of the legitimate aims listed in the ICCPR, which include the “rights of others.” The policy at issue in this case pursued the legitimate aim of protecting the rights of others (General Comment No. 34, para. 28) to equality, protection against violence and discrimination based on sexual orientation and gender identity (Article 2, para. 1, Article 26 ICCPR; UN Human Rights Committee, Toonen v. Australia (1992) and General Comment No. 37, para. 25; UN Human Rights Council Resolution 32/2 on the protection against violence and discrimination based on sexual orientation and gender identity). III. 
Necessity and proportionality Any restrictions on freedom of expression should be appropriate to achieve their protective function and should be the least intrusive instrument among those which might achieve their protective function (General Comment 34, para. 34). The Board finds that it was not necessary or proportionate to remove the content in this case. As discussed above in section 8.1, the Board recognizes the potential for harms to the rights of LGBT people from allowing homophobic slurs to remain on the platform. However, context is crucial in assessing the proportionality of removal of the content. The UN Special Rapporteur on freedom of expression has stated in relation to hate speech that the ""evaluation of context may lead to a decision to make an exception in some instances, when the content must be protected as, for example, political speech"" (A/74/486, para. 47(d)). Taking into account the political context in Colombia, the fact this protest addressed a political figure, and the significant role that social media has played in sharing information about the protests there, the Board finds that removal of this content was not proportionate to achieve the aim of protecting the rights to non-discrimination and equality of LGBT people. Freedom of peaceful assembly For a minority of the Board, it is also important to assess the content restriction in this case for its impact on the right to freedom of peaceful assembly. Journalists and other observers play an important role in amplifying the collective expression and associative power of protests through disseminating footage of those events online – these acts are protected by Article 21 of the ICCPR (General Comment No. 37, para. 34). The minority believes that assessing restrictions on the right to peaceful assembly is substantially similar to the test for evaluating restrictions on the right to freedom of expression. Restrictions on the right to freedom of peaceful assembly should be narrowly drawn, meeting the requirements of legality, legitimate aim, and necessity and proportionality ( ibid ., paras 8 and 36). The UN Special Rapporteur on freedom of peaceful assembly and of association has also called on companies engaged in content moderation to be guided by international human rights law (see A/HRC/41/41, para. 19), noting “the enormous power of Facebook” ( Ibid ., para. 4). The Human Rights Committee has noted that private ownership of communication platforms should inform a contemporary understanding of the legal framework Article 21 of the ICCPR requires (op cit. para. 10 and 34) . The three-part analysis above, which the minority joins, leads to an additional minority conclusion that Facebook’s removal of the content in this case impaired the right to freedom of peaceful assembly, and that restriction was not justified. 9. Oversight Board decision The Oversight Board overturns Facebook's decision to take down the content, requiring the post to be restored. 10. Policy advisory statement The following recommendations are numbered, and the Board requests that Facebook provides an individual response to each as drafted. Content policy To further clarify for users its rules on Hate Speech and on how the newsworthiness allowance applies, Facebook should: 1. Publish illustrative examples from the list of slurs it has designated as violating under its Hate Speech Community Standard. 
These examples should be included in the Community Standard and include edge cases involving words which may be harmful in some contexts but not others, describing when their use would be violating. Facebook should clarify to users that these examples do not constitute a complete list. 2. Link the short explanation of the newsworthiness allowance provided in the introduction to the Community Standards to the more detailed Transparency Center explanation of how this policy applies. The company should supplement this explanation with illustrative examples from a variety of contexts, including reporting on large scale protests. Enforcement To safeguard against the wrongful removal of content that is in the public interest, and to ensure provision of adequate information to users who report such content, Facebook should: 3. Develop and publicize clear criteria for content reviewers to escalate for additional review public interest content that potentially violates the Community Standards but may be eligible for the newsworthiness allowance. These criteria should cover content depicting large protests on political issues, in particular in contexts where states are accused of violating human rights and where maintaining a public record of events is of heightened importance. 4. Notify all users who reported content assessed as violating but left on the platform for public interest reasons that the newsworthiness allowance was applied to the post. The notice should link to the Transparency Center explanation of the newsworthiness allowance. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" fb-exvb8ktw,Testicular Cancer Self-Check Infographics,https://www.oversightboard.com/decision/fb-exvb8ktw/,"June 4, 2024",2024,,TopicHealthCommunity StandardAdult nudity and sexual activity,Adult nudity and sexual activity,Overturned,United Kingdom,A user appealed Meta’s decision to remove a Facebook post that contains infographics providing instructions for testicular self-examination.,6321,962,"Overturned June 4, 2024 A user appealed Meta’s decision to remove a Facebook post that contains infographics providing instructions for testicular self-examination. Summary Topic Health Community Standard Adult nudity and sexual activity Location United Kingdom Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. 
Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to remove a Facebook post that contains infographics providing instructions for testicular self-examination. After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post. About the Case In April 2019, a Facebook user posted four screenshots of infographics titled “How to check your balls (testicles).” The infographics explain the steps to perform a testicular self-check, providing written and visual guidance on how to look for lumps or changes in testicular size. The illustrations include drawings of a man using his hands to examine his testicles. The user included the following caption: “have you given your balls a squeeze this month?” Five years after the image was posted, Meta removed the post from Facebook under its Adult Nudity and Sexual Activity policy. Within that policy, Meta states that it removes sexual imagery by default, with an exception for (among other reasons) medical or health contexts, specifying as an example, cancer or disease prevention/assessment. The user, in his appeal to the Board, affirmed that “the set of images were clearly a medical infographic” and did not portray adult nudity in violation of the Community Standard, but instead was information shared in a medical or health context. After reviewing the case, Meta concluded that the content did not violate its Adult Nudity and Sexual Activity policy , and restored the content to Facebook. The company’s Community Standard allows “imagery of visible adult male and female genitalia, fully nude close-ups of buttocks or anus, or implied/other sexual activity shared in an educational or scientific context such as sexual health or medical awareness.” It remains unclear to the Board why Meta removed the content five years after its original posting date. In his appeal to the Board, the user clarified that the post was prompted by Facebook Memories, an album posted several years ago without any issues. As a cancer survivor, he considered Meta’s decision unfortunate and worrying; the user explained his intention was to encourage men to conduct testicular self-examination regularly as this is the main reason why many men fail to detect early signs of cancer. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process involved, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case highlights Meta’s failure to enforce medical and health exceptions consistently as described under its Adult Nudity and Sexual Activity Community Standard . Creating awareness of and promoting testicular self-screening is crucial for the early diagnosis of cancer, and social media is essential to this effort. This case highlights the necessity of accurate moderation to facilitate awareness about testicular cancer and other diseases for educational or medical reasons. 
The Board has previously issued recommendations to Meta on its policy related to nudity in a health awareness context. The Board asked Meta to improve the automated detection of images with text overlay to ensure that posts raising awareness of breast cancer symptoms were not wrongly flagged for review, ( Breast Cancer Symptoms and Nudity, recommendation no. 1). The Board has also urged Meta to “implement an internal audit procedure to continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes,” (Breast Cancer Symptoms and Nudity, recommendation no. 5). In response to the first recommendation, Meta demonstrated its implementation through published information. For recommendation no. 5, the company reported this as work it already does but did not publish information to demonstrate implementation. Furthermore, the Board stressed that users should be able to appeal automated decisions when their content is treated as a violation of the Adult Nudity and Sexual Activity Community Standard. These decisions should then be reviewed by a human reviewer to ensure that over-enforcement of this Community Standard is not being used to prevent other harms on the platform, ( Breast Cancer Symptoms and Nudity , recommendation no. 4). Meta reframed this recommendation in its initial response, when it said it would assess the feasibility of implementing it. However, later in the quarter, Meta declined to take further action on the recommendation. The Board emphasizes that full implementation of these recommendations will help Meta to decrease the error rate of content incorrectly removed under the health and educational exception of the Adult Nudity and Sexual Activity Community Standard, allowing users to raise awareness and educate themselves about early symptoms of testicular cancer. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-fwixegxq,Derogatory Image of Candidates for U.S. Elections,https://www.oversightboard.com/decision/fb-fwixegxq/,"October 23, 2024",2024,,"TopicElections, Freedom of expression, US Elections 2024Community StandardBullying and harassment",Bullying and harassment,Overturned,United States,A user appealed Meta’s decision to remove content containing an altered and derogatory depiction of U.S. presidential candidate Kamala Harris and her running mate Tim Walz.,6192,938,"Overturned October 23, 2024 A user appealed Meta’s decision to remove content containing an altered and derogatory depiction of U.S. presidential candidate Kamala Harris and her running mate Tim Walz. Summary Topic Elections, Freedom of expression, US Elections 2024 Community Standard Bullying and harassment Location United States Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company's attention and include information about Meta's acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta's decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. 
Summary A user appealed Meta’s decision to remove content containing an altered and derogatory depiction of U.S. presidential candidate Kamala Harris and her running mate Tim Walz. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. About the Case In August 2024, a Facebook user posted an altered picture based on the poster for the 1994 comedy film “Dumb and Dumber.” In the altered image, the faces of the original actors are replaced by the U.S. presidential candidate, Vice President Kamala Harris, and her running mate, Minnesota Governor Tim Walz. As in the original poster, the two figures are grabbing each other’s nipples through their clothing. The content was posted with a caption that includes the emojis “🤷‍♂️🖕🖕.” Meta initially removed the user’s post from Facebook under its Bullying and Harassment Community Standard , which prohibits “derogatory sexualized photoshop or drawings.” When the Board brought this case to Meta’s attention, the company determined that the removal of the content was incorrect, restoring the post to Facebook. Meta explained that its Community Standards were not violated in this case because the company does not consider that pinching a person’s nipple through their clothing qualifies as sexual activity. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case In the Explicit AI Images of Female Public Figures cases, the Board decided that two AI-generated images violated Meta’s rule that prohibits “derogatory sexualized photoshop” under the Bullying and Harassment policy. Both images had been edited to show the faces of real public figures with a different (real or fictional) nude body. In this case, however, the Board highlights the overenforcement of Meta’s Bullying and Harassment policy with respect to satire and political speech in the form of a non-sexualized derogatory depiction of political figures. It also points to the dangers that overenforcing the Bullying and Harassment policy can have, especially in the context of an election, as it may lead to the excessive removal of political speech and undermine the ability to criticize government officials and political candidates, including in a sarcastic manner. This post is nothing more than a commonplace satirical image of prominent politicians and is instantly recognizable as such. In the context of elections, the Board has previously recommended that Meta should develop a framework for evaluating its election integrity efforts in order to provide the company with relevant data to improve its content moderation system as a whole and decide how to best employ its resources in electoral contexts ( Brazilian General’s Speech , recommendation no. 1). Meta has reported progress on implementing this recommendation. 
Nonetheless, the company’s failure to recognize the nature of this post and treat it accordingly raises serious concerns about the systems and resources Meta has in place to effectively make content determinations in such electoral contexts. The Board has previously urged Meta to put in place adequate procedures for evaluating content in its relevant context. For example, the Board stated that Meta should: “Make sure it has adequate procedures in place to assess satirical content and relevant context properly,” (“Two Buttons” Meme, recommendation no. 3). Meta reported implementation of this recommendation but has yet to publish information to demonstrate this. The Board has also stated that the Bullying and Harassment Community Standard should “clearly explain to users how bullying and harassment differ from speech that only causes offense and may be protected under international human rights law,” (Pro-Navalny Protests in Russia, recommendation no. 2). Meta declined to implement this recommendation following a feasibility assessment. Finally, the Board stated that Meta should: “Include illustrative examples of violating and non-violating content in the Bullying and Harassment Community Standard to clarify the policy lines drawn and how these distinctions can rest on the identity status of the target,” (Pro-Navalny Protests in Russia, recommendation no. 4). Meta declined to implement this recommendation after a feasibility assessment. The Board believes that full implementation of such recommendations calling for effective assessment of context and the development of an election integrity framework would contribute to decreasing the number of enforcement errors. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-gw8by1y3,Altered Video of President Biden,https://www.oversightboard.com/decision/fb-gw8by1y3/,"February 5, 2024",2024,,"Elections,Manipulated media,Misinformation","Bullying and harassment,Hate speech,Manipulated media",Upheld,United States,The Oversight Board has upheld Meta’s decision to leave up a video that was edited to make it appear as though U.S. President Joe Biden is inappropriately touching his adult granddaughter’s chest.,45365,7081,"Upheld February 5, 2024 The Oversight Board has upheld Meta’s decision to leave up a video that was edited to make it appear as though U.S. President Joe Biden is inappropriately touching his adult granddaughter’s chest. Standard Topic Elections, Manipulated media, Misinformation Community Standard Bullying and harassment, Hate speech, Manipulated media Location United States Platform Facebook The Oversight Board has upheld Meta’s decision to leave up a video that was edited to make it appear as though U.S. President Joe Biden is inappropriately touching his adult granddaughter’s chest, and which is accompanied by a caption describing him as a “pedophile.” The Facebook post does not violate Meta’s Manipulated Media policy, which applies only to video created through artificial intelligence (AI) and only to content showing people saying things they did not say. 
Since the video in this post was not altered using AI and it shows President Biden doing something he did not do (not something he didn’t say), it does not violate the existing policy. Additionally, the alteration of this video clip is obvious and therefore unlikely to mislead the “average user” of its authenticity, which, according to Meta, is a key characteristic of manipulated media. Nevertheless, the Board is concerned about the Manipulated Media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent (for example, to electoral processes). Meta should reconsider this policy quickly, given the number of elections in 2024. About the Case In May 2023, a Facebook user posted a seven-second video clip, based on actual footage of President Biden, taken in October 2022, when he went to vote in person during the U.S. midterm elections. In the original footage, he exchanged “I Voted” stickers with his adult granddaughter, a first-time voter, placing the sticker above her chest, according to her instruction, and then kissing her on the cheek. In the video clip, posted just over six months later, the footage has been altered so that it loops, repeating the moment when the president’s hand made contact with his granddaughter’s chest to make it look like he is inappropriately touching her. The soundtrack to the altered video includes the lyric “Girls rub on your titties” from the song “Simon Says” by Pharoahe Monch, while the post’s caption states that President Biden is a “sick pedophile” and describes the people who voted for him as “mentally unwell.” Other posts containing the same altered video clip, but not the same soundtrack or caption, went viral in January 2023. A different user reported the post to Meta as hate speech, but this was automatically closed by the company without any review. They then appealed this decision to Meta, which resulted in a human reviewer deciding the content was not a violation and leaving the post up. Finally, they appealed to the Board. Key Findings The Board agrees with Meta that the content does not violate the company’s Manipulated Media policy because the clip does not show President Biden saying words he did not say, and it was not altered through AI. The current policy only prohibits edited videos showing people saying words they did not say (there is no prohibition covering individuals doing something they did not do) and only applies to video created through AI. According to Meta, a key characteristic of “manipulated media” is that it could mislead the “average” user to believe it is authentic and unaltered. In this case, the looping of one scene in the video is an obvious alteration. Nevertheless, the Board finds that Meta’s Manipulated Media policy is lacking in persuasive justification, is incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent. In short, the policy should be reconsidered. The policy’s application to only video content, content altered or generated by AI, and content that makes people appear to say words they did not say is too narrow. Meta should extend the policy to cover audio as well as to content that shows people doing things they did not do. The Board is also unconvinced of the logic of making these rules dependent on the technical measures used to create content. 
Experts the Board consulted, and public comments, broadly agreed on the fact that non-AI-altered content is prevalent and not necessarily any less misleading; for example, most phones have features to edit content. Therefore, the policy should not treat “deep fakes” differently to content altered in other ways (for example, “cheap fakes”). The Board acknowledges that Meta may put in place necessary and proportionate measures to prevent offline harms caused by manipulated media, including protecting the right to vote and participate in the conduct of public affairs. However, the current policy does not clearly specify the harms it is seeking to prevent. Meta needs to provide greater clarity on what those harms are and needs to make revisions quickly, given the record number of elections in 2024. At present, the policy also raises legality concerns. Currently, Meta publishes this policy in two places: as a standalone policy and as part of the Misinformation Community Standard . There are differences between the two in their rationale and exact operational wording. These need to be clarified and any errors corrected. At the same time, the Board believes that in most cases Meta could prevent the harm to users caused by being misled about the authenticity of audio or audiovisual content, through less restrictive means than removal of content. For example, the company could attach labels to misleading content to inform users that it has been significantly altered, providing context on its authenticity. Meta already uses labels as part of its third-party fact-checking program, but if such a measure were introduced to enforce this policy, it should be carried out without reliance on third-party fact-checkers and across the platform. The Oversight Board’s Decision The Oversight Board has upheld Meta’s decision to leave up the post. The Board recommends that Meta: * Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board upholds Meta’s decision to leave up a video that was edited to make it appear as though U.S. President Joe Biden is touching his adult granddaughter’s chest and which is accompanied by a caption accusing him of being “a pedophile.” The Board agrees with Meta that the post does not violate Facebook’s Manipulated Media policy as currently formulated, for two reasons: (1) the policy prohibits the display of manipulated videos that portray people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do; and (2) the policy only applies to video created through artificial intelligence (AI). Because the video does not show President Biden saying words he did not say and the clip was not altered using artificial intelligence, it does not violate the company’s Manipulated Media policy. Additionally, the accusation in the caption does not violate the Bullying and Harassment policy. Leaving the post on the platform also aligns with Meta’s human-rights responsibilities, which include protecting the right to vote and take part in public affairs. Although the decision not to remove this post was consistent with Meta’s human-rights responsibilities, the lines drawn by the policy more broadly lack persuasive justification and should be reconsidered. In its current form, the Manipulated Media policy fails to clearly specify the harms it is seeking to prevent, and the scope of its prohibitions is incoherent in both policy and technical terms. 
The policy’s application to (i) only video content; (ii) only content generated or altered by AI; and (iii) content that makes people appear to say “words they did not say,” is too narrow to meet any conceivable objective. It is inappropriately focused on the medium of communication and method of content creation rather than on preventing specific harms that may result from speech (e.g., to electoral processes). At the same time, Meta’s primary reliance on removing violating content may lead to disproportionate restrictions on freedom of expression. The Board recommends less severe measures be considered. The technology involved in the creation and identification of content through AI is rapidly changing, making content moderation in this area challenging. It is all the more challenging because some forms of media alteration may even enhance the value of content to the audience. Some media is manipulated for purposes of humor, parody or satire. The Board previously emphasized the importance of protecting satirical speech (see “Two Buttons” Meme decision ). The Board therefore recommends that Meta revise its Manipulated Media policy to more clearly specify the harms it seeks to prevent. Given the record number of elections taking place in 2024, the Board recommends that Meta embark on such revisions expeditiously. This is essential because misleading video or audio in themselves are not always objectionable, absent a direct connection to potential offline harm. Such harms may include (but are not limited to) those resulting from invasion of privacy, incitement to violence, intensification of hate speech, bullying, and – more pertinent to this case – misleading people about facts essential to their exercise of the right to vote and to take part in the conduct of public affairs, with resulting harm to the democratic process. Many of these harms are addressed by other Community Standards, which also apply to manipulated media. The Board is not suggesting that Meta expand the harms addressed by the Manipulated Media policy, but that it provides greater clarity on what those harms are. In addition, the company should eliminate distinctions based on the form of expression, with no relation to harm. It should extend the policy to cover audio in addition to video, and all methods of manipulating media, not only those using AI. It should also include content depicting people doing things they did not do, in addition to the existing provision for things they did not say. Furthermore, Meta’s enforcement approach should encompass the use of less restrictive measures to enforce the Manipulated Media policy, such as attaching labels to misleading content to explain that it has been significantly altered or generated by AI. 2. Case Description and Background This case concerns a seven-second video clip posted in May 2023 on Facebook – almost six months after the midterm elections and 18 months before the 2024 presidential vote. The clip was based on actual footage of President Biden taken in October 2022, when he went to vote in person during the U.S. midterm elections accompanied by his adult granddaughter, a first-time voter. In the original footage of this occasion, President Biden and his granddaughter exchanged “I Voted” stickers. President Biden, following his granddaughter’s instruction, placed a sticker above her chest, and then kissed her on the cheek. 
In the video clip posted to Facebook, the footage has been altered so that it loops, repeating the moment when President Biden’s hand made contact with his granddaughter’s chest so that it appears as though he is inappropriately touching her. The soundtrack to the altered video is a short excerpt of the song “Simon Says” by Pharoahe Monch, which includes the lyric “Girls rub on your titties,” reinforcing the creator’s imputation that the depicted act was sexualized. The caption to the video states that President Biden is “a sick pedophile” for “touch[ing] his granddaughter’s breast!!!” and it also questions the people who voted for him, saying they are “mentally unwell.” While other posts containing the same altered video clip but not the same soundtrack or caption went viral in January 2023, the content in this case, posted months later, had fewer than 30 views, and was not shared. A user reported the post to Meta for violating the company’s Hate Speech policy. That report was automatically closed without review and the content left up. This reporting user then appealed the decision to Meta. A human reviewer upheld the decision. The same user then appealed to the Board. 3. Oversight Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person who previously reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions The most relevant previous Oversight Board decisions include: II. Meta’s Content Policies Meta’s Misinformation Community Standard explains that Meta will remove content only when it “is likely to directly contribute to the risk of imminent physical harm” or will “directly contribute to interference with the functioning of political processes,” or in the case of “certain highly deceptive manipulated media.” The rules relating to interference in political processes focus only on “voter or census interference” (Section III of the Misinformation policy). In other words, this section of the Misinformation policy applies to information about the process of voting, and not about issues or candidates. The rules relating to “highly deceptive manipulated media” are outlined under Section IV of the Misinformation policy and on a separate Manipulated Media policy page. 
According to the latter, Meta removes “videos that have been edited or synthesized, beyond adjustments for clarity or quality, in ways that are not apparent to an average person” by means of AI or machine learning, and “which would likely mislead an average person to believe” that the “subject of the video said words that they did not say.” The policy rationale emphasizes that some altered media “could mislead.” In Meta’s Misinformation Community Standard, the prohibition of manipulated media is further justified by the rationale that such content “can go viral quickly and experts advise that false beliefs regarding manipulated media often cannot be corrected through further discourse.” According to the standalone Manipulated Media policy page, there is a policy exception for content that is parody or satire. For misinformation that is not removed for violating Meta’s misinformation policy, Meta focuses on “reducing its prevalence or creating an environment that fosters a productive dialogue.” For the latter, Meta attempts to direct users to “authoritative information.” Meta states that as part of that effort, it partners with third-party fact-checking organizations to “review and rate the accuracy of the most viral content on our platforms.” This is linked to a detailed explanation of the Fact Checking Program . Third-party fact-checkers have a variety of rating options , including “false,” “altered,” “partly false” and “missing context.” The rating “altered” is applied to “image, audio or video content that has been edited or synthesized beyond adjustments for clarity or quality, in ways that could mislead people.” This is not limited to AI-generated content or content depicting a person saying something they did not say. Fact-checking does not apply to statements politicians make , within or outside of election periods. Meta does not control the ratings its fact-checkers apply and it is outside of the Board’s scope to receive appeals on the decisions of fact-checkers. Based on the ratings that fact checkers give, Meta may add labels to the content. Content labeled “false” and “altered” is obscured by a warning screen, requiring the user to click through to see the content. Meta explained that the “altered” label obscures the content with a full screen overlay informing the user that “[i]ndependent fact-checkers say this information could mislead people” as well as a “see why” button. Meta told the Board it provides users with a link to an article providing background, which is authored by the fact-checker whose rating was applied to the content. Again, this is neither reviewed by Meta nor appealable to the Board. A user who clicks the “see why” button is given the option of clicking through to the third-party fact-checking article that explains the basis for the rating. When a piece of content is labeled “false” and “altered” by fact-checkers, Meta also demotes the content, meaning that it ranks lower in users’ feeds. Meta’s Bullying and Harassment Community Standard prohibits various forms of abuse directed against individuals. It does not apply, however, to criminal allegations against adults, even if they contain expressions of contempt or disgust. Nor does it prohibit negative character or ability claims or expressions of contempt or disgust directed towards adult public figures because these types of statements can be a part of important political and social discourse. 
The Board’s analysis was also informed by Meta’s value of voice, which the company describes as “paramount,” and its value of authenticity. III. Meta’s Human-Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human-rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human-rights responsibilities in this case was informed by the following international standards: 5. User Submissions The user who appealed this case to the Board stated the content was a “blatantly manipulated video to suggest that Biden is a pedophile.” The author of the post was notified of the Board’s review and provided with an opportunity to submit a statement to the Board, but declined. 6. Meta’s Submissions Meta informed the Board that the content does not violate its Manipulated Media policy because the video neither depicts President Biden saying something he did not say nor is it the product of AI or machine learning in such a way that it merges, combines, replaces or includes superimposed content. Meta explained to the Board that in determining whether media would “likely mislead an average person,” it considers factors such as whether any edits in a video are apparent (e.g., whether there are unnatural facial movements or odd pixelation when someone’s head turns, or mouth movements out of sync with the audio). It also reviews captions for videos to see whether any disclaimers are included (e.g., “This video was created using AI”). Furthermore, the company will assume a video is unlikely to mislead when it is clearly parody or satire, or involves people doing unrealistic, absurd or impossible things (e.g., a person surfing on the moon). However, it would remove a video (or image) that depicts actions or events should it violate other Community Standards, whether generated by AI or not. Following a policy development process in 2023, Meta plans to update the Manipulated Media policy to respond to the evolution of new and increasingly realistic AI. Meta is collaborating with other companies and experts through forums such as the Partnership on AI to develop common industry standards for identifying AI-generated content in order to provide users with information when they encounter this type of media. Meta explained that the content in this case was not reviewed by independent fact-checkers. The company uses a ranking algorithm to prioritize content for fact-checking, with virality a factor that would lead to content being prioritized in the queue. The content in this case had no reactions or comments, only about 30 views, and was not shared; it therefore was not prioritized. Meta explained that other posts containing the same video (but with different captions) were reviewed by fact-checkers, but those reviews had no impact on this case due to the nature of fact-checkers' review, as described below. Meta explained that its fact-checking enforcement systems take into account whether a fact-checker rated an entire post (e.g., a video shared with caption) or specific components of a post (e.g., only a video) to have false information. Meta then uses technology to identify and label identical and near-identical versions of the rated content across Facebook and Instagram. 
For example, when a fact-checker rates a whole post, Meta would apply a label only to posts that include identical and near-identical video and caption. If they had rated the video alone, Meta would label all identical and near-identical videos regardless of any caption. Rating and labeling components of posts, such as videos, independently from captions, can be more effective as this impacts more content on Meta’s platforms. However, there may be meaningful differences in posts that share near-identical media, such as when an altered video is shared with a caption discussing its authenticity. Meta is therefore careful not to apply labels to all instances of media, such as an altered video, when the caption with which it is shared makes a meaningful difference. Many posts containing the same video as in this case were rated by fact-checkers, but those ratings were applied to the entire post (i.e., only those with matching video and caption), rather than to all posts that included the video (regardless of caption). As such, those ratings did not impact the video in this post. Additionally, the reference to President Biden as a “sick pedophile” contained an expression of contempt and disgust (that he is “sick”) in the context of a criminal allegation (that he is a “pedophile”). However, because Meta’s Bullying and Harassment policy does not apply to criminal allegations against adults, it concluded that the caption was not violating. The Board asked Meta eight questions in writing. Questions related to the rationale underlying the Manipulated Media policy and its limited scope; the detection of manipulated media; Meta’s assessment of when media is “likely to mislead”; and Meta’s fact-checking program and which labels Meta applies to fact-checked content. All questions were answered. 7. Public Comments The Oversight Board received 49 public comments relevant to this case: 35 were submitted from the United States and Canada, seven from Europe, three from the Middle East and North Africa, two from Central and South Asia, one from Latin America and Caribbean, and one from Asia Pacific and Oceania. The submissions covered the following themes: the scope of Meta’s Manipulated Media policy; challenges of deciding which content has been altered and when content may mislead; the distinction between content generated or altered by AI and content altered by other means; the question of what harms manipulated media may cause and what impact it may have on elections; appropriate measures to moderate manipulated media; the challenges of moderating manipulated media at scale; and the impact of enforcement of manipulated media on freedom of expression. To read public comments submitted for this case, please click here . In October 2023, as part of ongoing stakeholder engagement, the Board consulted with representatives of civil-society organizations, academics, inter-governmental organizations and other experts on the issue of manipulated media and elections. The insights shared during this meeting also informed the Board’s consideration of the issues in this case. 8. Oversight Board Analysis The Board examined whether this content should be removed by analyzing Meta’s content policies, human-rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance. 
This case was selected to examine whether Meta’s Manipulated Media policy adequately addresses the potential harms of altered content, while ensuring that political expression is not unjustifiably suppressed. This issue is pertinent as the volume of manipulated media is expected to increase, in particular with continuing technological advances in the field of generative AI. The case falls within the Board’s strategic priority of Elections and Civic Space . 8.1 Compliance With Meta’s Content Policies The Board agrees with Meta that the content does not violate Meta’s Manipulated Media policy as currently formulated. The video clip does not show President Biden saying words he did not say and it was not altered using artificial intelligence (AI) to create an authentic-looking video. Moreover, according to Meta’s policy, a key characteristic of “manipulated media” is that it misleads the “average” user to believe the media is authentic and unaltered (see also PC-18036 - UCL Digital Speech Lab, discussing this under the term “blatancy”). In this case, the footage has been altered to loop, repeating the moment when President Biden’s hand made contact with his granddaughter’s breast, making it appear as if he was touching her inappropriately. The alteration of the video clip as such – looping the scene back and forth – is obvious. Users can easily see that content has been edited. The majority of the Board find that while the video may still be misleading, or intended to mislead about the event it depicts, this is not done through disguised alterations. A minority of the Board believe that the image may fall under the spirit of the policy as it still could “mislead an average user.” The Board understands that resources for third-party fact-checking are limited and the content’s virality is an important factor in determining which content is prioritized for fact-checking. It is therefore reasonable that the post in this case was not prioritized due to its limited potential reach, while other posts featuring an identical video were fact-checked on the basis of their reach. The majority of the Board believe the caption that accompanies the video, which accuses President Biden of being a “sick pedophile,” does not violate the Bullying and Harassment Community Standard. For public figures, Meta generally prohibits the forms of abuse listed under “Tier I: Universal protections for everyone” in the Bullying and Harassment Community Standard. This includes, “Attacks through derogatory terms related to sexual activity.” Meta’s policy specifically allows, however, claims including criminal allegations, even if they contain expressions of contempt or disgust. The majority find that the statement that Biden is a “sick pedophile” includes such an allegation and, as part of the discussion of a public figure, falls within that exception. The minority find that the claim that Biden is a “sick pedophile,” when accompanied with a video that has been altered with the goal of presenting false evidence for the claim, does not constitute a criminal allegation but a malicious personal attack –and should therefore be removed under the Bullying and Harassment Community Standard. 8.2. Compliance with Meta’s Human-Rights Responsibilities Freedom of Expression (Article 19 ICCPR) Article 19 para. 
2 of the ICCPR provides broad protection for expression of “all kinds.” The UN Human Rights Committee has highlighted that the value of expression is particularly high when it discusses political issues, candidates and elected representatives (General comment No. 34, para. 13). This includes expression that is “deeply offensive,” insults public figures and opinions that may be erroneous (General comment No. 34, para. 11, 38, and 49). The UN Human Rights Committee has emphasized that freedom of expression is essential for the conduct of public affairs and the effective exercise of the right to vote (General comment No. 34, para. 20). The Committee further states that the free communication of information and ideas about public and political issues between citizens, candidates and elected representatives is essential for the enjoyment of the right to take part in the conduct of public affairs and the right to vote, Article 25 ICCPR (General comment No. 25, para 25.) When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human-rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As in previous cases (e.g., Armenians in Azerbaijan , Armenian Prisoners of War Video ), the Board agrees with the UN Special Rapporteur on freedom of expression that, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” (report A/74/486 , para. 41). Nonetheless, the Board has also previously acknowledged that Meta can legitimately remove certain content because its human-rights responsibilities as a company differ from the human-rights obligations of states (see Knin Cartoon decision ). I. Legality (Clarity and Accessibility of the Rules) Rules restricting expression should be clearly defined and communicated. Users should be able to predict the consequences of posting content on Facebook and Instagram. The UN Special Rapporteur on freedom of expression highlighted the need for “clarity and specificity” in content-moderation policies ( A/HRC/38/35, para. 46). Meta’s Manipulated Media policy raises various concerns from a legality perspective. Meta publishes this policy in two different places (with no cross-reference link), as a self-standing policy and as part of the Misinformation Community Standard (Section IV). There are differences between the two in the rationale for restricting speech and their operative language. As a public comment pointed out, the wording of the policy is inaccurate (PC-18036 - UCL Digital Speech Lab). It states content will be removed if it “would likely mislead an average person to believe: (…)” that “(t)he video is the product of artificial intelligence or machine learning.” This appears to be a typographical or formatting error because the opposite is presumably correct, that the average person could be misled precisely because it is not clear that content is AI generated or altered (as reflected in the Misinformation policy). This is confusing to users and requires correction. 
Furthermore, the self-standing policy states that it requires “additional information and/or context to enforce.” The Misinformation Community Standard does not include this statement. It only mentions that verification of facts requires partnering with third parties. In other cases, the Board learned that when a rule requires “additional information and/or context to enforce,” it is only applied on-escalation, meaning by specialized teams within Meta and not by human reviewers enforcing the policy at scale (for previous cases engaging with Meta’s escalation processes, see e.g., Armenian Prisoners of War Video , Knin Cartoon and India Sexual Harassment Video ). It would be useful for Meta to publicly clarify whether the Manipulated Media policy falls within this category or not. II. Legitimate Aim Restrictions on freedom of expression must pursue a legitimate aim (Article 19, para. 3, ICCPR), including to protect “the rights of others.” According to the policy rationale of the Manipulated Media policy, as presented in Section IV of the Misinformation Community Standard, it aims to prevent misleading content going viral quickly. Meta explains that “experts advise that false beliefs regarding manipulated media cannot be corrected through further discourse” without providing further evidence of this claim. The explanation in the standalone Manipulated Media policy is even less insightful, only stating that manipulated media could “mislead,” without linking this to any specified harm. Preventing people from being misled is not, in and of itself, a legitimate reason to restrict freedom of expression (General comment No. 34, para. 47 and 49). This is especially relevant in the context of political participation and voting, where contested arguments are an integral part of the public discourse (General comment No. 25, para 25) and competing claims may be characterized as misleading by some, and accurate by others. Additionally, media may be manipulated for purposes of humor, parody or satire and may as such constitute protected forms of speech (see “Two Buttons” Meme decision). In its submissions, Meta failed to explain, in terms of its human-rights responsibilities, what outcome the policy aims to achieve beyond preventing individuals being “misled” by content altered using AI (see also PC-18033 - Cato Institute). Meta did not explain whether it is consciously departing from international standards in adopting the Manipulated Media rule, per the Special Rapporteur’s guidance (report A/74/486, at para. 48 and report A/HRC/38/35, at para. 28). Protecting the right to vote and to take part in the conduct of public affairs is a legitimate aim that Meta’s Manipulated Media policy can legitimately pursue (Article 25, ICCPR). As public comments for this case illustrate, there is a broad range of views regarding how manipulated media can affect public trust in online information and in media more broadly and thus interfere with political processes. (See e.g., PC-18035 - Digital Rights Foundation; PC-18040 - Institute for Strategic Dialogue; PC-18045 - Tech Global Institute). Protecting the right to vote and to take part in the conduct of public affairs can justify taking measures against manipulated media, as long as Meta specifies the objectives of these measures and they are necessary and proportionate. III. 
Necessity and Proportionality Under ICCPR Article 19(3), necessity requires that restrictions on expression “must be appropriate to achieve their protective function.” The removal of content would not meet the test of necessity “if the protection could be achieved in other ways that do not restrict freedom of expression,” ( General Comment No. 34 , para. 33). Proportionality requires that any restriction “must be the least intrusive instrument amongst those which might achieve their protective function,” ( General Comment No. 34 , para. 34). Social-media companies should consider a range of possible responses to problematic content beyond deletion to ensure restrictions are narrowly tailored ( A/74/486 , para. 51). The Board acknowledges that Meta may put in place necessary and proportionate measures to prevent harms caused by manipulated media. Manipulation of media is often difficult for viewers to detect and may be especially impervious to normal human instincts of skepticism. Although humans have been aware for millennia that words may be lies, pictures and especially videos and audio impart a false veneer of credibility. While judgments about misinformation usually center on evidence for or against the propositions contained in a disputed message, judgments about manipulated media focus on the means by which the message was created. A central characteristic of “manipulated media” is that it misleads the user to believe that media is authentic and unaltered. This raises fewer risks that content moderation will itself be biased against particular viewpoints or misinformed. In addition to defining the legitimate aim that the Manipulated Media policy pursues, however, Meta also needs to assure that the measures it chooses to enforce the policy with are necessary to achieve that goal. The Board believes that in most cases Meta could prevent harm to users caused by being misled about the authenticity of audio or audiovisual content through less restrictive means than removal. Rather than promote trust, content removal can sow distrust and fuel accusations of coverup and bias. For example, Meta could attach labels to misleading content to inform users that it was generated or significantly altered, providing context on its authenticity, without opining on its underlying substance. Labels could achieve this aim without the need for full warning screens that blur or otherwise obscure the content, requiring the user to click through to see it. This would mitigate against the risk of over-removals, given the challenges of accurately identifying content that is misleading (see e.g., PC-18041 - American Civil Liberties Union; PC-18033 - Cato Institute; PC-18044 - Initiative for Digital Public Infrastructure). Choosing the less restrictive measure of labeling rather than removing content would assure Meta’s approach to enforcing its Manipulated Media policy is consistent with the necessity requirement. Restricting the enforcement of the Manipulated Media policy to labeling does not prevent Meta from removing information, including manipulated media, which misleads about the modalities of elections and which interferes with people’s abilities to take part in the election process. Such information is removed under Meta’s policy on “Voter or census interference” (Section III of the Misinformation policy), which also applies to manipulated media. The Board notes that Meta already attaches labels to content under its third-party fact-checking program. 
Fact-checking, however, is dependent on the capacity of third-party fact-checkers, which is likely to be asymmetrical across languages and markets, and has no guarantee of genuine expertise or objectivity. The enforcement of an updated Manipulated Media policy through labeling may be more scalable. Labels could be attached to a post once identified as “manipulated” as per the definition in the Manipulated Media policy, independently from the context in which it is posted, across the platform and without reliance on third-party fact-checkers. The Board is concerned about Meta’s practice of demoting content that third-party fact-checkers rate as “false” or “altered” without informing users or providing appeal mechanisms. Demoting content has significant negative impacts on freedom of expression. Meta should examine these policies to ensure that they clearly define why and when content is demoted, and provide users with access to an effective remedy (Article 2 of the ICCPR.) The Board sees little sense in the choice to limit the Manipulated Media policy to cover only people saying things they did not say, while excluding content showing people doing things they did not do. Meta informs us that at the time of introducing the rule, videos involving speech were considered the most misleading and easiest to reliably detect. Whatever the merits of that judgment when it was made, the Board is skeptical that the rationale continues to apply, especially as methods for manipulating visual content beyond speech have and continue to develop, and become more accessible to content creators. Second, it does not make sense to limit the application of the rule to video and exclude audio. Audio-only content can include fewer cues of inauthenticity and therefore be as or more misleading than video content. In principle, Meta’s rules about manipulated media should apply to all media – video, audio and photographs. However, including photographs may significantly expand the scope of the policy and make it more difficult to enforce at scale. This may lead to inconsistent enforcement with detrimental effects. If Meta sought to label videos, audio and photographs but only captured a small portion, this could create the false impression that non-labeled content is inherently trustworthy . Furthermore, in the process leading up to the policy advisory opinion on the Removal of COVID-19 Misinformation , Meta presented evidence that the effectiveness of labeling diminishes over time, possibly due to over-exposure. To avoid diminishing the effectiveness of labels applied to manipulated audio and video, the Board, at this point, recommends not to include photographs in the proposed scope expansion. However, it encourages Meta to conduct further research into the effects of manipulated photographs and consider extending its Manipulated Media policy to photographs if warranted and if Meta can assure effective enforcement at scale. Third, the Board is also unconvinced of the logic of making the Manipulated Media rule contingent on the technical measures used to create content. Experts the Board consulted, including at a dedicated roundtable, as well as public comments, almost unanimously agreed that the rule should be agnostic on the technical methods used (see e.g., PC-18036 - UCL Digital Speech Lab). There was broad agreement that non-AI-altered content is, for now, more prevalent and is not necessarily less misleading; for example, most phones have features to edit content (see e.g., PC-18047 - Nexus Horizon). 
Moreover, it is not technically feasible, especially at scale, to distinguish AI-generated or altered content from content that is either authentic or manipulated using means other than AI. For these reasons, the policy should not distinguish the treatment of “deep fakes” from content altered in other ways (e.g., “cheap fakes” or “shallow fakes”). For the preceding reasons, the majority of the Board uphold Meta’s decision to leave up the content. However, some Board Members believe the content should be removed even under the current standards, on the grounds that a false video presenting what might be misinterpreted as evidence of a serious crime is not protected speech, directly harms the integrity of the electoral process and is defamatory. According to the minority, such harms will not be prevented through less intrusive means. 9. Oversight Board Decision The Oversight Board upholds Meta’s decision to leave up the content, based on the Community Standards as they now exist. 10. Recommendations Content policy 1. To address the harms posed by manipulated media, Meta should reconsider the scope of its Manipulated Media policy in three ways to cover: (1) audio and audiovisual content, (2) content showing people doing things they did not do (as well as saying things they did not say), and (3) content regardless of the method of creation or alteration. The Board will consider this recommendation implemented when the Manipulated Media policy reflects these changes. 2. To ensure its Manipulated Media policy pursues a legitimate aim, Meta must clearly define in a single unified policy the harms it aims to prevent beyond preventing users being misled, such as preventing interference with the right to vote and to take part in the conduct of public affairs. The Board will consider this recommendation implemented when Meta changes the Manipulated Media policy accordingly. 3. To ensure the Manipulated Media policy is proportionate, Meta should stop removing manipulated media when no other policy violation is present and instead apply a label indicating the content is significantly altered and may mislead. The label should be attached to the media (such as a label at the bottom of a video) rather than the entire post, and should be applied to all identical instances of that media on the platform. The Board will consider this recommendation implemented when Meta launches the new labels and provides data on how many times the labels have been applied within the first 90-day period after launch. *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of more than 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. 
Return to Case Decisions and Policy Advisory Opinions" fb-h6ozkds3,Punjabi concern over the RSS in India,https://www.oversightboard.com/decision/fb-h6ozkds3/,"April 29, 2021",2021,,Politics,Dangerous individuals and organizations,Overturned,India,The Oversight Board has overturned Facebook's decision to remove a post under its Dangerous Individuals and Organisations Community Standard.,27728,4262,"Overturned April 29, 2021 The Oversight Board has overturned Facebook's decision to remove a post under its Dangerous Individuals and Organisations Community Standard. Standard Topic Politics Community Standard Dangerous individuals and organizations Location India Platform Facebook To read this decision in Punjabi click here. The Oversight Board has overturned Facebook’s decision to remove a post under its Dangerous Individuals and Organizations Community Standard. After the Board identified this case for review, Facebook restored the content. The Board expressed concerns that Facebook did not review the user’s appeal against its original decision. The Board also urged the company to take action to avoid mistakes which silence the voices of religious minorities. About the case In November 2020, a user shared a video post from Punjabi-language online media company Global Punjab TV. This featured a 17-minute interview with Professor Manjit Singh who is described as “a social activist and supporter of the Punjabi culture.” The post also included a caption mentioning Hindu nationalist organization Rashtriya Swayamsevak Sangh (RSS) and India’s ruling party Bharatiya Janata Party (BJP): “RSS is the new threat. Ram Naam Satya Hai. The BJP moved towards extremism.” In text accompanying the post, the user claimed the RSS was threatening to kill Sikhs, a minority religious group in India, and to repeat the “deadly saga” of 1984 when Hindu mobs massacred and burned Sikh men, women and children. The user alleged that Prime Minister Modi himself is formulating the threat of “Genocide of the Sikhs” on advice of the RSS President, Mohan Bhagwat. The user also claimed that Sikh regiments in the army have warned Prime Minister Modi of their willingness to die to protect the Sikh farmers and their land in Punjab. After being reported by one user, a human reviewer determined that the post violated Facebook’s Dangerous Individuals and Organizations Community Standard and removed it. This triggered an automatic restriction on the user’s account. Facebook told the user that they could not review their appeal of the removal because of a temporary reduction in review capacity due to COVID-19. Key findings After the Board identified this case for review, but prior to it being assigned to a panel, Facebook realized that the content was removed in error and restored it. Facebook noted that none of the groups or individuals mentioned in the content are designated as “dangerous” under its rules. The company also could not identify the specific words in the post which led to it being removed in error. The Board found that Facebook’s original decision to remove the post was not consistent with the company’s Community Standards or its human rights responsibilities. 
The Board noted that the post highlighted the concerns of minority and opposition voices in India that are allegedly being discriminated against by the government. It is particularly important that Facebook takes steps to avoid mistakes which silence such voices. While recognizing the unique circumstances of COVID-19, the Board argued that Facebook did not give adequate time or attention to reviewing this content. It stressed that users should be able to appeal cases to Facebook before they come to the Board and urged the company to prioritize restoring this capacity. Considering the above, the Board found the account restrictions that excluded the user from Facebook particularly disproportionate. It also expressed concerns that Facebook’s rules on such restrictions are spread across many locations and not all found in the Community Standards, as one would expect. Finally, the Board noted that Facebook’s transparency reporting makes it difficult to assess whether enforcement of the Dangerous Individuals and Organizations policy has a particular impact on minority language speakers or religious minorities in India. The Oversight Board’s decision The Board overturns Facebook’s original decision to remove the content. In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board has overturned Facebook’s decision to remove the content. The Board notes that, after it selected the case but before it was assigned to a panel, Facebook determined that the content was removed in error and restored it. The Board found that the content in question did not praise, support or represent any dangerous individual or organization. The post highlighted the alleged mistreatment of minorities in India by government and pro-government actors and had public interest value. The Board was concerned about mistakes in the review of the content and the lack of an effective appeals process available to the user. Facebook’s mistakes undermined the user’s freedom of expression as well as the rights of members of minorities in India to access information. 2. Case description The content touched on allegations of discrimination against minorities and silencing of the opposition in India by “Rashtriya Swayamsevak Sangh” (RSS) and the Bharatiya Janata Party (BJP). RSS is a Hindu nationalist organization that has allegedly been involved in violence against religious minorities in India. “BJP” is India’s ruling party to which the current Indian Prime Minister Narendra Modi belongs, and has close ties with RSS. In November 2020, a user shared a video post from Punjabi-language online media Global Punjab TV and an accompanying text. The post featured a 17-minute interview with Professor Manjit Singh, described as “a social activist and supporter of the Punjabi culture.” In its post, Global Punjab TV included the caption “RSS is the new threat. Ram Naam Satya Hai. The BJP moved towards extremism.” The media company also included an additional description “New Threat. Ram Naam Satya Hai! The BJP has moved towards extremism. Scholars directly challenge Modi!” The content was posted during India’s mass farmer protests and briefly touched on the reasons behind the protests and praised them. 
The user added accompanying text when sharing Global Punjab TV’s post in which they stated that the CIA designated the RSS a “fanatic Hindu terrorist organization” and that Indian Prime Minister Narendra Modi was once its president. The user wrote that the RSS was threatening to kill Sikhs, a minority religious group in India, and to repeat the “deadly saga” of 1984 when Hindu mobs attacked Sikhs. They stated that “The RSS used the Death Phrase ‘Ram naam sat hai’.” The Board understands the phrase ""Ram Naam Satya Hai"" to be a funeral chant that has allegedly been used as a threat by some Hindu nationalists. The user alleged that Prime Minister Modi himself is formulating the threat of “Genocide of the Sikhs” on advice of the RSS President, Mohan Bhagwat. The accompanying text ends with a claim that Sikhs in India should be on high alert and that Sikh regiments in the army have warned Prime Minister Modi of their willingness to die to protect the Sikh farmers and their land in Punjab. The post was up for 14 days and viewed fewer than 500 times before it was reported by another user for “terrorism.” A human reviewer determined that the post violated the Community Standard on Dangerous Individuals and Organizations and took down the content, which also triggered an automatic restriction on the use of the account for a fixed period of time. In its notification to the user, Facebook noted that its decision was final and could not be reviewed due to a temporary reduction in its review capacity due to COVID-19. For this reason, the user appealed to the Oversight Board. After the Case Selection Committee identified this case for review, but prior to it being assigned to a panel, Facebook determined the content was removed in error and restored it. The Board nevertheless proceeded in assigning the case to panel. 3. Authority and scope The Board has authority to review Facebook's decision under Article 2 (Authority to Review) of the Board's Charter and may uphold or reverse that decision under Article 3, Section 5 (Procedures for review: Resolution of the Charter). Facebook has not presented reasons for the content to be excluded in accordance with Article 2, Section 1.2.1 (Content not Available for Board Review) of the Board's Bylaws, nor has Facebook indicated that it considers the case to be ineligible under Article 2, Section 1.2.2 (Legal obligations) of the Bylaws. Under Article 3, Section 4 (Procedures for Review: Decisions) of the Board's Charter, the final decision may include a policy advisory statement, which will be taken into consideration by Facebook to guide its future policy development. Facebook restored the user’s content after determining their error, which likely would not have happened if the Board had not identified the case. In line with case decision 2020-004-IG-UA , Facebook’s choice to restore content does not exclude the case from review. Concerns over why the error occurred, the harm stemming from it, and the need to ensure it is not repeated remain pertinent. The Board offers users a chance to be heard and receive a full explanation of what happened. 4. Relevant standards The Oversight Board considered the following standards in its decision: I. 
Facebook’s Community Standards: Facebook’s Dangerous Individuals and Organizations policy explains that “in an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook.” The Standard further states that Facebook removes “content that expresses support or praise for groups, leaders, or individuals involved in these activities.” II. Facebook’s values: Facebook’s values are outlined in the introduction to the Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook may limit “Voice” in service of four other values, including “Safety” and “Dignity”: “Safety”: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. “Dignity”: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. III. Human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. Facebook’s commitment to respect human rights standards in line with the UNGPs was elaborated in a new corporate policy launched in March 2021. The Board’s analysis in this case was informed by the following human rights standards: Freedom of expression: Article 19, International Covenant on Civil and Political Rights (ICCPR); Human Rights Committee, General Comment No. 34 (2011); Special Rapporteur on freedom of opinion and expression, reports A/HRC/38/35 (2018) and A/74/486 (2019). The right to non-discrimination: Article 2, para. 1 and Article 26, ICCPR; Human Rights Committee, General Comment No. 23 (1994); General Assembly, Declaration on the Rights of Persons Belonging to National or Ethnic, Religious and Linguistic Minorities, as interpreted by the Independent Expert on Minority Issues in A/HRC/22/49, paras. 57-58 (2012); Special Rapporteur on Minority Issues, A/HRC/46/57 (2021). The right to an effective remedy: Article 2, para. 3, ICCPR; Human Rights Committee, General Comment 31 (2004); Human Rights Committee, General Comment No. 29 (2001). The right to security of person: Article 9, para. 1, ICCPR, as interpreted in General Comment No. 35, para. 9. 5. User statement The user indicated to the Board that the post was not threatening or criminal but simply repeated the video’s substance and reflected its tone. The user complained about account restrictions imposed on them. They suggested that Facebook should simply delete problematic videos and avoid restricting users’ accounts, unless they engage in threatening or criminal behavior. The user also claimed that thousands of people engage with their content and called for the account to be restored immediately. 6. Explanation of Facebook’s decision According to Facebook, following a single report against the post, the person who reviewed the content wrongly found a violation of the Dangerous Individuals and Organizations Community Standard.
Facebook informed the Board that the user’s post included no reference to individuals or organizations designated as dangerous. It followed that the post contained no violating praise. Facebook explained that the error was due to the length of the video (17 minutes), the number of speakers (two), the complexity of the content, and its claims about various political groups. The company added that content reviewers look at thousands of pieces of content every day and mistakes happen during that process. Due to the volume of content, Facebook stated that content reviewers are not always able to watch videos in full. Facebook was unable to specify the part of the content the reviewer found to violate the company’s rules. While the user appealed the decision to Facebook, they were informed that Facebook could not review the post again due to staff shortages caused by COVID-19. 7. Third-party submissions The Oversight Board received six public comments related to this case. Two comments were submitted from Europe and four from the United States and Canada. The submissions covered the following themes: the scope of political expression, Facebook’s legal right to moderate content, and the political context in India. To read public comments submitted for this case, please click here . 8. Oversight Board analysis 8.1 Compliance with Community Standards The Board concluded that Facebook’s original decision to remove the content was inconsistent with its Dangerous Individuals and Organizations Community Standard. The content referred to the BJP as well as the RSS and several of its leaders. Facebook explained that none of these groups or individuals are designated as “dangerous” under its Community Standards, and it was unable to identify the specific words in the content that led to its removal. The Board noted that, even if these organizations had been designated as dangerous, the content clearly criticized them. The content did praise one group – Indian farmers who were protesting. It therefore appears that inadequate time or attention was given to reviewing this content. The Board finds that the Dangerous Individuals and Organizations Community Standard is clear that violating content will be removed. The Introduction to the Community Standards , as well as Facebook’s Help Centre and Newsroom , explain that severe or persistent violations may result in a loss of access to some features. In this case, Facebook explained to the Board that it imposed an automatic restriction on the user's account for a fixed period of time for repeat violations. The Board found this would have been consistent with the company’s Community Standards had there been a violation. Facebook explained to the Board that account restrictions are automatic. These are imposed once a violation of the Community Standards has been determined and depend on the individual’s history of violations. This means that a person reviewing the content is not aware of whether removal will lead to an account restriction and is not involved in selecting that restriction. The Board notes that the consequences of enforcement mistakes can be severe and expresses concern that account level restrictions were wrongly applied in this case. 
8.2 Compliance with Facebook’s values The Board found that Facebook’s decision to remove the content was inconsistent with its values of “Voice,” “Dignity” and “Safety.” The content linked to a media report and related to important political issues, including commentary on the alleged violation of minority rights and the silencing of opposition by senior BJP politicians and the RSS. Therefore, the incorrect removal of the post undermined the values of “Voice” and “Dignity.” Facebook has indicated it prioritizes the value of “Safety” when enforcing the Community Standard on Dangerous Individuals and Organizations. However, in this case, the content did not refer to, or praise, any designated dangerous individual or organization. Instead, the Board found that the content criticized governmental actors and political groups. 8.3 Compliance with Facebook’s human rights responsibilities Facebook’s application of the Community Standard on Dangerous Individuals and Organizations was inconsistent with the company’s human rights responsibilities and its publicly stated commitments to the UNGPs. Principles 11 and 13 call on businesses to avoid causing or contributing to adverse human rights impacts that may arise from their own activities or their relationships with other parties, including state actors, and to mitigate them. I. Freedom of Expression and Information (Article 19, ICCPR) Article 19 of the ICCPR guarantees the right to freedom of expression, and places particular value on uninhibited public debate, especially concerning political figures and the discussion of human rights (General Comment 34, paras 11 and 34). Article 19 also guarantees the right to seek and receive information, including from the media (General Comment 34, para. 13). This is guaranteed without discrimination, and human rights law places particular emphasis on the importance of independent and diverse media, especially for ethnic and linguistic minorities (General Comment 34, para. 14). a. Legality The Board has previously raised concerns with the accessibility of the Community Standard on Dangerous Individuals and Organizations, including around Facebook’s interpretation of “praise,” and the process for designating dangerous individuals and organizations (case decision 2020-005-FB-UA). Precise rules are important to constrain discretion and prevent arbitrary decision-making (General Comment No. 34, para. 25), and also to safeguard against bias. They also help Facebook users understand the rules being enforced against them. The UN Special Rapporteur on freedom of expression has raised concern at social media companies adopting vague rules that broadly prohibit “praise” and “support” of leaders of dangerous organizations (report A/HRC/38/35, para. 26). The consequences of violating a rule, e.g. suspension of account functionalities or account disabling, must also be clear. The Board is concerned that information on account restrictions is spread across many locations, and not all set out in the Community Standards as one would expect. It is important to give users adequate notice and information when they violate rules so they can adjust their behavior accordingly. The Board notes its previous recommendations that Facebook should not expect users to synthesize rules from across multiple sources, and for rules to be consolidated in the Community Standards (case decision 2020-006-FB-FBR, Section 9.2).
The Board is concerned that the Community Standards are not translated into Punjabi, a language widely spoken globally with 30 million speakers in India. Facebook’s Internal Implementation Standards are also not available in Punjabi for moderators working in this language. This will likely compound the problem of users not understanding the rules, and increase the likelihood of moderators making enforcement errors. The possible specific impacts on a minority population raise human rights concerns (A/HRC/22/49, para. 57). b. Legitimate aim Article 19, para. 3 of the ICCPR states that legitimate aims include respect for the rights or reputations of others, as well as the protection of national security, public order, or public health or morals. Facebook has indicated that the aim of the Dangerous Individuals and Organizations Community Standard is to protect the rights of others. The Board is satisfied that the policy pursues a legitimate aim, in particular to protect the right to life, security of person, and equality and non-discrimination (General Comment 34, para. 28; Oversight Board decision 2020-005-FB-UA). c. Necessity and proportionality Restrictions must be necessary and proportionate to achieve a legitimate aim. There must be a direct connection between the necessity and proportionality of the specific action taken and the threat stemming from the expression (General Comment 34, para. 35). Facebook has acknowledged that its decision to remove the content was a mistake, and does not argue that this action was necessary or proportionate. Mistakes which restrict expression on political issues are a serious concern. It is particularly worrying if such mistakes are widespread, and especially if this impacts minority language speakers or religious minorities who may already be politically marginalized. The UN Special Rapporteur on minority issues has expressed concern at hate speech targeting minority groups on Facebook in India (A/HRC/46/57, para. 40). In such regional contexts, errors can silence minority voices that seek to counter hateful and discriminatory narratives, as in this case. The political context in India when this post was made, with mass anti-government farmer protests and increasing governmental pressure on social media platforms to remove related content, underscores the importance of getting decisions right. In this case, the content related to the protests and the silencing of opposition voices. It also included a link to an interview from a minority language media outlet on the topics. Dominant platforms should avoid undermining the expression of minorities who are protesting their government and uphold media pluralism and diversity (General Comment 34, para. 40). The account restrictions which wrongfully excluded the user from the platform during this critical period were particularly disproportionate. Facebook explained that they could not carry out an appeal on the user’s content due to reduced capacity during the COVID-19 pandemic. While the Board appreciates these unique circumstances, it again stresses the importance of Facebook providing transparency and accessible processes for appealing their decisions (UNGPs, Principle 11; A/74/486, para. 53). As the Board stated in case decision 2020-004-IG-UA , cases should be appealed to Facebook before they come to the Board. To ensure users’ access to remedy, Facebook should prioritize the return of this capacity as soon as possible. The Board acknowledges that mistakes are inevitable when moderating content at scale. 
Nevertheless, Facebook’s responsibility to prevent, mitigate and address adverse human rights impacts requires learning from these mistakes (UNGPs, Principles 11 and 13). It is not possible to tell from one case whether this enforcement was symptomatic of intentional or unintentional bias on the part of the reviewer. Facebook also declined to provide specific answers to the Board’s questions regarding possible communications from Indian authorities to restrict content around the farmers’ protests, content critical of the government over its treatment of farmers, or content concerning the protests. Facebook determined that the requested information was not reasonably required for decision-making in accordance with the intent of the Charter and/or cannot or should not be provided because of legal, privacy, safety, or data protection restrictions or concerns. Facebook cited the Oversight Board’s Bylaws, Article 2, Section 2.2.2, to justify its refusal. Facebook did answer the Board’s question on how its content moderation in India is kept independent of government influence. The company explained that its staff receive training specific to their region, market, or role as part of the Global Ethics and Compliance initiative, which fosters a culture of honesty, transparency, integrity, accountability and ethical values. Further, Facebook’s staff are bound by a Code of Conduct and an Anti-Corruption Policy. The Board emphasizes the importance of processes for reviewing content moderation decisions, including auditing, to check for and correct any bias in manual and automated decision-making, especially in relation to places experiencing periods of crisis and unrest. These assessments should take into account the potential for coordinated campaigns by governments and non-state actors to maliciously report dissent. Transparency is essential to ensure public scrutiny of Facebook’s actions in this area. The lack of detail in Facebook’s transparency reporting makes it difficult for the Board or other actors to assess, for example, whether enforcement of the Dangerous Individuals and Organizations policy has particular impacts on users, and particularly minority language speakers, in India. To inform the debate, Facebook should make more data public, and provide analysis of what it means. 9. Oversight Board Decision The Oversight Board overturns Facebook’s decision to take down the content and requires the post to be restored. The Board notes that Facebook has already taken action to this effect. 10. Policy advisory statement The following recommendations are numbered, and the Board requests that Facebook provide an individual response to each as drafted. Accessibility 1. Facebook should translate its Community Standards and Internal Implementation Standards into Punjabi. Facebook should aim to make its Community Standards accessible in all languages widely spoken by its users. This would allow a full understanding of the rules that users must abide by when using Facebook’s products. It would also make it simpler for users to engage with Facebook over content that may violate their rights. Right to remedy 2. In line with the Board’s recommendation in case 2020-004-IG-UA, the company should restore human review and access to a human appeals process to pre-pandemic levels as soon as possible while fully protecting the health of Facebook’s staff and contractors. Transparency reporting 3.
Facebook should improve its transparency reporting to increase public information on error rates by making this information viewable by country and language for each Community Standard. The Board underscores that more detailed transparency reports will help the public spot areas where errors are more common, including potential specific impacts on minority groups, and alert Facebook to correct them. *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" fb-hffvzenh,Girls’ Education in Afghanistan,https://www.oversightboard.com/decision/fb-hffvzenh/,"December 8, 2023",2023,December,"TopicChildren / Children's rights, Discrimination, Sex and gender equalityCommunity StandardDangerous individuals and organizations",Dangerous individuals and organizations,Overturned,Afghanistan,A user appealed Meta’s decision to remove a Facebook post discussing the importance of educating girls in Afghanistan.,4419,651,"Overturned December 8, 2023 A user appealed Meta’s decision to remove a Facebook post discussing the importance of educating girls in Afghanistan. Summary Topic Children / Children's rights, Discrimination, Sex and gender equality Community Standard Dangerous individuals and organizations Location Afghanistan Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not involve a public comment process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post discussing the importance of educating girls in Afghanistan. This case highlights an error in the company’s enforcement of its Dangerous Organizations and Individuals policy. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In July 2023, a Facebook user in Afghanistan posted text in Pashto describing the importance of educating girls in Afghanistan. The user called on people to continue raising their concerns and noted the consequences of failing to take these concerns to the Taliban. The user also states that preventing access to education for girls will be a loss to the nation. 
Meta originally removed the post from Facebook, citing its Dangerous Organizations and Individuals policy, under which the company removes content that “praises,” “substantively supports” or “represents” individuals and organizations it designates as dangerous, including the Taliban. The policy allows content that discusses a dangerous organization or individual in a neutral way or that condemns its actions. After the Board brought this case to Meta’s attention, the company determined that the content did not violate the Dangerous Organizations and Individuals policy, and that the removal of the post was incorrect. The company then restored the content. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights mistakes in enforcement of Meta’s Dangerous Organizations and Individuals policy, which can have a negative impact on users’ capacities to share political commentary on Meta’s platforms. Here, this was specifically discussion of women’s education in Afghanistan after the Taliban takeover. In a previous case, the Board recommended that Meta “add criteria and illustrative examples to its Dangerous Organizations and Individuals policy to increase understanding of exceptions for neutral discussion, condemnation and news reporting,” ( Shared Al Jazeera Post decision, recommendation no. 1). Meta reported in its Q2 2023 quarterly update that this recommendation had been fully implemented. Furthermore, the Board recommended that Meta “implement an internal audit procedure to continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes,” ( Breast Cancer Symptoms and Nudity decision, recommendation no. 5). Meta has reported that it is implementing this recommendation but has not published information to demonstrate implementation. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors. Return to Case Decisions and Policy Advisory Opinions" fb-hh0ajmit,Libya Floods,https://www.oversightboard.com/decision/fb-hh0ajmit/,"February 27, 2024",2024,,"TopicFreedom of expression, Journalism, Natural disastersCommunity StandardDangerous individuals and organizations",Dangerous individuals and organizations,Overturned,Libya,A user appealed Meta’s decision to remove a Facebook post discussing recent floods in Libya. This case highlights over-enforcement of the company's Dangerous Organizations and Individuals policy.,6704,1024,"Overturned February 27, 2024 A user appealed Meta’s decision to remove a Facebook post discussing recent floods in Libya. This case highlights over-enforcement of the company's Dangerous Organizations and Individuals policy. 
Summary Topic Freedom of expression, Journalism, Natural disasters Community Standard Dangerous individuals and organizations Location Libya Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public of the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not involve a public comment process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post discussing recent floods in Libya. In September 2023, there were devastating floods in northeast Libya caused by Storm Daniel and the collapse of two dams. A video in support of the victims of the floods, especially in the city of Derna, was removed for violating Meta’s Dangerous Organizations and Individuals policy. This case highlights an over-enforcement of the company's Dangerous Organizations and Individuals policy, which adversely impacts users’ freedom to express solidarity and sympathy in difficult situations. After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post. Case Description and Background In September 2023, a Facebook user posted a video containing two images without a caption. The background image showed two individuals in military uniform with badges. One of the badges had Arabic text that read “Brigade 444 – Combat.” This image was overlaid with the second one that depicted two people pulling a third person out from a body of water. The people on the sides had the Arabic words for “west” and “south” on their chests, while the person in the middle had the word “east.” In August 2023, armed clashes broke out in Tripoli between the 444th Combat Brigade and the Special Deterrence Force . These are two of the militias vying for power since the 2011 overthrow of Muammar Gaddafi. In their submission to the Board, the user stated that they posted the video to clarify that Libya was “one people” with “one army” supporting the north-eastern city of Derna after the flooding that resulted from dam collapses following Storm Daniel in September 2023. Meta originally removed the post from Facebook, citing its Dangerous Organizations and Individuals policy . After the Board brought this case to Meta’s attention, the company determined that its removal was incorrect and restored the content to Facebook. The company told the Board that the content did not contain any references to a designated organization or individual and therefore did not violate Meta’s policies. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook and Instagram users. 
Case Significance This case highlights the over-enforcement of Meta's Dangerous Organizations and Individuals policy, including through automated systems, which can have a negative impact on users’ freedom of expression in sharing commentary about current events on Meta’s platforms. In the case of Öcalan's Isolation , the Board has recommended that Meta “evaluate automated moderation processes for enforcement of the Dangerous Organizations and Individuals policy,"" (recommendation no. 2). Meta reported that it would take no action on this recommendation as ""the policy guidance in this case does not directly contribute to the performance of automated enforcement."" In terms of automation, the Board has urged Meta to implement an internal audit procedure to continually analyze a statistically representative sample of automated removal decisions to reverse and learn from enforcement mistakes ( Breast Cancer Symptoms and Nudity , recommendation no. 5). Meta has reported implementing this recommendation but has not published information to demonstrate complete implementation. As of Q4 2022, Meta reported having ""completed the global roll out of new, more specific messaging that lets people know whether automation or human review led to the removal of their content from Facebook,"" but did not provide information as evidence of this. In the same decision, the Board also recommended that Meta ""expand transparency reporting to disclose data on the number of automated removal decisions per Community Standard, and the proportion of those decisions subsequently reversed following human review,"" ( Breast Cancer Symptoms and Nudity , recommendation no. 6). As of Q3 2023, Meta reported that it was establishing a consistent accounting methodology for such metrics. In the case of Punjabi Concern Over the RSS in India , the Board urged Meta to ""improve its transparency reporting to increase public information on error rates by making this information viewable by country and language for each Community Standard,"" (recommendation no. 3). As of Q3 2023, Meta reported that it was working to define its accuracy metrics, alongside its work on recommendation no. 6 in Breast Cancer Symptoms and Nudity . The Board reiterates that full implementation of its recommendations will help to decrease enforcement errors under the Dangerous Organizations and Individuals policy, reducing the number of users whose freedom of expression is infringed by wrongful removals. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. The Board emphasizes that full adoption of these recommendations, along with the published information to demonstrate successful implementation, could reduce the number of enforcement errors under the Dangerous Organizations and Individuals policy on Meta's platforms. 
Return to Case Decisions and Policy Advisory Opinions" fb-i04m3kvf,Breast Self-Exam,https://www.oversightboard.com/decision/fb-i04m3kvf/,"December 18, 2023",2023,December,"TopicFreedom of expression, Health, Sex and gender equalityCommunity StandardAdult nudity and sexual activity",Adult nudity and sexual activity,Overturned,Spain,A user appealed Meta’s decision to remove a Facebook post that included a video providing instructions on how to perform a breast self-examination.,5672,874,"Overturned December 18, 2023 A user appealed Meta’s decision to remove a Facebook post that included a video providing instructions on how to perform a breast self-examination. Summary Topic Freedom of expression, Health, Sex and gender equality Community Standard Adult nudity and sexual activity Location Spain Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not involve a public comments process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post that included a video providing instructions on how to perform a breast self-examination. After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post. Case Description and Background In April 2014 - more than nine years ago - a Facebook user posted a video with a caption. The caption explains that the video provides instructions on how women should undertake a breast self-examination each month to check for breast cancer. The animated video depicts a nude female breast and gives information on breast cancer and when to reach out to a doctor. Additionally, the video specifies that a doctor’s advice should be followed. The post was viewed fewer than 500 times. Nine years after it was first shared, Meta removed the post from the platform under its Adult Nudity and Sexual Activity policy , which prohibits “imagery of real nude adults” if it depicts “uncovered female nipples’’ except, among other reasons, for “breast cancer awareness” purposes. However, Meta has since acknowledged that the content falls within the allowance of raising breast cancer awareness and has restored the content to Facebook. It is unclear why the post was enforced nine years after its original posting. In her appeal to the Board, the user expressed surprise at the content being taken down after nine years and stated that the purpose of posting the video was to educate women on conducting a breast self-examination, thereby enhancing their likelihood of detecting early-stage symptoms and ultimately saving lives. The user stated that “if they were male breasts, nothing would have happened.” Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). 
When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance The case highlights Meta’s inconsistency in enforcing allowances for medical and health content, as permitted under the company’s Adult Nudity and Sexual Activity Community Standard. Women’s rights to freedom of expression and health are affected by such inconsistency. This case emphasizes the connection between these two rights and the necessity of effective content moderation to allow for the raising of awareness about a cause or for educational or medical reasons. In one of its first case decisions, the Board issued recommendations related to Meta’s Adult Nudity and Sexual Activity policy, specifically on this issue. The Board urged Meta to improve the automated detection of images with text-overlay to ensure that posts raising awareness of breast cancer symptoms were not wrongly flagged for review, ( Breast Cancer Symptoms and Nudity , recommendation no. 1). In addition, the Board encouraged Meta to “implement an internal audit procedure to continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes,” ( Breast Cancer Symptoms and Nudity , recommendation no. 5). Meta reported implementation on the first recommendation and published information to demonstrate it. For the second, the company described this as work it already does but did not publish information to demonstrate implementation. The Board has also emphasized the importance of moderators reviewing user appeals submitted to Meta, specifically asking the company to ensure users can appeal decisions taken by automated systems to a human when their content is found to have violated Facebook’s Community Standard on Adult Nudity and Sexual Activity,” ( Breast Cancer Symptoms and Nudity , recommendation no. 4). Meta declined to implement this recommendation after assessing feasibility. The Board reiterates that full implementation of these recommendations is necessary to help reduce the error rate of content wrongly removed under the allowance in the Adult Nudity and Sexual Activity Community Standard, to raise awareness or educate users about early symptoms of breast cancer. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-i2t6526k,Myanmar post about Muslims,https://www.oversightboard.com/decision/fb-i2t6526k/,"January 28, 2021",2021,January,"TopicPolitics, Religion, ViolenceCommunity StandardHate speech","Type of DecisionStandardPolicies and TopicsTopicPolitics, Religion, ViolenceCommunity StandardHate speechRegion/CountriesLocationChina, France, MyanmarPlatformPlatformFacebook",Overturned,"China, France, Myanmar",The Oversight Board has overturned Facebook's decision to remove a post under its hate speech Community Standard.,18688,2851,"Overturned January 28, 2021 The Oversight Board has overturned Facebook's decision to remove a post under its hate speech Community Standard. 
Standard Topic Politics, Religion, Violence Community Standard Hate speech Location China, France, Myanmar Platform Facebook To read this decision in Burmese click here . ဆုံးဖြတ်ချက် အပြည့်အစုံကို ဗမာဘာသာဖြ ဖြင့် ဖတ်ရှူရန်၊ ဤနေရာကို နှိပ်ပါ - The Oversight Board has overturned Facebook’s decision to remove a post under its Hate Speech Community Standard. The Board found that, while the post might be considered offensive, it did not reach the level of hate speech. About the case On October 29, 2020, a user in Myanmar posted in a Facebook group in Burmese. The post included two widely shared photographs of a Syrian toddler of Kurdish ethnicity who drowned attempting to reach Europe in September 2015. The accompanying text stated that there is something wrong with Muslims (or Muslim men) psychologically or with their mindset. It questioned the lack of response by Muslims generally to the treatment of Uyghur Muslims in China, compared to killings in response to cartoon depictions of the Prophet Muhammad in France. The post concludes that recent events in France reduce the user’s sympathies for the depicted child, and seems to imply the child may have grown up to be an extremist. Facebook removed this content under its Hate Speech Community Standard. Key findings Facebook removed this content as it contained the phrase “[there is] something wrong with Muslims psychologically.” As its Hate Speech Community Standard prohibits generalized statements of inferiority about the mental deficiencies of a group on the basis of their religion, the company removed the post. The Board considered that while the first part of the post, taken on its own, might appear to make an insulting generalization about Muslims (or Muslim men), the post should be read as a whole, considering context. While Facebook translated the text as: “[i]t’s indeed something’s wrong with Muslims psychologically,” the Board’s translators suggested: “[t]hose male Muslims have something wrong in their mindset.” They also suggested that the terms used were not derogatory or violent. The Board’s context experts noted that, while hate speech against Muslim minority groups is common and sometimes severe in Myanmar, statements referring to Muslims as mentally unwell or psychologically unstable are not a strong part of this rhetoric. Taken in context, the Board believes that the text is better understood as a commentary on the apparent inconsistency between Muslims’ reactions to events in France and in China. That expression of opinion is protected under Facebook’s Community Standards and does not reach the level of hate speech. Considering international human rights standards on limiting freedom of expression, the Board found that, while the post might be considered pejorative or offensive towards Muslims, it did not advocate hatred or intentionally incite any form of imminent harm. As such, the Board does not consider its removal to be necessary to protect the rights of others. The Board also stressed that Facebook’s sensitivity to anti-Muslim hate speech was understandable, particularly given the history of violence and discrimination against Muslims in Myanmar and the increased risk ahead of the country’s general election in November 2020. However, for this specific post, the Board concludes that Facebook was incorrect to remove the content. The Oversight Board’s decision The Oversight Board overturns Facebook’s decision to remove the content and requires that the post be restored. 
*Case summaries provide an overview of the case and do not have precedential value. 1. Decision Summary The Oversight Board has overturned Facebook’s decision to remove content it considered hate speech. The Board concludes that Facebook categorized a post as hate speech when it did not rise to that level. 2. Case Description On October 29, 2020, a Facebook user in Myanmar posted in Burmese to a group which describes itself as a forum for intellectual discussion. The post includes two widely shared photographs of a Syrian toddler of Kurdish ethnicity who drowned in the Mediterranean Sea in September 2015. The accompanying text begins by stating that there is something wrong with Muslims (or Muslim men) psychologically or with their mindset. It questioned the lack of response by Muslims generally to the treatment of Uyghur Muslims in China, compared to killings in response to cartoon depictions of the Prophet Muhammad in France. The post concludes that recent events in France reduce the user’s sympathies for the depicted child, and seems to imply the child may have grown up to be an extremist. Facebook considered the statement “[there is] something wrong with Muslims psychologically” to constitute ‘Tier 2’ Hate Speech under its Community Standards. As this prohibits generalized statements of inferiority about the mental deficiencies of a person or group of people on the basis of their religion, Facebook removed the content. Prior to removal, on November 3, 2020, the two photographs included in the post had warning screens placed on them under the Violent and Graphic Content Community Standard. According to Facebook, nothing else in the post violated its policies. The user appealed to the Oversight Board, arguing they had not used hate speech. 3. Authority and Scope The Oversight Board has the authority to review Facebook’s decision under the Board’s Charter Article 2.1 and may uphold or overturn that decision under Article 3.5. This post is within the Oversight Board’s scope of review: it does not fit within any excluded category of content set forth in Article 2, Section 1.2.1 of the Board’s Bylaws and it does not conflict with Facebook’s legal obligations under Article 2, Section 1.2.2 of the Bylaws. 4. Relevant Standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards The Community Standard on Hate Speech states that Facebook does “not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.” Facebook defines hate speech as an attack based on protected characteristics. Attacks may be “violent or dehumanizing speech, harmful stereotypes, statements of inferiority, or calls for exclusion or segregation” and are separated into three tiers of prohibited content. Under Tier 2, prohibited content includes: generalizations that state inferiority (in written or visual form) in the following ways [...] mental deficiencies are defined as those about: intellectual capacity [...] education [...] mental health. Protected characteristics are “race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability” with some protection for age and immigration status. II.
Facebook’s Values The introduction to the Community Standards notes that “Voice” is Facebook’s paramount value, but the platform may limit “Voice” in service of several other values, including “Safety.” Facebook’s definition of “Safety” states: “We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.” III. Relevant Human Rights Standards Considered by the Board The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. Drawing upon the UNGPs, the following international human rights standards were considered in this case: 5. User Statement The user submitted their appeal against Facebook’s decision to remove the content in November 2020. The user stated that their post did not violate Facebook’s Community Standards and that they did not use hate speech. The user explained that their post was sarcastic and meant to compare extremist religious responses in different countries. The user also stated that Facebook is not able to distinguish between sarcasm and serious discussion in the Burmese language and context. 6. Explanation of Facebook’s Decision Facebook removed this content based on its Hate Speech Community Standard. Facebook considered this post a Tier 2 attack under that standard, as a generalization of mental deficiency regarding Muslims. According to information provided by Facebook, which is not currently in the public domain, generalizations “are unqualified negative statements, with no room for reason, factual accuracy, or argument and they infringe on the rights and reputations of others.” Facebook stated that the only component of the post that violated Community Standards was the statement that something is wrong with Muslims psychologically. Facebook also argued that its Hate Speech Community Standard aligns with international human rights standards. According to Facebook, although prohibited speech under this standard may not rise to “advocacy or incitement to violence,” such expression can be restricted as it has the “capacity to trigger acts of discrimination, violence, or hatred, particularly if distributed widely, virally, or in contexts with severe human rights risks.” As the context for this case, Facebook cited the recent attack in Nice, France, which left three people dead, the ongoing detention of Uyghur Muslims in China, the Syrian refugee crisis, and anti-Muslim violence in general. 7. Third party submissions The Board received 11 public comments related to this case. While one comment contained no content, 10 comments provided substantive submissions on this case. The regional breakdown of the comments was: one from Asia Pacific and Oceania, four from Europe and five from the United States and Canada. The submissions covered various themes, for example: whether the provocative and objectionable post clearly constitutes hate speech; whether the content was an attack against Muslims; whether the user’s intent was to shed light on the treatment of Uyghur Muslims in China and the Syrian refugee crisis; whether the user’s intent was to condemn rather than promote the death of individuals; whether the post’s reference to retaliation could imply a direct call for physical violence against Chinese nationals; as well as feedback for improving the Board’s public comment process. 8.
Oversight Board Analysis 8.1 Compliance with Community Standards The post does not constitute Hate Speech within the meaning of the relevant Community Standard. In this case, Facebook indicated that the speech in question was a Tier 2 attack under the Hate Speech Community Standard. The protected characteristic was religious affiliation, as the content described Muslims or Muslim men. According to Facebook, the attack was a “generalization that state[s] inferiority” about “mental deficiencies.” This section prohibits attacks about “[m]ental health, including but not limited to: mentally ill, retarded, crazy, insane” and “[i]ntellectual capacity, including, but not limited to: dumb, stupid, idiots.” Although the first sentence of the post, taken on its own, might appear to be making an offensive and insulting generalization about Muslims (or Muslim men), the post should be read as a whole, considering context. Human rights organizations and other experts have indicated that hate speech against Muslim minority groups in Myanmar is common and sometimes severe, in particular around the general election on November 8, 2020 ( FORUM-ASIA briefing paper on pervasive hate speech and the role of Facebook in Myanmar, pages 5 - 8, Report of the UN independent international fact-finding mission on Myanmar, A/HRC/42/50 , paras 1303, 1312, 1315 and 1317). However, there was no indication that statements referring to Muslims as mentally unwell or psychologically unstable are a significant part of anti-Muslim rhetoric in Myanmar. Further, while Facebook translated the sentence as “[i]t’s indeed something’s wrong with Muslims psychologically,” the Board’s translators found it stated “[t]hose male Muslims have something wrong in their mindset.” The translators also suggested that while the terms used could show intolerance, they were not derogatory or violent. The post is thus better read, in light of context, as a commentary pointing to the apparent inconsistency between Muslims’ reactions to events in France and in China. That expression of opinion is protected under the Community Standards, and does not reach the level of hate speech that would justify removal. 8.2 Compliance with Facebook Values Facebook’s decision to remove the content does not comply with the company’s values. Although Facebook’s value of “Safety” is important, particularly in Myanmar given the context of discrimination and violence against Muslims, this content did not pose a risk to “Safety” that would justify displacing “Voice.” 8.3 Compliance with Human Rights Standards Restoring the post is consistent with international human rights standards. According to Article 19 of the ICCPR individuals have the right to seek and receive information, including controversial and deeply offensive information (General Comment No. 34). Some Board Members noted the UN Special Rapporteur on Freedom of Expression’s 2019 report on online hate speech that affirms that international human rights law “protects the rights to offend and mock” (para. 17). Some Board Members expressed concerns that commentary on the situation of Uyghur Muslims may be suppressed or under-reported in countries with close ties to China. At the same time, the Board recognizes that the right to freedom of expression is not absolute and can be subject to limitations under international human rights law. First, the Board assessed whether the content was subject to a mandatory restriction under international human rights law. 
The Board found that the content was not advocacy of religious hatred constituting incitement to discrimination, hostility or violence, which states are required to prohibit under ICCPR Article 20, para. 2. The Board considered the factors cited in the UN Rabat Plan of Action, including the context, the content of the post, and the likelihood of harm. While the post had a pejorative tone, the Board did not consider that it advocated hatred, and did not consider that it intentionally incited any form of imminent harm. The Board also discussed whether this content could be restricted under ICCPR Article 19, para. 3. This provision of international human rights law requires restrictions on expression to be defined and easily understood (legality requirement), to have the purpose of advancing one of several listed objectives (legitimate aim requirement), and to be necessary and narrowly tailored to the specific objective (necessity and proportionality requirement). The Board recognizes that Facebook was pursuing a legitimate aim through the restriction: to protect the rights of others to life and security of person, to protection from physical or mental injury, and to protection from discrimination. The Board acknowledges that online hate speech in Myanmar has been linked to serious offline harm, including accusations of potential crimes against humanity and genocide. As such, the Board recognized the importance of protecting the rights of those who may be subject to discrimination and violence, and who may even be at risk of atrocities. Nonetheless, the Board concludes that, while some may consider the post offensive and insulting towards Muslims, its removal was not necessary to protect the rights of others. The Board recognizes that online hate speech is a complex issue to moderate, and that linguistic and cultural features such as sarcasm make it more difficult. In this case, there were no indications that the post contained threats against identifiable individuals. The Board acknowledges it is difficult for Facebook to evaluate the intent behind individual posts when moderating content at scale and in real time. While not decisive, the Board considered the user’s claim in their appeal that they are opposed to all forms of religious extremism. The fact that the post was within a group that claimed to be for intellectual and philosophical discussion, and also drew attention to discrimination against Uyghur Muslims in China, lends support to the user’s claim. At the same time, some Board Members found the user’s references to the refugee child who had died to be insensitive. The Board emphasizes that restoring any particular post does not imply any agreement with its content. Even in circumstances where discussion of religion or identity is sensitive and may cause offense, open discussion remains important. Removing this content is unlikely to reduce tensions or protect persons from discrimination. There are more effective ways to encourage understanding between different groups. The Board also emphasizes that Facebook’s sensitivity to the possibility of anti-Muslim hate speech in Myanmar is understandable, given the history of violence and discrimination against Muslims in that country, the context of increased risk around the elections, and the limited information available at the time. In these circumstances, Facebook’s caution demonstrated a general recognition of the company’s human rights responsibilities.
Nonetheless, for this specific piece of content, the Board concludes that Facebook was incorrect to remove the content. 9. Oversight Board Decision 9.1 Content Decision The Oversight Board overturns Facebook’s decision to take down the content, requiring the post to be restored. The Board understands the photos will again have a warning screen under the Violent and Graphic Content Community Standard. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and must be agreed by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies, LLC whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" fb-i964kkm6,Colombian police cartoon,https://www.oversightboard.com/decision/fb-i964kkm6/,"September 15, 2022",2022,,"TopicFreedom of expression, Governments, MistreatmentCommunity StandardDangerous individuals and organizations","Policies and TopicsTopicFreedom of expression, Governments, MistreatmentCommunity StandardDangerous individuals and organizations",Overturned,Colombia,The Oversight Board has overturned Meta’s original decision to remove a Facebook post of a cartoon depicting police violence in Colombia.,26351,4105,"Overturned September 15, 2022 The Oversight Board has overturned Meta’s original decision to remove a Facebook post of a cartoon depicting police violence in Colombia. Standard Topic Freedom of expression, Governments, Mistreatment Community Standard Dangerous individuals and organizations Location Colombia Platform Facebook Colombian police cartoon public comments The Oversight Board has overturned Meta’s original decision to remove a Facebook post of a cartoon depicting police violence in Colombia. The Board is concerned that Media Matching Service banks, which can automatically remove images that violate Meta’s rules, can amplify the impact of incorrect decisions to bank content. In response, Meta must urgently improve its procedures to quickly remove non-violating content from these banks. About the case In September 2020, a Facebook user in Colombia posted a cartoon resembling the official crest of the National Police of Colombia, depicting three figures in police uniform holding batons over their heads. They appear to be kicking and beating another figure who is lying on the ground with blood beneath their head. The text of the crest reads, in Spanish, “República de Colombia - Policía Nacional - Bolillo y Pata.” Meta translated the text as “National Police – Republic of Colombia – Baton and Kick.” According to Meta, in January 2022, 16 months after the user posted the content, the company removed the content as it matched with an image in a Media Matching Service bank. These banks can automatically identify and remove images which have been identified by human reviewers as violating the company’s rules. As a result of the Board selecting this case, Meta determined that the post did not violate its rules and restored it. 
The company also restored other pieces of content featuring this cartoon which had been incorrectly removed by its Media Matching Service banks. Key findings As Meta has now recognized, this post did not violate its policies. Meta was wrong to add this cartoon to its Media Matching Service bank, which led to a mass and disproportionate removal of the image from the platform, including the content posted by the user in this case. Despite 215 users appealing these removals, and 98% of those appeals being successful, Meta still did not remove the cartoon from this bank until the case reached the Board. This case shows how, by using automated systems to remove content, Media Matching Service banks can amplify the impact of incorrect decisions by individual human reviewers. The stakes of mistaken additions to such banks are especially high when, as in this case, the content consists of political speech criticizing state actors. In response, Meta should develop mechanisms to quickly remove any non-violating content which is incorrectly added to its Media Matching Service banks. When decisions to remove content included in these banks are frequently overturned on appeal, this should immediately trigger reviews which can remove this content from the bank. The Board is particularly concerned that Meta does not measure the accuracy of Media Matching Service banks for specific content policies. Without this data, which is crucial for improving how these banks work, the company cannot tell whether this technology works more effectively for some Community Standards than others. The Oversight Board’s decision The Oversight Board overturns Meta’s original decision to remove the content. The Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decision to remove a Facebook post of a cartoon depicting police violence in Colombia. The Board finds that the cartoon, which did not violate any Facebook Community Standard, was wrongly entered into one of Meta’s Media Matching Service banks leading to the incorrect removal of the post. These banks can automatically identify and remove images which have been previously identified as violating the company’s rules. After the Board selected the case, Meta found this content to be non-violating and acknowledged that it was removed in error. The Board is concerned that the wrongful inclusion of non-violating content into the Media Matching Service banks results in disproportionate wrongful enforcement. Moreover, the failure to prevent or remedy this error quickly compounds this problem over time. Hence, Meta must urgently improve its review mechanisms to remove non-violating content from these banks quickly, monitor the performance of these mechanisms and publicly release information as a part of its transparency efforts. 2. Case description and background In September 2020, a Facebook user in Colombia posted a picture of a cartoon as a comment on another user’s post. The cartoon resembles the official crest of the National Police of Colombia and depicts three figures wearing police uniforms and holding batons over their heads. The figures appear to be kicking and beating another figure who is lying on the ground with blood beneath their head. A book and a pencil are shown next to the figure on the ground. 
The text on the crest reads in Spanish, “República de Colombia - Policía Nacional - Bolillo y Pata.” Meta’s regional markets team translated the text as “National Police – Republic of Colombia – Baton and Kick.” The post was made during a time of widespread protest in the country following a police killing. According to Meta, in January 2022, 16 months after the content was originally posted to Facebook, the company removed the content as it matched with an image in a Media Matching Service bank of content that violates Facebook’s Dangerous Individuals and Organizations Community Standard. The user appealed, and Meta maintained its decision to remove the content, but based its removal decision on the Violence and Incitement Community Standard instead. At the time of removal, the content had been viewed three times and received no reactions or user reports. As a result of the Board selecting this case, Meta reviewed the content again and determined it did not violate the Dangerous Individuals and Organizations Community Standard or the Violence and Incitement Community Standard. The content was then restored to the platform about one month after it had been removed. Meta also informed the Board that because the image was in a Media Matching Service bank, identical content, including this case, had been removed from the platform. 215 of those removals were appealed by users, and 210 of those appeals were successful, meaning that the reviewers in this set of cases decided the content was not violating. All remaining removals, as well as any corresponding strikes and feature limits, were also reversed by Meta after the Board selected the case. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When the Board selects cases like this one, where Meta, subsequently acknowledges that it made an error, the Board reviews the original decision to increase understanding of the content moderation process which led to the error and to make structural recommendations to reduce errors and treat users more fairly in the future. 4. Sources of authority The Oversight Board considered the following sources of authority: I. Oversight Board decisions: The Board’s most relevant decision to this case is the “Colombia protests” decision (2021-010-FB-UA). In this decision the Board highlighted the public interest in allowing content criticizing the government during protests, in particular in contexts where states are accused of violating human rights. II. 
Meta’s content policies: Facebook’s Dangerous Individuals and Organizations Community Standard states that Meta “remove[s] praise, substantive support and representation of various dangerous organizations.” Facebook’s Violence and Incitement Community Standard states that Meta “aim[s] to prevent potential offline harm that may be related to Facebook” and that it restricts expression “when [it] believe[s] there is a genuine risk of physical harm or direct threats to public safety.” Facebook’s Violent and Graphic Content Community Standard states that Meta “remove[s] content that glorifies violence or celebrates the suffering or humiliation of others.” III. Meta’s values: Meta’s values are outlined in the introduction to the Facebook Community Standards, where the value of “Voice” is described as “paramount.” Meta limits “Voice” in service of four values, namely “Authenticity,” “Safety,” “Privacy,” and “Dignity.” “Safety” and “Dignity” are the most relevant here. IV. International human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards which are applied in Section 8 of this decision: 5. User submissions In their statement to the Board, the user expressed confusion as to why the content was removed by Meta. The user explained that the content reflected reality in Colombia which was important for those who were interested in or affected by the situation. 6. Meta’s submissions Meta explained in its rationale that it removed the content because it matched with an image that had been mistakenly entered by a human reviewer into a Dangerous Individuals and Organizations Media Matching Service bank. On appeal, Meta upheld the removal but decided the content violated its Violence and Incitement policy rather than its Dangerous Individuals and Organizations policy. Meta later confirmed that both decisions to remove the content were wrong. The company stated that the Violent and Graphic Content policy could also be relevant to this case as it depicts a violent attack, but that it does not apply to images such as cartoons. According to Meta, its Media Matching Service banks identify and act on media, in this case images, posted on its platforms. Once content is identified for banking, it is converted into a string of data, or “hash.” The hash is then associated with a particular bank. Meta’s Media Matching Service banks align with particular content policies such as Dangerous Individuals and Organizations, Hate Speech, and Child Sexual Exploitation, Abuse and Nudity, or specific sections within a policy. In this case, Meta stated the content was in a Media Matching Service bank specifically for criminal organizations, a prohibited category within the Dangerous Individuals and Organizations policy. Depending on what Meta uses the specific bank for, it can be programmed to take different actions once it identifies banked content. For example, Meta might delete content that is violating, add a warning screen, or ignore it if it has been banked as non-violating content.
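To make the banking mechanism described above concrete, the following is a minimal sketch, in Python, of a hash bank whose configured action varies by bank. All names here (BankAction, MediaBank, enforce) are illustrative assumptions rather than Meta's components, and an exact SHA-256 hash stands in for the perceptual hashing a real media-matching system would use.

import hashlib
from dataclasses import dataclass, field
from enum import Enum

class BankAction(Enum):
    DELETE = "delete"                  # remove matching content
    WARNING_SCREEN = "warning_screen"  # keep content behind a warning screen
    IGNORE = "ignore"                  # bank of known non-violating content

@dataclass
class MediaBank:
    policy: str          # e.g. "Dangerous Individuals and Organizations"
    action: BankAction
    hashes: set = field(default_factory=set)

    def bank_image(self, image_bytes: bytes) -> None:
        # A production system would use a perceptual hash; SHA-256 is used
        # here only to keep the sketch self-contained.
        self.hashes.add(hashlib.sha256(image_bytes).hexdigest())

    def matches(self, image_bytes: bytes) -> bool:
        return hashlib.sha256(image_bytes).hexdigest() in self.hashes

def enforce(image_bytes: bytes, banks: list) -> str:
    # The same matching machinery produces different outcomes depending on
    # which bank the hash was placed in and how that bank is configured.
    for bank in banks:
        if bank.matches(image_bytes):
            return bank.action.value
    return "no_match"

The point the sketch reflects is that the match itself is automatic and cheap to apply at scale; the consequence of a match depends entirely on which bank an image was placed in and how that bank was configured, which is why a single incorrect banking decision propagates so widely.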
Media Matching Service banks can also provide guidance to content reviewers when they are reviewing content that is banked. Meta also stated that its Media Matching Service banks can act at different points in time. For example, Meta can scan images at the point of upload to prevent some violating content from being posted. Banks can also be configured to only detect and take action on newly uploaded content, or they can be used to scan existing content on the platform. The Board asked Meta about how it identifies and adds content to Media Matching Service banks. Meta explained that at-scale human reviewers identify content eligible for banking. According to Meta, for Media Matching Service banks associated with the Dangerous Individuals and Organizations policy, the content is then sent to a process called “Dynamic Multi-Review,” where “two reviewers must agree on one decision in order for the media to be sent to the bank.” Meta describes this as a guardrail to help prevent mistakes. The Board also asked Meta about what mechanisms it has to identify erroneously banked content. Meta stated it has different “anomaly alerting systems.” These include a system that is triggered if a bank contains content that Meta considers to be viral. It also includes a system to identify banked content with a high level of successful appeals after removal. These systems were active for this bank when the content was removed. Meta stated that the number of removals and successful appeals on this content generated an alert to Meta’s engineering team around the time the content was removed. However, a month later, when the Board brought this case to Meta’s attention, the company had still not reviewed and removed the content from the Media Matching Service bank. It is unclear whether this engineering team alerted other teams that would be responsible for re-reviewing and determining whether the content was violating. While Meta indicated that some time lag is expected when reviewing reports, it gave the Board no indication of when it would have addressed this alert. The Board is concerned about the length of this timeframe. Meta should be monitoring how long it takes to remove mistakenly added content from banks and reverse wrongful enforcement decisions after alerts are triggered and set a concrete goal to minimize that time. Lastly, the Board asked Meta about the metrics it uses to audit the performance of its Media Matching Service banks. Meta states it generally monitors when its banks make more incorrect enforcement decisions and tries to identify what might be causing those increased errors. However, currently these audit assessments usually focus on individual banks. Because these banks can target specific policy lines and be programmed to take a variety of different actions, it can be difficult to generate meaningful data on overall enforcement accuracy for a specific community standard or for Media Matching Service banks in general from these analyses. Meta stated it is working to create a unified metric to monitor the accuracy of Media Matching Service banks. The Board asked Meta a total of 26 questions, 24 of which were answered fully and two of which were answered partially. The partial responses were about measuring accuracy and error rates for the company’s Media Matching Service banks and technical issues related to messages Meta sent to the user. Meta also provided a virtual briefing to a group of Board Members about Media Matching Service banks. 7. 
Public comments The Board received four public comments related to this case. Two of the comments were submitted from the United States and Canada, and two from Latin America and the Caribbean. The submissions covered the use of media matching technology in content moderation, the socio-political context in Colombia, the importance of social media in recording police violence in handling protests, and how Meta’s content policies should protect freedom of expression. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board selected this case as it involves artistic expression about police violence in the context of protests, a pressing issue across Latin America. The case also provided an opportunity for the Board to analyze Meta’s use of Media Matching Service technology in content moderation. The Board looked at the question of whether this content should be restored through three lenses: Meta's content policies, the company's values, and its human rights responsibilities. 8.1 Compliance with Meta’s content policies I. Content rules The Board finds that Meta’s actions were not consistent with Facebook’s content policies. As Meta acknowledges, the content did not violate any Meta policy. The decision to add it to a Media Matching Service bank and the failure to overturn the automated removal on appeal were wrong. II. Enforcement action Meta only restored the content, along with other pieces of content also removed because of incorrect Media Matching Service banking, after the Board selected the case. In response to a question from the Board, Meta stated it had feedback mechanisms to identify errors and stop acting on mistakenly banked content. Nevertheless, 215 users appealed removals and 98% of those appeals were successful, and no feedback mechanism resulted in the content being removed from the Media Matching Service bank before the case reached the Board. 8.2 Compliance with Meta’s values The Board finds that the original decision to remove this content was inconsistent with Meta's value of ""Voice,” and that its removal was not supported by any other Meta values. 8.3 Compliance with Meta’s human rights responsibilities The Board concludes that Meta’s initial decision to remove the content was inconsistent with its human rights responsibilities as a business. Meta has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights ( UNGPs ). Facebook’s Corporate Human Rights Policy states that this includes the International Covenant on Civil and Political Rights (ICCPR). Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, and expression about social or political concerns receives heightened protection ( General Comment 34 , paras. 11 and 20). Article 19 requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). The Board uses this framework to guide its analysis of Meta’s content moderation. I. Legality (clarity and accessibility of the rules) The requirement of legality provides that any restriction on freedom of expression is accessible and clear enough to provide guidance as to what is permitted and what is not. In this case, the incorrect removal of the content was not attributable to the lack of clarity or accessibility of relevant policies. II. 
Legitimate aim Any restriction on expression should pursue one of the legitimate aims listed in the ICCPR, which include the “rights of others.” According to Meta, the Dangerous Individuals and Organizations and Violence and Incitement policies seek to prevent offline violence. The Board has consistently found that these aims comply with Meta’s human rights responsibilities. III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” ( General Comment 34 , para. 34). In this case, removing the user’s content was not necessary because it did not serve any legitimate aim. Additionally, the design of Meta’s Media Matching Service banks enabled reviewers to mistakenly add content to a bank that resulted in the automatic removal of identical content, despite it being non-violating. The Board finds this was extremely disproportionate, based on the significant number of removals in this case, even considering Meta’s scale of operation. Despite completing a lengthy data validation process to verify the number of content removals, Meta barred the Board from disclosing this number, citing a concern that the number was not an accurate reflection of the quality of Meta’s Media Matching Service systems. The Board finds the removal of the content in this case particularly concerning as the content did not violate any Meta policy but contained criticism of human rights violations which is protected speech. Police violence is an issue of major and pressing public concern. The Board has also previously noted the importance of social media in sharing information about protests in Colombia in case 2021-010-FB-UA. The Board finds that adequate controls on the addition, auditing, and removal of content in such banks, as well as appeals opportunities, are essential. These banks greatly increase the scale of some enforcement decisions, resulting in disproportionate consequences for mistakes. Regarding the addition of content, given the consequences of removal that can be amplified to a disproportionate scale when automated, there is a need for Meta to strengthen procedures to ensure that non-violating content is not added. The stakes of mistaken additions to Media Matching Service banks are especially high when, as in this case, the content consists of political speech criticizing state actors or actions. Media Matching Service banks automate and amplify the impacts of individual incorrect decisions, and it is important for Meta to continually consider what error mitigation measures best help its human reviewers, including, for example, additional staffing, training, and time to review content. Regarding auditing and feedback, the use of Media Matching Service banks to remove content with limited or flawed feedback mechanisms raises concerns of disproportionate erroneous enforcement, where one mistake is amplified to a much greater scale. Despite Meta informing the Board it had feedback mechanisms to identify and stop acting on mistakenly banked content, 215 users appealed the removal with a 98% success rate and the content remained banked. 
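The re-review trigger discussed in the recommendation that follows can be read as a simple threshold check over appeal statistics for each banked hash. The sketch below is a hypothetical illustration: the function name, the minimum appeal volume and the overturn-rate threshold are assumptions, not figures Meta has disclosed.

from dataclasses import dataclass

@dataclass
class BankedItemStats:
    hash_id: str
    removals: int = 0
    appeals: int = 0
    overturns: int = 0   # appeals decided in the user's favour

def needs_re_review(stats: BankedItemStats,
                    min_appeals: int = 50,
                    overturn_threshold: float = 0.5) -> bool:
    # Flag a banked hash for human re-review (and possible removal from the
    # bank) once both the appeal volume and the overturn rate are high.
    if stats.appeals < min_appeals:
        return False
    return stats.overturns / stats.appeals >= overturn_threshold

# The figures in this case (215 appeals, roughly 98% overturned) would clear
# any plausible threshold and trigger re-review immediately.
assert needs_re_review(BankedItemStats(hash_id="banked-cartoon", appeals=215, overturns=210))

On numbers like these, the open question the Board raises is not whether such an alert fires but how quickly someone acts on it.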
Meta should ensure that content acted on due to its inclusion in a Media Matching Service bank with high rates of overturn immediately trigger reviews with the potential to remove this content from the bank. The Board is particularly concerned with the lack of performance metrics for the accuracy of Media Matching Service banks for particular content policies. Without objective metrics to monitor the accuracy of Media Matching Service banks, there is no effective governance over how this technology may be more or less effective for certain content policies. There are also no concrete benchmarks for improvement. While the Board notes that Meta is in the process of creating a unified metric to monitor the accuracy of Media Matching Service banks, it urges the company to complete this process as soon as practicable. To enable the establishment of metrics for improvement, Meta should publish information on accuracy for each content policy where it uses Media Matching Service technology. This should include data on error rates for non-violating content mistakenly added to Media Matching Service banks of violating content. It should also include the volume of content impacted by incorrect banking and key examples of errors. This data would allow Meta to understand the broader impacts of banking errors and set targets to reduce them. Further, it will allow users to better understand and respond to errors in automated enforcement. 9. Oversight Board decision The Oversight Board overturns Meta's original decision to take down the content. 10. Policy advisory statement Enforcement 1. To improve Meta’s ability to remove non-violating content from banks programmed to identify or automatically remove violating content, Meta should ensure that content with high rates of appeal and high rates of successful appeal is re-assessed for possible removal from its Media Matching Service banks. The Board will consider this recommendation implemented when Meta: (i) discloses to the Board the rates of appeal and successful appeal that trigger a review of Media Matching Service-banked content, and (ii) confirms publicly that these reassessment mechanisms are active for all its banks that target violating content. 2. To ensure that inaccurately banked content is quickly removed from Meta’s Media Matching Service banks, Meta should set and adhere to standards that limit the time between when banked content is identified for re-review and when, if deemed non-violating, it is removed from the bank. The Board will consider this recommendation implemented when Meta: (i) sets and discloses to the Board its goal time between when a re-review is triggered and when the non-violating content is restored, and (ii) provides the Board with data demonstrating its progress in meeting this goal over the next year. Transparency 3. To enable the establishment of metrics for improvement, Meta should publish the error rates for content mistakenly included in Media Matching Service banks of violating content, broken down by each content policy, in its transparency reporting. This reporting should include information on how content enters the banks and the company’s efforts to reduce errors in the process. The Board will consider this recommendation implemented when Meta includes this information in its Community Standards Enforcement Report. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. 
Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. Return to Case Decisions and Policy Advisory Opinions" fb-iulhg7jk,Hotel in Ethiopia,https://www.oversightboard.com/decision/fb-iulhg7jk/,"September 13, 2023",2023,,"TopicViolence, War and conflictCommunity StandardViolence and incitement",Violence and incitement,Overturned,Ethiopia,"A user appealed Meta's decision to leave up a Facebook post that called for a hotel in Ethiopia's Amhara region to be burned down. This case highlights Meta's error in enforcing its policy against a call for violence in a country experiencing armed conflict and civil unrest. After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post.",4818,759,"Overturned September 13, 2023 A user appealed Meta's decision to leave up a Facebook post that called for a hotel in Ethiopia's Amhara region to be burned down. This case highlights Meta's error in enforcing its policy against a call for violence in a country experiencing armed conflict and civil unrest. After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post. Summary Topic Violence, War and conflict Community Standard Violence and incitement Location Ethiopia Platform Facebook This is a summary decision . Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas where the company could improve its policy enforcement. Case summary A user appealed Meta's decision to leave up a Facebook post that called for a hotel in Ethiopia's Amhara region to be burned down. This case highlights Meta's error in enforcing its policy against a call for violence in a country experiencing armed conflict and civil unrest. After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post. Case description and background On April 6, 2023, a Facebook user posted an image and caption that called for a hotel in Ethiopia's Amhara region to be burned down. The user claimed that the hotel was owned by a general in the Ethiopian National Defense Forces. The post also included a photograph of the hotel, its address, and the name of the general. The user posted this content during a period of heightened political tension in the Amhara region when protests had been taking place for several days against the government's plan to dissolve a regional paramilitary force. 
Under Meta's Violence and Incitement policy, the company removes content that calls for high-severity violence. In their appeal to the Board, the user who reported the content stated that the post calls for violence and violates Meta's Community Standards. Meta initially left the content on Facebook. When the Board brought this case to Meta’s attention, it determined that the post violated its Violence and Incitement policy, and that its original decision to leave up the content was incorrect. The company then removed the content from Facebook. Board authority and scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Case significance This case highlights Meta's error in enforcing its policy against a call for violence in a country experiencing armed conflict and civil unrest. Such calls for violence pose a heightened risk of near-term violence and can exacerbate the situation on the ground. That is why the Board recommended Meta “assess the feasibility of establishing a sustained internal mechanism that provides the expertise, capacity and coordination required to review and respond to content effectively for the duration of a conflict,” ( Tigray Communication Affairs Bureau , recommendation no. 2). Meta is in the process of launching a crisis coordination team to provide dedicated operations oversight throughout imminent and emerging crises. The Board will continue to follow implementation of the new mechanism together with existing policies, to ensure Meta treats users more fairly in affected regions. The Board has also recommended that Meta commission an independent human rights, due diligence assessment on how Facebook and Instagram have been used to spread hate speech and unverified rumors that heighten the risk of violence in Ethiopia, and publish the report in full ( Alleged crimes in Raya Kobo , recommendation no. 3). Meta described this recommendation as work it already does but did not publish information to demonstrate implementation. Decision The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-izp492pj,Corruption of law enforcement in Indonesia,https://www.oversightboard.com/decision/fb-izp492pj/,"September 13, 2023",2023,,TopicGovernmentsCommunity StandardViolence and incitement,Violence and incitement,Overturned,Indonesia,"A user appealed Meta’s decision to remove a Facebook post that included a video discussing corruption among police officers in Indonesia. The case highlights an inconsistency in how Meta applies its Violence and Incitement policy to political metaphorical statements, which could be a significant deterrent to open online expression about governments. 
After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post.",5458,836,"Overturned September 13, 2023 A user appealed Meta’s decision to remove a Facebook post that included a video discussing corruption among police officers in Indonesia. The case highlights an inconsistency in how Meta applies its Violence and Incitement policy to political metaphorical statements, which could be a significant deterrent to open online expression about governments. After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post. Summary Topic Governments Community Standard Violence and incitement Location Indonesia Platform Facebook This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas of potential improvement in its policy enforcement. Case summary A user appealed Meta’s decision to remove a Facebook post that included a video discussing corruption among police officers in Indonesia. The case highlights an inconsistency in how Meta applies its Violence and Incitement policy to political metaphorical statements, which could be a significant deterrent to open online expression about governments. After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post. Case description and background In April 2023, a Facebook user posted a video in which they gave a monologue in Bahasa Indonesia denouncing the corrupt practices of Indonesia's National Police. The user alleged that the Chief of the National Police had said, “If I can’t clean my tail, I’ll cut off its head.” The user remarked that those dirty tails that could not be cleaned had actually become the heads, because the corrupt practices of subordinate law enforcement officers were guarded and maintained by the leaders of the police force. The user also named some specific individuals involved in their case who had since been promoted. Under the video, there was caption that read, “How could a dirty broom clean a dirty floor?” The Board understands the analogy by the Chief of the National Police to mean that he was taking a hard line towards corruption and implying that if he could not eradicate corruption among lower-level officers, he would take action against higher-level ones. The Board takes the user's remarks that “dirty tails became heads” as irony, suggesting that corrupt officers from the lower levels rose through the ranks to become corrupt officials. Together with the caption that referred to the “dirty broom,” the Board considers that this was why the user believed corruption was endemic in Indonesia. Meta originally removed the post from Facebook, citing its Violence and Incitement policy, under which the company removes content containing “threats that could lead to death (and other forms of high-severity violence)… targeting people.” After the Board brought this case to Meta’s attention, the company determined that its removal was incorrect and restored the content to the platform. 
The company told the Board that, instead of targeting a particular person or group of people, the user was drawing attention to the pervasive nature of corruption and the relationship between police leaders and subordinates. The company therefore concluded that there was no target for violence, as is required to violate the Violence and Incitement policy. Board authority and scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Case significance This case highlights an inconsistency in how Meta applies its Violence and Incitement policy to political metaphorical statements. The inconsistency could be a significant deterrent to criticism of governments. The case underlines the importance of designing context-sensitive moderation systems with awareness of irony, satire, or rhetorical discourse, especially to protect political speech. That is why, in its case decisions, the Board has urged Meta to put in place proper procedures for evaluating content in its relevant context ( Two Buttons meme , recommendation no. 3). Meta has committed to implement this recommendation. Its complete implementation, in this case for evaluating content in Bahasa Indonesia, may help to decrease the error rate of content moderation when users are discussing how governments exercise their power, where Meta’s value of “Voice” is especially important. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-j3fc7xx9,News Documentary on Child Abuse in Pakistan,https://www.oversightboard.com/decision/fb-j3fc7xx9/,"May 14, 2024",2024,,"TopicChildren / Children's rights, Journalism, SafetyCommunity StandardChild nudity and sexual exploitation of children","Policies and TopicsTopicChildren / Children's rights, Journalism, SafetyCommunity StandardChild nudity and sexual exploitation of children",Overturned,Pakistan,"The Oversight Board has overturned Meta’s decision to take down a documentary video posted by Voice of America (VOA) Urdu, revealing the identities of child victims of sexual abuse and murder from Pakistan in the 1990s. The majority find that a newsworthiness allowance should have been applied.",40344,6285,"Overturned May 14, 2024 The Oversight Board has overturned Meta’s decision to take down a documentary video posted by Voice of America (VOA) Urdu, revealing the identities of child victims of sexual abuse and murder from Pakistan in the 1990s. The majority find that a newsworthiness allowance should have been applied. Standard Topic Children / Children's rights, Journalism, Safety Community Standard Child nudity and sexual exploitation of children Location Pakistan Platform Facebook Urdu Translation.pdf News Documentary on Child Abuse in Pakistan Public Comments Appendix News Documentary on Child Abuse in Pakistan Decision PDF To read this decision in Urdu, click here . 
The Oversight Board has overturned Meta’s decision to take down a documentary video posted by Voice of America (VOA) Urdu, revealing the identities of child victims of sexual abuse and murder from Pakistan in the 1990s. Although the Board finds the post did violate the Child Sexual Exploitation, Abuse and Nudity Community Standard, the majority find that a newsworthiness allowance should have been applied in this case. These Board Members believe the ongoing public interest in reporting on child abuse outweighs the potential harms to the victims from identification, given that none of them survived these crimes, which took place 25 years ago. Broadly factual in nature and sensitive to the victims, VOA Urdu’s documentary could have informed public debate on the widespread issue of child sexual abuse, which is underreported in Pakistan. This case also highlights how Meta could better communicate to users which policies do and which policies do not benefit from exceptions. About the Case In January 2022, the broadcaster Voice of America (VOA) Urdu posted on its Facebook page an 11-minute documentary about Javed Iqbal, who murdered and sexually abused approximately 100 children in Pakistan in the 1990s. The documentary, in Urdu, includes disturbing details of the crimes and the perpetrator’s trial. There are images of newspaper clips that clearly show the faces of the child victims along with their names, while other footage shows people in tears who could be relatives. The post’s caption mentions that a different film about the crimes had recently been in the news, and it also warns viewers about the documentary’s contents. This post was viewed about 21.8 million times and shared about 18,000 times. Between January 2022 and July 2023, 67 users reported the post. Following both automated and human reviews, Meta concluded the content was not violating. The post was also flagged separately by Meta’s High Risk Early Review Operations system because of its high likelihood of going viral. This led to human review by Meta’s internal staff with language, market and policy expertise (rather than by outsourced human moderation). Following escalation internally, Meta’s policy team overturned the original decision to keep the post up and removed it for violating the Child Sexual Exploitation, Abuse and Nudity policy. The company decided not to grant a newsworthiness allowance. Meta then referred this case to the Board. Key Findings The majority of the Board find that Meta should have applied the newsworthiness allowance to this content, keeping the post on Facebook. The Board finds the post violated the Child Sexual Exploitation, Abuse and Nudity Community Standard because the child abuse victims are identifiable by their faces and names. However, for the majority, the public interest in reporting on these child abuse crimes outweighed the possible harms to the victims and their families. In coming to their decision, the majority noted that the documentary had been produced to raise awareness, does not sensationalize the gruesome details and, significantly, the crimes took place about 25 years ago, with none of the victims surviving. This passage of time is the most important factor because it means possible direct harms to the child victims have diminished. Meanwhile, the public interest in reporting on child abuse remains. Experts consulted by the Board confirmed that child sexual abuse is prevalent in Pakistan, but incidents are underreported.
The majority took note of expert reports on Pakistan’s track record of cracking down on independent media and silencing dissent, while also failing to prevent or punish serious crimes against children. This makes social media platforms necessary for reporting on and receiving information on this issue. In this case, the VOA Urdu documentary made an important contribution to public discussions. A minority note that while the video raised issues of public interest, it was possible for those issues to be discussed in detail without showing the names and faces of the victims, and therefore the content should have been removed. The Board expresses alarm at the length of time (18 months) it took for Meta to finally make a decision on this content, by which time it had been viewed 21.8 million times, and questions whether Meta’s resources for Urdu-language videos are sufficient. While the rarely used newsworthiness allowance – a general exception that can be applied only by Meta’s expert teams – was relevant here, the Board notes that no specific policy exceptions, such as raising awareness or reporting on, are available for the Child Sexual Exploitation, Abuse and Nudity policy. Meta should provide more clarity to users about this. Additionally, it could be made clearer to people in the public language of this policy what qualifies as identifying alleged victims “by name or image.” Had VOA Urdu received a more detailed explanation of the rule it was violating, it could have reposted the documentary without the offending images or, for example, with blurred faces of the victims, if this is allowed. The Oversight Board’s Decision The Oversight Board overturns Meta’s decision to take down the content and requires the post to be restored. The Board recommends that Meta: *Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board overturns Meta’s decision to take down a Facebook post from Voice of America Urdu’s page, showing documentary video that reveals the identities of child victims of sexual abuse and murder from Pakistan in the 1990s. The Board finds that the post violated the text of the Child Sexual Exploitation, Abuse and Nudity Community Standard, as it “identified victims of child sexual abuse by name and image.” However, the majority of the Board find that Meta should have applied the newsworthiness allowance in this case because the current public interest in Pakistan in reporting on child abuse outweighs potential harms from identification of the victims from so long ago. A minority of the Board believe it was possible for those issues to be discussed without showing the names and faces of the victims, hence Meta’s decision to remove the post was warranted. To better inform users when policy exceptions for awareness raising, news reporting or other justifications could be granted, Meta should create a new section within each Community Standard describing what policy exceptions and allowances apply and provide the rationale when such exceptions or allowances do not apply. This section should note that general allowances such as newsworthiness apply to all Community Standards. 2. Case Description and Background On January 28, 2022, the broadcaster Voice of America Urdu, funded by the United States government, posted on its Facebook page an 11-minute documentary video about Javed Iqbal, who was convicted in a Pakistani court for committing serial crimes against children. 
The documentary contained extensive details, in Urdu, about the crimes, which involved the sexual abuse and murder of approximately 100 children in the 1990s. It also covered the perpetrator’s subsequent arrest and trial. The video contained images of newspapers clips from 1999 showing the faces of the child victims along with their names and cities they came from. It also showed children’s photographs discovered during a search of the perpetrator’s house. Extensive details of the events and incriminating evidence found at the scene of the crimes, including vats of acid where bodies were reportedly dissolved, are depicted in the documentary. There is also footage of people in tears who could be relatives of the child victims. The documentary mentioned that Javed Iqbal had confessed to bringing children to his home, where he sexually abused them, strangled them to death and disposed of their bodies in acid. It described his arrest, along with his young accomplice, their subsequent trials and sentences to death, and finally suicide while in custody. The post’s caption, in Urdu, mentioned that a different film about the crimes had recently been in the news. The caption also described the severity of the crimes, warning the documentary contained details about sexual abuse and violence, including interviews with people associated with the perpetrator and his crimes. Voice of America Urdu’s Facebook page has about 5 million followers. The content was viewed about 21.8 million times, received about 51,000 reactions and 5,000 comments, and was shared around 18,000 times. Between January 2022 and July 2023, a total of 67 users reported the content. Following both automated and outsourced human reviews during that period, Meta concluded the content was not violating. Meta’s High Risk Early Review Operations (HERO) system also flagged the content eight times due to its high virality signals between January 2022 and July 15, 2023. The HERO system is designed to identify potentially violating content predicted to have a high likelihood of going viral. Once identified by the system, the content is prioritized for human review by Meta’s internal staff with language, market and policy expertise (as opposed to outsourced moderators reviewing content). In late July 2023, following a report from the HERO system, the internal regional operations team within Meta escalated the content to Meta’s policy experts requesting an assessment under the newsworthiness allowance. Following this review in August 2023, the policy team overturned the original decision to keep the content up and removed it for violating the Child Exploitation, Abuse and Nudity policy. Meta did not grant a newsworthiness allowance for this content because it concluded that the potential risk of harm outweighed the public interest value. The company did not specify the nature and extent of this risk. Meta did not apply a strike against the account of the news organization that had posted the content because of the public interest and awareness-raising context of the video, as well as the notable length of time (18 months) between the content being posted and removed. 
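As a rough illustration of the virality-based prioritization that the HERO description above implies, the sketch below orders a review queue by a toy virality score. The score inputs, weights and class names are assumptions made for this example, not details of Meta's system.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewTask:
    priority: float                       # lower value is reviewed sooner
    content_id: str = field(compare=False)

def virality_score(views_last_hour: int, shares_last_hour: int) -> float:
    # Toy predictor: shares are weighted more heavily than views.
    return views_last_hour + 10 * shares_last_hour

queue = []
heapq.heappush(queue, ReviewTask(-virality_score(500_000, 9_000), "post_a"))
heapq.heappush(queue, ReviewTask(-virality_score(1_200, 40), "post_b"))
next_for_review = heapq.heappop(queue).content_id  # "post_a" is reviewed first

In a framing like this, it is unsurprising that a post approaching 21.8 million views kept being flagged; the concern the Board raises below is how long it took for those flags to result in a decision.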
Meta referred this case to the Board because it considered it significant and difficult as the company has to “weigh the safety, privacy and dignity of the child victims against the fact that the footage does not emphasize the child victims’ identities, the events depicted are from over 30 years ago, and the video appears designed to raise awareness around a serial killer’s crimes and discuss issues that have high public interest value.” The Board notes the following context in reaching its decision in this case. Civic space and media freedom in Pakistan are considerably restricted. UN human rights experts and civil society organizations have highlighted that the Pakistani state has a history of curtailing media freedoms and targeting those who speak critically of the authorities with arrest and legal action. Media outlets have faced interference, withdrawal of government advertising, bans on television presenters and on broadcasting content. Likewise, online activists, dissidents and journalists are often subjected to state-sponsored threats and harassment. Independent media outlets have also documented how the Pakistani authorities make requests for social media companies to remove content. Meta reported in the company’s Transparency Center that between June 2022 and June 2023, the company geo-blocked 7,665 posts that Pakistan’s authorities reported to Meta. Local access to those posts was restricted for allegedly violating local laws, even though they did not necessarily violate Meta’s policies. Despite written confessions reportedly mailed to the local police, the crimes Javed Iqbal committed were not seriously investigated by the authorities until Pakistani journalists who received the confession letter and investigated it published a story in Jang newspaper on December 3, 1999, with the names and photos of 57 alleged child victims, thus alerting their families and generating a public uproar about the issue. Widespread coverage of the crimes, Javed Iqbal’s confession and subsequent arrest, conviction and suicide ensued in Pakistan and internationally. Between January 2022 and January 2024, films, documentaries and media reports re-ignited interest and fueled discussions about Javed Iqbal and his crimes. “Javed Iqbal: The Untold Story of a Serial Killer,” a film that was set to be released in January 2022, was banned for several months by Pakistan’s Central Bureau of Film Censors because, according to news reports, the title glorified Iqbal. The film was released later that year at the UK Asian Film Festival. Experts the Board consulted and independent media reported that the producers edited the film and changed its name to “Kukri” (based on Javed Iqbal’s nickname), ahead of its resubmission to the Pakistani Censor Board. The film was authorized and re-released in Pakistan in June 2023. Child sexual abuse in Pakistan remains prevalent. According to experts the Board consulted, from 2020 to 2022 there were some 5.4 million reports of online child exploitation in Pakistan on social media, based on data gathered by the National Center for Missing and Exploited Children (NCMEC). NCMEC collects reports of child sexual abuse material on U.S.-based social media platforms, with 90% of these reports about content posted on Meta’s platforms. Sahil, an Islamabad-based NGO, reports that an average of 12 children per day were subjected to sexual abuse in Pakistan during the first half of 2023.
Almost 75 per cent of the 2,200-plus cases from 2023 were reported from Punjab, Pakistan’s most populous province. Two other heinous cases reported in the city of Kasur involved the sexual abuse of 280 children by a gang, and the murder and sexual abuse of a six-year-old child, with the media showing photographs, including of her dead body. 3. Oversight Board Authority and Scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions II. Meta’s Content Policies The policy rationale for the Child Sexual Exploitation, Abuse and Nudity policy states that Meta does not permit content that “sexually exploits or endangers children.” Under this policy, Meta removes “content that identifies or mocks alleged victims of sexual exploitation by name or image.” The Board’s analysis was informed by Meta’s commitment to voice, which the company describes as “paramount,” and its values of safety, privacy and dignity. Newsworthiness Allowance Meta defines the newsworthiness allowance as a general policy allowance that can be applied across all policy areas within the Community Standards, including the Child Sexual Exploitation, Abuse and Nudity policy. It allows otherwise violating content to be kept on the platform if the public interest value in doing so outweighs the risk of harm. According to Meta, such assessments are made only in “rare cases,” following escalation to the Content Policy team. This team assesses whether the content in question poses an imminent threat to public health or safety or gives voice to perspectives currently being debated as part of a political process. This assessment considers country-specific circumstances, including whether elections are underway. While the speaker’s identity is a relevant consideration, the allowance is not limited to content posted by news outlets. Meta reported that from June 1, 2022, to June 1, 2023, only 69 newsworthiness allowances were documented globally. Similar numbers were reported for the previous year. III. Meta’s Human Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The following international standards may be relevant to the Board’s analysis of Meta’s human rights responsibilities in this case: 5. User Submissions The author of the post was notified of the Board’s review and provided with an opportunity to submit a statement. No response was received. 6.
Meta’s Submissions According to Meta, the post violated the Child Sexual Exploitation, Abuse and Nudity Community Standard as it showed identifiable faces of child victims of sexual exploitation, together with their names. Meta defines an individual as identified through name or image if “the content includes any of the following information: (i) mention of the individual’s name (first, middle, last or full name) unless the content explicitly states that the name has been made up [or] (ii) imagery depicting the individual’s face.” Meta distinguishes between content identifying adult victims from child victims of sexual abuse because children have “reduced capacity” to grant informed consent on identification. Given this, the risks of revictimization, community discrimination and risk of further violence remain significant for children. Meta therefore provides no policy exceptions under the Child Exploitation, Abuse and Nudity policy for content identifying alleged victims of sexual exploitation by name or image, shared for the purposes of raising awareness, reporting on or condemning the abuse. Child rights advocates emphasized to Meta that its policies should prioritize child safety, especially in cases involving child victims of sexual assault. Other external stakeholders noted to Meta that the goal of avoiding victimization of minors has to outweigh potential newsworthiness in identifying child victims. The Board asked Meta to study its decision not to grant the content a newsworthiness allowance in this case. Meta noted that though the content had public interest value, the risk of harm from identification of victims remained significant. Although the crimes occurred in the 1990s, the victims identified were children, and the abuses they suffered were violent and sexual in nature. In this case, Meta did not apply a strike against the account of the news organization that posted the content because of the public interest and awareness-raising context of the video, and notable length of time between the content being posted and removed. In response to the Board’s questions, Meta noted that the company utilizes its HERO system to proactively flag content before it reaches its peak virality using a number of different signals to identify content. This system prioritizes for review and potential action content that is likely to go viral, and it is one of many tools used to address problematic viral content on the platform. The Board asked Meta 15 questions in writing. Questions related to Meta’s policy choices around the Child Sexual Exploitation, Abuse and Nudity policy, Meta’s strike system and the HERO system. Meta answered the 15 questions. 7. Public Comments The Oversight Board received four public comments that met the terms for submission. Two were submitted from the United States and Canada, one from Europe and one from Asia Pacific and Oceania. To read the public comments submitted with consent to publish, please click here . The submissions covered the following themes: the importance of protecting the privacy and identity of victims of child abuse as well as the privacy of families; the interplay between the UN Convention on the Rights of the Child and Meta’s Child Sexual Exploitation, Abuse and Nudity policy; the educational and awareness-raising context of the documentary; and the role of journalists in reporting about child abuse crimes. 8. 
Oversight Board Analysis The Board accepted this Meta referral to assess the impact of Meta’s Child Sexual Exploitation, Abuse and Nudity Community Standard on the rights of child victims, especially in the context of reporting on crimes after a notable passage of time. This case concerns the protection of civic space, which is among the Board’s strategic priorities. The Board examined whether this content should be restored by analyzing Meta’s content policies, human rights responsibilities and values. 8.1 Compliance with Meta’s Content Policies The Board agrees with Meta that the content in this case violated the explicit rules of the Child Sexual Exploitation, Abuse and Nudity Community Standard, as the video showed identifiable faces and contained the names of child abuse victims. The majority of the Board, however, find that Meta should have applied the newsworthiness allowance and permitted the content to remain on Facebook, on escalation. For the majority, the public interest in reporting on child abuse crimes with such characteristics as in this case outweighed the possible harm to the victims and their families. This conclusion is largely based on the fact that this documentary has been produced to raise awareness, does not mock nor sensationalize the gruesome details it reports on, and, most significantly, the crimes took place almost 25 years ago and none of the victims survived. For the majority, the passage of a significant period of time was the most important factor in this case. With the passage of time, the potential impact on the rights of children and their families may subside, while the public interest in reporting on and addressing child abuse in Pakistan is persistent. In this case, the crimes against these children took place more than 25 years ago, and all the identifiable child victims depicted in the documentary are deceased. Child abuse has remained widespread in Pakistan (see section 2) and is the subject of significant public discourse. The majority of the Board take note of the expert reports stating that Pakistan has a track record of cracking down on independent media and silencing dissent, while also failing to prevent or punish serious crimes against children. Therefore, social media platforms are necessary for all people, including news media, to report on and receive information relating to child abuse in Pakistan. This documentary was broadly accurate and factual in nature, and sensitive to the victims. It was specifically contextualized against recent government decisions to censor a film on the topic, and therefore made an important contribution to public discussions. A minority of the Board consider that Meta should not apply the newsworthiness allowance in this case, highlighting that the protection of the dignity and rights of the child victims as well as their families was paramount and should not be affected by the passage of time or other considerations as pointed out by the majority. The minority note that while the video raised issues of public interest, it was possible for those issues to be discussed in detail without showing the names and faces of the victims. Consequently, removing the post was in line with Meta’s values of privacy and dignity. When conducting a newsworthiness assessment, the Board notes it is imperative that Meta considers potential adverse human rights impacts of a decision to leave up or remove a post. These considerations are outlined in the next section. 
8.2 Compliance with Meta’s Human Rights Responsibilities The majority of the Board find that removing this post was not necessary or proportionate, and restoring the post to Facebook is consistent with Meta’s human rights responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19, para. 2 of the ICCPR provides for broad protection of political discourse and journalism (General Comment No. 34, (2011), para. 11). The UN Special Rapporteur on freedom of expression has stated that states can encourage media organizations to self-regulate the way in which they cover and involve children. Citing the set of draft guidelines and principles from the International Federation of Journalists, the UN Special Rapporteur noted that those included “provisions on avoiding the use of stereotypes and the sensational presentation of stories involving children,” (A/69/335, para. 63). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” (A/74/486, para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality under international human rights law requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, para. 25). Restrictions on expression should be formulated with sufficient precision to enable individuals to regulate their conduct accordingly ( Ibid ). As applied to Meta, the company should provide guidance to users about what content is permitted on the platform and what is not. Additionally, rules restricting expression “may not confer unfettered discretion on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” (A/HRC/38/35, para. 46). The Board finds that the Child Sexual Exploitation, Abuse and Nudity policy, as applied to this case, is sufficiently clear to satisfy the legality requirement but that improvements could be made. Journalists, like all users, should be provided with sufficient guidance on how to talk about challenging topics on social media platforms within the rules. It could be made clearer to people that sharing images in which the face or name of a child victim is visible is not permitted in discussion of issues around child abuse. More detailed definitions for identification through name or image are included in the internal guidelines, available only to Meta’s content reviewers. The Board urges Meta to explore providing more clarity around what precisely qualifies as identifying alleged victims “by name or image,” including whether “by name” includes partial names, and whether “image” means only showing faces, and/or allows blurring of faces. 
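To make the ambiguity concrete, the definition quoted in Meta’s submissions can be read as a two-branch test. The sketch below is illustrative only and uses hypothetical field names; the branches for partial names and blurred faces are assumptions, marking exactly the points on which the public-facing policy is silent.

# Illustrative sketch only: a hypothetical encoding of Meta's stated rule that a victim is
# "identified" if the content mentions their name (unless stated to be made up) or depicts
# their face. The fields and the handling of partial names and blurred faces are assumptions
# intended to show where the public policy text leaves users and reviewers without guidance.
from dataclasses import dataclass

@dataclass
class ContentSignals:
    mentions_full_name: bool        # first, middle, last or full name appears
    mentions_partial_name: bool     # e.g., initials or first name only; the policy is silent
    name_stated_as_fictional: bool  # content explicitly says the name is made up
    shows_face: bool                # imagery depicting the individual's face
    face_blurred: bool              # policy does not say whether blurring cures this

def identifies_victim(c: ContentSignals) -> bool:
    """Return True if the content 'identifies' the alleged victim under the stated rule."""
    # (i) identification by name, unless the content says the name is invented
    if c.mentions_full_name and not c.name_stated_as_fictional:
        return True
    # Open question flagged by the Board: do partial names count as "by name"?
    if c.mentions_partial_name and not c.name_stated_as_fictional:
        return True  # assumption: treated as identifying; the public policy does not say
    # (ii) identification by image of the face
    if c.shows_face:
        # Open question flagged by the Board: does blurring the face remove identification?
        return not c.face_blurred  # assumption: blurring cures it; again, not stated publicly
    return False

# Example: archival footage with blurred faces and no names would not be "identifying"
# under the assumptions above, but a user reading only the public policy cannot know that.
print(identifies_victim(ContentSignals(False, False, False, True, True)))  # -> False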
The Board notes that Meta considered policy changes in this area but decided not to include the “awareness raising” exception under the Child Sexual Exploitation, Abuse and Nudity policy, claiming that this position was in line with the best interest of the child, stipulated in Article 3 of the UNCRC. The company noted issues with revictimization and child victims’ reduced abilities to grant informed consent to being featured or referenced in reports about child abuse. In the interest of transparency and providing clear guidance to users, the Child Sexual Exploitation, Abuse and Nudity policy should clearly state that it does not permit the identification of child victims of sexual abuse, even where the intention is to report on, raise awareness or condemn that abuse. Since many other policies include policy exceptions, Meta should not presume that silence on whether exceptions apply is sufficient notice that media reporting and advocacy may be removed unless it meets certain conditions in terms of respect for dignity and privacy. Such notice could be framed similarly to existing guidance in the policy rationale, outlining why Meta has a blanket prohibition against sharing, for example, nude images of children, even when the intent of parents of those children is innocuous. Such an update should indicate Meta could grant a newsworthiness allowance in highly exceptional circumstances. The Board notes that Meta’s explanation of that allowance includes an example of it permitting for reasons of public interest and historical significance the “terror of war” photograph of Phan Thị Kim Phúc, sometimes referred to informally as the “napalm girl.” The Board notes that policy exceptions and general allowances, namely the newsworthiness allowance and the spirit of the policy allowance, are distinct, but not easily distinguishable. While each Community Standard may or may not provide for certain policy exceptions, general allowances can be applied across all policy areas within the Community Standards. Therefore, to provide clear and accessible guidance to users, Meta should create a new section within each Community Standard describing what policy exceptions and general allowances apply. When Meta has specific rationale for not providing certain exceptions that apply for other policies (such as awareness raising), Meta should include that rationale in this new section. This section should note that general allowances apply to all Community Standards. II. Legitimate Aim Restrictions on freedom of expression must pursue a legitimate aim, which includes the protection of the rights of others and the protection of public order and national security. In the Swedish Journalist Reporting Sexual Violence Against Minors decision, the Board concluded that the Child Sexual Exploitation, Abuse and Nudity policy aims to prevent offline harm to the rights of minors. The Board finds that Meta’s decision in this case and the policy underlying the original removal pursues the legitimate aim of protecting the rights of child victims of sexual abuse to physical and mental health (Article 17 UNCRC), and their right to privacy (Article 17 ICCPR, Article 16 UNCRC), consistent with respecting the best interests of the child (Article 3 UNCRC). III. 
Necessity and Proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected,” (General Comment No. 34, paras. 33-34). Article 3 of the UNCRC states that “in all actions concerning children, ... the best interests of the child shall be a primary consideration.” Consistent with this, UNICEF’s Guidelines for Journalists Reporting on Children note that the rights and dignity of every child should be respected in every circumstance and that the best interests of the child should be protected over any other consideration, including advocacy for children’s issues and the promotion of child rights. The Committee on the Rights of the Child has noted that states should have regard for all children’s rights, including “to be protected from harm and to have their views given due weight,” (General Comment no. 25, para. 13). The Committee further highlighted that “privacy is vital to children’s agency, dignity and safety and for the exercise of their rights” and that “threats may arise from… a stranger sharing information about a child” online (General Comment no. 25, para. 67). The Board underlines that Meta’s prohibition of content identifying victims of child sexual exploitation by name or image is a necessary and proportionate policy. The circumstances that justify departing from this rule will be exceptional and require a detailed assessment of context by subject matter experts (for analogous or additional standards regarding the consideration of exceptional circumstances when determining whether to allow the identification of persons in vulnerable situations, see Armenian Prisoners of War Video). For the majority of the Board, Meta should have kept this content on the platform under its newsworthiness allowance. The majority highlight that leaving the content up under the newsworthiness allowance was consistent with the best interests of the children in this case, which Meta rightly identifies as a concern that should be given utmost importance. For the majority, three key factors in combination provide the basis for a newsworthiness allowance. First, the passage of time was the leading factor in this case, together with the fact that all the child victims concerned are deceased, thus diminishing the possible direct harm to them. Second, the sexual abuse of children remains a widespread but underreported phenomenon in Pakistan. Third, the documentary in question does not sensationalize the issue, but raises awareness in an almost educational way and could help inform public debate on a significant human rights concern that has long beset Pakistan and other nations. The majority of the Board note that while the images and names of the victims shown in old newspaper clippings and pictures could be blurred, removing the whole documentary against the backdrop of all the factors above is disproportionate. Instead, Meta could explore alternative measures to inform users about the relevant policy and provide technical solutions to prevent violations, as discussed below. Given the specific combination of factors outlined above, the documentary should have been given a newsworthiness allowance.
For a minority of the Board, Meta’s decision to remove this content and not apply the newsworthiness allowance was in line with Meta’s human rights responsibilities and consistent with the best interests of the child in this case. Such reporting, the minority believe, should prioritize the dignity of child victims of abuse and ensure their privacy rights are respected regardless of the passage of time and the assumed public debate value of such content. These Board Members highlight that when reporting on child abuse, journalists and media organizations have an ethical responsibility to follow professional codes of conduct. Given that engagement-based social media can incentivize the sensational and “click bait,” it is an appropriate mitigation for Meta to adopt strict content policies requiring media to report on sensitive matters impacting children responsibly. This would be consistent with applicable human rights standards that encourage “evidence-based reporting that does not reveal the identity of children who are victims and survivors,” (General Comment no. 25, para. 57) and that “encourage the media to provide appropriate information regarding all aspects of the ... sexual exploitation and sexual abuse of children, using appropriate terminology, while safeguarding the privacy and identity of child victims and child witnesses at all times,” (Guidelines of the Optional Protocol to the Convention on the Rights of the Child, para. 28.f). While the content in this case does concern a matter of public interest, the minority believe that Meta requiring stricter adherence to the standards of journalistic ethics would allow these issues to be reported in a way that respects the dignity and rights to privacy of the victims and their families. A minority of the Board also underline that Meta’s decision not to apply a strike to the news organization’s account when it properly removed the content was proportionate. Although the Board overturns Meta’s decision to remove this post, it remains alarmed that the company took 18 months to reach its decision on a piece of content that it finally deemed violating despite dozens of user reports and flags from the company’s own virality prediction system. Meta should investigate the reasons for this and assess whether its systems or resources for reviewing Urdu language videos are sufficient (see Mention of the Taliban in News Reporting). Effective systems are essential to ensure that such posts, where necessary, are referred to internal teams with the expertise to assess if there is a public interest reason to keep the content on the platform. The content in this case was one such example of viral content (attracting more than 21.8 million views) that should have been detected quickly, both to prevent potential harm and so that a newsworthiness assessment could be conducted. The Board also notes that had Voice of America Urdu, the media organization that posted this content, received a more detailed explanation of the policy line it had violated, it could have easily reposted the content in an edited form, e.g., by removing the segments with offending images or by blurring the faces of the victims. In this respect, Meta should consider providing users with more specific notifications about violations, in line with the Board’s recommendation no. 1 in Armenians in Azerbaijan and recommendation no. 2 in the Breast Cancer Symptoms and Nudity cases.
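As an illustration of what “more specific notifications” could mean in practice, the sketch below shows one possible shape for such a notice. All names and fields are hypothetical and are not drawn from Meta’s actual systems; the point is only that a structured notice identifying the policy line and the offending material would let a publisher edit and repost rather than guess.

# Illustrative sketch only: one possible shape for a more specific violation notification,
# of the kind the Board recommends. Every field name is hypothetical; this does not describe
# Meta's actual notification or messaging systems.
from dataclasses import dataclass, field

@dataclass
class ViolationNotice:
    community_standard: str          # e.g., "Child Sexual Exploitation, Abuse and Nudity"
    policy_line: str                 # the specific rule within that standard
    offending_material: list[str] = field(default_factory=list)  # where the violation occurs, if known
    remediation_hints: list[str] = field(default_factory=list)   # concrete steps to repost compliantly

notice = ViolationNotice(
    community_standard="Child Sexual Exploitation, Abuse and Nudity",
    policy_line="Content that identifies alleged victims of sexual exploitation by name or image",
    offending_material=["archival newspaper clippings showing the victims' faces and names"],
    remediation_hints=[
        "Blur the faces shown in archival material",
        "Remove or redact on-screen text naming the victims",
    ],
)

# A notice like this would let a newsroom edit and repost, rather than guess at the violation.
print(f"Removed under: {notice.community_standard} / {notice.policy_line}")
for hint in notice.remediation_hints:
    print(f"  fix: {hint}")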
Additionally, to ease the burden on users and mitigate the risk of them endangering children, Meta could explore providing users with more specific instructions or access within its products to, for instance, face-blurring tools for video so they can more easily adhere to Meta’s policies that protect the rights of children. Meta could also consider the feasibility of suspending such content for a set period of time before it is permanently removed, if not properly edited (see recommendation no. 13 in Sharing Private Residential Information policy advisory opinion). The author of the relevant content could be notified that during the suspension period they could avoid the subsequent removal of their content if they use such a tool to make the content compliant. 9. Oversight Board Decision The Oversight Board overturns Meta’s decision to take down the content, requiring the post to be restored. 10. Recommendations Content Policy 1. To better inform users when policy exceptions could be granted, Meta should create a new section within each Community Standard detailing what exceptions and allowances apply. When Meta has specific rationale for not allowing certain exceptions that apply to other policies (such as news reporting or awareness raising), Meta should include that rationale in this section of the Community Standard. The Board will consider this implemented when each Community Standard includes the described section and rationales for exceptions that do and do not apply. *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by the majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Return to Case Decisions and Policy Advisory Opinions" fb-j5oop3yz,Media Conspiracy Cartoon,https://www.oversightboard.com/decision/fb-j5oop3yz/,"November 22, 2023",2023,,"TopicDiscrimination, Marginalized communities, Race and ethnicityCommunity StandardHate speech",Hate speech,Overturned,"Australia, Germany, Israel","A user appealed Meta’s decision to leave up a Facebook comment which is an image depicting a caricature of a Jewish man holding a music box labelled “media,” while a monkey labelled “BLM” sits on his shoulder.",5423,826,"Overturned November 22, 2023 A user appealed Meta’s decision to leave up a Facebook comment which is an image depicting a caricature of a Jewish man holding a music box labelled “media,” while a monkey labelled “BLM” sits on his shoulder. Summary Topic Discrimination, Marginalized communities, Race and ethnicity Community Standard Hate speech Location Australia, Germany, Israel Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors, and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not involve a public comment process, and do not have precedential value for the Board. 
Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to leave up a Facebook comment which is an image depicting a caricature of a Jewish man holding a music box labelled “media,” while a monkey labelled “BLM” sits on his shoulder. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the comment. Case Description and Background In May 2023, a user posted a comment containing an image which depicts a caricature of a Jewish man holding an old-fashioned music box, while a monkey rests on his shoulders. The caricature has an exaggerated hooked nose and is labelled with a Star of David inscribed with “Jude,” resembling the badges Jewish people were forced to wear during the Holocaust. The monkey on his shoulder is labelled with “BLM” (the acronym for the “Black Lives Matter” movement), while the music box is labelled with “media.” The comment received fewer than 100 views. This content violates two separate elements of Meta’s Hate Speech policy. Meta’s Hate Speech policy prohibits content which references “harmful stereotypes historically linked to intimidation,” such as “claims that Jewish people control financial, political, or media institutions.” Furthermore, Meta’s Hate Speech policy forbids dehumanizing imagery, such as content which equates “Black people and apes or ape-like creatures.” This content violates both elements as it insinuates that Jewish people control media institutions and equates “BLM” with a monkey. In their appeal to the Board, the user who reported the content stated that the content was antisemitic and racist towards Black people. Meta initially left the content on Facebook. When the Board brought this case to Meta’s attention, the company determined that the post violated its Hate Speech policy, and that its original decision to leave up the content was incorrect. The company then removed the content from Facebook. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case that is under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, to reduce errors and increase fairness for Facebook and Instagram users. Case Significance The case highlights gaps in Meta’s enforcement of its Hate Speech policy, which can lead to the spread of content which promotes harmful stereotypes and dehumanizing imagery. Enforcement errors such as these need to be corrected given Meta’s responsibility to mitigate the risk of harm associated with content which targets marginalized groups. The Board has previously examined under-enforcement of Meta’s Hate Speech policy where user content contained implicit but clear violations of the company’s Standards. The Board recommended that Meta, “clarify the Hate Speech Community Standard and the guidance provided to reviewers, explaining that even implicit references to protected groups are prohibited by the policy when the reference would reasonably be understood” (Knin Cartoon decision, recommendation no. 1). Meta partially implemented this recommendation.
The Board has also issued recommendations that aim at reducing the number of enforcement errors. The Board recommended that Meta, “implement an internal audit procedure to continuously analyse a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes” ( Breast Cancer Symptoms and Nudity decision , recommendation no. 5). Meta states that this recommendation is work Meta already does, without publishing information to demonstrate this. The Board reiterates that full implementation of the recommendations above will help to decrease enforcement errors under the Hate Speech policy, reducing the prevalence of content which promotes offensive stereotypes and dehumanizing imagery. Decision The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought this case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-j8ybq5er,Weapons Post Linked to Sudan’s Conflict,https://www.oversightboard.com/decision/fb-j8ybq5er/,"February 13, 2024",2024,,"TopicViolence, War and conflictCommunity StandardViolence and incitement","Policies and TopicsTopicViolence, War and conflictCommunity StandardViolence and incitement",Upheld,Sudan,"The Oversight Board has upheld Meta’s decision to remove a post containing a graphic of a gun cartridge, with a caption providing instructions on how to create and throw a Molotov cocktail, shared during Sudan’s armed conflict.",39954,6166,"Upheld February 13, 2024 The Oversight Board has upheld Meta’s decision to remove a post containing a graphic of a gun cartridge, with a caption providing instructions on how to create and throw a Molotov cocktail, shared during Sudan’s armed conflict. Standard Topic Violence, War and conflict Community Standard Violence and incitement Location Sudan Platform Facebook Weapons Post Linked to Sudan's Conflict Decision PDF Weapons Post Linked to Sudan's Conflict Public Comments Appendix The Oversight Board has upheld Meta’s decision to remove a post containing a graphic of a gun cartridge, accompanied by a caption providing instructions on how to create and throw a Molotov cocktail. The Board finds the post violated Facebook’s Violence and Incitement Community Standard, posing an imminent risk of harm that could exacerbate ongoing violence in Sudan. This case has raised broader concerns about Meta’s human rights responsibilities for content containing instructions for weapons shared during armed conflicts. To meet these responsibilities, Meta should ensure exceptions to its violence and incitement rules are clearer. Additionally, Meta should develop tools to correct its own mistakes when it has sent the wrong notification to users about which Community Standard their content violated. About the Case In June 2023, a Facebook user posted an illustration of a gun cartridge, with the components identified in Arabic. The post’s caption provides instructions on how to create a Molotov cocktail using the components and advises wearing a helmet when throwing the incendiary device. It concludes with a call for victory for the Sudanese people and the Sudanese Armed Forces (SAF). Two months before the content was posted, fighting broke out in Sudan between the SAF and the Rapid Support Forces (RSF), a paramilitary group designated as dangerous by Meta in August 2023. 
Sudan’s armed conflict is ongoing and has spread across the country, with both sides having used explosive weapons in areas densely populated by civilians. Meta’s automated systems detected the content, determining that it violated Facebook’s Violence and Incitement Community Standard. Meta removed the post, applying a standard strike to the user’s profile. The user immediately appealed. This led to one of Meta’s human reviewers finding that the post violated the Restricted Goods and Services policy. The user then appealed to the Board, after which Meta determined the content should have been removed but, as per its original decision, under the Violence and Incitement Community Standard. Key Findings The Board finds the post violated the Violence and Incitement policy in two ways. First, the combined effect of the image and caption violated the rule that prohibits “instructions on how to make or use weapons where there is language explicitly stating the goal to seriously injure or kill people.” Regardless of the intent of the person who created the post, the step-by-step guide on how to build a Molotov cocktail and the advice to “use a helmet” indicates the content is calling on people to act on the instructions. Second, resorting to violence in support of the SAF during the ongoing armed conflict does not relate to a non-violent purpose. The Violence and Incitement policy prohibits instructions on how to make weapons, unless there is “context that the content is for a non-violent purpose.” The rule that prohibits instructions on making and using weapons does include an exception for content when it is shared for “recreational self-defense, military training purposes, commercial video games or news coverage.” Stakeholders consulted by the Board as well as news reports have claimed that Meta allows such instructions in exercise of self-defense for some armed conflicts. Meta has denied this is true. The Board is not in a position to determine the truth of these competing claims. What is essential, however, is that Meta’s rules on such an important issue are clear, and enforced consistently and rigorously. Given the use of Meta’s platforms by combatants and civilians during conflicts to share information on the use of weapons, or violent content for self-defense, Meta should clarify what the “recreational self-defense” and “military training” exceptions mean. The Board disagrees with Meta that these terms in the public language of the Violence and Incitement Community Standard have “a plain meaning.” To improve clarity, Meta should clarify which actors can benefit from “recreational self-defense” and in which settings this exception applies. Additionally, the public language of the policy on instructions to make or use weapons or explosives fails to expressly state that self-defense contexts are not considered during armed conflicts. This case also highlights another unclear exception to the Violence and Incitement Community Standard, which allows threats directed at terrorists and other violent actors. This is insufficiently clear because Meta does not clarify whether this applies to all organizations and individuals it designates under its separate Dangerous Organizations and Individuals policy. This is relevant to this case since the RSF was designated for a relevant period in 2023. However, it is impossible for users to know whether their post could be removed or not on this basis since the list of designated organizations and individuals is not available publicly. 
The Board has already raised concerns about such lack of clarity in our Haitian Police Station Video decision. The Board is also concerned that Meta’s notification system does not allow the company to rectify its own mistakes when it does not correctly communicate which Community Standard a user has violated. Being able to correctly inform users of their violation is crucial, guaranteeing fairness. Incorrect notifications undermine the user’s ability to appeal and access remedy. In this case, the user was informed in error that their post was removed for hate speech, even though it had been taken down for violating the Violence and Incitement Community Standard. Therefore, the Board encourages Meta to explore technically feasible ways in which it can make corrections to user notifications. The Oversight Board’s Decision The Oversight Board has upheld Meta’s decision to remove the post. The Board recommends that Meta: * Case summaries provide an overview of cases and do not have precedential value. 1.Decision Summary The Oversight Board upholds Meta’s decision to remove a post containing a graphic of a gun cartridge, with notes in Arabic identifying its different components. The post was accompanied by a caption in Arabic providing instructions on how to empty a shotgun shell of its pellets, and use the components to create a Molotov cocktail. The post also advises people throwing such an incendiary device to use a helmet to avoid injury. The caption ends with the call, “Victory for the Sudanese people / Victory for the Sudanese Armed Forces / Step forward O, my country.” A hostile speech classifier detected the content and found it violated Facebook’s Violence and Incitement Community Standard. The Board finds that the post did violate the Violence and Incitement Community Standard, which prohibits providing instructions on how to make or use weapons where there is language explicitly stating the goal to seriously injure or kill people. The Board also concludes that the post violates another policy line under the Violence and Incitement Community Standard, which prohibits instructions on how to make or use explosives unless there is context that the content is for a non-violent purpose. The Board finds that the content poses an imminent risk of harm that could exacerbate ongoing violence in Sudan. The case raises broader concerns about instructions for weapons that may be shared during an armed conflict and Meta’s human-rights responsibilities in this context. Implementing those responsibilities requires Meta to ensure greater coherence of the rules by clearly defining exceptions to the policy lines on making or using weapons or explosives under the Violence and Incitement Community Standard. Moreover, Meta should develop tools to enable the company to correct mistakes when informing users about which Community Standard they violated. 2. Case Description and Background In June 2023, a Facebook user posted an illustration of a gun cartridge. The different components of the cartridge are identified in Arabic. The caption for the post, also in Arabic, provides instructions on how to empty a shotgun shell of its pellets and use the components to create a Molotov cocktail – an incendiary device, typically in a bottle, which is easy to make. 
The caption also advises using a helmet when throwing the device to protect the person who is throwing the incendiary, and concludes, “Victory for the Sudanese people,” “Victory for the Sudanese Armed Forces,” and “Step forward O, my country.” Linguistic experts the Board consulted said that these phrases did not, in isolation, call for civilians to engage in violence. The content had only a few views before Meta removed it without human review, seven minutes after it was posted. At the time the content was posted in June 2023, the Sudanese Armed Forces (SAF) and the paramilitary group, the Rapid Support Forces (RSF), had been engaged in an armed conflict since mid-April, which continues to the present day. The RSF was designated as a dangerous organization under Meta’s Dangerous Organizations and Individuals policy on August 11, 2023, months after the conflict escalated. A hostile speech classifier, an algorithm Meta uses to identify potential violations to the Hate Speech , Violence and Incitement and Bullying and Harassment Community Standards, detected the content and determined that it violated the Violence and Incitement Community Standard. Meta removed the content and applied a standard strike to the content creator’s profile, which prevented them from interacting with groups and from creating or joining any messenger rooms for three days. The user immediately appealed Meta’s decision. This led to a human reviewer finding that the post violated the Restricted Goods and Services policy . The user then appealed to the Board. After the Board brought the case to Meta’s attention, the company determined that its original decision to remove the content under the Violence and Incitement Community Standard was correct, and that the post did not violate the Restricted Goods and Services policy. The Board has considered the following context in reaching its decision on this case. In April 2023, fighting broke out in Sudan’s capital between the SAF and the RSF. The user who posted the content in this case appears to support the SAF. While fighting initially centered on Khartoum, the capital of Sudan, the conflict then spread across the country including to Darfur and Kordofan . Both groups have used explosive weapons, including aerial bombs, artillery and mortar projectiles, and rockets and missiles, in areas densely populated by civilians. According to the United Nations , as of January 2024, more than 7 million people have been displaced since mid-April and more than 1.2 million people have fled Sudan. Up to 9,000 people have reportedly been killed . Some of the attacks against civilians have been ethnically motivated . In October 2023, the UN Special Rapporteur on trafficking in persons expressed concern over the increased risk of recruitment and use of child soldiers by armed forces and groups. As fighting continues throughout the country, experts have also noted the growing involvement of other armed groups. According to experts the Board consulted about the conflict in Sudan, both the SAF and the RSF rely on social media to “disseminate information and propaganda” about their respective agendas. While internet penetration remains low in Sudan, news organizations and civil society groups reported that both the SAF and the RSF use social media in an attempt to control narratives surrounding the conflict. For instance, both parties have posted proclamations of victory in areas where fighting is ongoing, thereby putting returning civilians who relied on inaccurate information at risk. 3. 
Oversight Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions The most relevant previous decisions of the Oversight Board include: II. Meta’s Content Policies Meta’s Violence and Incitement Community Standard aims to “prevent potential offline violence that may be related to content on our platforms.” Meta states it removes “language that incites or facilitates serious violence” and “threats to public or personal safety.” Meta distinguishes between casual statements, allowed under the policy, and those that pose a “genuine risk of physical harm or direct threats to public safety.” The Violence and Incitement Community Standard prohibits “threats that could lead to death (or other forms of high-severity violence).” It also states that Meta allows threats “directed against certain violent actors, like terrorist groups.” Meta updated the policy on December 6, 2023 to say that Meta does not prohibit threats when shared to raise awareness, in line with the Board’s recommendation in the Russian Poem case . In exchanges with the Board, Meta clarified that “calls for violence, aspirational threats and conditional threats of high or mid severity violence are all allowed if the target is a designated DOI [Dangerous Organization or Individual] entity. Statements of intent, however, always violate the policy.” The Violence and Incitement Community Standard also has two rules related to instructions on making and using weapons. The first rule prohibits content providing “instructions on how to make or use weapons where there is language explicitly stating the goal to seriously injure or kill people” or “imagery that shows or simulates the end result.” Such content is allowed only when shared in a context of “recreational self-defense, training by a country’s military, commercial video games, or news coverage (posted by a Page or with a news logo).” The second rule prohibits content providing instructions on how to make or use explosives, “unless with context that the content is for a non-violent purpose.” Examples of a non-violent purpose include “commercial video games, clear scientific/educational purpose, fireworks or specifically for fishing.” The Board’s analysis was informed by Meta’s commitment to voice , which the company describes as “paramount,” and its value of safety. III. Meta’s Human Rights Responsibilities 16. The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. 
As per the UNGPs, the human rights responsibilities of businesses operating in a conflict setting are heightened (“Business, human rights and conflict-affected regions: towards heightened action,” A/75/212 ). The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User Submissions In their appeal to the Board, the author of the content stated that Meta misunderstood the post and that they were only sharing facts. 6. Meta’s Submissions Meta claims that the post violated two policy lines in the Violence and Incitement Community Standard. First, it violated the policy line that prohibits content providing instructions on how to make or use weapons if there is evidence of a goal to seriously injure or kill people. Meta said the post was violating because its “caption suggests that the intended use is for throwing the Molotov cocktail against a target...” Moreover, the instructions are shared “in the context of armed conflict and apparently in support of the SAF.” Meta found the post did not fall under any of the exceptions under the policy line, such as for “recreational self-defense,” “military training” or “news coverage.” When asked by the Board about the meaning of “recreational self-defense” and “military training,” Meta stated it does not have a definition of these terms “beyond the plain meaning of those words.” The content also violated the policy line prohibiting instructions on how to make or use explosives, without clear context that the content is for a non-violent purpose. Meta considers Molotov cocktails to be explosives under the meaning of the policy. Moreover, Meta assessed that the content “was shared with the intention of furthering the violent conflict.” According to the internal guidelines on how to apply the Violence and Incitement policy in place when the content was posted, Meta allows content that violates the policy “when shared in awareness-raising or condemning context.” Following updates to the public-facing Community Standards on December 6, 2023, the policy now reflects this guidance: “[Meta] do[es] not prohibit threats when shared in awareness-raising or condemning context.” However, Meta says the caption “makes clear that the intention is not to raise awareness but to enable violent action.” When asked whether Meta has exempted any countries or conflicts from the application of the Violence and Incitement policy lines prohibiting instructions on making or using weapons or explosives, Meta told the Board it has not applied any country-specific policy exceptions or allowances, “regardless of active conflicts.” Based on Meta’s updated Violence and Incitement Community Standard, Meta does not prohibit threats directed against “certain violent actors, like terrorist groups.” This means that some threats targeting designated dangerous organization or individual entities, such as the RSF, are allowed on Meta’s platforms. Meta explained, however, that this exception does not apply to the two policy lines under the Violence and Incitement Community Standard on instructions on how to make or use weapons or explosives. In other words, content explaining how to create or use weapons is prohibited even if it targets a designated dangerous organization or individual entity. Meta did not set up an Integrity Product Operations Center, which is used to respond to threats in real-time, to address the outbreak of violence in Sudan in April 2023. 
According to Meta, it was able to “handle the identified content risks through the current processes.” The company’s efforts to respond to the current conflict continue and build on work first described in the Board’s Sudan Graphic Video case. In response to the military coup in Sudan in October 2021, Meta created “a crisis response cross-functional team to monitor the situation and communicate emerging trends and risks,” which is ongoing. Additionally, Meta took the following steps, among others, to address potential content risks related to the 2023 Sudan conflict: removed pages and accounts representing the RSF, following Meta’s designation of the group as a dangerous organization; investigated potential fake accounts that could mislead public discourse surrounding the conflict; and designated Sudan as a Temporary High-Risk Location (for a description of the THRL designation and its relationship to the Violence and Incitement Community Standard, see Brazilian General’s Speech decision, Section 8.1). Meta informed the Board that it is working to establish longer-term crisis coordination “to provide dedicated operations oversight throughout the lifecycle of imminent and emerging crises,” following on from the Board’s recommendation in the Tigray Communication Affairs Bureau case. As of May 30, 2023, Sudan reached Meta’s highest internal crisis designation. Since then, Meta has been maintaining a heightened risk management level and are monitoring the situation for content risks as part of that work. The Crisis Policy Protocol is the framework Meta adopted for developing time-bound policy-specific responses to an emerging crisis. There are three crisis categories under the Crisis Policy Protocol – Category 1 being the least severe and Category 3 being the most severe. The Category 3 crisis designation in Sudan was a result of the escalating crisis meeting additional entry criteria, such as the existence of a “major internal conflict” and “military intervention.” The Board asked Meta sixteen questions in writing. Questions related to Meta’s hostile speech classifier; how Meta understands the concept of self-defense in relation to the Violence and Incitement Community Standard; measures taken in response to the conflict in Sudan; and the enforcement of the weapons-related policy lines of the Violence and Incitement Community Standard in armed conflicts. Meta answered all questions. 7. Public Comments The Oversight Board received 10 public comments relevant to this case. Three of the comments were submitted from the United States and Canada, two from Asia Pacific and Oceania, two from Europe, one from Latin American and Caribbean, one from Middle East and North Africa and one from Sub-Saharan Africa. This total includes public comments that were either duplicates, were submitted without consent to publish or were submitted with consent to publish but did not meet the Board’s conditions for publication. Public comments can be submitted to the Board with or without consent to publish, and with or without attribution. The submissions covered the following themes: conflict dynamics in Sudan; Meta’s human rights responsibilities in situations of armed conflict, particularly in the preservation of online content for human rights accountability; and the impact of Meta’s classifier design on the moderation of conflict-related content. To read public comments submitted for this case, please click here . 8. 
Oversight Board Analysis The Board selected this case to assess Meta’s policies on weapons-related content and the company’s enforcement practices in the context of armed conflicts. The case falls within the Board’s Crisis and Conflict Situations strategic priority. 8.1 Compliance With Meta’s Content Policies I. Content Rules The Board finds that the post violates Meta’s Violence and Incitement policy. The combined effect of the image and caption in the post meets the requirements of “language explicitly stating the goal” in the line that prohibits “instructions on how to make or use weapons where there is language explicitly stating the goal to seriously injure or kill people.” Meta considers Molotov cocktails as weapons prohibited under the Violence and Incitement policy. The post provides a step-by-step guide on how to build and use a Molotov cocktail. Intent to seriously injure or kill people can be inferred from this step-by-step guide as well as the advice to “use a helmet” to protect the person who throws the incendiary, which means the post is calling on people to act on the instructions. According to experts consulted by the Board, the calls for victory at the end of the caption clearly articulate support for one of the sides of the armed conflict. The content further violates the Violence and Incitement prohibition on “instructions on how to make or use explosives, unless with context that the content is for a non-violent purpose.” Resorting to violence in support of the SAF does not relate to a non-violent purpose; such purposes, as outlined in the policy, are limited to “commercial video games, clear scientific/educational purpose, fireworks or specifically for fishing.” The Board notes that, according to Meta, this prohibition applies whether or not Meta has designated the entity targeted by the content as a dangerous organization or individual under the Dangerous Organizations and Individuals Community Standard . Meta explained to the Board this is because “part of the harm of sharing these instructions is that they can be used by other people intending to harm other targets.” The Board finds that this rule was applied in accordance with Meta’s content policies when it came to removal of the content in this case. II. Enforcement Action Although the hostile speech classifier correctly identified the content as a violation of the Violence and Incitement Community Standard, the user was informed in error that their post was removed for hate speech. According to Meta, this was due to a bug in the company’s systems. Meta informed the Board that it is unable to send new messages to the same support inbox thread when it realizes a mistake was made. The Board is concerned that Meta’s user-notification system does not allow the company to rectify its own mistakes when it does not correctly communicate to the user which Community Standard they violated. This prevents users from learning about the actual reason their content was removed. As the Board previously highlighted in several cases (e.g., Armenians in Azerbaijan , Ayahuasca Brew , Nazi Quote ), Meta should transparently inform users about the content policies they violated. 8.2 Compliance With Meta’s Human Rights Responsibilities The Board finds that Meta’s decision to remove the post was consistent with the company’s human rights responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including political expression. 
This right includes the “freedom to seek, receive and impart information and ideas of all kinds.” These protections remain active during armed conflicts, and should continue to inform Meta’s human rights responsibilities, alongside the mutually reinforcing and complementary rules of international humanitarian law that apply during such conflicts ( General Comment 31 , Human Rights Committee, 2004, para. 11; Commentary to UNGPs, Principle 12 ; see also UN Special Rapporteur’s report on Disinformation and freedom of opinion and expression during armed conflicts, Report A/77/288 , paras. 33-35 (2022); and OHCHR report on International legal protection of human rights in armed conflict (2011) at page 59). The UN Special Rapporteur on freedom of expression has stated that “[d]uring armed conflict, people are at their most vulnerable and in the greatest need of accurate, trustworthy information to ensure their own safety and well-being. Yet, it is precisely in those situations that their freedom of opinion and expression, which includes ‘the freedom to seek, receive and impart information and ideas of all kinds,’ is most constrained by the circumstances of war and the actions of the parties to the conflict and other actors to manipulate and restrict information for political, military and strategic objectives,” (Report A/77/288, para. 1). The Board recognizes the importance of ensuring that people can freely share information about conflicts, especially when social media is the ultimate source of information, while simultaneously ensuring content that is likely to fuel further offline violence does not go viral. When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As in previous cases (e.g., Armenians in Azerbaijan , Armenian Prisoners of War Video ), the Board agrees with the UN Special Rapporteur on freedom of expression that, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” ( A/74/486 , para. 41). In doing so, the Board attempts to be sensitive to ways in which the human rights responsibilities of a private social media company may differ from a government implementing its human rights obligations. I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, both to those enforcing the rules and those impacted by them (General Comment No. 34, para. 25). Users should be able to predict the consequences of posting content on Facebook and Instagram. The UN Special Rapporteur on freedom of expression has highlighted the need for “clarity and specificity” in content-moderation policies ( A/HRC/38/35, para. 46). The Board finds that the general rule prohibiting instructions on making or using weapons or explosives under certain circumstances is sufficiently clear, meeting the requirements of legality. 
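Restated schematically, the two policy lines and their exceptions take roughly the shape sketched below. This is an illustrative restatement only, with hypothetical names and enum values; it is not Meta’s implementation, and the commented entries mark the exception terms that the public-facing policy leaves undefined.

# Illustrative sketch only: a hypothetical restatement of the two weapons-related policy lines
# as a decision function, to show where the undefined exception terms force a judgment call.
from enum import Enum, auto

class Context(Enum):
    RECREATIONAL_SELF_DEFENSE = auto()  # undefined in the public policy: which actors? which settings?
    STATE_MILITARY_TRAINING = auto()    # undefined: does it cover armies of de facto governments?
    COMMERCIAL_VIDEO_GAME = auto()
    NEWS_COVERAGE = auto()
    SCIENTIFIC_OR_EDUCATIONAL = auto()
    FIREWORKS_OR_FISHING = auto()
    NONE = auto()

WEAPON_EXCEPTIONS = {
    Context.RECREATIONAL_SELF_DEFENSE,
    Context.STATE_MILITARY_TRAINING,
    Context.COMMERCIAL_VIDEO_GAME,
    Context.NEWS_COVERAGE,
}
NON_VIOLENT_EXPLOSIVE_CONTEXTS = {
    Context.COMMERCIAL_VIDEO_GAME,
    Context.SCIENTIFIC_OR_EDUCATIONAL,
    Context.FIREWORKS_OR_FISHING,
}

def violates_weapons_lines(has_instructions: bool, explicit_goal_to_harm: bool,
                           is_explosive: bool, context: Context) -> bool:
    """True if a post would fall under either weapons-instruction policy line, as sketched here."""
    if not has_instructions:
        return False
    # Line 1: instructions plus explicit language about seriously injuring or killing people.
    if explicit_goal_to_harm and context not in WEAPON_EXCEPTIONS:
        return True
    # Line 2: instructions for explosives, unless the context signals a non-violent purpose.
    if is_explosive and context not in NON_VIOLENT_EXPLOSIVE_CONTEXTS:
        return True
    return False

# The post in this case: step-by-step Molotov cocktail instructions, shared in an armed
# conflict, with no exception context present.
print(violates_weapons_lines(True, True, True, Context.NONE))  # -> True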
The Board also notes, however, that Meta could further improve clarity around the policy’s exceptions by explaining concepts such as “recreational self-defense” and “training by a country’s military” in the public-facing language of the Violence and Incitement Community Standard. The Board disagrees with Meta’s claim that these terms have a “plain meaning.” With respect to the term “recreational self-defense,” the Board believes Meta should clarify the actors that can benefit from it, and in which settings the exceptions apply. Moreover, it is not expressly stated in the public-facing Violence and Incitement Community Standard that the term does not contemplate self-defense contexts in armed conflict settings. With respect to the term “training by a country’s military,” Meta does not clarify whether it is limited to militaries of recognized states nor how the company treats armies of de facto governments. Stakeholders consulted by the Board as well as public reporting have claimed that Meta allows instructions on making or using weapons in exercise of self-defense for certain armed conflicts. In response to the Board’s questions, Meta denied that these reports are true. The Board is not in a position to determine the truth of these conflicting claims. In any event, it is essential that Meta’s rules on as important an issue as this be enforced consistently and rigorously. Given the use of Meta’s platforms to exchange information during armed conflicts when both combatants and civilians may be sharing information on the use of weapons, or violent content invoking self-defense, Meta should clarify what the “recreational self-defense” and “military training” exceptions mean in the Violence and Incitement Community Standard. Additionally, the Board finds Meta’s policy exception to the Violence and Incitement Community Standard, which allows threats “directed against certain violent actors, like terrorist groups,” insufficiently clear, thus failing to meet the legality requirement. It does not clarify whether this policy line applies to all dangerous individuals and organizations designated under the Dangerous Organizations and Individuals Community Standard. Moreover, the list of designated organizations and individuals under the Dangerous Organizations and Individuals policy is not public. This exacerbates the lack of clarity to users on which posts will be removed or kept up depending on whether or not the entity referred to in their post is included in Meta’s hidden list of dangerous organizations. The Board repeats the concerns raised in the Haitian Police Station Video decision on this policy exception. II. Legitimate Aim Restrictions on freedom of expression (Article 19, ICCPR) must pursue a legitimate aim. The Violence and Incitement policy aims to “prevent potential offline harm” by removing content that poses “a genuine risk of physical harm or direct threats to public safety.” As previously concluded by the Board in the Alleged Crimes in Raya Kobo case , this policy serves the legitimate aim of protecting the rights of others, such as the right to life (Article 6, ICCPR). III. Necessity and Proportionality The principles of necessity and proportionality provide that restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected,” ( General Comment No. 34 , para. 34 ). 
The Board also considered the Rabat Plan of Action factors on what constitutes incitement to violence (Rabat Plan of Action, OHCHR, A/HRC/22/17/Add.4, 2013), while considering the differences between the legal obligations of states and the human rights responsibilities of businesses. Although the post in this case does not advocate hatred on the basis of nationality, race or religion, the Rabat Plan of Action nonetheless offers a useful framework for assessing whether or not the content incites others to violence. In this case, the Board finds that Meta’s removal of the content from Facebook complied with the requirements of necessity and proportionality. Using the Rabat Plan of Action’s six-part test to inform its analysis, the Board finds support for the removal of this post, as explained below. Regardless of the intent of the content creator when posting, the step-by-step guide to making and using a Molotov cocktail created a genuine risk of imminent harm in an already volatile security situation. The incendiary weapon referred to in the post is prohibited under the Convention on Certain Conventional Weapons for being both excessively injurious and indiscriminate as a means of attack. The impact of an explosion not only poses a high risk of wounding civilians, it can also inflict on combatants the “unnecessary suffering” or “superfluous injury” prohibited by customary international humanitarian law. Encouraging civilians with no military training to deploy and use incendiary weapons further increases these risks. The genuine risk of imminent harm exists despite the user being a private individual who is not influential, with a limited number of friends and followers. The Board notes that Meta’s hostile speech classifier was able to detect and remove the content within minutes of it being posted. The Board also notes that the content was posted in the context of an ongoing armed conflict. At the time it was posted, two months into the armed conflict, reporting and expert analysis showed widespread human rights abuses committed by both the SAF and RSF. According to the UN, human rights groups, experts consulted by the Board and public comment submissions, including from Genocide Watch (PC-19006, PC-19001), both parties to the armed conflict have engaged in various abuses of international humanitarian and human rights law, leading to millions of people being displaced, arbitrarily arrested, subjected to sexual violence or killed. The conflict is still ongoing and shows no signs of ending despite condemnation by the UN Security Council, civil society groups and human rights organizations. In this context, the Board finds that the content in this case incited violence, posing an imminent risk of civilians directly taking part in hostilities by using a particularly pernicious and outlawed weapon and further escalating the conflict. When enforcing the Violence and Incitement policy, Meta does not allow content like the post under review, even in a self-defense context. The Board finds this to be a sensible approach and urges Meta to enforce it consistently. Under the UNGPs, Meta’s human rights responsibilities include respecting “the standards of international humanitarian law in an armed conflict,” (Commentary to Principle 12, UNGPs). 
International humanitarian law provides standards for parties engaging in armed conflicts to maximize civilian protection (e.g., Additional Protocol II of the Geneva Conventions protecting civilians during armed conflict; Article 2, Protocol III of the Convention on Certain Conventional Weapons prohibiting the use of incendiary weapons). The Board believes that these standards can also be helpful for social media companies to achieve that aim when their platforms are used in armed conflict settings. In line with these standards, Meta should aim for a policy that results in the widest protection for civilians and civilian property in conflict settings. When applied to the Violence and Incitement policy, this means prohibiting credible threats regardless of the target. Access to Remedy The Board is concerned about Meta’s technical inability to correct mistakes in the notifications that tell users which rule their content violated. Correctly informing users of the violation is a crucial component of enforcing Meta’s Community Standards and of guaranteeing fairness and due process to users. When a notification identifies the wrong violation, the user’s ability to appeal and access remedy on Meta’s platforms is undermined. The Board encourages Meta to explore technically feasible ways in which it can make corrections to user notifications. 9. Oversight Board Decision The Oversight Board upholds Meta’s decision to take down the content. 10. Recommendations Content Policy 1. To better inform users of what content is prohibited on its platforms, Meta should amend its Violence and Incitement policy to include definitions of “recreational self-defense” and “military training” as exceptions to its rules prohibiting users from providing instructions on making or using weapons, and clarify that it does not allow any self-defense exception for instructions on how to make or use weapons in the context of an armed conflict. The Board will consider this implemented when the Violence and Incitement Community Standard is updated to reflect these changes. Enforcement 2. To make sure users are able to understand which policies their content was enforced against, Meta should develop tools to rectify mistakes in the messaging that notifies users of the Community Standard they violated. The Board will consider this implemented when the related review and notification systems are updated accordingly. *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by the majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of more than 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. 
Return to Case Decisions and Policy Advisory Opinions" fb-jr784res,Comment Targeting People with Down Syndrome,https://www.oversightboard.com/decision/fb-jr784res/,"April 23, 2025",2025,,"TopicDiscrimination, Marginalized communitiesCommunity StandardBullying and harassment, Hate speech","Bullying and harassment, Hate speech",Overturned,United States,A user appealed Meta’s decision to leave up a Facebook comment targeting individuals with Down syndrome and other disabilities.,7761,1195,"Overturned April 23, 2025 A user appealed Meta’s decision to leave up a Facebook comment targeting individuals with Down syndrome and other disabilities. Summary Topic Discrimination, Marginalized communities Community Standard Bullying and harassment, Hate speech Location United States Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to leave up a Facebook comment targeting individuals with Down syndrome and other disabilities. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the comment. About the Case In February, a Facebook user commented on a post containing a Netflix advertisement for the show “Love on the Spectrum.” The comment included statements in which the user said they see people on the spectrum as “a different species of human.” In the same comment, the user also mentions, by name, a specific individual they know and states, “this girl … was also the only fat kid in class.” The user adds that people like this individual are “difficult to interact with ... they are a different kind. A different form of human.” The user who appealed to the Board Meta's original decision to leave up this comment characterized it as a “literal tirade against individuals with Down syndrome.” The appealing user noted that “this should not require an entire essay to explain why this comment should have been flagged by the Facebook system” and urged Meta to “do better.” According to Meta’s Hateful Conduct policy, the company removes dehumanizing speech targeting people on the basis of protected characteristics, such as a disability. This includes comparisons with or generalizations about: “Subhumanity (including but not limited to: savages, devils, monsters).” In the comment, the user generalizes, labeling people with Down syndrome as “a different species of human.” Additionally, Meta’s Bullying and Harassment policy prohibits content that targets a specific person with “statements of inferiority about their physical appearance .” The user’s description of a specific person as “the only fat kid” qualifies as a statement of inferiority about physical appearance. This is a violation under Tier 1 of the policy, which provides “universal protections for everyone.” The targeted person does not need to report the content themselves for it to constitute a violation and be removed. 
After the Board brought this case to Meta’s attention, the company determined that the content violated both the Hateful Conduct and Bullying and Harassment policies and that its original decision to leave up the comment was incorrect. Meta considered that the user violated the Hateful Conduct policy because they labeled people with Down syndrome as “a different species of human,” and the Bullying and Harassment policy because the user described a specific individual as “the only fat kid.” The company then removed the content from Facebook. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case is a particularly blatant example of dehumanizing speech about people with disabilities. It is not subtle or coded, stating that Down syndrome makes a person “a different species of human.” That it was not removed suggests a serious issue with Meta’s enforcement systems. The case thus highlights Meta's failure to effectively enforce its policies against hateful conduct. Recent reports indicate that online harassment of people with disabilities, such as those with Down syndrome, continues to increase significantly. According to the report on Countering Cyberbullying Against Persons with Disabilities from the Office of the United Nations High Commissioner for Human Rights (OHCHR), individuals with disabilities are “significantly more likely to experience cyberbullying” and “may even withdraw from digital spaces altogether as a result of online abuse.” The theme of the World Down Syndrome Day 2024 campaign was, “Calls for people around the world to end the stereotypes. #EndTheStereotypes.’’ The campaign stressed that, for people with Down syndrome and intellectual disabilities, stereotypes can prevent them from being treated with respect. The Board has issued recommendations aimed at improving Meta’s policy enforcement to reduce errors. The Board has urged the company to continuously improve its ability to detect content that violates its Hate Speech (now Hateful Conduct) Community Standard. For instance, the Board has recommended that Meta should “share [with the public] the results of the internal audits it conducts to assess the accuracy of human review and performance of automated systems in the enforcement of its Hate Speech [now Hateful Conduct] policy,’’ ( Criminal Allegations Based on Nationality , recommendation no. 2). In its initial response to the Board, Meta reported that the company will implement this recommendation in part. Meta stated that, while the company “will continue to share data on the amount of hate speech content addressed by [its] detection and enforcement mechanisms in the Community Standards Enforcement Report (CSER) ,” data on the accuracy of its enforcement on a global scale will be confidentially shared with the Board. This recommendation was issued in September 2024. The implementation is in progress, with data yet to be shared with the Board. 
The Board is concerned that Meta has not publicly shared what, if any, human rights due diligence it performed prior to the policy and enforcement changes announced on January 7, 2025, as highlighted by the Board in the Criticism of EU Migration Policies and Immigrants, Posts Displaying South Africa’s Apartheid-Era Flag, Gender Identity Debate Videos and Posts Supporting UK Riots decisions. A less proactive enforcement approach may result in a higher prevalence of content targeting members of vulnerable groups, such as the content under review in this decision. In those decisions, the Board emphasized that “[i]n relation to the enforcement changes, due diligence should be mindful of the possibilities of both overenforcement (Call for Women’s Protest in Cuba, Reclaiming Arabic Words) as well as underenforcement (Holocaust Denial, Homophobic Violence in West Africa, Post in Polish Targeting Trans People).” The Board also highlighted in those decisions the importance of Meta ensuring that “adverse impacts of these changes on human rights are identified, mitigated and prevented, and publicly reported.” Decision The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-jrq1xp2m,Knin cartoon,https://www.oversightboard.com/decision/fb-jrq1xp2m/,"June 17, 2022",2022,,"TopicDiscrimination, Freedom of expression, Race and ethnicityCommunity StandardHate speech","Policies and TopicsTopicDiscrimination, Freedom of expression, Race and ethnicityCommunity StandardHate speech",Overturned,Croatia,The Oversight Board has overturned Meta’s original decision to leave a post on Facebook which depicted ethnic Serbs as rats,43937,6910,"Overturned June 17, 2022 The Oversight Board has overturned Meta’s original decision to leave a post on Facebook which depicted ethnic Serbs as rats Standard Topic Discrimination, Freedom of expression, Race and ethnicity Community Standard Hate speech Location Croatia Platform Facebook This decision is also available in Serbian and Croatian. The Oversight Board has overturned Meta’s original decision to leave a post on Facebook which depicted ethnic Serbs as rats. While Meta eventually removed the post for violating its Hate Speech policy, about 40 moderators had previously decided that the content did not violate this policy. This suggests that moderators consistently interpreted the Hate Speech policy as requiring them to identify an explicit, rather than implicit, comparison between ethnic Serbs and rats before finding a violation. About the case In December 2021, a public Facebook page posted an edited version of Disney’s cartoon “The Pied Piper,” with a caption in Croatian which Meta translated as “The Player from Čavoglave and the rats from Knin.” The video portrays a city overrun by rats. While the entrance to the city in the original cartoon was labelled “Hamelin,” the city in the edited video is labelled as the Croatian city of “Knin.” The narrator describes how the rats decided they wanted to live in a “pure rat country,” so they started harassing and persecuting the people living in the city. 
The narrator continues that, when the rats took over the city, a piper from the Croatian village of Čavoglave appeared. After playing a melody on his “magic flute,” the rats start to sing “their favorite song” and follow the piper out of the city. The song’s lyrics commemorate Momčilo Dujić, a Serbian Orthodox priest who was a leader of Serbian resistance forces during World War II. The piper herds the rats into a tractor, which then disappears. The narrator concludes that the rats “disappeared forever from these lands” and “everyone lived happily ever after.” The content in this case was viewed over 380,000 times. While users reported the content to Meta 397 times, the company did not remove the content. After the case was appealed to the Board, Meta conducted an additional human review, finding, again, that the content did not violate its policies. In January 2022, when the Board identified the case for full review, Meta decided that, while the post did not violate the letter of its Hate Speech policy, it did violate the spirit of the policy, and removed the post from Facebook. Later, when drafting an explanation of its decision for the Board, Meta changed its mind again, concluding that the post violated the letter of the Hate Speech policy, and all previous reviews were in error. While Meta informed the 397 users who reported the post of its initial decision that the content did not violate its policies, the company did not tell these users that it later reversed this decision. Key findings The Board finds that the content in this case violates Facebook’s Hate Speech and Violence and Incitement Community Standards. Meta’s Hate Speech policy prohibits attacks against people based on protected characteristics, including ethnicity. The content in this case, which compares ethnic Serbs to rats and celebrates past acts of discriminatory treatment, is dehumanizing and hateful. While the post does not mention ethnic Serbs by name, historical references in the content make clear that the rats being removed from the city represent this group. Replacing the name “Hamelin” with the Croatian city of “Knin,” the identification of the piper with the Croatian village of Čavoglave (a reference to the anti-Serb song “Bojna Čavoglave” by the band ‘Thompson’ whose lead singer is from Čavoglave) and the image of rats fleeing on tractors are all references to Croatian military’s “Operation Storm.” This 1995 operation reportedly resulted in the displacement, execution, and forcible disappearance of ethnic Serb civilians. The comments on the post confirm that this connection was clear to people who viewed the content. The Board is concerned that about 40 Croatian-speaking moderators deemed the content not to violate Facebook’s Hate Speech Community Standard. This suggests that reviewers consistently interpreted the policy as requiring them to find an explicit comparison between ethnic Serbs and rats before finding a violation. The Board also finds this content to violate Facebook’s Violence and Incitement Community Standard. The Board disagrees with Meta’s assessment that the content constitutes a call for expulsion without violence. By referring to the events of “Operation Storm,” the post aims to remind people of past conflict and contains a violent threat. The cartoon celebrates the violent removal of Knin’s ethnic Serb population and may contribute to a climate where people feel justified in attacking this group. 
A serious question raised by this case is why Meta concluded that the content did not violate its policies, despite it being reviewed so many times. The fact that the content was not sent to Meta’s specialized teams for assessment before it reached the Board shows that the company’s processes for escalating content are not sufficiently clear and effective. As such, the Board urges Meta to provide more information on how it escalates content. The Oversight Board’s decision The Oversight Board overturns Meta’s original decision to leave up the content. As a policy advisory opinion, the Oversight Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decision to keep the content on Facebook. Following over 390 user reports to remove this content and Meta’s additional review of the content when the Board selected the case, Meta found this content to be non-violating. However, when developing the explanation of its decision to the Board, Meta reversed its position and declared that this was an “enforcement error,” removing the content for violating the Hate Speech policy . The Board finds that the content violates Meta’s Hate Speech and Violence and Incitement Community Standards. It finds that the Hate Speech policy on comparing people to animals applies to content that targets groups through implicit references to protected characteristics. In this case, the content compared Serbs to rats. The Board also finds that removing the content is consistent with Meta’s values and human rights responsibilities. 2. Case description and background In early December 2021, a public Facebook page describing itself as a news portal for Croatia posted a video with a caption in Croatian. Meta translated the caption as “The Player from Čavoglave and the rats from Knin.” The video was an edited version of Disney’s cartoon “The Pied Piper.” It was two minutes and 10 seconds long, with a voiceover in Croatian which was overlaid with the word “pretjerivač,” referring to a Croatian online platform of the same name. The video portrayed a city overrun by rats. While the entrance to the city in the original Disney cartoon was labelled “Hamelin,” the city in the edited video was labelled as the Croatian city of “Knin.” At the start of the video, a narrator described how rats and humans lived in the royal city of Knin for many years. The narrator continues that the rats decided that they wanted to live in a “pure rat country,” so they started harassing and persecuting people living in the city. The narrator explains that when rats took over the city, a piper from the Croatian village of Čavoglave appeared. Initially, the rats did not take the piper seriously and continued with “the great rat aggression.” However, after the piper started to play a melody with his “magic flute,” the rats, captivated by the melody, started to sing “their favourite song” and followed the piper out of the city. Meta translated the lyrics of the song sung by the rats as: “What is that thing shining on Dinara, Dujić's cockade on his head [...] Freedom will rise from Dinara, it will be brought by Momčilo the warlord.” The video then portrayed the city's people closing the gate behind the piper and the rats. The video ended with the piper herding the rats into a tractor, which then disappeared. 
The narrator concluded that once the piper lured all the rats into the “magical tractor,” the rats “disappeared forever from these lands” and “everyone lived happily ever after.” The following factual historical background is relevant to the Board’s decision. Croatia declared its independence from the Socialist Federal Republic of Yugoslavia on June 25, 1991. The remaining state of Yugoslavia (later called Federal Republic of Yugoslavia), which became predominantly of Serb ethnicity but contained many ethnic minorities, including Croats, used its armed forces in an attempt to prevent secession. The ensuing war, which lasted until 1995, resulted in extreme brutality on both sides, including forcible displacement of more than 200,000 ethnic Serbs from Croatia ( Human Rights Watch Report, Croatia, August 1996) . The Serb ethnic minority in Croatia, with the support of the Yugoslav National Army, opposed Croatian independence and (among other actions) established a state-like entity known as the Republic of Serbian Krajina (RSK). Knin became the capital of the RSK. During this period, many Croats were driven out of Knin. In 1995, Croatian forces reoccupied Knin in a military operation called “Operation Storm.” This was the last major battle of the war. Because some Serbs fled on tractors, references to tractors can be used to humiliate and threaten Serbs. Čavoglave is a village in Croatia near Knin, known as the birthplace of the lead vocalist and songwriter of the Thompson band. This Croatian band became known during the Croatian War of Independence for their anti-Serb song “Bojna Čavoglave,” which remains available online. The song, which was added to the video and has violent imagery, celebrates the retaking of Knin during Operation Storm. The piper who leads the rats out of Knin in the cartoon is identified as the “piper from Čavoglave.” The melody and lyrics that the cartoon rats sing also have specific meanings. The lyrics are from a song called “What is that thing that shines above Dinara,” which commemorates the Serbian past and Momčilo Dujić, a Serbian Orthodox priest who was a leader of Serbian resistance forces during World War II. The page that shared the content has over 50,000 followers. The content was viewed over 380,000 times, shared over 540 times, received over 2,400 reactions, and had over 1,200 comments. The majority of the users who reacted to, commented on, or shared the content have accounts located in Croatia. Among the comments in Croatian were statements translated by Meta as: “Hate is a disease,” and “Are you thinking how much damage you are doing with this and similar stupidities to your Croatian people who live in Serbia?” Users reported the content 397 times, but Meta did not remove the content. Of those who reported the post, 362 users reported it for hate speech. Several users appealed the leave-up decision to the Board. This decision is based on the appeal filed by one of these users, whose account appears to be located in Serbia. The user’s report was automatically rejected by an automatic system. This system resolves reports in cases that have already been examined and considered non-violating by Meta’s human reviewers a certain number of times, so that the same content is not re-reviewed. The content in this case was assessed as non-violating by several human reviewers before the automated decision was triggered. 
In other words, although the user report that generated this appeal was reviewed by an automatic system, previous reports of the same content had been reviewed by human reviewers, who decided that the content was not violating. After the user who reported the content appealed Meta’s decision to take no action to the Board, an additional human review was conducted on the appeal level and again found that the content did not violate Meta policies. Meta further explained that in total, about 40 “human reviewer decisions (…) assessed the content as non-violating"" and that “no human reviewer escalated the content.” Most of these human reviews took place on the appeal level. Meta added that all reviewers that reviewed the content are Croatian speakers. When the Oversight Board included this case in its shortlist sent to Meta to confirm legal eligibility for full review, Meta did not change its assessment of the content, as it sometimes does at that stage. In late January 2022, when the Board designated the case for full review, Meta’s Content Policy team took another look. At that point, Meta determined that the Knin cartoon post did not violate the letter of the Hate Speech policy but violated the spirit of that policy and decided to remove it from Facebook. Meta explained that a “‘spirit of the policy’ decision is made when the policy rationale section of one the Community Standards makes clear that the policy is meant to address a given scenario that the language of the policy itself does not address directly. In those circumstances, we may nonetheless remove the content through a ‘spirit of the policy’ decision.” Later, when drafting its rationale for the Board, Meta changed its mind again, this time concluding that the post violated the letter of the Hate Speech policy, and that all previous reviews were in error. According to Meta, the 397 users who reported the content were only informed about Meta’s initial determinations that the content did not violate Meta’s policies. They were not notified once Meta changed its decision and removed the content. Meta explained that due to “technical and resource limitations” it did not notify users when reported content is initially evaluated as non-violating and left up, and only later evaluated as violating and removed. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). 4. Sources of authority The Oversight Board considered the following sources of authority: I. Oversight Board decisions: The most relevant prior Oversight Board decisions include: II. 
Meta’s content policies: The policy rationale for Facebook’s Hate Speech Community Standard states that hate speech is not allowed on the platform “because it creates an environment of intimidation and exclusion and, in some cases, may promote real-world violence.” The Community Standard defines hate speech as a direct attack against people on the basis of protected characteristics, including race, ethnicity, and/or national origin. Meta prohibits content targeting a person or group of people based on protected characteristic(s) with ""dehumanizing speech or imagery in the form of comparisons, generalizations or unqualified behavioral statements (in written or visual form) to or about: [a]nimals that are culturally perceived as intellectually or physically inferior.” Meta also prohibits “[e]xclusion in the form of calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting, defined as [...] Explicit exclusion, which means things such as expelling certain groups or saying they are not allowed.” The policy rationale for Facebook's Violence and Incitement Community Standard states that Meta ""aim[s] to prevent potential offline harm that may be related to Facebook” and that it restricts expression “when [it] believe[s] there is a genuine risk of physical harm or direct threats to public safety.” Specifically, Meta prohibits “coded statements where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit,” including where the content contains “references to historical [...] incidents of violence.” III. Meta’s values: Meta’s values are outlined in the introduction to Facebook’s Community Standards. The value of “Voice” is described as “paramount”: Meta limits “Voice” in service of four other values and two are relevant here: IV. International human rights standards The UN Guiding Principles on Business and Human Rights ( UNGPs ), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for businesses’ human rights responsibilities. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs . The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards: 5. User submissions The user who reported the content, and appealed to the Board in Croatian, states “[t]he Pied Piper symbolises the Croatian Army, which in 1995 conducted an expulsion of Croatia’s Serbs, portrayed here as rats.” According to this user, Meta did not assess the video correctly. They state that the content represents ethnic hate speech and that it “fosters ethnic and religious hatred in the Balkans.” They also state that “this and many other Croatian portals have been stoking up ethnic intolerance between two peoples who have barely healed wounds of the war the video refers to.” When notified the Board had selected this case, the user who posted the content was invited to provide a statement. An administrator responded that they were a part of the page only as a business associate. 6. Meta’s submissions In the rationale Meta provided to the Board, Meta described its review process for this decision, but focused on explaining why its eventual removal of the content under the Hate Speech policy was justified. After repeated reports, multiple human reviewers found the content non-violating. 
Only after the Oversight Board selected the case did the company change its mind. Then, Meta determined that the content did not violate the letter of the hate speech policy, but that it made a “spirit of the policy” decision to remove the content. At this point, the Board informed Meta that it had selected the content for full review. Meta then changed its mind again, this time concluding that the content violated the letter of the policy. Specifically, it stated that it violated the policy line which prohibits content that targets members of a protected group and contains “[d]ehumanizing speech or imagery in the form of comparisons, generalizations, or unqualified behavioral statements (in written or visual form) to or about…[a]nimals that are culturally perceived as intellectually or physically inferior.” In this revised determination, Meta stated that the content was violating as it contained a direct attack against Serbs in Knin by comparing them to rats. Meta explained that its earlier determination that the content only violated the “spirit” of the Hate Speech policy was based on the assumption that the language of the policy did not prohibit attacks against groups on the basis of a protected characteristic identified implicitly. After additional review of this reasoning, Meta “concluded that it is more accurate to say that the policy language also prohibits attacks that implicitly identify” a protected characteristic. Meta stated that its eventual removal was consistent with its values of “Dignity” and “Safety,” when balanced against the value of “Voice.” According to Meta, dehumanizing comparisons of people to animals that are culturally perceived as inferior may contribute to adverse and prejudicial treatment “in social integration, public policy, and other societally-impactful processes at institutional or cultural levels through implicit or explicit discrimination or explicit violence.” Meta added that given the history of ethnic tensions and continuing discrimination against ethnic Serbs in Croatia, the video may contribute to a risk of real-world harm. In this regard, Meta referred to the Board’s “Zwarte Piet” case decision. Meta also stated that the removal was consistent with international human rights standards. According to Meta, its policy was “easily accessible” on Meta’s Transparency Center website. Additionally, the decision to remove the content was legitimate to protect the rights of others from discrimination. Finally, Meta argued that its decision to remove the content was necessary and proportionate because the content “does not allow users to freely connect with others without feeling as if they are being attacked on the basis of who they are” and because of “no less intrusive means available for limiting this content other than removal.” The Board also asked Meta whether this content violated the Violence and Incitement Community Standard. Meta responded that it did not because “the content did not contain threats or statements of intent to commit violence” against ethnic Serbs and “exclusion or expulsion without violence does not constitute a violent threat.” According to Meta, for the content to be removed under this policy “a more overt connection tying the rats in the video to the violent and forcible displacement” of Serbs would be necessary. 7. Public comments The Oversight Board received two public comments related to this case. One of the comments was submitted from Asia Pacific and Oceania and one from Europe. 
The submissions covered the following themes: whether the content should stay on the platform, whether the comparison of Serbs to rats violates Meta’s Hate Speech policy, and suggestions on how to enforce content rules on Facebook more effectively. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board looked at the question of whether this content should be restored through three lenses: Meta’s content policies, the company’s values, and its human rights responsibilities. 8.1 Compliance with Meta’s content policies The Board finds that the content in this case violates the Hate Speech Community Standard. It also violates the Violence and Incitement Standard. Hate Speech Meta’s Hate Speech policy prohibits attacks against people based on protected characteristics, including ethnicity. Here, the attacked group are ethnic Serbs living in Croatia, specifically in Knin, targeted on the basis of their ethnicity. While the caption and video do not mention ethnic Serbs by name, the content of the video in its historic context, the replacement of the name “Hamelin” with “Knin,” the lyrics used in the video, the identification of the piper with Čavoglave and therefore with the song by Thompson about Operation Storm, and the use of the tractor image are unmistakable references to Serb residents of Knin. Serbs are depicted as rats who must be removed from the city. The comments on the post and the many user reports confirm that this connection was abundantly clear to people who viewed the content. The content contains two “attacks” within the definition of that term in the Hate Speech policy. First, the Hate Speech policy prohibits comparisons to “[a]nimals that are culturally perceived as intellectually or physically inferior.” Meta’s Internal Implementation Standards, which are guidelines provided to content reviewers, specify that comparisons to “vermin” are prohibited under this policy. The video contains a visual comparison of Serbs to rats. This constitutes a dehumanizing comparison in violation of the Hate Speech policy and the Internal Implementation Standards. The Board finds that implied comparisons of the kind in this content are prohibited by Meta's hate speech policy. Meta explained that previous decisions not to remove the content were based on the assumption that the letter of the policy did not apply to implicit references to protected characteristics. The Board disagrees with this assumption. The letter of the policy prohibits attacks based on protected characteristics no matter whether references to those characteristics are explicit or implicit. The Hate Speech Standard states that comparisons can take a written or visual form, such as video, and the language of the Standard does not require that references to targeted groups be explicit. While this reading of the policy is in line with both its text and rationale, the policy does not clearly formulate that implicit references are covered by the policy too. Second, the content contains support for expelling Serbs from Knin. 
The rationale of the Hate Speech Standard defines attacks as “violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.” According to the policy line of the Hate Speech Community Standard applied in this case, explicit exclusion means supporting “things such as expelling certain groups or saying they are not allowed.” The video in this case celebrates a historical incident where ethnic Serbs were forcibly expelled from Knin and the content states the townspeople were much better off after the rats were gone. This video contains support for ethnic cleansing in violation of the Hate Speech Standard. Violence and Incitement The Violence and Incitement policy prohibits “content that threatens others by referring to known historical incidents of violence.” The caption and the video contain references to “Operation Storm,” the 1995 military operation that reportedly resulted in displacement, execution, and disappearance of ethnic Serb civilians. In the video, the city is named Knin and the rats flee on tractors, both references to Operation Storm. Comments to the post make clear these references are apparent to ethnic Serbs and Croatians. The video may contribute to a climate where people feel justified in attacking ethnic Serbs. The post is designed to remind people of past conflict and to rekindle ethnic strife, with the goal of ridding the Knin area of the small remaining Serbian ethnic minority (on historical revisionism and radical nationalism in Croatia see Council of Europe Advisory Committee on the Framework Convention for the Protection of National Minorities’ 2021 Fifth Opinion on Croatia , on online hate speech against ethnic Serbs see the 2018 European Commission against Racism and Intolerance report , para 30). When a threat is “veiled,” according to the Facebook policy, it requires “additional context to enforce.” That context is present in this post. The Board disagrees with Meta’s assessment that the content did not contain threats or statements of intent to commit violence, and that calls for exclusion or expulsion without specifying means of violence may not constitute a violent threat. The forced expulsion of people is an act of violence. The use of the Pied Piper story is not advocacy of peaceful removal but a clear reference to known historical incidents of violence, in particular with the imagery of the tractor. As evidenced by the users who reported this post and the public comments, in the eyes of observers, rats in the cartoon represent the ethnic Serb population of the Knin area, including those who remained there. The cartoon clearly celebrates their removal. In the context of the Pied Piper story, the rats are induced to leave Knin by a magic flute rather than compelled by force, but the tractor reference refers to the actual forcible removal which is widely known about in the country. The tractor is a metaphor, but threats can be conveyed by metaphor no less effectively than by direct statements. Meta’s review process The Board is particularly interested in why the company concluded this content was not violating so many times. It would have been helpful if Meta had focused on this at the outset, instead of focusing on why its revised decision to remove the post was correct. 
If the company wishes to reduce the level of violating content on its platform, it needs to treat the Board’s selection of enforcement error cases as an opportunity to explore the reasons for its mistakes. The Board notes the complexity of assessing cases such as this one and the difficulty of applying Facebook’s Community Standards while accounting for context, especially considering the volume of content that human reviewers assess each day. Because of these challenges, the Board believes that it is important for Meta to improve its instructions to reviewers and its pathways and processes for escalation. “Escalation” means that human reviewers send a case to Meta’s specialized teams, which then assess the content. According to the rationale provided by Meta, to avoid subjectivity and achieve consistency in enforcement, human reviewers are instructed to apply the letter of the policy and not to evaluate intent. While objectivity and consistency are legitimate goals, the Board is concerned that the instructions provided to reviewers appear to have resulted in about 40 human reviewers erroneously classifying the content as non-violating, and no reviewer reaching the decision Meta ultimately concluded was correct: removal. The ability to escalate content is supposed to lead to better outcomes in difficult cases. Review at the escalation level can assess intent and is better equipped to account for context. This content was reported 397 times, had a wide reach, raised policy questions, required context to assess and involved content from a Croatian online platform which, according to experts consulted by the Board, was previously the subject of public and parliamentary discussion on freedom of speech in the context of satire. Yet, no reviewer escalated the content. Meta told the Board it encourages reviewers to escalate “trending content,” and to escalate when in doubt. Meta defined trending content as “anything that is repetitive in nature (…) combined with the type of action associated with the content (i.e. potential harm or community risk (…)).” The fact that the content was not escalated prior to Board selection indicates that escalation pathways are not sufficiently clear and effective. The failure to escalate was a systemic breakdown. One factor that may have prevented escalation in this case is that Meta does not provide at-scale reviewers with clear thresholds for when content is “trending.” Another factor that may have contributed to reviewers’ failure to identify the content as “trending” – and thus to escalate it – was the automated review system Meta used in this case. Meta explained that it uses automation to respond to reports once there have been a certain number of non-violating decisions over a given time period, in order to avoid re-review. The Board is concerned about Meta’s escalation pathways and notes that Meta should provide more information about them. Meta should study whether additional pathways to escalate content are necessary and whether the automated system used in this case prevents viral and frequently reported content from being escalated. The case also exposed flaws in Meta’s reporting and appeal process. The Board is concerned that Meta does not notify users when the company changes its decision in a case. A user who reported content that was initially evaluated as non-violating and left up, and then later evaluated as violating and removed, should be informed of that change. 
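The auto-resolution dynamic described above can be illustrated with a minimal sketch. The following Python snippet is purely hypothetical: Meta has not published its thresholds or routing logic, so the names, threshold values and rules here are assumptions chosen only to show how closing new reports after a fixed number of non-violating human decisions can also keep heavily reported content from ever reaching a reviewer who could escalate it.

```python
# Hypothetical illustration only - not Meta's actual system. Threshold values,
# names and routing rules are assumptions made for this sketch.
from dataclasses import dataclass

AUTO_RESOLVE_AFTER = 5        # assumed number of prior non-violating human reviews
TRENDING_REPORT_COUNT = 100   # assumed proxy for "trending" content

@dataclass
class ReviewState:
    non_violating_reviews: int = 0  # human decisions finding no violation
    total_reports: int = 0
    escalated: bool = False

def handle_report(state: ReviewState) -> str:
    """Route one incoming user report under the assumed rules."""
    state.total_reports += 1

    # Once enough human reviewers have found no violation, later reports are
    # closed automatically and never reach a reviewer at all.
    if state.non_violating_reviews >= AUTO_RESOLVE_AFTER:
        return "auto_resolved"

    # Otherwise a human reviews it; in this sketch the reviewer applies the
    # letter of the policy and again finds no violation.
    state.non_violating_reviews += 1

    # Escalation is only considered while reports still reach a human, so the
    # report count never gets high enough to trigger the "trending" check.
    if state.total_reports >= TRENDING_REPORT_COUNT:
        state.escalated = True
        return "escalated"
    return "reviewed_non_violating"

if __name__ == "__main__":
    state = ReviewState()
    outcomes = [handle_report(state) for _ in range(397)]  # 397 reports, as in this case
    print(outcomes.count("auto_resolved"), state.escalated)  # 392 False
```

Under these illustrative assumptions, 392 of the 397 reports are closed without any fresh review and the content is never escalated, which mirrors the concern the Board raises about viral, frequently reported content.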
8.2 Compliance with Meta’s values The Board finds that removing the content is consistent with Meta’s values of “Voice,” ”Dignity,” and “Safety.” The Board recognizes that “Voice” is Meta’s paramount value, but the company allows for expression to be limited to prevent abuse and other forms of online and offline harm. Those targeted by dehumanizing and negative stereotypes may also see their “Voice” affected, as their use may have a silencing impact on those targeted and inhibit their participation on Facebook and Instagram. By allowing such posts to be shared, Meta may contribute to a discriminatory environment. The Board considers the values of “Dignity” and “Safety” to be of superseding importance in this case. In this regard, the Board noted the continuing increase in cases of physical violence against ethnic Serbs in Croatia ( 2021 CoE Fifth Opinion on Croatia , para. 116). This justified displacing the user’s “Voice” to protect the “Voice,” “Dignity,” and “Safety” of others. 8.3 Compliance with Meta’s human rights responsibilities The Board concludes that removing the post from the platform is consistent with Meta's human rights responsibilities as a business. Meta has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights ( UNGPs ) . Its Corporate Human Rights Policy states that this includes the International Covenant on Civil and Political Rights ( ICCPR ). Freedom of expression (Article 19 ICCPR) The scope of the right to freedom of expression is broad. Article 19, para. 2, of the ICCPR gives heightened protection to expression on political issues and discussion of historical claims (General Comment No. 34, paras. 20 and 49). ICCPR Article 19 requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). The UN Special Rapporteur on freedom of expression has encouraged social media companies to be guided by these principles when moderating online expression. I. Legality (clarity and accessibility of the rules) The principle of legality requires rules used by states to limit expression to be clear and accessible ( General Comment 34 , para. 25). The legality standard also requires that rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (Ibid.). Individuals must have enough information to determine if and how their expression may be limited, so that they can adjust their behavior accordingly. Applied to Meta’s content rules for Facebook, users should be able to understand what is allowed and what is prohibited, and reviewers should have clear guidance on how to apply these standards. The Board finds that the Hate Speech Community Standard prohibits implicit targeting of groups on the basis of protected characteristics. This is the case for both dehumanizing comparisons to animals and for statements advocating or supporting exclusion. The errors that occurred in this case show that the language of the policy and the guidance provided to reviewers are not sufficiently clear. In the case, about 40 human reviewers decided the content did not violate the Hate Speech Community Standard. 
Prior to the final determination by Meta, no human reviewer found the content to be violating. This indicates reviewers consistently interpreted the policy as requiring them to find an explicit comparison between ethnic Serbs and rats before finding a violation. The company first informed the Board that the spirit of the policy prohibited implied comparisons to animals, and later that the letter of the policy covered implied comparisons. The confusion throughout this process evidences a need for clearer policy and implementation guidance. II. Legitimate aim The Board has previously recognized that the Hate Speech Community Standard and the Violence and Incitement Standard pursue the legitimate aim of protecting the rights of others. Those rights include the rights to equality and non-discrimination (Article 2, para. 1, ICCPR; Articles 2 and 5, ICERD) and the right to exercise freedom of expression on the platform without being harassed or threatened (Article 19, ICCPR). III. Necessity and proportionality For restrictions on expression to be considered necessary and proportionate, those restrictions “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). The Special Rapporteur on free expression has also noted that on social media, “the scale and complexity of addressing hateful expression presents long-term challenges” (A/HRC/38/35, para. 28). However, according to the Special Rapporteur, companies should “demonstrate the necessity and proportionality of any content actions (such as removals or account suspensions).” Moreover, companies are required “to assess the same kind of questions about protecting their users’ right to freedom of expression” (ibid., para. 41). The Facebook Hate Speech Community Standard prohibits specific forms of discriminatory expression, including comparisons to animals and calls for exclusion, absent any requirement that the expression incite violence or discriminatory acts. The Board, drawing upon the UN Special Rapporteur’s guidance, has previously explained that, while such prohibitions would raise concerns if imposed by a government at a broader level, particularly if enforced through criminal or civil sanctions, Facebook can regulate such expression, demonstrating the necessity and proportionality of the action (see the “South Africa Slur” decision). The content in this case, comparing ethnic Serbs to rats and celebrating past acts of discriminatory treatment, is dehumanizing and hateful. The Board would have come to a similar conclusion about any content that targets an ethnic group in this way, especially in a region that has a recent history of ethnic conflict. The Board finds that removing this content from the platform was necessary to address the serious harms that hate speech on the basis of ethnicity poses. The Board considered the factors in the Rabat Plan of Action (The Rabat Plan of Action, OHCHR, A/HRC/22/17/Add.4, 2013) to guide its analysis, while accounting for differences between the international law obligations of states and the human rights responsibilities of businesses. Meta has a responsibility to “seek to prevent or mitigate adverse human rights impacts that are directly linked to [its] operations, products or services” (UNGPs, Principle 13). 
In its analysis, the Board focused on the social and political context, intent, the content and form of the speech, and the extent of its dissemination. Regarding context, the post relates to a region that has recently experienced ethnic conflict, against a backdrop of online hate speech and incidents of discrimination against ethnic minorities in Croatia (see Section 8.1 under Violence and Incitement). The content is intended to incite ethnic hatred, and this may contribute to individuals taking discriminatory action. The form of the expression and its wide reach are also important. The video was shared by an administrator of a Page which, according to expert briefings the Board received, is a Croatian news portal known for anti-Serb sentiments. The cartoon video form can be particularly harmful because it is especially engaging. Its reach was broad. While the video was created by someone else, it is likely that the popularity of the page (which has over 50,000 followers) would increase the reach of the video, especially as it reflects the views of the page and its followers. The content was viewed over 380,000 times, shared over 540 times, received over 2,400 reactions and had over 1,200 comments. In the “South Africa Slur” decision, the Board decided that it is in line with Meta’s human rights responsibilities to prohibit “some discriminatory expression” even “absent any requirement that the expression incite violence or discriminatory acts.” The Board notes that Article 20, para. 2, ICCPR, as interpreted in the Rabat Plan of Action, requires imminent harm to justify restrictions on expression. The Board does not believe that this post would result in imminent harm. However, Meta can legitimately remove posts from Facebook that encourage violence in a less immediate way. This is justified, as the human rights responsibilities of Meta as a company differ from the human rights obligations of states. Meta can apply less strict standards for removing content from its platform than those which apply to states imposing criminal or civil penalties. In this case, depicting Serbs as rats and calling for their exclusion while referencing historical acts of violence impacts the rights to equality and non-discrimination of those targeted. This justifies removing the post. Many Board Members also believed that the content had a negative impact on the freedom of expression of others on the platform, as it contributed to an environment where some users would feel threatened. The Board finds that removing the content from the platform is a necessary and proportionate measure. Less invasive interventions, such as labels, warning screens, or other measures to reduce dissemination, would not have provided adequate protection against the cumulative effects of leaving content of this nature on the platform (for a similar analysis see the “Depiction of Zwarte Piet” case). 9. Oversight Board decision The Oversight Board overturns Meta's original decision to leave up the content, requiring the post to be removed. 10. Policy advisory statement Content policy 1. Meta should clarify the Hate Speech Community Standard and the guidance provided to reviewers, explaining that even implicit references to protected groups are prohibited by the policy when the reference would reasonably be understood. The Board will consider this recommendation implemented when Meta updates its Community Standards and the Internal Implementation Standards provided to content reviewers to incorporate this revision. Enforcement 2. 
In line with Meta’s commitment following the ""Wampum belt"" case (2021-012-FB-UA), the Board recommends that Meta notify all users who have reported content when, on subsequent review, it changes its initial determination. Meta should also share with the public the results of any experiments assessing the feasibility of introducing this change. The Board will consider this recommendation implemented when Meta shares information regarding relevant experiments and, ultimately, the updated notification with the Board and confirms it is in use in all languages. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg, drawing on a team of over 50 social scientists on six continents as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" fb-kbhzs8bl,This case became unavailable for review by the Board as a result of user action,https://www.oversightboard.com/decision/fb-kbhzs8bl/,"January 28, 2021",2021,January,"TopicPolitics, Religion, ViolenceCommunity StandardHate speech","Policies and TopicsTopicPolitics, Religion, ViolenceCommunity StandardHate speech",,Malaysia,"A user commented on a post by posting a screenshot of two tweets by former Malaysian Prime Minister, Dr Mahathir Mohamad, in which the former Prime Minister stated that 'Muslims have a right to be angry and kill millions of French people for the massacres of the past' and '[b]ut by and large the Muslims have not applied the 'eye for an eye' law.",1794,306,"January 28, 2021 A user commented on a post by posting a screenshot of two tweets by former Malaysian Prime Minister, Dr Mahathir Mohamad, in which the former Prime Minister stated that 'Muslims have a right to be angry and kill millions of French people for the massacres of the past' and '[b]ut by and large the Muslims have not applied the 'eye for an eye' law. Standard Topic Politics, Religion, Violence Community Standard Hate speech Location Malaysia Platform Facebook A user commented on a post by posting a screenshot of two tweets by former Malaysian Prime Minister, Dr Mahathir Mohamad, in which the former Prime Minister stated that “Muslims have a right to be angry and kill millions of French people for the massacres of the past” and “[b]ut by and large the Muslims have not applied the ‘eye for an eye’ law. Muslims don’t. The French shouldn’t. Instead, the French should teach their people to respect other people’s feelings.” The user did not add a caption alongside the screenshots. Facebook removed the post for violating its policy on Hate Speech. The user indicated in their appeal to the Oversight Board that they wanted to raise awareness of the former Prime Minister’s “horrible words.” The case 2020-001-FB-UA became unavailable for review by the Board as a result of user action. 
This case concerned a comment on a post, with the user who made the comment appealing Facebook's decision to remove it. However, the post itself, which remained on the platform, was subsequently deleted by the user who posted it. As a result, it would not be possible for the Board to restore the content. The Board’s review process ended after the case had already been assigned to a panel, but prior to the start of deliberations. Return to Case Decisions and Policy Advisory Opinions" fb-l0xmz0rp,Cartoon Showing Taliban Oppression Against Women,https://www.oversightboard.com/decision/fb-l0xmz0rp/,"March 7, 2024",2024,,"TopicArt / Writing / Poetry, Freedom of expression, JournalismCommunity StandardDangerous individuals and organizations",Dangerous individuals and organizations,Overturned,"Afghanistan, Netherlands",A user appealed Meta’s decision to remove a Facebook post containing a political cartoon illustrating Afghan women’s oppression under the Taliban regime. This case highlights errors in Meta’s enforcement of its Dangerous Organizations and Individuals policy.,5390,792,"Overturned March 7, 2024 A user appealed Meta’s decision to remove a Facebook post containing a political cartoon illustrating Afghan women’s oppression under the Taliban regime. This case highlights errors in Meta’s enforcement of its Dangerous Organizations and Individuals policy. Summary Topic Art / Writing / Poetry, Freedom of expression, Journalism Community Standard Dangerous individuals and organizations Location Afghanistan, Netherlands Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not consider public comments and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post containing a political cartoon illustrating Afghan women’s oppression under the Taliban regime. This case highlights errors in Meta’s enforcement of its Dangerous Organizations and Individuals policy, specifically in the context of political discourse delivered through satire. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In August 2023, a Facebook user, a professional cartoonist from the Netherlands, posted a cartoon showing three Taliban men seated on a car crusher with a group of distressed women beneath it. In the background, there's a meter labelled ""oppress-o-meter"" connected to a control panel, and one of the men is seen pressing a button, causing the crusher to lower. The caption accompanying this image reads: ""2 years of Taliban rule. #Afghanistan #Taliban #women #oppression."" The post was removed for violating Meta’s Dangerous Organizations and Individuals policy, which prohibits representation of and certain speech about the groups and people the company judges as linked to significant real-world harm. 
In their appeal to the Board, the user indicated that the content was a political cartoon that was satirical in nature, commenting on the continued and worsening oppression of women in Afghanistan under Taliban rule. Meta’s Dangerous Organizations and Individuals policy allows content that reports on, condemns or neutrally discusses dangerous organizations and individuals or their activities. After the Board brought this case to Meta’s attention, the company determined that the content did not violate the Dangerous Organizations and Individuals policy and its removal was incorrect. The company then restored the content to Facebook. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights flaws in Meta’s enforcement procedures, particularly when detecting and interpreting images associated with designated organizations and individuals. The over-enforcement of this policy could potentially lead, as it did in this case, to artistic expression linked to legitimate political discourse being removed. In 2022, the Board also recommended that “Meta should assess the accuracy of [human] reviewers enforcing the reporting allowance under the Dangerous Organizations and Individuals policy in order to identify systemic issues causing enforcement errors,” (Mention of the Taliban in News Reporting, recommendation no. 5). Additionally, in the same decision (Mention of the Taliban in News Reporting, recommendation no. 6), the Board stated that “Meta should conduct a review of the HIPO ranker [high-impact false positive override system] to examine if it can more effectively prioritize potential errors in the enforcement of allowances to the Dangerous Organizations and Individuals policy.” For both recommendations, Meta reported progress on implementation. The Board has issued a recommendation that “Meta should ensure that it has procedures to analyze satirical content and context properly and that moderators are provided adequate incentives to investigate the context of potentially satirical content,” (Two Buttons Meme, recommendation no. 3). Meta has reported partial implementation of this recommendation. The Board emphasizes that full implementation of these recommendations could reduce the number of enforcement errors under Meta’s Dangerous Organizations and Individuals policy. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. 
Return to Case Decisions and Policy Advisory Opinions" fb-l1lania7,Wampum belt,https://www.oversightboard.com/decision/fb-l1lania7/,"December 9, 2021",2021,December,"TopicArt / Writing / Poetry, Culture, Marginalized communitiesCommunity StandardHate speech","Type of DecisionStandardPolicies and TopicsTopicArt / Writing / Poetry, Culture, Marginalized communitiesCommunity StandardHate speechRegion/CountriesLocationCanada, United StatesPlatformPlatformFacebookAttachmentsPublic Comments 2021-012-FB-UA",Overturned,"Canada, United States",The Oversight Board has overturned Meta's original decision to remove a Facebook post from an Indigenous North American artist that was removed under Facebook's Hate Speech Community Standard.,40193,6348,"Overturned December 9, 2021 The Oversight Board has overturned Meta's original decision to remove a Facebook post from an Indigenous North American artist that was removed under Facebook's Hate Speech Community Standard. Standard Topic Art / Writing / Poetry, Culture, Marginalized communities Community Standard Hate speech Location Canada, United States Platform Facebook Public Comments 2021-012-FB-UA Note: On October 28, 2021, Facebook announced that it was changing its company name to Meta. In this text, Meta refers to the company, and Facebook continues to refer to the product and policies attached to the specific app. The Oversight Board has overturned Meta’s original decision to remove a Facebook post from an Indigenous North American artist that was removed under Facebook’s Hate Speech Community Standard. The Board found that the content is covered by allowances to the Hate Speech policy as it is intended to raise awareness of historic crimes against Indigenous people in North America. About the case In August 2021, a Facebook user posted a picture of a wampum belt, along with an accompanying text description in English. A wampum belt is a North American Indigenous art form in which shells are woven together to form images, recording stories and agreements. This belt includes a series of depictions which the user says were inspired by “the Kamloops story,” a reference to the May 2021 discovery of unmarked graves at a former residential school for Indigenous children in British Columbia, Canada. The text provides the artwork’s title, “Kill the Indian/ Save the Man,” and identifies the user as its creator. The user describes the series of images depicted on the belt: “Theft of the Innocent, Evil Posing as Saviours, Residential School / Concentration Camp, Waiting for Discovery, Bring Our Children Home.” In the post, the user describes the meaning of their artwork as well as the history of wampum belts and their purpose as a means of education. The user states that the belt was not easy to create and that it was emotional to tell the story of what happened at Kamloops. They apologize for any pain the art causes survivors of Kamloops, noting their “sole purpose is to bring awareness to this horrific story.” Meta’s automated systems identified the content as potentially violating Facebook’s Hate Speech Community Standard the day after it was posted. A human reviewer assessed the content as violating and removed it that same day. The user appealed against that decision to Meta prompting a second human review which also assessed the content as violating. At the time of removal, the content had been viewed over 4,000 times, and shared over 50 times. No users reported the content. 
As a result of the Board selecting this case, Meta identified its removal as an “enforcement error” and restored the content on August 27. However, Meta did not notify the user of the restoration until September 30, two days after the Board asked Meta for the contents of its messaging to the user. Meta explained the late messaging was a result of human error. Key findings Meta agrees that its original decision to remove this content was against Facebook’s Community Standards and an ""enforcement error.” The Board finds this content is a clear example of ‘counter speech’ where hate speech is referenced to resist oppression and discrimination. The introduction to Facebook’s Hate Speech policy explains that counter speech is permitted where the user’s intent is clearly indicated. It is apparent from the content of the post that it is not hate speech. The artwork tells the story of what happened at Kamloops, and the accompanying narrative explains its significance. While the words ‘Kill the Indian’ could, in isolation, constitute hate speech, in context this phrase draws attention to and condemns specific acts of hatred and discrimination. The Board recalls its decision 2020-005-FB-UA in a case involving a quote from a Nazi official. That case provides similar lessons on how intent can be assessed through indicators other than direct statements, such as the content and meaning of a quote, the timing and country of the post, and the substance of reactions and comments on the post. In this case, the Board found that it was not necessary for the user to expressly state that they were raising awareness for the post to be recognized as counter speech. The Board noted internal “Known Questions” to moderators that a clear statement of intent will not always be sufficient to change the meaning of a post that constitutes hate speech. Moderators are expected to make inferences from content to assess intent, and not rely solely on explicit statements. Two separate moderators concluded that this post constituted hate speech. Meta was not able to provide specific reasons why this error occurred twice. The Oversight Board decision The Oversight Board overturns Meta's original decision to take down the content. In a policy advisory statement, the Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decision to remove a post by an Indigenous North American artist that included a picture of their art along with its title, which quotes an historical instance of hate speech. Meta agreed that the post falls into one of the allowances within the Facebook Community Standard on Hate Speech as it is clearly intended to raise awareness of historic crimes against Indigenous people in North America. 2. Case description In early August 2021, a Facebook user posted a picture of a wampum belt, along with an accompanying text description in English. A wampum belt is a North American Indigenous art form in which shells are woven together to form images, recording stories and agreements. This belt includes a series of depictions which the user says were inspired by “the Kamloops story,” a reference to the May 2021 discovery of unmarked graves at a former residential school for Indigenous children in British Columbia, Canada. The text provides the artwork’s title, “Kill the Indian/ Save the Man,” and identifies the user as its creator. 
The user then provides a list of phrases that correspond to the series of images depicted on the belt: “Theft of the Innocent, Evil Posing as Saviours, Residential School / Concentration Camp, Waiting for Discovery, Bring Our Children Home.” In the post, the user describes the meaning of their artwork as well as the history of wampum belts and their purpose as a means of education. The user states that the belt was not easy to create and that it was very emotional to tell the story of what happened at Kamloops. They go on to say that the story cannot be hidden from the public knowledge again and that they hope the belt will help prevent that happening. The user concludes their post by apologizing for any pain the artwork causes to survivors of the residential school system, saying that their “sole purpose is to bring awareness to this horrific story.” Meta’s automated systems identified the content as potentially violating the Facebook Community Standard on Hate Speech the day after it was posted. A human reviewer assessed the content as violating and removed it that same day. The user appealed against that decision to Meta prompting a second human review which also assessed the content as violating. At the time of removal, the content had been viewed over 4,000 times, and shared over 50 times. No users reported the content. As a result of the Board selecting this case, Meta identified its removal as an “enforcement error” and restored the content on August 27. However, Meta did not notify the user of the restoration until September 30, two days after the Board asked Meta for the contents of its messaging to the user. Meta explained the late messaging was a result of human error. The messaging itself did not inform the user that their content was restored as a consequence of their appeal to the Board and the Board’s selection of this case. A public comment by the Association on American Indian Affairs (Public Comment-10208) points out that the quote used as the title of the artwork is from Richard Henry Pratt, an army officer who established the first federal Indian boarding school in the United States of America. The phrase summarized the policies behind the creation of boarding schools that sought to forcefully ‘civilize’ Native peoples and ‘eradicate all vestiges of Indian culture.’ Similar policies were adopted in Canada and have been found to amount to cultural genocide by the Truth and Reconciliation Commission of Canada . The user’s reference to what happened at “Kamloops” is a reference to the Kamloops Indian Residential School, a former boarding school for First Nations children in British Columbia, Canada. In May 2021, leaders of the Tk’emlúps te Secwépemc First Nation announced the discovery of unmarked graves in Kamloops. Authorities have confirmed 200 probable burial sites in the area. The Canadian government estimates that a minimum of 150,000 Indigenous children went through the residential school system before the last school was shut down in 1997. Indigenous children were often forcibly removed from their families and prohibited from expressing any aspect of Indigenous culture. The schools employed harsh and abusive corporal punishment, and staff committed or tolerated sexual abuse and serious violence against many students. Students were malnourished, the schools were poorly heated and cleaned, and many children died of tuberculosis and other illnesses with minimal medical attention. 
The Truth and Reconciliation Commission concluded that at least 4,100 students died while attending the schools, many from mistreatment or neglect, others from disease or accident. 3. Authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or reverse Meta’s decision, and its decision is binding on the company (Charter Article 4; Article 3, Section 5). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 4; Article 3, Section 4). When the Board selects cases like this one, where Meta subsequently agrees that it made an error, the Board reviews the original decision to help increase understanding of why errors occur, and to make observations or recommendations that may contribute to reducing errors and to enhancing due process. After the Board’s decision in Breast Cancer Symptoms and Nudity ( 2020-004-IG-UA , Section 3), the Board adopted a process that enables Meta to identify any enforcement errors prior to a case being assigned to a panel (see: transparency reports , page 30). It is unhelpful that in these cases, Meta focuses its rationale entirely on its revised decision, explaining what should have happened to the user’s content, while inviting the Board to uphold this as the company’s “ultimate” decision. In addition to explaining why the decision the user appealed against was wrong, the Board suggests that Meta explain how the error occurred, and why the company’s internal review process failed to identify or correct it. The Board will continue to base its reviews on the decision a user appealed. 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook Community Standards: The Facebook Community Standards define hate speech as ""a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity and serious disease or disability."" Under “Tier 1,” prohibited content includes “violent speech or support in written or visual form.” The Community Standard also includes allowances to distinguish non-violating content: We recognize that people sometimes share content that includes someone else's hate speech to condemn it or raise awareness. In other cases, speech that might otherwise violate our standards can be used self-referentially or in an empowering way. Our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent. If the intention is unclear, we may remove content. II. Meta’s values: Meta's values are outlined in the introduction to the Facebook Community Standards. The value of ""Voice"" is described as ""paramount"": The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits ""Voice"" in service of four values, and two are relevant here: ""Safety"": We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. “Dignity”: We believe that all people are equal in dignity and rights. 
We expect that people will respect the dignity of others and not harass or degrade them. III. Human rights standards: The UN Guiding Principles on Business and Human Rights ( UNGPs ), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards: 5. User statement The user stated in their appeal to the Board that their post was showcasing a piece of traditional artwork documenting history, and that it had nothing to do with hate speech. The user further stated that this history “needed to be seen” and in relation to Meta’s removal of the post stated that “this is censorship.” 6. Explanation of Meta’s decision Meta told the Board that the phrase “Kill the Indian” constituted a Tier 1 attack under the Facebook Community Standard on Hate Speech , which prohibits “violent speech” targeting people on the basis of a protected characteristic, including race or ethnicity. However, Meta acknowledged the removal of the content was wrong because the policy permits sharing someone else’s hate speech to “condemn it or raise awareness.” Meta noted that the user stated in the post that their purpose was to bring awareness to the horrific story of what happened at Kamloops. Meta noted that the phrase “Kill the Indian/Save the Man” originated in the forced assimilation of Indigenous children. By raising awareness of the Kamloops story, the user was also raising awareness of forced assimilation through residential schools. In response to a question from the Board, Meta clarified that a content reviewer would not need to be aware of this history to correctly enforce the policy. The user’s post stated they were raising awareness of a horrific story and therefore a reviewer could reasonably conclude that the post was raising awareness of the hate speech it quoted. Meta informed the Board that no users reported the content in this case. Meta operates machine learning classifiers that are trained to automatically detect potential violations of the Facebook Community Standards. In this case, two classifiers automatically identified the post as possible hate speech. The first classifier, which analyzed the content, was not very confident that the post violated the Community Standard. However, another classifier determined, on the basis of a range of contextual signals, that the post might be shared widely and seen by many people. Given the potential harm that can arise from the widespread distribution of hate speech, Meta's system automatically sent the post to human review. Meta clarified in response to the Board’s questions that a human reviewer based in the Asia-Pacific region determined the post to be hate speech and removed it from the platform. The user appealed and a second human reviewer in the Asia-Pacific region reviewed the content and also determined it to be hate speech. Meta confirmed to the Board that moderators do not record reasoning for individual content decisions. 7. Third-party submissions The Oversight Board considered eight public comments related to this case: four from the United States and Canada, two from Europe, one from Sub-Saharan Africa, and one from Asia-Pacific and Oceania. 
The submissions addressed themes including the significance of the quote the user based the title of their artwork on, context about the use of residential schools in North America, and how Meta’s content moderation impacts artistic freedoms and the expression rights of people of Indigenous identity or origin. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board looked at the question of whether this content should be restored through three lenses: the Facebook Community Standards, Meta’s values and its human rights responsibilities. 8.1 Compliance with Community Standards Meta agreed that its original decision to remove this content was against the Facebook Community Standards and was an ""enforcement error.” The Board finds that the content in this case is unambiguously not hate speech. This content is a clear example of ‘counter speech,’ where hate speech is referenced or reappropriated in the struggle against oppression and discrimination. The Hate Speech Community Standard explicitly allows “content that includes someone else’s hate speech to condemn it or raise awareness.” Two separate moderators nevertheless concluded that this post constituted hate speech. Meta was not able to provide the specific reasons why this particular error occurred twice. In the Nazi Quote case ( 2020-005-FB-UA ), the Board noted that the context in which a quote is used is important to understand its meaning. In that case, the content and meaning of the quote, the timing of the post and country where it was posted, as well as the substance of reactions and comments to the post, were clear indications that the user did not intend to praise a designated hate figure. The Board finds that it was not necessary for the user to expressly state that they were raising awareness for the intent and meaning of this post to be clear. The pictured artwork tells the story of what happened at Kamloops, and the accompanying narrative explains its significance. While the words ‘Kill the Indian’ could, in isolation, constitute hate speech, assessing the content as a whole makes clear the phrase is used to raise awareness of and condemn hatred and discrimination. The content used quotation marks to distinguish the hateful phrase of its title, which in full was “Kill the Indian / Save the Man.” This should have given a reviewer pause to look deeper. The way the user told the Kamloops story and explained the cultural significance of the wampum belt made clear they identified with the victims of discrimination and violence, and not its perpetrators. Their narrative clearly condemned the events uncovered at Kamloops. It was clear from comments and reactions to the post that this intent to condemn and raise awareness was understood by the user’s audience. The Board notes that Facebook’s internal “Known Questions,” which form part of the guidance given to moderators, instruct moderators to err on the side of removing content that includes hate speech where the user’s intent is not clear. The Known Questions also state that a clear statement of intent will not always be sufficient to change the meaning of a post that constitutes hate speech. This internal guidance provides limited instruction to moderators on how to properly distinguish prohibited hate speech from counter speech that quotes hate speech to condemn it or raise awareness. 
As far as the Board is aware, there is no guidance on how to assess evidence of intent in artistic content quoting or using hate speech terms, or in content discussing human rights violations, where such content is covered by the policy allowances. 8.2 Compliance with Meta’s values The Board finds that the original decision to remove this content was inconsistent with Meta’s values of “Voice” and “Dignity” and did not serve the value of “Safety.” While it is consistent with Meta’s values to limit the spread of hate speech on its platforms, the Board is concerned that Meta’s moderation processes are not able to properly identify and protect the ability of people who face marginalization or discrimination to express themselves through counter speech. Meta has stated its commitment to supporting counter speech: As a community, a social platform, and a gathering of the shared human experience, Facebook supports critical Counterspeech initiatives by enforcing strong content policies and working alongside local communities, policymakers, experts, and changemakers to unleash Counterspeech initiatives across the globe. Meta claims that “Voice” is the company’s most important value. Art that seeks to illuminate the horrors of past atrocities and educate people on their lasting impact is one of the most important and powerful expressions of the value of “Voice,” especially for marginalized groups who are expressing their own culture and striving to ensure their own history is heard. Counter speech is not just an expression of “Voice” but also a key tool for the targets of hate speech to protect their own dignity and push back against oppressive, discriminatory, and degrading conduct. Meta must ensure that its content policies and moderation practices account for and protect this form of expression. For a user who is raising awareness about mass atrocities to be told that their speech is being suppressed as hate speech is an affront to their dignity. This accusation, in particular when confirmed by Meta on appeal, may lead to self-censorship. 8.3 Compliance with Meta’s human rights responsibilities The Board concludes that the removal of this post contravened Meta's human rights responsibilities as a business. Meta has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights ( UNGPs ). Its Corporate Human Rights Policy states that this includes the International Covenant on Civil and Political Rights (ICCPR). This is the Board’s first case concerning artistic expression, as well as its first case concerning expression where the user self-identifies as an Indigenous person. It is one of several cases the Board has selected where the user was seeking to bring attention to serious human rights violations. Freedom of expression (Article 19 ICCPR) International human rights standards emphasize the value of political expression (Human Rights Committee General Comment 34 , para. 38). The scope of protection for this right is specified in Article 19, para. 
2, of the ICCPR, which gives special mention to expression “in the form of art.” The International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) also provides protection from discrimination in the exercise of the right to freedom of expression (Article 5), and the Committee tasked with monitoring states' compliance has emphasized the importance of the right with respect to assisting ""vulnerable groups in redressing the balance of power among the components of society"" and to offer ""alternative views and counterpoints"" in discussions (CERD Committee, General Recommendation 35, para. 29). Art is often political, and international standards recognize the unique and powerful role of this form of communication in challenging the status quo (UN Special Rapporteur in the field of cultural rights, A/HRC/23/34 , at paras 3 – 4). The internet, and social media platforms like Facebook and Instagram in particular, have special value to artists in reaching new and larger audiences. Their livelihoods may depend on access to social platforms that dominate the Internet. The right to freedom of expression is guaranteed to all people without discrimination (Article 19, para. 2, ICCPR). The Board received submissions that the rights of Indigenous people to free, prior and informed consent where states adopt legislative or administrative measures that affect those communities imply a responsibility for Meta to consult with these communities as it develops its content policies (Public Comment-10240, Minority Rights Group; see also UN Declaration on the Rights of Indigenous Peoples, Article 19). The UN Special Rapporteur on freedom of opinion and expression has raised a similar concern in the context of social media platforms’ responsibilities ( A/HRC/38/35 , para. 54). The content in this case engages a number of other rights as well, including the rights of persons belonging to national, ethnic or linguistic minorities to enjoy, in community with other members of their group, their own culture (Article 27, ICCPR), and the right to participate in cultural life and enjoy the arts (Article 15, ICESCR). The art of creating a wampum belt that sought to record and bring awareness to human rights atrocities and their continued legacy receives protection under the UN Declaration on Human Rights Defenders , Article 6(c), as well as the right to truth about atrocities ( UN Set of Principles to Combat Impunity ). The UN Declaration on the Rights of Indigenous Peoples expressly recognizes the forcible removal of children can be an act of violence and genocide (Article 7, para 2) and provides specific protection against forced assimilation and cultural destruction (Article 8, para 1). ICCPR Article 19 requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). The UN Special Rapporteur on freedom of expression has encouraged social media companies to be guided by these principles when moderating online expression, mindful that regulation of expression at scale by private companies may give rise to concerns particular to that context (A/HRC/38/35, paras. 45 and 70). The Board has employed the three-part test based on Article 19 of the ICCPR in all of its decisions to date. I. Legality (clarity and accessibility of the rules) The Community Standard on Hate Speech clearly allows content that condemns hate speech or raises awareness. 
This component of the policy is sufficiently clear and accessible for the user to understand the rules and act accordingly ( General Comment 34 , para. 25). The legality standard also requires that rules restricting expression “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” ( Ibid. ) The failure of two moderators to properly assess the application of policy allowances to this content indicates that further internal guidance to moderators may be required. II. Legitimate aim Any state restriction on freedom of expression must pursue one of the legitimate aims listed in Article 19, para. 3 of the ICCPR. In its submissions to the Board, Meta has routinely invoked aims from this list when justifying action it has taken to suppress speech. The Board has previously recognized that Facebook’s Hate Speech Community Standard pursues the legitimate aim of protecting the rights of others. Those rights include the right to equality and non-discrimination, freedom of expression, and the right to physical integrity. III. Necessity and proportionality The clear error in this case means that the removal was obviously not necessary, which Meta has accepted. The Board is concerned that such an unambiguous error may indicate deeper problems of proportionality in Meta’s automated and human review processes. Any restrictions on freedom of expression should be appropriate to achieve their protective function and should be the least intrusive instrument amongst those that might achieve their protective function (General Comment 34, para. 34). Whether Meta’s content moderation system meets the requirements of necessity and proportionality depends largely on how effective it is in removing actual hate speech while minimizing the number of erroneous detections and removals. Every post that is wrongly removed harms freedom of expression. The Board understands that mistakes are inevitable, for both humans and machines. Hate speech and responses to it will always be context specific, and its boundaries are not always clear. However, the types of mistakes and the people or communities who bear the burden of those mistakes reflect design choices that must constantly be assessed and examined. This requires further investigation of the root causes of the mistake in this case, and broader evaluation of how effectively counter speech is moderated. Given the importance of critical art from Indigenous artists in helping to counter hatred and oppression, the Board expects Meta to be particularly sensitive to the possibility of wrongful removal of the content in this case and similar content on Facebook and Instagram. It is not sufficient to evaluate the performance of Meta’s enforcement of Facebook’s Hate Speech policy as a whole. A system that performs well on average could potentially perform quite poorly on subcategories of content where incorrect decisions have a particularly pronounced impact on human rights. It is possible that the types of errors that occurred in this case are rare; the Board notes, however, that members of marginalized groups have raised concerns about the rate and impact of false positive removals for several years. The errors in this case show that it is incumbent on Meta to demonstrate that it has undertaken human rights due diligence to ensure its systems are operating fairly and are not exacerbating historical and ongoing oppression (UNGPs, Principle 17). 
Meta routinely evaluates the accuracy of its enforcement systems in dealing with hate speech. This assessment is not broken down into assessments of accuracy that specifically measure Meta’s ability to distinguish hate speech from permitted content that condemns hate speech or raises awareness. Meta’s existing processes also include ad-hoc mechanisms to identify error trends and investigate their root causes, but this requires large samples of content against which to measure system performance. The Board enquired whether Meta has specifically assessed the performance of its review systems in accurately evaluating counter speech that constitutes artistic expression and counter speech raising awareness of human rights violations. Meta told the Board that it had not undertaken specific research on the impact of false positive removals on artistic expression or on expression from people of Indigenous identity or origin. Meta has informed the Board of obstacles to beginning such assessments, including the lack of a system to automate the collation of a sample of content that benefits from policy allowances. This was because reviewers mark content as violating or non-violating, and are not required to indicate where non-violating content engages a policy allowance. A sample of counter speech that fits within this allowance would need to be assembled manually. While the Board was encouraged by the level of detail provided on how Meta evaluates performance during a Question and Answer session held at the Board’s request, it is clear that more investment is needed in assessing the accuracy of enforcement of Hate Speech policy allowances and learning from error trends. Without additional information about Meta’s design decisions and the performance of its human and automated systems, it is difficult for the Board or Meta to assess the proportionality of Meta’s current approach to hate speech. When assessing whether it is necessary and proportionate to use the specific machine learning tools at work in this case to automatically detect potential hate speech, understanding the accuracy of those tools is key. Machine learning classifiers always involve trade-offs between rates of false positives and false negatives. The more sensitive a classifier is, the more likely it is to correctly identify instances of hate speech, but it is also more likely to wrongly flag material that is not hate speech. Differently trained classifiers and different models vary in their utility and effectiveness for different tasks. For any given model, different thresholds can be used that reflect a judgment about the relative importance of avoiding different types of mistakes. The likelihood and severity of mistakes should also inform decisions about how to deploy a classifier, including whether it can take action immediately or whether it requires human approval, and what safeguards are put into place. Meta explained that the post in question in this case was sent for review by its automated systems because it was likely to have a large audience. This approach can limit the spread of harmful material, but it is also likely to increase the risk that powerful art that counters hate is wrongly removed. Meta told the Board that it regularly evaluates the rate of false positives over time, measured against a set of decisions by expert reviewers. 
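The trade-off described above can be made concrete with a small worked example. The sketch below is purely illustrative and does not describe Meta's classifiers: the scores, ground-truth labels and thresholds are invented. It shows how moving a classifier's decision threshold shifts errors between false positives (permitted counter speech wrongly flagged) and false negatives (violating hate speech missed).

```python
# Illustrative sketch only: the scores, labels and thresholds below are invented
# and do not describe Meta's classifiers. The point is the trade-off the Board
# identifies: a lower threshold misses less hate speech (fewer false negatives)
# but wrongly flags more permitted counter speech (more false positives).

# Each tuple: (classifier score in [0, 1], ground-truth label from expert review)
labelled_sample = [
    (0.95, "violating"), (0.80, "violating"), (0.75, "non-violating"),
    (0.60, "violating"), (0.55, "non-violating"), (0.40, "non-violating"),
    (0.30, "violating"), (0.10, "non-violating"),
]

def error_rates(sample, threshold):
    """Return (false positive rate, false negative rate) at a given threshold."""
    fp = sum(1 for score, label in sample if score >= threshold and label == "non-violating")
    fn = sum(1 for score, label in sample if score < threshold and label == "violating")
    negatives = sum(1 for _, label in sample if label == "non-violating")
    positives = sum(1 for _, label in sample if label == "violating")
    return fp / negatives, fn / positives

for threshold in (0.3, 0.5, 0.7, 0.9):
    fpr, fnr = error_rates(labelled_sample, threshold)
    print(f"threshold={threshold:.1f}  false positives={fpr:.0%}  false negatives={fnr:.0%}")
```

Measuring both error rates on labelled samples that include allowance content, rather than on hate speech enforcement as a whole, is the kind of assessment that would reveal whether the burden of mistakes falls disproportionately on counter speech.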
Meta also noted that it was possible to assess the accuracy of the particular machine learning models that were relevant in this case and that it keeps information about its classifiers’ predictions for at least 90 days. The Board requested information that would allow us to evaluate the performance of the classifier and the appropriateness of the thresholds that Meta used in this case. Meta informed the Board that it could not provide the information the Board sought because it did not have sufficient time to prepare it for us. However, Meta noted that it was considering the feasibility of providing this information in future cases. Human review can provide two important safeguards on the operation of Meta’s classifiers: first before the post was removed, and then again upon appeal. The errors in this case indicate that Meta’s guidance to moderators assessing counter speech may be insufficient. There are any number of reasons that could have contributed to human moderators twice reaching the wrong decision in this case. The Board is concerned that reviewers may not have sufficient resources in terms of time or training to prevent the kind of mistake seen in this case, especially in respect of content permitted under policy allowances (including, for example, “condemning” hate speech and “raising awareness”). In this case, both reviewers were based in the Asia-Pacific region. Meta was not able to inform the Board whether reviewer accuracy rates differed for moderators assessing potential hate speech who are not located in the region the content originates from. The Board notes the complexity of assessing hate speech, and the difficulty of understanding local context and history, especially considering the volume of content that moderators review each day. It is conceivable that the moderators who assessed the content in this case had less experience with the oppression of Indigenous peoples in North America. Guidance should include clear instruction to evaluate content in its entirety and support moderators in more accurately assessing context to determine evidence of intent and meaning. The Board recommended in its Two Buttons Meme decision ( 2021-005-FB-UA ) that Meta let users indicate in their appeal that their content falls into one of the allowances to the Facebook Community Standard on Hate Speech. Currently, when a user appeals one of Meta’s decisions that goes to human review, the reviewer is not informed that the user has contested a prior decision and does not know the outcome of the prior review. Whereas Meta has informed the Board that it believes this information will bias the review, the Board is interested in whether this information could increase the likelihood of more nuanced decision-making. This is a question that could be empirically tested by Meta; the results of those tests would be useful in evaluating the proportionality of the specific measures that Meta has chosen to adopt. Under the UNGPs, Meta has a responsibility to perform human rights due diligence (Principle 17). This should include identifying any adverse impacts of content moderation on artistic expression and the political expression of Indigenous peoples countering discrimination. Meta should further identify how it will prevent, mitigate and account for its efforts to address those adverse impacts. The Board is committed to monitoring Meta's performance and expects to see the company prioritize risks to marginalized groups and show evidence for continual improvements. 9. 
Oversight Board decision The Oversight Board overturns Meta's original decision to take down the content. 10. Policy advisory statement Enforcement 1. Provide users with timely and accurate notice of any company action being taken on the content their appeal relates to. Where applicable, including in enforcement error cases like this one, the notice to the user should acknowledge that the action was a result of the Oversight Board’s review process. Meta should share the user messaging sent when Board actions impact content decisions appealed by users, to demonstrate it has complied with this recommendation. These actions should be taken with respect to all cases that are corrected at the eligibility stage of the Board’s process. 2. Study the impacts of modified approaches to secondary review on reviewer accuracy and throughput. In particular, the Board requests an evaluation of accuracy rates when content moderators are informed that they are engaged in secondary review, so they know the initial determination was contested. This experiment should ideally include an opportunity for users to provide relevant context that may help reviewers evaluate their content, in line with the Board’s previous recommendations. Meta should share the results of these accuracy assessments with the Board and summarize the results in its quarterly Board transparency report to demonstrate it has complied with this recommendation. 3. Conduct accuracy assessments focused on Hate Speech policy allowances that cover artistic expression and expression about human rights violations (e.g., condemnation, awareness raising, self-referential use, empowering use). This assessment should also specifically investigate how the location of a reviewer impacts the ability of moderators to accurately assess hate speech and counter speech from the same or different regions. The Board understands this analysis likely requires the development of appropriate and accurately labelled samples of relevant content. Meta should share the results of this assessment with the Board, including how these results will inform improvements to enforcement operations and policy development and whether it plans to run regular reviewer accuracy assessments on these allowances, and summarize the results in its quarterly Board transparency report to demonstrate it has complied with this recommendation. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world and Duco Advisers, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology, provided expertise on socio-political and cultural context. 
Return to Case Decisions and Policy Advisory Opinions" fb-lxnfad5f,Haitian Police Station Video,https://www.oversightboard.com/decision/fb-lxnfad5f/,"December 5, 2023",2023,December,"TopicFreedom of expression, Safety, ViolenceCommunity StandardViolence and incitement","Policies and TopicsTopicFreedom of expression, Safety, ViolenceCommunity StandardViolence and incitement",Overturned,Haiti,"The Oversight Board has overturned Meta’s decision to take down a video showing people entering a police station in Haiti, attempting to break into a cell holding an alleged gang member and threatening them with violence.",46551,7458,"Overturned December 5, 2023 The Oversight Board has overturned Meta’s decision to take down a video showing people entering a police station in Haiti, attempting to break into a cell holding an alleged gang member and threatening them with violence. Standard Topic Freedom of expression, Safety, Violence Community Standard Violence and incitement Location Haiti Platform Facebook The Oversight Board has overturned Meta’s decision to take down a video from Facebook showing people entering a police station in Haiti, attempting to break into a cell holding an alleged gang member and threatening them with violence. The Board finds the video did violate the company’s Violence and Incitement policy. Nonetheless, the majority of the Board disagrees with Meta’s assessment on the application of the newsworthiness allowance in this case. For the majority, Meta’s near three-week delay in removing the content meant the risk of offline harm had diminished sufficiently for a newsworthiness allowance to be applied. Moreover, the Board recommends that Meta assess the effectiveness and timeliness of its responses to content escalated through the Trusted Partner program. About the Case In May 2023, a Facebook user posted a video showing people in civilian clothing entering a police station, attempting to break into a cell holding a man – who is a suspected gang member, according to Meta – and shouting “we’re going to break the lock” and “they’re already dead.” Towards the end of the video, someone yells “bwa kale na boudaw,” which Meta interpreted as a call for the group “to take action against the person ‘bwa kale style’ – in other words, to lynch him.” Meta also interpreted “bwa kale” as a reference to the civilian movement in Haiti that involves people taking justice into their own hands. The video is accompanied by a caption in Haitian Creole that includes the statement, “the police cannot do anything.” The post was viewed more than 500,000 times and the video around 200,000 times. Haiti is experiencing unprecedented insecurity, with gangs taking control of territory and terrorizing the population. With police unable to address the violence and, in some instances, said to be complicit, a movement has emerged that has seen “more than 350 people [being] lynched by local people and vigilante groups” in a four-month period this year, according to the UN High Commissioner for Human Rights. In retaliation, gangs have taken revenge on those believed to be in or sympathetic to the movement. A Trusted Partner flagged the video to Meta as potentially violating 11 days after it was posted, warning the content might incite further violence. 
Meta’s Trusted Partner program is a network of non-governmental organizations, humanitarian agencies and human rights researchers from 113 countries. Meta told the Board that the “greater the level of risk [of violence in a country], the higher the priority for developing relationships with Trusted Partners,” who can report content to the company. About eight days after the Trusted Partner’s report in this case, Meta determined the video included both a statement of intent to commit and a call for high severity violence and removed the content from Facebook. Meta referred this case to the Board to address the difficult moderation questions raised by content related to the “Bwa Kale” movement in Haiti. Meta did not apply the newsworthiness allowance because the company found the risk of harm was high and outweighed the public interest value of the post, noting the ongoing pattern of violent reprisals and killings in Haiti. Key Findings The Board finds the content did violate Facebook’s Violence and Incitement Community Standard because there was a credible threat of offline harm to the person in the cell as well as to others. However, the majority of the Board disagrees with Meta on the application of the newsworthiness allowance in this case. Given the delay of nearly three weeks between posting and enforcement, Meta should have applied the newsworthiness allowance to keep up the content, with the Board concluding the risk of harm and public interest involved in any newsworthiness analysis should be assessed at the time Meta is considering issuing any allowance, rather than at the time content is posted. The Board finds that Meta should update its language on the newsworthiness allowance to make this clear to users. For the majority of Board Members, Meta’s near three-week delay in removing the content meant the risk of offline harm had diminished sufficiently for a newsworthiness allowance to be applied. This group considered the context in Haiti, the extent and reach of the post, and the likelihood of harm given the delay in enforcement. By that time, when the video already had 200,000 views, the risk the content posed had already likely materialized. Furthermore, in a situation of protracted widespread violence and breakdown in public order, sharing information becomes even more important to allow communities to react to events, with the video holding the potential to inform people in both Haiti and abroad about the realities in the country. However, a minority of Board Members find Meta was right not to apply the allowance. Since the content was posted during a period of heightened risk, the threat of the video leading to additional and retaliatory violence had not passed when Meta reviewed the content. These Board Members consider removal necessary to address these risks. The Board is concerned about Meta’s ability to moderate content in Haiti in a timely manner during this period of heightened risk. The delay in this case appears to be the result of the company’s failure to invest adequate resources in moderating content in Haiti. Meta was not able to provide a timely assessment of the report from its Trusted Partner. Reports from Trusted Partners are one of the main tools Meta relies on in Haiti to identify potentially violating content. A recent report by a Trusted Partner found that Meta does not adequately resource its own teams to review content identified by Trusted Partners and there is significant irregularity in response times. 
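Assessing timeliness, as the Board recommends, ultimately comes down to measuring the gap between when a Trusted Partner escalates content and when Meta acts on it. The sketch below is a hypothetical illustration only: the escalation records, field layout and 48-hour benchmark are invented and do not reflect Meta's internal tooling. It shows the sort of latency metrics (median, worst case, share handled within a target window) such an assessment could report.

```python
# Hypothetical illustration only: the escalation records, field layout and
# 48-hour benchmark are invented and do not reflect Meta's internal tooling.
# It sketches the kind of timeliness metric the Board asks Meta to assess for
# Trusted Partner escalations.
from datetime import datetime
from statistics import median

escalations = [
    # (country, escalated_at, actioned_at)
    ("Haiti", datetime(2023, 5, 23), datetime(2023, 5, 31)),  # ~8 days, roughly the delay described in this case
    ("Haiti", datetime(2023, 5, 20), datetime(2023, 5, 21)),
    ("Haiti", datetime(2023, 6, 2), datetime(2023, 6, 14)),
]

latency_days = [(actioned - escalated).days for _, escalated, actioned in escalations]
print(f"median response time: {median(latency_days)} days")
print(f"slowest response time: {max(latency_days)} days")
print(f"share actioned within 48 hours: "
      f"{sum(1 for d in latency_days if d <= 2) / len(latency_days):.0%}")
```

Reporting these figures per country and per crisis period, and comparing them against a stated target, is one way the effectiveness and timeliness of the Trusted Partner channel could be evaluated and disclosed.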
Finally, the Board notes Meta failed to activate its Crisis Policy Protocol in Haiti. While Meta told the Board it already had risk-mitigation measures in place, the Board is concerned the lengthy delay in this case indicates that existing measures are inadequate. If the company fails to use this protocol in such situations, it will not deliver timely or principled moderation, undermining the company’s and the public’s ability to assess the effectiveness of the protocol in meeting its aims. The Oversight Board's Decision The Oversight Board overturns Meta's decision to take down this content, requiring the post to be restored. The Board recommends that Meta assess the timeliness and effectiveness of its responses to content escalated through the Trusted Partner program. 1: Decision Summary The Oversight Board overturns Meta’s decision to take down a Facebook post of a video depicting a group of people entering a police station in Haiti. As the crowd attempts to gain access to a locked cell holding an alleged gang member, members of the crowd shout, “we’re going to break the lock” and “they’re already dead,” and other phrases threatening violence. The Board finds the post did violate Meta’s Violence and Incitement policy as it depicts incitement to violence in a context where there is a credible threat of offline harm to the person in the cell as well as others. However, the majority of the Board disagrees with Meta’s assessment on the application of the newsworthiness allowance in this case. For the majority, given the near three-week delay in Meta removing the content, the risk of harm had significantly diminished, and Meta should have kept the content on the platform given the public interest value of the post. For a minority of the Board, Meta was right not to apply the newsworthiness allowance in this case, as the risk that the video could lead to additional and retaliatory violence had not passed when the company reviewed it, given the overall context of widespread and ongoing gang and “self-defense” or “vigilante” violence in Haiti. The Board also finds that, to meet its human-rights responsibilities, Meta must ensure that moderation of content in Haiti, during this period of heightened risk, is effective and timely. The Board recommends Meta assess the timeliness and effectiveness of its responses to content escalated through the Trusted Partner program, including how effective Meta is in providing timely responses to escalations and what corrective measures Meta plans to adopt to improve response times to Trusted Partner escalations. 2: Case Description and Background In May 2023, a Facebook user posted a video with a caption in Haitian Creole. The video shows a large group of people, who are wearing civilian clothing, walking into a police station and approaching a locked cell that has a man inside. According to Meta, the man inside the cell is a suspected member of the “5 Seconds Gang,” a well-armed and prominent gang in Haiti. The video also shows an individual from the group in the station attempting to break the cell’s lock. 
Several other people shout words of encouragement, including “we’re going to break the lock” and “they’re already dead.” Toward the end of the video, someone yells “bwa kale na boudaw.” According to Meta’s interpretation when referring the case to the Board, this phrase interpreted literally means “wooden stick up your ass”, and given the context indicated a call for the group “to take action against the person ‘bwa kale style’ – in other words, to lynch him.” Meta interprets the use of the term “bwa kale” to refer to the civilian movement of the same name, which involves civilians taking justice into their own hands against alleged gang members. The video is accompanied by a caption describing what happens and stating that the “police cannot do anything, things are going to get weird.” According to linguistic experts consulted by the Board, the caption conveys a loss of faith in the police and a bleak outlook on what could happen next. The post was viewed over 500,000 times and the video was viewed around 200,000 times. A Trusted Partner flagged the video to Meta as potentially violating 11 days after it was posted to Facebook, warning the content might incite further violence. Meta assessed the content and removed it from Facebook for violating its Violence and Incitement Community Standard. Meta’s Trusted Partner program is a network of non-governmental organizations, humanitarian agencies, human rights defenders and researchers from 113 countries around the world. Meta told the Board that the “greater the level of risk [of violence in a country], the higher the priority for developing relationships with Trusted Partners.” Trusted Partners can report content to Meta and provide feedback on the company’s content policies and enforcement. In this case, eight days after the Trusted Partner’s report, Meta determined the video included both a statement of intent to commit and a call for high-severity violence, and removed the content. The following context is relevant to the Board’s decision. Haiti is experiencing “ unprecedented insecurity ,” with gangs taking control of territory and terrorizing the population. Police are unable to address the violence and, in some cases, are reported to be complicit . According to the UN Special Representative to Haiti , “during the first quarter of the year, 1,647 criminal incidents – homicides, rapes, kidnappings and lynching – were recorded,” which is more than double the number compared with the same period in 2022. This rise in violence is taking place amid a political and humanitarian crisis. Haiti has not had an elected government since the assassination of President Jovenel Moïse in 2021 and has endured an ongoing cholera epidemic and natural disasters . In March 2023, Médecins Sans Frontiéres (MSF) reported having to close one of its hospitals as a result of the intense violence in the country’s capital. Acting Prime Minister Ariel Henry has repeatedly appealed to the international community to send multinational forces to fight gang control, citing this as a necessary first step in “creating an environment for the State to function again.” A civilian movement, referred to as “Bwa Kale,” has emerged in response to the rise in violence and the inability of the government or the police to protect the population. A widely reported event that took place on April 24, 2023, has proven a pivotal moment for the movement. 
When Haitian police stopped a bus carrying 14 men with weapons, who were allegedly on their way to join an allied gang in a nearby district, a crowd gathered at the scene. Police stood back, and some were seen to help, as the crowd stoned the alleged gang members and burned them to death. Recordings of this event circulated widely on social media. According to a report by the National Human Rights Defense Network in Haiti, following the circulation of these recordings on social media, others, “armed with firearms, machetes, and tires, began to search for armed bandits, their relatives, or anyone suspected of having links with them, in order to lynch them.” According to the UN High Commissioner for Human Rights , between April 24 and mid-August, “more than 350 people have been lynched by local people and vigilante groups.” In retaliation, gangs have taken revenge on those believed to be in or sympathetic to the movement. On October 2, 2023, the United Nations Security Council authorized a year-long multinational security mission to Haiti. According to reporting , it will be several months before forces are dispatched to Haiti. 3: Oversight Board Authority and Scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect to identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4: Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions The most relevant previous decisions of the Oversight Board include: II. Meta’s Content Policies Meta’s Violence and Incitement policy “aims to prevent potential offline harm that may be related to content on Facebook.” The policy rationale notes that not all calls for violence are literal and likely to incite violence, therefore the company tries to “consider the language and context in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety.” The policy rules prohibit “[s]tatements of intent to commit high-severity violence” and “[c]alls for high-severity violence.” Meta defines high-severity violence as a threat that could lead to death or is likely to be lethal. As part of the policy rationale, Meta explains that the company “see[s] aspirational or conditional threats directed at terrorists and other violent actors (e.g. ‘Terrorists deserve to be killed’), and [it] deem[s] those non-credible, absent specific evidence to the contrary.” The Board’s analysis was informed by Meta’s commitment to voice, which the company describes as “paramount,” and its value of safety. In explaining its commitment to voice, Meta states that “in some cases, we allow content – which would otherwise go against our standards – if it’s newsworthy and in the public interest.” This is known as the newsworthiness allowance . It is a general policy exception applicable to all Community Standards. 
To potentially apply the allowance, Meta conducts a balancing test, assessing the public interest in the content against the risk of harm. Meta removes content “even if it has some degree of newsworthiness, when leaving it up presents a risk of harm, such as physical, emotional or financial harm, or direct threat to public safety.” III. Meta’s Human-Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human-rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human-rights responsibilities in this case was informed by the following international standards: · The rights to freedom of opinion and expression : Article 19, International Covenant on Civil and Political Rights ( ICCPR ), General Comment No. 34 , Human Rights Committee, 2011; UN Special Rapporteur on freedom of opinion and expression, reports: A/HRC/38/35 (2018) and A/74/486 (2019). · The right to life : Article 6, ICCPR. · The prohibition of advocacy of hatred that constitutes incitement to discrimination, hostility or violence : Article 20, para. 2, ICCPR; Rabat Plan of Action, UN High Commissioner for Human Rights report: A/HRC/22/17/Add.4 (2013). 5: User Submissions Following Meta’s referral and the Board’s decision to accept the case, the user was sent a message notifying them of the Board’s review and providing them with an opportunity to submit a statement to the Board. The user did not submit a statement. 6: Meta’s Submissions Meta determined that the video constituted both a statement of intent to commit high-severity violence and a call for high-severity violence against the man being held in the cell, who, according to Meta, is a suspected member of the “5 Seconds Gang.” The “5 Seconds Gang” is a prominent gang in Haiti, so called because of “the perception that members will kill a person in that amount of time.” A member of the crowd can be heard on the video saying, “We’re going to break the lock…They’re already dead,” which Meta considered a statement of intent to kill the man. Meta also interpreted the phrase “bwa kale na boudaw” as a call to kill the man. Meta provided a broad analysis of the political, security and humanitarian situation in Haiti as background on the risk of harm posed by the content in question. Meta also noted that gang violence has become endemic in the country as government officials struggle to maintain authority and that “vigilantism is contributing to a culture of extrajudicial retributive violence.” Meta considered two specific exceptions to the Community Standards as well as the newsworthiness allowance as part of its analysis. According to Meta, the company will allow content that violates the Violence and Incitement policy if it is “shared for the purpose of condemning or raising awareness of violence. The onus is on the user to make clear that one of those purposes is the intent.” In this case, Meta did not find a clear intent to condemn or raise awareness in the post. According to Meta, the fact the video was shared on a Facebook page that describes itself as a media page is not sufficient to satisfy this exception. 
Meta also stated that it sometimes allows calls for high-severity violence in content that targets a person or entity designated under Meta’s Dangerous Organizations and Individuals (DOI) policy. According to Meta, this exception applies only if the company has confirmed that the target is a dangerous organization or individual, or a member of one. Meta informed the Board that the company has designated the “5 Seconds Gang” a dangerous organization. However, the company was unable to confirm the man in the cell shown in the video is a member of the gang. Had Meta been able to confirm his membership, then the content would not have violated the prohibition on call to action, according to the company. Finally, in considering whether to apply the newsworthiness allowance, Meta determined the risk of harm from the post outweighed its public-interest value. Meta found that the video could contribute to violence either against the “5 Seconds Gang” or the Bwa Kale movement. While the content did have value in notifying others of impending violence and unfolding events, according to Meta, that value was diminished given the widespread coverage of the Bwa Kale movement. Meta looked to the UN Rabat Plan of Action’s factors in assessing whether the post constitutes an incitement to violence and concluded that “the speech constituted an incitement to imminent violence” as the threat was “specific and connected to ongoing violent events.” In response to the Board’s questions, Meta informed the Board that the company did not designate the situation in Haiti as a crisis under the Crisis Policy Protocol (CPP) as the company already had mitigation measures in place when the protocol was launched in August 2022. The Board asked Meta 18 questions in writing. Questions related to Meta’s language capacity in enforcing its Community Standards in Haiti; processes for the review of reports from Trusted Partners and how the program relates to other systems Meta employs in crisis situations; and whether and how Meta used the Crisis Policy Protocol in Haiti. Meta answered all questions. 7: Public Comments The Oversight Board received nine public comments. Seven of the comments were submitted from the United States and Canada, one from Asia Pacific and Oceania, and one from Europe. To read public comments submitted for this case, please click here . 8: Oversight Board Analysis The Board examined whether this content should be removed by analyzing Meta’s content policies, human-rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance. The Board selected this case to examine the role of social media in the context of extreme insecurity and violence, and how Meta’s policies and enforcement systems address content shared during an ongoing crisis. This case falls into the Board’s strategic priority of Crisis and Conflict Situations. 8.1 Compliance With Meta’s Content Policies The Board finds that the content in this case violates the Violence and Incitement Community Standard. Nonetheless, the majority of the Board disagrees with Meta’s assessment on the application of the newsworthiness allowance. For the majority, given the delay of nearly three weeks in enforcement, Meta should have applied the newsworthiness allowance to allow the content to remain on Facebook at the time Meta reviewed the content. I. Content Rules a. 
Violence and Incitement Meta prohibits “[s]tatements of intent to commit high-severity violence” and “[c]alls for high-severity violence.” The Board finds the content in this case violates both policy lines. The content depicts incitement to violence in a context where there is a credible threat of offline harm to the person in the cell as well as others. The video shows a crowd of people as they attempt to break into a cell that holds a man who is alleged to be a gang member. People from the crowd shout that they will break in and that the man is “already dead.” These are statements that show intent to use lethal force. A member of the crowd shouts “bwa kale na boudaw,” a phrase that, in the context in Haiti , constitutes a call to high-severity violence. While “bwa kale” has been used in various contexts, including in music and political messaging, in this case, the phrase is used in a context that mirrors deadly events in which civilians have killed suspected gang members or their allies. Meta allows content that violates the Violence and Incitement policy to remain on the platform if it is shared to “raise awareness of or to condemn violence.” These exceptions are not included in the public-facing language of the policy but are provided in the internal set of instructions for content moderators. For the exception to apply, the company requires that the user make it clear that they are posting the content for either of the two reasons. The Board finds the user in this case did not meet this burden; therefore, the content does not benefit from this exception as it is defined by Meta. The caption accompanying the video is descriptive and concludes with a statement that “[t]he police cannot do anything, things are going to get weird.” Describing the video or providing a neutral or ambiguous caption does not meet the standard established by Meta. Meta also sometimes allows calls for high-severity violence when the target is a member of a designated Dangerous Organization or Individual . This exception is referred to in Meta’s policy rationale for the Violence and Incitement policy, although it is not set out in the rules. The Board agrees that this exception does not apply in this case. However, the Board notes a number of concerns with this exception. First, the exception is not clearly articulated in the public-facing Community Standard. Second, as the list of individuals and organizations designated under Meta’s policies is not public, there is no way for a user to know how this exception would apply to their content. The Board has repeatedly recommended that Meta should provide greater clarity and transparency on the Dangerous Organizations and Individuals policy (see Mention of the Taliban in News Reporting ; Shared Al Jazeera Post ; Öcalan's Isolation ; and Nazi Quote ). Finally, according to Meta, the credibility of the threat is not a consideration in applying this exception. If the target is a designated entity or a violent actor, the content is deemed non-violating. The Board finds it troubling that credible threats against anyone designated under the opaque Dangerous Organizations and Individuals policy are exempted from the Violence and Incitement Community Standard. b. Newsworthiness Allowance While the Board finds the content violates the Violence and Incitement Community Standard, the majority of the Board disagrees with Meta on the application of the newsworthiness allowance in this case. 
First, the Board notes that the risk of harm and public interest involved in the newsworthiness analysis should be assessed at the time Meta is considering issuing the allowance, rather than at the time the user posted the content. Meta should update the public-facing language on the newsworthiness allowance to make this clear to users. Ideally, the two points in time should be close enough to avoid a different outcome, particularly in the context of widespread and escalating violence engulfing an entire nation. Unfortunately, in this case, nearly three weeks passed between the user posting the video and Meta’s removal of the content. The majority of the Board finds that the risk of harm had significantly diminished when Meta made its decision (i.e., nearly three weeks after the incitement depicted in the video was posted) and Meta should have kept up the content by applying the allowance. The video has the potential to inform the public in Haiti, as well as abroad, of the realities of violence and the breakdown in public order at a time when Haiti is seeking international aid and intervention. Whatever risk the content posed, including to identifiable individuals in the video, had significantly diminished by the time Meta conducted its newsworthiness assessment, as discussed further in the section 8.2 (iii) analysis below. Had Meta reviewed the content soon after it was first posted, the risk of harm would have outweighed the public interest of the post, as in the Communal Violence in Indian State of Odisha case. In that case, Meta identified and removed the content within days of it being posted, at a time of heightened tensions and ongoing violence, when it posed a serious and likely risk of furthering violence, which outweighed the public interest value of the content. In this case, given Meta’s delay in reviewing the content, the risk of harm had significantly diminished and was outweighed by the post’s public interest value in safeguarding access to information and informing the broader public of the situation in Haiti during this period. By the time Meta made its newsworthiness assessment, the post had been viewed 500,000 times and whatever risk of harm the video posed had likely already materialized. As newsworthiness is assessed on escalation by Meta’s internal teams, Meta has the resources and expertise to make an even more context-sensitive assessment and to account for the change in circumstances when making that determination. For a minority of Board Members, Meta was right not to apply the newsworthiness allowance in this case. While the risk of harm to the individuals depicted in the video was most acute in the days following the posting of the content, the risk that the video could lead to additional and retaliatory violence had not passed when Meta reviewed the content, given the overall context of widespread and ongoing violence and insecurity in Haiti. Therefore, the harms inherent in having the content on the platform still outweighed the public interest in publicizing the speech, as discussed further in section 8.2 (iii) below. The risk that others, upon seeing this video, could take up arms and join the movement and seek to punish someone had not abated. Neither had the possibility passed of a member of the “5 Seconds Gang,” or an affiliated gang, recognizing someone in the video and seeking revenge on them, on other members of the Bwa Kale movement or on members of the police force. 
For these Board Members, the fact that several individuals are identifiable in the video and the risk of retaliation is well established and ongoing, means the content should not benefit from the allowance, even with the delay. 8.2 Compliance With Meta’s Human-Rights Responsibilities The majority of the Board finds removing this content, three weeks after it was posted, was not necessary and proportionate and restoring the post to Facebook is consistent with Meta’s human-rights responsibilities. The Board also finds that, to meet its human-rights responsibilities, Meta must ensure that moderation of content in Haiti during this period of heightened risk is effective and timely. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including “commentary on one’s own and on public affairs” as well as expression that people may find offensive (General Comment 34, para 11). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human-rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” ( A/74/486 , para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, both to those enforcing the rules and those impacted by them (General Comment No. 34, para. 25). Rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” ( Ibid ). Applied to rules that govern online speech, the UN Special Rapporteur on freedom of expression has said they should be clear and specific ( A/HRC/38/35 , para. 46). People using Meta’s platforms should be able to access and understand the rules and content reviewers should have clear guidance regarding their enforcement. The Board finds that, as applied to the facts of this case, Meta’s prohibition of statements of intent to commit and calls for high-severity violence are clearly stated. The Board considers that the policy and its purpose, as applied to this case, are sufficiently clear to satisfy the legality requirement. However, the Board notes that the “raising awareness or condemning violence” exception to the Violence and Incitement policy is still not available in the public-facing language of the policy. Failing to include these exceptions in the public-facing language of the Community Standard, and to explain that the onus is on the user to make their intent clear, raises serious legality concerns (see section 8.1 (1)(a) above). 
In the Russian Poem case, the Board recommended that Meta add to the public-facing language of its Violence and Incitement Community Standard its interpretation of the policy that allows for content containing statements with “neutral reference to a potential outcome of an action or an advisory warning,” and content that “condemns or raises awareness of violent threat.” Meta committed to making this change but has not updated the Violence and Incitement Community Standard accordingly. The Board highlights this recommendation again and urges Meta to add this exception to the public-facing language of the Community Standard. II. Legitimate Aim Under Article 19, para. 3 of the ICCPR, expression may be restricted for a defined and limited list of reasons. In this case, the Board finds the Violence and Incitement Community Standard’s prohibition of statements of intent and calls to commit high-severity violence serves the legitimate aim of protecting public order and respecting the rights of others. III. Necessity and Proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” ( General Comment No. 34 , para. 34 ). The Board has previously used the Rabat Plan factors to analyze the necessity and proportionality of removing content under the Violence and Incitement Community Standard when public safety was at issue (see Brazilian General’s Speech and Cambodian Prime Minister ). In this case, the Board looked to the Rabat Factors to evaluate the necessity and proportionality of removing this content. The Board also considered the lengthy delay in Meta reviewing this content, and what this indicates for the company’s ability to meet its human-rights responsibilities in moderating content in Haiti. The majority of the Board finds removing this content, nearly three weeks after it was posted, was no longer necessary. The majority considered the context in Haiti, the extent and reach of the post and the likelihood of harm given the delay between posting of the content and its removal. The risk a post presents depends on the context in which it is shared. That context changed when Meta failed to act and, as a result, the video already had 200,000 views by the time of review. For these Board Members, given the delay in Meta’s review of the content and the high number of views that had previously occurred, whatever risk the content posed had likely already materialized. A timely assessment from Meta about this post would have affected the necessity and proportionality analysis and warranted its removal as in the Communal Violence in Indian State of Odisha case, in which the removal occurred within days of the content being posted. Given the delay in Meta’s enforcement, the majority believes removal was no longer necessary. Additionally, in a situation of protracted widespread violence and a breakdown of government authority and public order, sharing information becomes even more important for allowing communities to react to important events affecting them. Experts consulted by the Board highlighted the fact that people in Haiti rely on information shared on WhatsApp to stay informed of potential risks. 
In a context where “work of journalists is constrained by threats and violence, [where attacks] on journalists occur frequently, and impunity for perpetrators is the norm”, preserving access to information on social media becomes even more important. Ensuring content documenting events is not removed unnecessarily can aid in efforts to inform the public, and to identify and hold accountable those inciting and carrying out violence in Haiti. In Claimed COVID-19 Cure , the Board emphasized that Meta should explain the range of options it has at its disposal in achieving legitimate aims (such as preventing harm) and articulate why the selected one is the least intrusive means. As noted in that decision, Meta should publicly demonstrate three things in determining its least intrusive means: (1) the public interest objective could not be addressed through measures that do not infringe on speech; (2) among the measures that infringe on speech, Facebook (sic) has selected the least intrusive measure; and (3) the selected measure actually helps achieve the goal and is not ineffective or counterproductive (A/74/486, para. 51-52). In this case, for example, given the international community’s interest in assessing the situation in order to help the people of Haiti (as described above), Meta should publicly justify why measures such as geo-blocking would be insufficient to avert harm. Given nearly three weeks had elapsed before Meta reviewed the content, the company should also explain why measures such as preventing engagement with the content or employing demotions would not have been sufficient to minimize the risk of harm at that point. Rather, Meta seems to ask the Board to assess necessity and proportionality solely within a binary up/down box instead of considering the impacts of its full range of tools, as is required by a serious human-rights approach to content moderation. For a minority of the Board, removing this content is necessary and proportionate, especially given the context in Haiti, the extent and reach of the post, and the likelihood of harm. The Board found that this video was posted during a period of heightened risk, with intensifying gang violence and the start of a civilian movement of “self-defense” or “vigilante” violence against suspected gang members. This movement has previously taken suspected gang members from police custody to kill them by stoning, beating and setting them on fire. According to the UN High Commissioner for Human Rights , between April 24 and mid-August, “more than 350 people have been lynched by local people and vigilante groups. Those killed have included 310 alleged gang members, 46 members of the public and a police officer.” Videos of such events have circulated on social media and have been connected to others taking up arms to join and search for suspected gang members in order to kill them. Additionally, according to reports from the UN High Commissioner for Human Rights , members of the municipal government and police forces believed to be sympathetic to local self-defense groups have been killed by gangs in retaliation, as well as people believed to be in the movement. The leader of the “5 Seconds Gang” has previously threatened retaliation, including murder, on social media. The post names the precinct and shows the face of the person trying to break into the cell, as well as the faces of multiple people in the crowd. This post was viewed over 500,000 times. 
Given these facts, the threat of violence from this video circulating on Facebook was direct and imminent (General Comment 34, para 35), especially in the immediate period following its publication but also when Meta conducted its review. Additionally, for the minority, no measure, short of removal, would be sufficient to protect those depicted and those at risk of further violence spurred on by this video. The Board is concerned about Meta's ability to proactively identify and effectively moderate content in Haiti in a timely manner. The Board notes the heightened risk of content directly contributing to harm in a context in which public order and government services are absent, and extrajudicial and decentralized killing has become the main tool in a fight for power and control. In this case, there was a significant delay in Meta evaluating and removing the content. This delay appears to be a result of the company’s failure to invest adequate resources into moderating content in Haiti. The Board has previously raised concerns about the company’s lack of investment in moderating content in non-English languages (see e.g. Mention of the Taliban in News Reporting, Shared Al Jazeera Post and Ocalan’s Isolation). In this case, Meta was not able to provide a timely assessment of a report from a Trusted Partner, which is one of the main tools Meta relies on in Haiti to identify potentially violating content. A recent report by one of Meta’s Trusted Partners that evaluated the program found significant irregularity in response times from Meta and concluded that the program is under-resourced. Trusted Partners invest their time and resources to alert Meta of potentially dangerous content on its platforms. The Board is concerned that Meta is not resourcing its internal teams adequately enough to evaluate these reports in a timely manner. Finally, Meta failed to activate its Crisis Policy Protocol in Haiti. In the Former President Trump’s Suspension case, the Board urged Meta to develop and publish a policy to govern its responses to crises and novel situations where its regular processes would not prevent or avoid imminent harm. In response, Meta created the Crisis Policy Protocol, which aims to “codify [the company’s] policy-specific responses to ensure [Meta] is timely, systematic and proportionate in a crisis” ( Crisis Policy Protocol , Policy Forum Minutes, January 25, 2022). In this case, Meta told the Board that the company did not designate the situation in Haiti as a crisis under the protocol as it is “designed to facilitate timely assessment and mitigation of novel or emergent crises,” and the company already had risk-mitigation measures in place in Haiti when the Crisis Policy Protocol came into use in August 2022. However, the Board is concerned that if the company fails to use the Crisis Policy Protocol in such situations, it will fail to deliver principled and timely moderation in these circumstances. Many crises and conflicts around the world are ongoing or have periods of acute violence or harm that subside and re-emerge depending on the circumstances. Meta must have a mechanism in place to assess risks in such crises and transition from existing mitigation measures to those provided by the Crisis Policy Protocol. Failure to use the Crisis Policy Protocol under such circumstances undermines the company’s and the public’s ability to assess the effectiveness of the protocol in meeting its aims. 
The Board understands that Meta must make difficult decisions when it comes to how it prioritizes resourcing for its various content-moderation systems (i.e. developing language-specific classifiers, hiring content moderators, deploying the Crisis Policy Protocol or prioritizing operational measures such as Trusted Partners). However, to meet its human-rights responsibilities, Meta must ensure that moderation of content in Haiti, during this period of heightened risk, is effective and timely. 9: Oversight Board Decision The Oversight Board overturns Meta's decision to take down the content, requiring the post to be restored. 10: Recommendations Enforcement 1.To address the risk of harm, particularly where Meta has no or limited proactive moderation tools, processes or measures to identify and assess content, Meta should assess the timeliness and effectiveness of its responses to content escalated through the Trusted Partner program. The Board will consider this recommendation implemented when Meta both shares the results of this assessment with the Board – including the distribution of average time to final resolution for escalations originating from Trusted Partners disaggregated by country, Meta's own internal goals for time to final resolution, and any corrective measures it is taking in case those targets are not met – as well as publishes a public-facing summary of its findings to demonstrate it has complied with this recommendation. Policy The Board also reiterates the following recommendation from the Russian Poem case: Meta should add to the public-facing language of its Violence and Incitement Community Standard that it interprets the policy to allow content containing statements with “neutral reference to a potential outcome of an action or an advisory warning,” and content that “condemns or raises awareness of violent threat.” *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by the majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. Return to Case Decisions and Policy Advisory Opinions" fb-m8d2sogs,Hostages Kidnapped From Israel,https://www.oversightboard.com/decision/fb-m8d2sogs/,"December 19, 2023",2023,December,"TopicSafety, Violence, War and conflictCommunity StandardViolence and incitement",Violence and incitement,Overturned,"Israel, Palestinian Territories","The Board overturns Meta’s original decision to remove the content from Facebook. 
It finds that restoring the content to the platform, with a “mark as disturbing” warning screen, is consistent with Meta’s content policies, values and human-rights responsibilities.",34250,5301,"Overturned December 19, 2023 The Board overturns Meta’s original decision to remove the content from Facebook. It finds that restoring the content to the platform, with a “mark as disturbing” warning screen, is consistent with Meta’s content policies, values and human-rights responsibilities. Expedited Topic Safety, Violence, War and conflict Community Standard Violence and incitement Location Israel, Palestinian Territories Platform Facebook 1. Summary The case involves an emotionally powerful video showing a woman, during the October 7 Hamas-led terrorist attack on Israel, begging her kidnappers not to kill her as she is taken hostage and driven away. The accompanying caption urges people to watch the video to better understand the horror that Israel woke up to on October 7, 2023. Meta’s automated systems removed the post for violating its Dangerous Organizations and Individuals Community Standard. The user appealed the decision to the Oversight Board. After the Board identified the case for review, Meta informed the Board that the company had subsequently made an exception to the policy line under which the content was removed and restored the content with a warning screen. The Board overturns Meta’s original decision and approves the decision to restore the content with a warning screen but disapproves of the associated demotion of the content barring it from recommendations. This case and Al-Shifa Hospital (2023-049-IG-UA) are the Board’s first cases decided under its expedited review procedures. 2. Case Context and Meta’s Response On October 7, 2023, Hamas, a designated Tier 1 organization under Meta’s Dangerous Organizations and Individuals Community Standard, led unprecedented terrorist attacks on Israel from Gaza that killed an estimated 1,200 people, and resulted in roughly 240 people being taken hostage ( Ministry of Foreign Affairs, Government of Israel ). Israel immediately undertook a military campaign in Gaza in response to the attacks. Israel’s military action has killed more than 18,000 people in Gaza as of mid-December 2023 ( UN Office for the Coordination of Humanitarian Affairs , drawing on data from the Ministry of Health in Gaza), in a conflict where both sides have been accused of violating international law. Both the terrorist attacks and Israel’s subsequent military actions have been the subjects of intense worldwide publicity, debate, scrutiny, and controversy, much of which has taken place on social media platforms, including Instagram and Facebook. Meta immediately designated the events of October 7 a terrorist attack under its Dangerous Organizations and Individuals policy. Under its Community Standards, this means that Meta would remove any content on its platforms that “praises, substantively supports or represents” the October 7 attacks or their perpetrators. It would also remove any perpetrator-generated content relating to such attacks and third-party imagery depicting the moment of such attacks on visible victims. 
In reaction to an exceptional surge in violent and graphic content being posted to its platforms following the terrorist attacks and military response, Meta put in place several temporary measures , including lowering the confidence thresholds for the automatic classification systems (classifiers) of its Hate Speech, Violence and Incitement, and Bullying and Harassment policies to identify and remove content. Meta informed the Board that these measures applied to content originating in Israel and Gaza across all languages. The changes to these classifiers increased the automatic removal of content where there was a lower confidence score for the content violating Meta’s policies. In other words, Meta used its automated tools more aggressively to remove content that might be prohibited. Meta did this to prioritize its value of safety, with more content removed than would have occurred under the higher confidence threshold in place prior to October 7. While this reduced the likelihood that Meta would fail to remove violating content that might otherwise evade detection or where capacity for human review was limited, it also increased the likelihood of Meta mistakenly removing non-violating content related to the conflict. When escalation teams assessed videos as violating its Violent and Graphic Content, Violence and Incitement and Dangerous Organizations and Individuals policies, Meta relied on Media Matching Service banks to automatically remove matching videos. This approach raised the concern of over-enforcement, including people facing restrictions on or suspension of their accounts following multiple violations of Meta's content policies (sometimes referred to as ""Facebook jail""). To mitigate that concern, Meta withheld “ strikes ” that would ordinarily accompany automatic removals based on the Media Matching Service banks (as Meta announced in its newsroom post ). Meta’s changes in the classifier confidence threshold and its strike policy are limited to the Israel-Gaza conflict and are intended to be temporary. As of December 11, 2023, Meta had not restored confidence thresholds to pre-October 7 levels. 3. Case Description This case involves a video of the October 7 attacks depicting a woman begging her kidnappers not to kill her as she is taken hostage and driven away on a motorbike. The woman is seen sitting on the back of the vehicle, reaching out and pleading for her life. The video then shows a man, who appears to be another hostage, being marched away by captors. The faces of the hostages and those abducting them are not obscured and are identifiable. The original footage was shared broadly in the immediate aftermath of the attacks. The video posted by the user in this case, approximately one week after the attacks, integrates text within the video stating: “Israel is under attack,” and includes the hashtag #FreeIsrael, also naming one of the hostages. In a caption accompanying the video, the user states that Israel was attacked by Hamas militants and urges people to watch the video to better understand the horror that Israel woke up to on October 7, 2023. At the time of writing, both people being abducted in the video were still being held hostage. An instance of this video was placed in a Media Matching Service bank. 
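The enforcement mechanics described above combine two levers: Media Matching Service banks, which automatically remove videos matching content that escalation teams have already assessed as violating, and classifiers whose removal thresholds were temporarily lowered so that lower-confidence content was also removed automatically, with strikes withheld for bank-based removals. The sketch below is a simplified illustration of how those levers interact, using hypothetical hashes, scores and threshold values rather than Meta's actual systems or code:

```python
from dataclasses import dataclass, field

@dataclass
class ModerationConfig:
    # Hashes of videos already judged violating on escalation
    # (stand-in for a Media Matching Service bank).
    banked_hashes: set[str] = field(default_factory=set)
    # Classifier confidence above which content is auto-removed.
    removal_threshold: float = 0.90
    # Whether bank-based auto-removals also add a strike to the account.
    apply_strikes_on_bank_match: bool = True

def moderate(video_hash: str, classifier_score: float, cfg: ModerationConfig) -> dict:
    """Return the action taken on one post under the given configuration."""
    if video_hash in cfg.banked_hashes:
        return {"action": "remove",
                "reason": "matches banked violating video",
                "strike": cfg.apply_strikes_on_bank_match}
    if classifier_score >= cfg.removal_threshold:
        return {"action": "remove", "reason": "classifier above threshold", "strike": True}
    return {"action": "keep", "reason": "below threshold", "strike": False}

# Ordinary configuration: a borderline post (score 0.80) stays up.
normal = ModerationConfig(banked_hashes={"abc123"}, removal_threshold=0.90)
print(moderate("zzz999", 0.80, normal))   # keep

# Temporary crisis configuration, as described in the decision:
# threshold lowered and strikes withheld for bank-based removals.
crisis = ModerationConfig(banked_hashes={"abc123"},
                          removal_threshold=0.75,
                          apply_strikes_on_bank_match=False)
print(moderate("zzz999", 0.80, crisis))   # now removed automatically
print(moderate("abc123", 0.10, crisis))   # removed via the bank, no strike
```

As the example illustrates, lowering the threshold increases removals of content the classifier is less certain about, which is precisely the trade-off the decision describes between catching more violating content and mistakenly removing more non-violating content.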
Meta initially removed the post in this case for violating its Dangerous Organizations and Individuals policy, which prohibits third-party imagery depicting the moment of designated terror attacks on visible victims under any circumstances, even if shared to condemn or raise awareness of the attack. Meta did not apply a strike. The user then appealed Meta’s decision to the Oversight Board. In the immediate aftermath of the October 7 terrorist attacks, Meta enforced strictly its policy on videos showing the moment of attack on visible victims. Meta explained this was due to concerns about the dignity of the hostages as well as the use of such videos to celebrate or promote Hamas’ actions. Meta added videos depicting moments of attack on October 7, including the video shown in this case, to Media Matching Service banks so future instances of identical content could be removed automatically. Meta told the Board that it applied the letter of the Dangerous Organizations and Individuals policy to such content and issued consolidated guidance to reviewers. On October 13, the company explained in its Newsroom post that it temporarily expanded the Violence and Incitement policy to remove content that clearly identified hostages when Meta is made aware of it, even if it was done to condemn the actions or raise awareness of their situation. The company affirmed to the Board that these policies applied equally to both Facebook and Instagram, although similar content has been reported to have appeared widely on the latter platform, indicating there may have been less effective enforcement of this policy there. The Violence and Incitement Community Standard generally allows content that depicts kidnappings and abductions in a limited number of contexts, including where the content is shared for informational, condemnation, or awareness-raising purposes or by the family as a plea for help. However, according to Meta, when it designates a terrorist attack under its Dangerous Organizations and Individuals policy, and those attacks include hostage-taking of visible victims, Meta’s rules on moment-of-attack content override the Violence and Incitement Community Standard. In such cases, the allowances within that policy for informational, condemning or awareness-raising sharing of moment-of-kidnapping videos do not apply and the content is removed. However, as events developed following October 7, Meta observed online trends indicating a change in the reasons why people were sharing videos featuring identifiable hostages at the moment of their abduction. Families of victims were sharing the videos to condemn and raise awareness, and the Israeli government and media organizations were similarly sharing the footage, including to counter emerging narratives denying the October 7 events took place or denying the severity of the atrocities. In response to these developments, Meta implemented an exception to its Dangerous Organizations and Individuals policy, while maintaining its designation of the October 7 events. Subject to operational constraints, moment-of-kidnapping content showing identifiable hostages would be allowed with a warning screen in the context of condemning, raising awareness, news reporting, or a call for release. Meta told the Board that the roll-out of this exception was staggered and did not reach all users at the same time. On or around October 20, the company began to allow hostage-taking content from the October 7 attacks. 
Initially it did so only from accounts included in the “Early Response Secondary Review” program (commonly known as “cross-check”), given concerns about operational constraints, including uncertain human review capacity. The cross-check program provides guaranteed additional human review of content by specific entities whenever they post content that is identified as potentially violating and requiring enforcement under Meta content policies. On November 16, Meta determined it had capacity to expand the allowance of hostage-taking content to all accounts and did so, but only for content posted after this date. Meta has informed the Board and explained in the public newsroom update that the exception it is currently making is limited only to videos depicting the moment of kidnapping of the hostages taken in Israel on October 7. After the Board identified this case, Meta reversed its original decision and restored the content with a “mark as disturbing” warning screen. This restricted the visibility of the content to people over the age of 18 and removed it from recommendations to other Facebook users. 4. Justification for Expedited Review The Oversight Board’s Bylaws provide for expedited review in “exceptional circumstances, including when content could result in urgent real-world consequences,” and decisions are binding on Meta (Charter, Art. 3, section 7.2; Bylaws, Art. 2, section 2.1.2). The expedited process precludes the level of extensive research, external consultation or public comments that would be undertaken in cases reviewed on ordinary timelines. The case is decided on the information available to the Board at the time of deliberation and is decided by a five-member panel without a full vote of the Board. The Oversight Board selected this case and one other case, Al-Shifa Hospital (2023-049-IG-UA) because of the importance of freedom of expression in conflict situations, which has been imperiled in the context of the Israel-Hamas conflict. Both cases are representative of the types of appeals users in the region have been submitting to the Board since the October 7 attacks and Israel’s subsequent military action. Both cases fall within the Oversight Board’s crisis and conflict situations priority. Meta’s decisions in both cases meet the standard of “urgent real-world consequences” to justify expedited review, and accordingly the Board and Meta agreed to proceed under the Board’s expedited procedures. In its submissions to the Board, Meta recognized that “the decision on how to treat this content is difficult and involves competing values and trade-offs,” welcoming the Board’s input on this issue. 5. User Submissions The author of the post stated in their appeal to the Board that the video captures real events. It aims to “stop terror” by showing the brutality of the attack on October 7, in which the hostages were captured. The user was notified of the Board’s review of their appeal. 6. Decision The Board overturns Meta’s original decision to remove the content from Facebook. It finds that restoring the content to the platform, with a “mark as disturbing” warning screen, is consistent with Meta’s content policies, values and human-rights responsibilities. However, the Board also concludes that Meta’s demoting of the restored content, in the form of its exclusion from the possibility of being recommended, does not accord with the company’s responsibilities to respect freedom of expression. 
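The treatment Meta applied on restoration was not a binary keep-or-remove outcome but a bundle of separate levers: a “mark as disturbing” warning screen, an 18+ viewing restriction and exclusion from recommendations. The Board accepts the first two but not the third. A minimal, purely hypothetical sketch (not Meta's enforcement code) of representing these levers independently, which is what allows the warning screen and age gate to be kept while the recommendation demotion is dropped:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContentTreatment:
    """Hypothetical bundle of enforcement levers applied to one post."""
    remove: bool = False
    warning_screen: str | None = None   # e.g. "mark_as_disturbing"
    minimum_viewer_age: int | None = None
    eligible_for_recommendations: bool = True

# Treatment the decision describes Meta applying on restoration:
# warning screen, 18+ age gate, excluded from recommendations.
as_restored = ContentTreatment(
    warning_screen="mark_as_disturbing",
    minimum_viewer_age=18,
    eligible_for_recommendations=False,
)

# The outcome the Board's reasoning points toward: keep the warning
# screen and age restriction, but do not also demote the post out of
# recommendations.
board_preferred = ContentTreatment(
    warning_screen="mark_as_disturbing",
    minimum_viewer_age=18,
    eligible_for_recommendations=True,
)
```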
6.1 Compliance With Meta’s Content Policies The Board finds Meta’s initial decision to remove the content was in line with its Dangerous Organizations and Individuals policy at the time, prohibiting “third-party imagery depicting the moment of [designated] attacks on visible victims.” Restoring the content, with a warning screen, also complied with Meta’s temporary allowance to permit such content when shared for the purposes of condemning, awareness-raising, news reporting or calling for release. 6.2 Compliance With Meta’s Human-Rights Responsibilities The Board agrees with Meta’s initial policy position, on October 7, to remove “third-party imagery depicting the moment of [designated] attacks on visible victims,” in accordance with the Dangerous Organizations and Individuals policy. Protecting the dignity of hostages and ensuring they are not exposed to public curiosity should be Meta’s default approach. In exceptional circumstances, however, when a compelling public interest or the vital interest of hostages require it, temporary and limited exceptions to this prohibition can be justified. In the specific circumstances of this case, as Meta recognized in restoring the content and adding a warning screen to it after the Board had selected it, the content should be allowed. The Board finds Meta’s decision to temporarily change its initial approach -- allowing such content with a warning screen when shared for purposes of condemning, awareness-raising, news reporting or calling for release -- was justifiable. Moreover, this change was justifiable earlier than November 16, as it became clear that Meta's strict enforcement of the Dangerous Organizations and Individuals policy was impeding expression aimed at advancing and protecting the rights and interests of the hostages and their families. Given the fast-moving circumstances, and the high costs to freedom of expression and access to information of removing this kind of content, Meta should have moved more quickly to adapt its policy. As the Board stated in the Armenian Prisoners of War Video case, the protections for freedom of expression under Article 19 of the International Covenant on Civil and Political Rights (ICCPR) “remain engaged during armed conflicts, and should continue to inform Meta’s human rights responsibilities, alongside the mutually reinforcing and complementary rules of international humanitarian law that apply during such conflicts.” The UN Guiding Principles on Business and Human Rights impose a heightened responsibility on businesses operating in a conflict setting (""Business, human rights and conflict-affected regions: towards heightened action,"" A/75/212 ). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human-rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. In doing so, the Board attempts to be sensitive to how those rights may be different as applied to a private social media company than as applied to a government. 
Nonetheless, as the UN Special Rapporteur on freedom of expression has stated, while companies do not have the obligations of governments, “their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (report A/74/486 , para. 41). Legality requires that any restriction on freedom of expression should be accessible and clear enough to provide guidance as to what is permitted and what is not. As applied to this case, Meta’s Dangerous Organizations and Individuals rule prohibiting third-party imagery depicting the moment of designated terror attacks on visible victims, regardless of the context it is shared in, is clear. In addition, Meta publicly announced on October 13, through a newsroom post, that it would remove all such videos. While Meta subsequently changed its approach -- first on October 20, allowing such content (with a warning screen) when shared by entities benefiting from the ERSR program for informational or raising awareness purposes, and again on November 16, expanding that allowance for all users -- the company did not announce this change publicly until December 5. This was after the Board identified this case for review but before the Board publicly announced it was taking this case on December 7. Throughout the conflict, the rules that Meta has applied have changed several times but have not been made fully clear to users. It is also not clear under which policy the warning screen is imposed, as neither the Dangerous Organizations and Individuals nor Violence and Incitement policies provide for the use of warning screens. The Board encourages Meta to address these legality concerns by clarifying publicly the basis and scope of its current policy regarding content relating to the hostages taken from Israel on October 7, and its relation to the more general policies at issue. Under Article 19, para. 3 of the ICCPR, expression may be restricted for a defined and limited list of reasons. The Board has previously found that the Dangerous Organizations and Individuals and the Violence and Incitement policies pursue the legitimate aim of protecting the rights of others (See Tigray Communication Affairs Bureau and Mention of the Taliban in News Reporting ). The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” ( General Comment No. 34, para. 34 ). The Board finds Meta’s initial decision to remove all content depicting visible hostages was necessary and proportionate to achieve the aims of protecting the safety and dignity of hostages and to ensure Meta’s platforms were not used to further the violence of October 7 or to encourage further degrading and inhumane treatment of hostages. The Board considered international humanitarian law, the digital context during the first week of the conflict, and the mitigation measures Meta undertook to limit the impact of its decision to remove all such content. 
International humanitarian law, in both international and non-international armed conflicts, identifies the special vulnerability of hostages in an armed conflict and provides protections to address those heightened risks; the taking of hostages is prohibited by the Geneva Conventions (Common Article 3, Geneva Conventions; Articles 27 and 34, Geneva Convention IV). According to the ICRC’s commentary, the definition of hostage taking in the International Convention against the taking of hostages, as the seizure or detention of a person, accompanied by the threat to kill, to injure or to continue to detain that person in order to compel a third party to do or to abstain from doing any act as an explicit or implicit condition for his or her release, should be used to define the term under the Geneva Conventions. Common Article 3 also prohibits “outrages upon personal dignity, in particular humiliating and degrading treatment.” Article 27 of Geneva Convention IV protects protected persons, including hostages, from inhumane and degrading treatment, including from insults and public curiosity. The sharing of hostage videos can serve as an integral part of a strategy to threaten a government and the public and can promote the continuing detention and degradation of hostages, an ongoing violation of international law. Under such circumstances, permitting the dissemination of images of violence, mistreatment, and ongoing vulnerability can promote further violence and degrading treatment. Videos of hostages began circulating on social media platforms simultaneously with the October 7 attack. According to reporting, throughout the first week videos were broadcast by Hamas and the Palestinian Islamic Jihad, with grave concerns that further livestreams and videos of executions or torture could follow. Under such circumstances, Meta’s decision to prohibit all such videos on its platforms was reasonable and consistent with international humanitarian law, its Community Standards, and its value of safety. Industry standards, such as commitments laid out in the Christchurch Call, require companies to react rapidly and effectively to harmful content shared in the aftermath of violent extremist attacks. However, the Christchurch Call also emphasizes the need to respond to such content in a manner consistent with human rights and fundamental freedoms. At the same time, Meta took measures to limit the potential adverse impact of its decision to remove all such imagery by deciding not to apply a strike to users who had their content removed under this policy, mitigating the detrimental effects of the strict policy on users who may have been posting the content for purposes such as condemning or raising awareness and reporting on events. The Board finds that it can be reasonable and helpful for Meta to consider calibrating or dispensing with strikes to mitigate the negative consequences of categorical rules strictly applied at scale, thereby making a restriction on expression more proportionate to the aim being pursued. The Geneva Conventions prohibit exposing protected persons, including hostages, to public curiosity as it constitutes humiliating treatment. 
There are narrow exceptions to this prohibition, which the Board analyzed in the Armenian Prisoners of War Video decision, requiring a reasonable balance to be struck between the benefits of public disclosure of materials depicting identifiable prisoners, “given the high value of such materials when used as evidence to prosecute war crimes, promote accountability, and raise public awareness of abuse, and the potential humiliation and even physical harm that may be caused to the persons in the shared materials.” Such exceptional disclosure requires a compelling public interest or that the vital interests of the hostage be served. For these reasons, any decision to make a temporary exception to restrictions on content showing identifiable hostages or prisoners of war must be assessed on the particular facts relating to those hostages or prisoners of war and their rights and interests, and must be continuously reviewed to ensure it is narrowly tailored to serve those rights and interests and does not become a general exception to the rules aimed at protecting the rights and dignity of protected persons. The facts in this case provided strong signals that such disclosure was in the vital interest of the hostages. Within the first week following the October 7 attack, families of the hostages also began to organize and share videos as part of their campaign to call for the release of hostages and to pressure various governments to act in the best interest of the hostages. The video used in the post in this case was also part of a campaign by the family of the woman depicted. In addition, from approximately October 16, the Israeli government began showing video compilations to journalists to demonstrate the severity of the October 7 attacks. There were also reports of narratives denying the atrocities spreading in the weeks following October 7. Given this, it was reasonable for Meta to conclude that the company must not silence the families of hostages and frustrate the work of news organizations and other entities to investigate and report on the facts. For families and authorities, seeing a hostage alive, being able to identify their physical condition, and even identifying the kidnappers can be crucial to securing the hostages’ future safety. This is particularly important while Meta lacks a transparent and effective mechanism for preserving such content (see further discussion below). In short, given the changes in the digital environment in the weeks following the events of October 7, Meta was justified in making a temporary exception to its policies, limited to the hostages taken in Israel on October 7. The Board also concludes that Meta took too long to roll out the application of this exception to all users. Meta was also too slow to announce this temporary change to the public. On October 20 Meta began allowing such videos with a warning screen when shared for informational and awareness-raising purposes, limited to those entities on the cross-check ERSR list. On November 16, almost four weeks after the initial allowance was acknowledged and nearly a month and a half into the conflict, Meta extended that allowance to all users. On December 5, Meta finally announced through a newsroom article that it had made a change to its policy prohibiting videos depicting hostages at the moment of attack. 
While the Board finds the concept of a staged rollout of changes to the policy reasonable in principle, it concludes that the company should have reacted to changing circumstances more quickly. After Meta changed its initial approach and introduced the allowance, Meta still applied a warning screen to the content. As the Board has concluded in previous cases, applying a warning screen can in certain circumstances be a proportionate measure, even though it has a negative impact on the reach of the content, because it provides users with the ability to share and a choice of whether to see disturbing content (see Armenian Prisoners of War Video ). The Board finds that excluding content raising awareness of potential human-rights abuses, conflicts, or acts of terrorism from recommendations is not a necessary or proportionate restriction on freedom of expression, in view of the very high public interest in such content. Warning screens and removal from recommendations serve separate functions, and should in some instances be decoupled, in particular in crisis situations. Removing content from recommendation systems means reducing the reach that this content would otherwise get. The Board finds this practice interferes with freedom of expression in disproportionate ways insofar as it applies to content that is already limited to adult users and that is posted to raise awareness, condemn, or report on matters of public interest such as the development of a violent conflict. The Board is also concerned that Meta’s rapidly changing approach to content moderation during the conflict has been accompanied by an ongoing lack of transparency that undermines effective evaluation of its policies and practices, and that can give it the outward appearance of arbitrariness. For example, Meta confirmed that the exception permitting the sharing of imagery depicting visible victims of a designated attack for informational or awareness-raising purposes is a temporary measure. However, it is unclear whether this measure is part of the company’s Crisis Policy Protocol or was improvised by Meta’s teams as events unfolded. Meta developed the Crisis Policy Protocol in response to the Board’s recommendation no. 18 in the Former President Trump’s Suspension case. According to the company, it is meant to provide Meta with a framework for anticipating and responding to risks consistently across crises. The lack of transparency on the protocol means neither the Board nor the public knows whether the policy measure used in this case (i.e., allowing content violating the letter of the relevant rule under the Dangerous Organizations and Individuals policy for raising awareness and condemnation purposes with a warning screen) was developed and evaluated prior to this conflict, what the exact scope of the temporary policy measure is (e.g., whether it applies to videos depicting hostages in detention, after the October 7 attack), the criteria for its use, the circumstances under which the measure will no longer be necessary, and whether Meta intends to resume removing all such content once the temporary measure ends. The Board re-emphasizes its concern about the lack of timely and effective notification, for users and the public, of these ad hoc crisis measures. 
The Board has previously held that Meta should announce such exceptions to its Community Standards, “their duration and notice of their expiration, in order to give people who use its platforms notice of policy changes allowing certain expression” (see Iran Protest Slogan , recommendation no. 5, which Meta has partially implemented). The lack of transparency can also have a chilling effect on users who may fear their content will be removed and their account penalized or restricted if they make a mistake. Finally, given the baseline general prohibition on allowing hostages to be exhibited and the very exceptional circumstances under which this can be relaxed, prompt and regular notice and transparency regarding the exact scope and time limitations of the exceptions helps to ensure that they will remain as limited as possible. Moreover, the company first began allowing entities benefiting from its cross-check program to share videos of hostages with a warning screen for informational or awareness-raising purposes before expanding this allowance to all users. Adopting an intermediate step to ease into a more permissive temporary policy appears reasonable given the context, allowing the company to test the effects of the change on a more limited scale before implementing it broadly. However, doing so through the use of the cross-check program also highlights anew some of the problems that the Board had previously identified in its policy advisory opinion on the subject. These include unequal treatment of users, lack of transparent criteria for the cross-check lists, the need to ensure greater representation of users whose content is likely to be important from a human-rights perspective, such as journalists and civil society organizations, and overall lack of transparency around how cross-check works. The use of the cross-check program in this way also contradicts how Meta has described and explained the purpose of the program, as a mistake prevention system and not a program that provides certain privileged users with more permissive rules. Meta has indicated that it continues to work on implementing most of the recommendations the Board made in that policy advisory opinion, but neither the Board nor the public have sufficient information to evaluate whether the reliance on the cross-check list during the conflict was in line with Meta’s human rights responsibilities or was likely to lead to a disparate impact, privileging one market or one group of speakers over another. Finally, Meta has a responsibility to preserve evidence of potential human-rights violations and violations of international humanitarian law, as also recommended in the BSR report (recommendation no. 21) and advocated by civil society groups. Even when content is removed from Meta’s platforms, it is vital to preserve such evidence in the interest of future accountability (see Sudan Graphic Video and Armenian Prisoners of War Video ). While Meta explained that it retains all content that violates its Community Standards for a period of one year, the Board urges that content specifically related to potential war crimes, crimes against humanity, and grave violations of human rights be identified and preserved in a more enduring and accessible way for purposes of longer-term accountability. The Board notes that Meta has agreed to implement recommendation no. 1 in the Armenian Prisoners of War Video case. 
This called on Meta to develop a protocol to preserve and, where appropriate, share with competent authorities, information to assist in investigations and legal processes to remedy or prosecute atrocity crimes or grave human-rights violations. Meta has informed the Board that it is in the final stages of developing a “consistent approach to retaining potential evidence of atrocity crimes and serious violations of international human rights law” and expects to provide the Board with a briefing about its approach soon. The Board expects Meta to fully implement the above recommendation. *Procedural note: The Oversight Board's expedited decisions are prepared by panels of five members and are not subject to majority approval of the full Board. Board decisions do not necessarily represent the personal views of all members. Return to Case Decisions and Policy Advisory Opinions" fb-mbgotvn8,Russian poem,https://www.oversightboard.com/decision/fb-mbgotvn8/,"November 16, 2022",2022,,"TopicArt / Writing / Poetry, War and conflictCommunity StandardHate speech","Policies and TopicsTopicArt / Writing / Poetry, War and conflictCommunity StandardHate speech",Overturned,"Latvia, Russia, Ukraine",The Oversight Board has overturned Meta’s original decision to remove a Facebook post comparing the Russian army in Ukraine to Nazis and quoting a poem that calls for the killing of fascists,42282,6668,"Overturned November 16, 2022 The Oversight Board has overturned Meta’s original decision to remove a Facebook post comparing the Russian army in Ukraine to Nazis and quoting a poem that calls for the killing of fascists Standard Topic Art / Writing / Poetry, War and conflict Community Standard Hate speech Location Latvia, Russia, Ukraine Platform Facebook The Oversight Board has overturned Meta’s original decision to remove a Facebook post comparing the Russian army in Ukraine to Nazis and quoting a poem that calls for the killing of fascists. It has also overturned Meta’s finding that an image of what appears to be a dead body in the same post violated the Violent and Graphic Content policy. Meta had applied a warning screen to the image on the grounds that it violated the policy. This case raises some important issues about content moderation in conflict situations. About the case In April 2022, a Facebook user in Latvia posted an image of what appears to be a dead body, face down, in a street. No wounds are visible. Meta confirmed to the Board that the person was shot in Bucha, Ukraine. The Russian text accompanying the image argues that the alleged atrocities Soviet soldiers committed in Germany in World War Two were excused on the basis that they avenged the crimes Nazi soldiers had committed in the USSR. It draws a connection between the Nazi army and the Russian army in Ukraine, saying the Russian army “became fascist.” The post cites alleged atrocities committed by the Russian army in Ukraine and says that “after Bucha, Ukrainians will also want to repeat... and will be able to repeat.” It ends by quoting the poem “Kill him!” by Soviet poet Konstantin Simonov, including the lines: “kill the fascist... Kill him! Kill him! 
Kill!” The post was reported by another Facebook user and removed by Meta for violating its Hate Speech Community Standard. After the Board selected the case, Meta found it had wrongly removed the post and restored it. Three weeks later, it applied a warning screen to the image under its Violent and Graphic Content policy. Key findings The Board finds that removing the post, and later applying the warning screen, do not align with Facebook’s Community Standards, Meta’s values, or its human rights responsibilities. The Board finds that, rather than making general accusations that “Russian soldiers are Nazis,” the post argues that they acted like Nazis in a particular time and place, and draws historical parallels. The post also targets Russian soldiers because of their role as combatants, not because of their nationality. In this context, neither Meta’s human rights responsibilities nor its Hate Speech Community Standard protect soldiers from claims of egregious wrongdoing or prevent provocative comparisons between their actions and past events. The Board emphasizes the importance of context in assessing whether content is urging violence. In this case, the Board finds that the quotes from the poem “Kill him!” are an artistic and cultural reference employed as a rhetorical device. When read in the context of the whole post, the Board finds that the quotes are being used to describe, rather than encourage, a state of mind. They warn of cycles of violence and the potential for history to repeat itself in Ukraine. Meta’s internal guidance for moderators clarifies that the company interprets its Violence and Incitement Community Standard to allow such “neutral reference[s] to a potential outcome” and “advisory warning[s].” However, this is not explained in the public Community Standards. Likewise, the Violent and Graphic Content policy prohibits images depicting a violent death. Internal guidance for moderators describes how Meta determines whether a death appears violent, but this is not included in the public policy. In this case, a majority of the Board finds that the image in the post does not include clear indicators of violence which, according to Meta’s internal guidance for moderators, would justify the use of a warning screen. Overall, the Board finds that this post is unlikely to exacerbate violence. However, it notes that there are additional complexities in evaluating violent speech in international conflict situations where international law allows combatants to be targeted. The Russian invasion of Ukraine is internationally recognized as unlawful. The Board urges Meta to revise its policies to take into consideration the circumstances of unlawful military intervention. The Oversight Board's decision The Oversight Board overturns Meta's original decision to remove the post and its subsequent determination that the image in the post violated the Violent and Graphic Content policy, as a result of which Meta applied a warning screen. The Board recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decision to remove a Facebook post addressing the conflict in Ukraine. The content, which is in the Russian language and was posted in Latvia, comprises a photographic image of a street view with a person lying – likely deceased – on the ground, accompanied by text. 
The text includes quotations from a well-known poem by the Soviet poet Konstantin Simonov calling for resistance against the German invaders during World War II, and it implies that Russian invaders are playing a similar role in Ukraine to that which German soldiers played in the USSR. After the Board selected this post for review, Meta changed its position and restored the content to the platform. The content raises important definitional questions under Meta’s Hate Speech and Violence and Incitement policies. A few weeks after deciding to restore the post, Meta affixed a warning screen to the photo. A majority of the Board finds that the photographic image does not violate the Violent and Graphic Content policy, as the image lacks clear visual indicators of violence, as described in Meta's internal guidelines to content moderators, which would justify the use of the warning screen. 2. Case description and background In April 2022, a Facebook user in Latvia posted a photo and text in Russian to their News Feed. The post was viewed approximately 20,000 times, shared approximately 100 times, and received almost 600 reactions and over 100 comments. The photo shows a street view with a person lying, likely deceased, on the ground, next to a fallen bicycle. No wounds are visible. The text begins, “they wanted to repeat and repeated.” The post comments on alleged crimes committed by Soviet soldiers in Germany during the Second World War. It says such crimes were excused on the basis that soldiers were avenging the horrors that the Nazis had inflicted on the USSR. It then draws a connection between the Second World War and the invasion of Ukraine, arguing that the Russian army “became fascist.” The post states that the Russian army in Ukraine “rape[s] girls, wound[s] their fathers, torture[s] and kill[s] peaceful people.” It concludes that “after Bucha, Ukrainians will also want to repeat... and will be able to repeat” such actions. At the end of the post, the user shares excerpts of the poem “Kill him!” by Soviet poet Konstantin Simonov, including the lines: “kill the fascist so he will lie on the ground’s backbone, not you”; “kill at least one of them as soon as you can”; “Kill him! Kill him! Kill!” The same day the content was posted, another user reported it as “violent and graphic content.” Based on a human reviewer decision, Meta removed the content for violating its Hate Speech Community Standard. Hours later, the user who posted the content appealed and a second reviewer assessed the content as violating the same policy. The user appealed to the Oversight Board. As a result of the Board selecting the appeal for review on May 31, 2022, Meta determined that its previous decision to remove the content was in error and restored it. On June 23, 2022, 23 days after the content was restored, Meta applied a warning screen to the photograph in the post under the Violent and Graphic Content Community Standard, on the basis that it shows the violent death of a person. The warning screen reads “sensitive content – this photo may show violent or graphic content,” and gives users two options: “learn more” and “see photo.” The following factual background is relevant to the Board’s decision and is based on research commissioned by the Board: 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). 
The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When the Board selects cases like this one, where Meta has agreed that it made an error, the Board reviews the original decision to help increase understanding of why errors occur, and to make observations or recommendations that may contribute to reducing errors and to enhancing fair and transparent procedures. 4. Sources of authority The Oversight Board considered the following authorities and standards: I. Oversight Board decisions: The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies: Under the Hate Speech Community Standard , Meta does not permit “violent” or “dehumanizing” speech that is directed at people or groups on the basis of their protected characteristics. The policy states that “dehumanizing speech” includes “comparisons, generalizations, or unqualified behavioral statements to or about... violent and sexual criminals.” The policy explicitly does not apply to qualified behavioral statements. Groups described as “having carried out violent crimes or sexual offenses” are not protected from attacks under the Hate Speech policy. Under the Violence and Incitement Community Standard , Meta does not allow “threats that could lead to death (and other forms of high-severity violence)” where “threat” is defined as, among other things, “calls for high-severity violence” and “statements advocating for high-severity violence.” Meta's internal guidelines for content reviewers clarify that the company interprets this policy to allow content containing statements with “neutral reference to a potential outcome of an action or an advisory warning.” Under the Violent and Graphic Content Community Standard , “imagery that shows the violent death of a person or people by accident or murder” is covered with a warning screen. III. Meta’s values: Meta has described Facebook’s values of “Voice,” “Dignity,” and “Safety,” among others, in the introduction to the Community Standards. This decision will refer to those values if and as relevant to the decision. IV. International human rights standards: The UN Guiding Principles on Business and Human Rights ( UNGPs ), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards: 5. 
User submissions In their appeal to the Board, the user states that the photo they shared is the “most innocuous” of the pictures documenting the “crimes of the Russian army in the city of Bucha,” “where dozens of dead civilians lie on the streets.” The user says that their post does not call for violence and is about “past history and the present.” They say the poem was originally dedicated to the “struggle of Soviet soldiers against the Nazis,” and that they posted it to show how “the Russian army became an analogue of the fascist army.” As part of their appeal, they state they are a journalist and believe it is important for people to understand what is happening, especially in wartime. 6. Meta’s submissions In the rationale Meta provided to the Board, the company analyzed the content of this case in light of three different policies, starting with Hate Speech. Meta focused on why the company reversed its original decision, rather than explaining how it had come to its original decision. According to Meta, claiming that Russian soldiers committed crimes in the context of the Russia-Ukraine conflict does not constitute an attack under the Hate Speech policy because “qualified behavioral statements” are allowed on the platform. Meta also explained that fascism is a political ideology, and merely linking the Russian army to a certain political ideology does not constitute an attack because “the Russian army is an institution and therefore not a protected characteristic group or subset covered by the Hate Speech policy (as compared to Russian soldiers, who are people).” Finally, Meta indicated that the different excerpts of the poem “Kill him!” quoted in the text of the post (e.g., “kill a fascist,” “kill at least one of them,” “kill him!”) refer to “Nazis” in the context of World War II, and Nazis are not a protected group. Meta also analyzed the post in light of its Violence and Incitement policy. In this regard, Meta explained that stating that “Ukrainians will also want to repeat… and will be able to repeat” after the events in Bucha does not advocate violence. Meta claimed that this is a “neutral reference to a potential outcome,” which the company interprets as permitted under the Violence and Incitement policy. Meta also stated that quoting Simonov’s poem was a way of raising awareness of the potential for history to repeat itself in Ukraine. Finally, the company explained that advocating violence against individuals covered in the Dangerous Individuals and Organizations policy, such as the Nazis (referred to as “fascists” in Simonov’s poem), is allowed under the Violence and Incitement Community Standard. Meta then explained that a warning screen and appropriate age restrictions were applied to the post under its Violent and Graphic Content policy because the image included in the content shows the violent death of a person. Meta confirmed that the image depicts an individual who was shot in Bucha, Ukraine. In response to the Board’s questions, Meta provided further explanation of initiatives it has developed in the context of the conflict in Ukraine. Meta confirmed, however, that none of these initiatives were relevant to the initial removal of the content in this case or the decision to restore it with a warning screen. The company added that it has taken several steps consistent with the UN Guiding Principles on Business and Human Rights to ensure due diligence in times of conflict. 
These steps included engagement with Ukrainian and Russian civil society, and Russian independent media, to seek feedback on the impact of the measures Meta adopted at the onset of the conflict. The Board asked Meta 11 questions, and Meta responded to them all fully. 7. Public comments The Oversight Board received eight public comments related to this case. Three of the comments were submitted from Europe, four from the United States and Canada and one from Latin America and the Caribbean. The submissions covered the following themes: international armed conflict; the importance of context in content moderation; the role of journalists in conflict situations; the documentation of war crimes; and artistic expression. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board initially examined whether the content was permissible under Meta's content policies, interpreted where necessary in light of the values of the platform, and then assessed whether the treatment of the content comports with the company's human rights responsibilities. This case was selected by the Board because the removal of this post raised important concerns about artistic expression and cultural references repurposed in new contexts that potentially risk inciting violence in conflict situations. Online spaces for expression are particularly important for people impacted by war and social media companies must pay particular attention to protecting their rights. This case demonstrates how a lack of contextual analysis, which is common in content moderation at scale, may prevent users from expressing opinions regarding conflict, and drawing provocative historical parallels. 8.1 Compliance with Meta’s content policies The Board finds that the content in this case does not violate the Hate Speech Community Standard, or the Violence and Incitement Standard. A majority of the Board finds that the content does not violate the Violent and Graphic Content Community Standard. I. Hate Speech Meta’s Hate Speech policy prohibits attacks against people based on protected characteristics, including nationality. Profession receives “some protections” when referenced along with a protected characteristic. The company’s internal guidelines for moderators further clarify that “quasi-protected subsets,” including groups defined by a protected characteristic plus a profession (e.g., Russian soldiers), are generally entitled to protection against Hate Speech Tier 1 attacks. Such attacks include, among others, “violent speech” and “dehumanizing speech... in the form of comparisons, generalizations, or unqualified behavioral statements” “to or about... violent and sexual criminals.” Meta’s Hate Speech policy does not offer protection to “groups described as having carried out violent crimes or sexual offences.” Generic claims that Russian soldiers have a propensity to commit crimes could, depending on content and context, violate Meta’s Hate Speech policy. 
Such claims could fall within the prohibition against “dehumanizing speech” under the policy, in the form of “unqualified behavioral statements.” Meta’s policy distinguishes between: (i) the attribution of bad character or undesirable traits to a group on account of its ethnicity, national origin or other protected characteristics (this is what Meta means by “generalizations”); (ii) the criticism of members of a group without context (this is what Meta means by “unqualified behavioral statements”); and (iii) the criticism of members of a group for their past behavior (this is what Meta means by “qualified behavioral statements”). In this case, the claims about invading Russian soldiers are made in the context of their actions in the Ukraine conflict, not in general. The question here is whether the user’s comparison of Russian soldiers to World War II-era German fascists, and their assertion that they raped women, killed their fathers, and killed and tortured innocent persons, violated Meta’s Hate Speech policy. Both Meta and the Board conclude that the post does not constitute violent or dehumanizing speech under the Hate Speech policy, though for different reasons. This was the only part of the Hate Speech policy identified as relevant to this post. Meta argues that the post is not violating because the accusatory statements are directed at the Russian army, which is an institution, and not at Russian soldiers, who are people. The Board finds that this distinction does not hold in this case, since the user refers to “army” and “soldiers” interchangeably. Nonetheless, the Board finds that the user’s accusation that Russian soldiers committed crimes comparable to the Nazis’ in the context of Russia’s invasion of Ukraine is permitted. This is the case because ascribing specific actions (e.g., “they began to really take revenge – rape girls, cut their fathers, torture and kill peaceful people of peaceful outskirts Kyiv”) and comparing Russian soldiers’ actions in Ukraine with other armies known to have committed war crimes (e.g., “the Russian army, after 70 years, completely repeated itself in Germany and the German [army] in Ukraine”) are “qualified” statements, related to behavior observed during a specific conflict. The Board therefore finds that comparing Russian soldiers’ actions in a specific context to the crimes of the Nazis is permitted under the Community Standards, regardless of whether a generic comparison to the Nazis is or is not permissible. According to Meta’s internal guidelines, material that may otherwise constitute hate speech does not violate the policy if it is targeted against groups “described as having carried out violent crimes or sexual offenses.” More broadly, it does not violate the Hate Speech policy to report instances of violations of human rights in a particular context, even if the people responsible are identified by reference to their national origin. The Board further finds that the different excerpts of the poem “Kill him!” quoted in the content (e.g., “kill a fascist,” “kill at least one of them,” “kill him!”) should not be considered “violent speech” because, when read together with the rest of the post, the Board understands that the user is calling attention to the cycle of violence, rather than urging violence. Finally, the Board concludes that Russian soldiers are targeted in the post because of their role as combatants, not their nationality. The claims are not attacks directed at a group “on the basis” of their protected characteristics. 
It follows that the content is not hate speech, because no protected characteristic is engaged. II. Violence and Incitement Under the Violence and Incitement policy, Meta removes “calls for high-severity violence,” “statements advocating for high-severity violence,” and “aspirational or conditional statements to commit high-severity violence,” among other types of expression. The company’s internal guidelines for moderators further clarify that Meta interprets this Community Standard to allow statements with “neutral reference to a potential outcome of an action or an advisory warning.” Additionally, the internal guidelines explain “content that condemns or raises awareness of violent threats” is also allowed under the Violence and Incitement policy. This applies to content that “clearly seeks to inform and educate others about a specific topic or issue; or content that speaks to one’s experience of being a target of a threat or violence,” including academic and media reports. Meta explained to the Board that it decided to restore this content because the post did not advocate violence. Meta characterizes the user’s statement that “Ukrainians will also want to repeat … and will be able to repeat” after the events in Bucha as a “neutral reference to a potential outcome,” which the company interprets as permitted under the Violence and Incitement policy. Meta also states that quoting Simonov’s poem was a way of raising awareness of the potential for history to repeat itself in Ukraine. Finally, pointing to the fact that Simonov’s poem is directed against German fascists, it notes that advocating violence against individuals covered in the Dangerous Individuals and Organizations policy, such as the Nazis, is allowed under the Violence and Incitement Community Standard. The Board is partially persuaded by this reasoning. It agrees that the sentence “Ukrainians will also want to repeat... and will be able to repeat” neither calls for nor advocates violence. Read literally, this portion of the post merely states that Ukrainians might well respond as violently to the Russian army’s actions as the Soviets did to the Nazis’. In other words, it is a “neutral reference to a potential outcome,” permitted as per Meta’s interpretation of the Violence and Incitement policy, clarified in the internal guidelines provided to content moderators. The Board also finds that the excerpts with violent language of the poem “Kill him!” cited in the section above, may be read as describing, not encouraging, a state of mind. When read together with the entire post, including the photographic image, the excerpts are part of a broader message warning of the potential for history to repeat itself in Ukraine. They are an artistic and cultural reference employed as a rhetorical device by the user to convey their message. Therefore, the Board concludes that this part of the content is also permitted by Meta’s internal guidelines. The Board concludes, however, that Meta is being unrealistic when it analyzes the post as if it were merely a call to violence against Nazis. The user makes clear that they regard Russian soldiers in Ukraine today as akin to Germans in Russia during World War II. To the extent that the post, with its quotations from Simonov’s poem, could be considered to refer to soldiers now committing atrocities against civilians, there is a risk that readers will read this as a call to violence against Russian soldiers today. 
The Board nonetheless agrees with Meta’s conclusion that the post does not violate the Violence and Incitement Standard because its primary meaning, in context, is a warning against a cycle of violence. III. Violent and Graphic Content Under its Violent and Graphic Content policy, Meta adds warning labels to content “so that people are aware of the graphic or violent nature before they click to see it.” That is the case for “[i]magery that shows the violent death of a person or people by accident or murder.” In its internal guidelines for content moderators, Meta describes as “indicators of a violent death” graphic imagery of the “aftermath of violent death where the victim appears dead or visibly incapacitated, and there are additional visual indicators of violence,” such as “blood or wounds on the body, blood surrounding the victim, bloated or discolored body, or bodies excavated from debris.” The internal guidelines further explain that bodies without “any visible indicator of violent death” or without “at least one indicator of violence” should not be considered as a depiction of a “violent death.” The photo included in the content shows a view of a street with a person lying still on the ground. No wounds are visible. Meta was able to confirm that the person was shot in Bucha, Ukraine. The Board notes that content moderators working at scale would not necessarily have access to this type of information. A majority of the Board finds that the Violent and Graphic Content policy was not violated because the photographic image lacks clear visual indicators of violence, as described in Meta’s internal guidelines for content moderators. Therefore, the majority concludes that a warning screen should not have been applied. Considering the context of armed conflict in Ukraine and the debris depicted in the image, a minority of the Board finds that the content does violate the Violent and Graphic Content policy. IV. Enforcement action The content was reported by a user for Violent and Graphic Content but was taken down under the Hate Speech policy. It was restored only after being brought to Meta’s attention by the Board. In response to questions from the Board, Meta explained that the case content was not escalated to policy or subject matter experts for an additional review. “Escalated for review” means that, instead of the decision being revisited by content moderators conducting at-scale review, it is sent to an internal team at Meta that is responsible for the relevant policy or subject area. 8.2 Compliance with Meta’s values The Board finds that removing the content, and placing a warning screen over the image included in it, are not consistent with Meta’s values. The Board is concerned about the situation of Russian civilians, and possible effects of violent speech targeting Russians in general. However, in this case the Board finds the content does not pose a real risk to the “Dignity” and “Safety” of those people that would justify displacing “Voice,” especially in a context where Meta should make sure users impacted by war are able to discuss its implications. 8.3 Compliance with Meta’s human rights responsibilities The Board finds that Meta’s initial decision to remove the content and Meta’s decision to apply a warning screen to the content were both inconsistent with Meta’s human rights responsibilities as a business. Meta has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights (UNGPs). 
Facebook's Corporate Human Rights Policy states that this includes the International Covenant on Civil and Political Rights ( ICCPR ). Freedom of expression (Article 19 ICCPR) The scope of the right to freedom of expression is broad. Article 19, para. 2, of the ICCPR gives heightened protection to expression, including artistic expression, on political issues, and commentary on public affairs, as well as to discussions of human rights and of historical claims ( General Comment No. 34 , paras. 11 and 49). Even expression which may be regarded as “deeply offensive” is entitled to protection (General Comment No. 34, para. 11). The content under analysis by the Board in this case contains strong language. However, it amounts to political discourse and draws attention to human rights abuses in a war context. The content in this case included quotes from a well-known war poem, which the user employed as a provocative cultural reference to educate and warn their audience of the potential consequences of Russian soldiers' actions in Ukraine. The UN Special Rapporteur on freedom of expression has highlighted that artistic expression includes “the fictional and nonfictional stories that educate or divert or provoke” ( A/HRC/44/49/Add.2 , para. 5). ICCPR Article 19 requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). The UN Special Rapporteur on freedom of expression has encouraged social media companies to be guided by these principles when moderating online expression ( A/HRC/38/35 , paras. 45 and 70). I. Legality (clarity and accessibility of the rules) The principle of legality requires rules used by states to limit expression to be clear and accessible (General Comment 34, para. 25). The Human Rights Committee has further noted that rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” (General Comment 34, para. 25). Individuals must have enough information to determine if and how their expression may be limited, so that they can adjust their behavior accordingly. Applied to Meta’s content rules for Facebook, users should be able to understand what is allowed and what is prohibited. Two sections from Meta’s internal guidelines on how to enforce the Violence and Incitement Community Standard are particularly relevant to the conclusion reached by both Meta and the Board that the content should stay on Facebook. First, Meta interprets this policy to allow messages that warn of the possibility of violence by third parties, if they are statements with “neutral reference to a potential outcome of an action or an advisory warning.” Second, otherwise violating content is allowed if it “condemns or raises awareness of violent threats.” The Board notes, however, that these internal guidelines are not included in the public-facing language of the Violence and Incitement Community Standard. This might cause users to believe that content such as the post in this case is violating, when it is not. Meta should integrate these sections into the public-facing language of the Violence and Incitement policy, so that it becomes sufficiently clear to users. The Board is aware that publicizing more detail on Meta’s content policies might enable users with malicious intent to circumvent the Community Standards more easily. 
The Board considers, however, that the need for clarity and specificity prevails over the concern that some users might attempt to “game the system.” Not knowing that “neutral references to a potential outcome,” “advisory warnings,” “condemning” or “raising awareness” of violent threats are permitted might cause users to avoid initiating or engaging in public interest discussions on Meta’s platforms. The Board is also concerned that, in this case, Meta's decision to affix a warning screen is inconsistent with its internal guidelines to content moderators. Additionally, Meta’s interpretation of the Violent and Graphic Content policy may not be clear to users. Meta should seek to clarify, in the public-facing language of the policy, how the company interprets the policy, and how it determines whether an image “shows the violent death of a person or people by accident or murder,” in the context of conflict, as per Meta’s internal guidelines to content moderators. Additionally, Meta informed the Board that no message was sent to the user who originally reported the content to inform them that the post was later restored by the company. This raises legality concerns, as the lack of relevant information for users may interfere with “the individual's ability to challenge content actions or follow up on content-related complaints” (A/HRC/38/35, para. 58). The Board notes that notifying reporters of the enforcement action taken against the content they reported, and the relevant Community Standard enforced, would help users to better understand and follow Meta's rules. II. Legitimate aim Any restriction on freedom of expression should also pursue a “legitimate aim.” The Board has previously recognized that the Hate Speech Community Standard pursues the legitimate aim of protecting the rights of others (General Comment No. 34, para. 28), including the rights to equality and non-discrimination based on ethnicity and national origin (Article 2, para. 1, ICCPR). Protecting Russians targeted by hate is, therefore, a legitimate aim. However, the Board finds that protecting “soldiers” from claims of wrongdoing is not a legitimate aim, when they are being targeted because of their role as combatants during a war, not because of their nationality or another protected characteristic; criticism of institutions such as the army should not be prohibited (General Comment No. 34, para. 38). The Violence and Incitement Standard, properly framed and applied, pursues the legitimate aim of protecting the rights of others. In the context of this case, this policy seeks to prevent the escalation of violence which could lead to harm to the physical security (Article 9, ICCPR) and life (Article 6, ICCPR) of people in the areas impacted by the Russia-Ukraine conflict. The Board notes that there are additional complexities involved in evaluating violent speech in the context of armed resistance to an invasion. The Russian invasion of Ukraine is internationally recognized as unlawful (A/RES/ES-11/1), and the use of force as self-defense against such acts of aggression is permitted (Article 51, UN Charter). In a context of international armed conflict, international humanitarian law on the conduct of parties to hostilities allows active combatants to be lawfully targeted in the course of armed conflict. This is not the case with people no longer taking active part in the hostilities, including prisoners of war (Article 3, Geneva Convention relative to the Treatment of Prisoners of War). 
When violence is itself lawful under international law, speech urging such violence presents different considerations that must be examined separately. Although the Board has found the content in this case to be non-violating, the Board urges Meta to revise its policies to take into consideration the circumstances of unlawful military intervention. With reference to the company’s decision to affix a warning screen to the photograph, Meta notes that the Violent and Graphic Content policy aims to promote an environment that is conducive to diverse participation by limiting “content that glorifies violence or celebrates the suffering or humiliation of others.” The Board agrees that this aim is legitimate in the context of Meta’s goal to promote an inclusive platform. III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment 34, para. 34). In order to assess the risks posed by violent or hateful content, the Board is typically guided by the six-factor test described in the Rabat Plan of Action, which addresses advocacy of national, racial or religious hatred that constitutes incitement to hostility, discrimination or violence. In this case, the Board finds that, despite the context of ongoing armed conflict and the charged cultural references employed by the user, it is unlikely that the post – a warning against a cycle of violence – would lead to harm. The Board concludes that the initial content removal was not necessary. Additionally, a majority of the Board finds that the warning screen was also not necessary, whereas a minority of the Board finds that it was both necessary and proportional. Considering the relevant factors here, the Board concludes that, despite the context of Russia’s unlawful invasion of Ukraine, where potentially inflammatory speech could increase tensions, the evident intention of the user (raising awareness around the war and its consequences), the reflective tone adopted in quoting a war poem, and the proliferation of other communications regarding the horrific events in Ukraine mean the content is not likely to contribute significantly to the exacerbation of violence. A majority of the Board concludes that the use of a warning screen inhibits freedom of expression and is not a necessary response in this instance, as the photographic image lacks clear visual indicators of violence, as described in Meta's internal guidelines to content moderators, which would justify the use of the warning screen. Social media companies should consider a range of possible responses to problematic content to ensure restrictions are narrowly tailored (A/74/486, para. 51). In this regard, the Board considers that Meta should further develop customization tools so that users are able to decide whether to see sensitive graphic content with or without warnings on Facebook and on Instagram. A minority of the Board finds that the warning screen was a necessary and proportionate measure that was appropriately tailored to encourage participation and freedom of expression. 
This minority believes that, in consideration of the dignity of deceased persons, especially in the context of an armed conflict, and the possible effects of images depicting death and violence on a great number of users, Meta may err on the side of prudence by adding warning screens over content such as the one under analysis. 9. Oversight Board decision The Oversight Board overturns Meta's original decision to take down the content, and Meta’s subsequent determination that the Violent and Graphic Content policy was violated, which led the company to affix a warning screen to the photographic image in the post. 10. Policy advisory statement Content policy 1. Meta should add to the public-facing language of its Violence and Incitement Community Standard that the company interprets the policy to allow content containing statements with “neutral reference to a potential outcome of an action or an advisory warning,” and content that “condemns or raises awareness of violent threats.” The Board expects that this recommendation, if implemented, will require Meta to update the public-facing language of the Violence and Incitement policy to reflect these inclusions. 2. Meta should add to the public-facing language of its Violent and Graphic Content Community Standard detail from its internal guidelines about how the company determines whether an image “shows the violent death of a person or people by accident or murder.” The Board expects that this recommendation, if implemented, will require Meta to update the public-facing language of the Violent and Graphic Content Community Standard to reflect this inclusion. Enforcement 3. Meta should assess the feasibility of implementing customization tools that would allow users over 18 years old to decide whether to see sensitive graphic content with or without warning screens, on both Facebook and Instagram. The Board expects that this recommendation, if implemented, will require Meta to publish the results of a feasibility assessment. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise.
Return to Case Decisions and Policy Advisory Opinions" fb-mfadk60o,Bengali Debate about Religion,https://www.oversightboard.com/decision/fb-mfadk60o/,"December 8, 2023",2023,December,"Freedom of expression, Marginalized communities, Religion",Coordinating harm and publicizing crime,Overturned,"Bangladesh, India",A user appealed Meta’s decision to remove a Facebook post with a link to a YouTube video which addressed Islamic scholars’ unwillingness to discuss atheism.,6513,983,"Overturned December 8, 2023 A user appealed Meta’s decision to remove a Facebook post with a link to a YouTube video which addressed Islamic scholars’ unwillingness to discuss atheism. Summary Topic Freedom of expression, Marginalized communities, Religion Community Standard Coordinating harm and publicizing crime Location Bangladesh, India Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not involve a public comment process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post with a link to a YouTube video that addressed Islamic scholars’ unwillingness to discuss atheism. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In May 2023, a user who identifies themselves as an atheist and critic of religion posted a link to a YouTube video on Facebook. The thumbnail image of the video asks, in Bengali, “Why are Islamic scholars afraid to debate the atheists on video blogs?” and contains an image of two Islamic scholars. The caption of the post states, “Join the premiere to get the answer!” The content had approximately 4,000 views. In their appeal to the Board, the user claimed that the purpose of sharing the video was to promote a “healthy debate or discussion” with Islamic scholars, specifically on topics such as the theory of evolution and the Big Bang theory. The user states that this post adheres to Facebook’s Community Standards by “promoting open discussion.” Furthermore, the user stressed that Bangladeshi atheist activists are frequently subject to censorship and physical harm. Meta initially removed the content under its Coordinating Harm and Promoting Crime policy, which prohibits content “facilitating, organizing, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals.” Meta acknowledged this content does not violate this policy, although the views espoused by the atheist may be viewed as “provocative to many Bangladeshis.” Meta offered no further explanation regarding why the content was removed from the platform. Although a direct attack against people based on their religious affiliation could be removed for hate speech, a different Meta policy, there is no prohibition in the company’s policies against critiquing a religion’s concepts or ideologies.
After the Board brought this case to Meta’s attention, the company determined that the content did not violate the Coordinating Harm and Promoting Crime policy and the removal was incorrect. The company then restored the content to Facebook. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights an error in Meta’s enforcement of its Coordinating Harm and Promoting Crime policy. These types of enforcement errors further limit freedom of expression for members of groups who are already subjected to intense censorship by state actors. Meta’s Coordinating Harm and Promoting Crime policy says that Meta allows users to discuss and debate “harmful activities” but only “so long as they do not advocate or coordinate harm,” with many of the policy’s clauses revolving around a user’s intent. In this case, while the user’s post could be interpreted as provocative – given the documented animosity towards Bangladeshi atheist activists – the user was not advocating or coordinating harm under Meta’s definition of such activities, highlighting a misinterpretation of the user’s intent. Previously, the Board has issued recommendations that the company clarify for users how they can make a non-violating intent clear on similar distinctions as in the company’s Dangerous Organizations and Individuals policy. Regarding that policy, the Board urged Meta to, “Explain in the Community Standards how users can make the intent behind their posts clear to [Meta] ... Facebook should also provide illustrative examples to demonstrate the line between permitted and prohibited content,” ( Ocalan’s Isolation decision, recommendation no. 6). Meta partially implemented this recommendation. Furthermore, the Board has issued recommendations aimed at preventing enforcement errors more broadly. The Board asked Meta to “[i]mplement an internal audit procedure to continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes” ( Breast Cancer Symptoms and Nudity decision , recommendation no. 5). Meta claimed that this was work it already does, without publishing information to demonstrate so. Additionally, the Board requested that Meta “[i]mprove its transparency reporting to increase public information on error rates by making the information viewable by country and language for each Community Standard… more detailed transparency reports will help the public spot areas where errors are more common, including potential specific impacts on minority groups, and alert Facebook to correct them,” ( Punjabi Concern Over the RSS in India decision , recommendation no. 3). Meta is still assessing the feasibility of this recommendation. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought this case to the company’s attention. 
The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors. Return to Case Decisions and Policy Advisory Opinions" fb-mp4zc4cc,Alleged crimes in Raya Kobo,https://www.oversightboard.com/decision/fb-mp4zc4cc/,"December 14, 2021",2021,December,"Freedom of expression, War and conflict",Hate speech,Upheld,Ethiopia,The Oversight Board has upheld Meta's original decision to remove a post alleging the involvement of ethnic Tigrayan civilians in atrocities in Ethiopia's Amhara region.,41052,6401,"Upheld December 14, 2021 The Oversight Board has upheld Meta's original decision to remove a post alleging the involvement of ethnic Tigrayan civilians in atrocities in Ethiopia's Amhara region. Standard Topic Freedom of expression, War and conflict Community Standard Hate speech Location Ethiopia Platform Facebook Amharic translation Tigrinya translation Public Comments 2021-014-FB-UA This decision is also available in Amharic and Tigrinya. To read the full decision in Amharic, click here. To read the full decision in Tigrinya, click here. Note: On October 28, 2021, Facebook announced that it was changing its company name to Meta. In this text, Meta refers to the company, and Facebook continues to refer to the product and policies attached to the specific app. The Oversight Board has upheld Meta’s original decision to remove a post alleging the involvement of ethnic Tigrayan civilians in atrocities in Ethiopia’s Amhara region. However, as Meta restored the post after the user’s appeal to the Board, the company must once again remove the content from the platform. About the case In late July 2021, a Facebook user from Ethiopia posted in Amharic. The post included allegations that the Tigray People’s Liberation Front (TPLF) killed and raped women and children, and looted the properties of civilians in Raya Kobo and other towns in Ethiopia’s Amhara region. The user also claimed that ethnic Tigrayan civilians assisted the TPLF with these atrocities. The user claims in the post that he received the information from the residents of Raya Kobo. The user ended the post with the following words: “we will ensure our freedom through our struggle.” After Meta’s automatic Amharic language systems flagged the post, a content moderator determined that the content violated Facebook’s Hate Speech Community Standard and removed it. When the user appealed this decision to Meta, a second content moderator confirmed that the post violated Facebook’s Community Standards. Both moderators belonged to Meta’s Amharic content review team. The user then submitted an appeal to the Oversight Board. After the Board selected this case, Meta identified its original decision to remove the post as incorrect and restored it on August 27. Meta told the Board it usually notifies users that their content has been restored on the day they restore it. However, due to a human error, Meta informed this user that their post had been restored on September 30 – over a month later. This notification happened after the Board asked Meta whether it had informed the user that their content had been restored. Key findings The Board finds that the content violated Facebook’s Community Standard on Violence and Incitement.
While Meta initially removed the post for violating the Hate Speech Community Standard, the company restored the content after the Board selected the case, as Meta claimed the post did not target the Tigray ethnicity and the user’s allegations did not constitute hate speech. The Board finds this explanation for restoring the content to be lacking detail and incorrect. Instead, the Board applied Facebook’s Violence and Incitement Community Standard to this post. This Standard prohibits “misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm.” The Board finds that the content in this case contains an unverifiable rumor according to Meta’s definition of the term. While the user claims his sources are previous unnamed reports and people on-the-ground, he does not even provide circumstantial evidence to support his allegations. Rumors alleging that an ethnic group is complicit in mass atrocities, as found in this post, are dangerous and significantly increase the risk of imminent violence. The Board also finds that removing the post is consistent with Meta’s human rights responsibilities as a business. Unverifiable rumors in a heated and ongoing conflict could lead to grave atrocities, as was the case in Myanmar. In decision 2020-003-FB-UA , the Board stated that “in situations of armed conflict in particular, the risk of hateful, dehumanizing expressions accumulating and spreading on a platform, leading to offline action impacting the right to security of person and potentially life, is especially pronounced.” Cumulative impact can amount to causation through a “gradual build-up of effect,” as happened in the Rwandan genocide. The Board came to its decision aware of the tensions between protecting freedom of expression and reducing the threat of sectarian conflict. The Board is aware of civilian involvement in the atrocities in various parts of Ethiopia, though not in Raya Kobo, and the fact that Meta could not verify the post’s allegations at the time they were posted. The Board is also aware that true reports on atrocities can save lives in conflict zones, while unsubstantiated claims regarding civilian perpetrators are likely to heighten risks of near-term violence. The Oversight Board’s decision The Oversight Board upholds Meta’s original decision to remove the post. As Meta restored the content after the user’s appeal to the Board, the company must once again remove the content from the platform. In a policy advisory statement, the Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board upholds Meta’s original decision to remove the content. The post alleges the involvement of ethnic Tigrayan civilians in the atrocities against the people in Ethiopia’s Amhara region. Meta initially applied the Hate Speech Community Standard to remove the post from Facebook, but restored it after the Board selected the case. The Board finds Meta’s explanation for restoration lacking detail and incorrect. The Board finds that the content violated the prohibition on unverified rumors under the Violence and Incitement Community Standard. 2. Case description In late July 2021, a Facebook user posted in Amharic on his timeline allegations that the Tigray People’s Liberation Front (TPLF) killed and raped women and children, as well as looted the properties of civilians in Raya Kobo and other towns in Ethiopia’s Amhara region. 
The user also claimed that ethnic Tigrayan civilians assisted the TPLF in these atrocities (for the specific translation and meaning of the Amharic post see Section 6 below). The user ended the post with the following words “we will ensure our freedom through our struggle.” The user claims in the post that he received the information from the residents of Raya Kobo. The post was viewed nearly 5,000 times, receiving fewer than 35 comments and more than 140 reactions. It was shared over 30 times. The post remained on Facebook for approximately one day. Among the comments in Amharic were statements, as translated by the Board’s linguistic experts, stating that: “[o]ur only option is to stand together for revenge” and “are you ready, brothers and sisters, to settle this matter?” According to Meta, the user’s account that posted the content is located in Ethiopia, but not in the Tigray or Amhara regions. The user’s profile picture includes a hashtag signaling disapproval of the TPLF. Based on the information available to the Board, the user describes himself as an Ethiopian man from Raya. The post was identified by Meta’s Amharic language automated system (classifier) as potentially violating its policies. Meta operates machine learning classifiers that are trained to automatically detect potential violations of the Facebook Community Standards. Meta announced that it is “using this technology to proactively identify hate speech in Amharic and Oromo, alongside over 40 other languages globally.” The Board understands that Ethiopia is a multilingual country, with Oromo, Amharic, Somali, and Tigrinya being the four most spoken languages in the country. Meta also reported that it hires moderators who can review content in Amharic, Oromo, Tigrinya, and Somali. Meta uses a “‘biased sampling’ method that samples content to improve the Amharic classifier quality. This means that Amharic content with both low and high potential match scores is continuously sampled and enqueued for human review to improve classifier performance.” This content was selected for human review as part of that improvement process. Meta also explained that its automated system determined that this content had “a high number of potential views” and that it gave the post “a low violating score.” The low violating score means that the content does not meet the threshold for auto-removal by Meta’s automated system. A content moderator from the Amharic content review team determined that the post violated Facebook’s Hate Speech Community Standard and removed it. This Standard prohibits content targeting a person or group of people based on their race, ethnicity, or national origin with “violent speech.” Meta stated that it notified the user that his post violated Facebook’s Hate Speech Community Standard, but not the specific rule that was violated. The user then appealed the decision to Meta, and, following a second review by another moderator from the Amharic content review team, Meta confirmed that the post violated Facebook’s policies. The user then submitted an appeal to the Oversight Board. As a result of the Board selecting the case, Meta identified the post’s removal as an “enforcement error” and restored it on August 27. Meta stated that it usually notifies users about content restoration on the same day. However, due to a human error, Meta informed this user of restoration on September 30. This happened after the Board asked Meta whether the user had been informed that their content had been restored. 
The case concerns unverified allegations that Tigrayans living in Raya Kobo town were collaborating with the TPLF to commit atrocities including rape against the Amhara ethnic group. These allegations were posted on Facebook in the midst of an ongoing civil war in Ethiopia that erupted in 2020 between the Tigray region’s forces and Ethiopian federal government forces and military and its allies (International Crisis Group, Ethiopia’s Civil War: Cutting a Deal to Stop the Bloodshed October 26, 2021). According to expert briefings received by the Board, Facebook is an important, influential and popular online medium for communication in Ethiopia. The expert briefings also noted there is little to no coverage on the conflict-affected areas in Ethiopian media, and Ethiopians use Facebook to share and receive information about the conflict. In its recent history, Ethiopia has seen recurring ethnic conflict involving, among others, Tigrayan groups (ACCORD, Ethnic federalism and conflict in Ethiopia, 2017). The Board is aware of allegations of serious violations of human rights and humanitarian law in the Tigray region and in other parts of the country, including in Afar, Amhara, Oromo and Somali regions by the involved parties in the current conflict ( UN Special Advisor on the Prevention of Genocide, on the continued deterioration of the situation in Ethiopia statement , July 30, 2021; Office of the United Nations High Commissioner for Human Rights (UN OHCHR), E thiopia: Bachelet urges end to ‘reckless’ war as Tigray conflict escalates , November 3, 2021). Furthermore, according to the recently published joint investigation by the Ethiopian Human Rights Commission (EHRC) and the UN OHCHR, Tigrayan and Amharan civilians were involved in human rights violations in late 2020. However, the scope of the investigation did not cover violations during July 2021 in areas mentioned in the user’s post (EHRC and UN OHCHR report, Joint Investigation into Alleged Violations of International Human Rights, Humanitarian and Refugee Law Committed by all Parties to the Conflict in the Tigray Region of the Federal Democratic Republic of Ethiopia , November 3, 2021). According to a Reuters report, local officials from Amhara region claimed that the Tigrayan forces killed 120 civilians in a village in Amhara region on September 1 and 2 ( Reuters , September 8). The Tigrayan forces later issued a statement rejecting what they called a “fabricated allegation” by the Amhara regional government. These allegations could not be independently confirmed. The Board is also aware of allegations that Tigrayans are ethnically profiled, harassed and are increasingly subject to hate speech ( Remarks of the UN High Commissioner for Human Rights Michelle Bachelet in Response to Questions on Ethiopia , December 9, 2020; NGOs Call for UN Human Rights Council Resolution on Tigray , June 11, 2021). 3. Authority and scope The Board has authority to review Meta's decision following an appeal from the user whose post was removed (Charter Article 2, Section 1). The Board may uphold or reverse that decision, and its decision is binding on Meta (Charter Article 3, Section 5, and Article 4). The Board's decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4). According to its Charter, the Oversight Board is an independent body designed to protect free expression by making principled, independent decisions about important pieces of content. 
It operates transparently, exercising neutral, independent judgement and rendering decisions impartially. 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards In the policy rationale for Facebook’s Hate Speech Community Standard, Meta states that hate speech is not allowed on the platform “because it creates an environment of intimidation and exclusion and, in some cases, may promote real-world violence.” The Community Standard defines hate speech as “a direct attack against people — rather than concepts or institutions — on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.” The rationale further defines an attack “as violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.” Facebook’s Hate Speech Community Standard describes three tiers of attacks. Under Tier 1, Facebook’s Community Standards prohibit “content targeting a person or group of people (including all subsets except those described as having carried out violent crimes or sexual offenses) on the basis of their aforementioned protected characteristic(s)” with “dehumanizing speech.” Such speech can take the form of generalizations or unqualified behavioral statements about people sharing a protected characteristic being “violent and sexual criminals” or “other criminals.” The rationale for Facebook’s Violence and Incitement Community Standard states that Meta “aim[s] to prevent potential offline harm that may be related to content” on the platform. Specifically, Meta prohibits content containing “misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm.” As part of its Internal Implementation Standards, Meta considers an unverifiable rumor to be information whose source is extremely hard or impossible to trace, or that cannot be confirmed or debunked in a meaningful timeframe because its source is extremely hard or impossible to trace. Meta also considers information that is devoid of enough specificity for the claim to be debunked to be an unverifiable rumor. Meta notes that it requires additional context to enforce this rule, which is found in Facebook’s Violence and Incitement Community Standard. II. Meta’s values Meta’s values are outlined in the introduction to Facebook’s Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits “Voice” in service of four other values, and two are relevant here: “Safety”: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. “Dignity”: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. III.
Human rights standards The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it re-committed to respecting human rights in accordance with the UNGPs. The Board's analysis in this case was informed by the following human rights standards: 5. User statement The user stated in his appeal to the Board that he posted this content to protect his community which is in danger and that Meta must help communities in war zones. He stated the post is not hate speech “but is truth.” He also stated that the TPLF targeted his community of one million people and left them without food, water and other basic necessities. The user speculated that his post was reported “by members and supporters of that terrorist group,” and claimed to “know well most of the rules” and that he has “never broken any rules of Facebook.” 6. Explanation of Meta’s decision Meta explained in its rationale that the content was originally removed as an attack under Facebook’s Hate Speech Community Standard, specifically for violating its policy prohibiting “violent speech” targeted at Tigrayan people based on their ethnicity. Meta informed the Board that its moderators do not record their reasons for removing content, beyond indicating the Community Standard violated. Therefore, Meta did not confirm if the moderators who reviewed the post initially and on appeal applied the same rule within Facebook’s Hate Speech policy to remove the post. Meta stated in its rationale that, as a result of the Board selecting the case, the company determined that its “decision was an error” and restored the post. Meta also stated that the content did not violate its rules because it did not target the Tigray ethnicity and the user’s allegations about the TPLF or Tigrayans did not rise to the level of hate speech. Meta confirmed in its response to the Board’s question that its Amharic automated systems are in place and that these are audited and refreshed every six months. Meta also explained that it was the original text in Amharic that led the automated system to identify the content as potentially violating. Similarly, Meta confirmed that the two content moderators were Amharic speakers and that they based their review on the original text in Amharic. Meta explained in its submissions that its regional team provided the cultural and linguistic context in developing the case file for this appeal. For example, Meta’s decision rationale presented to the Board is based on the regional team’s translation of the content. Meta’s regional team translated the supposedly violating part of the user’s post as “Tigrean” teachers, health professionals and merchants “are leading the way for the rebel TPLF forces to get women raped and loot properties.” The Board requested and received an additional English translation of the text from its own linguistic experts and Meta provided an additional translation of the text by its external linguistic vendor. The two versions confirmed that the prevailing meaning of the text indicates that Tigrayan civilians assisted in the atrocities committed by the TPLF. For the purposes of this decision, the Board notes the version provided by Meta’s external vendor. 
That version reads as follows: “As reported previously and per information obtained from people living in the area who make a living as teachers, as health professionals, as merchants, as daily labour workers, and low [wage] workers, we are receiving direct reports that the Tigreans, who know the area very well, are leading the rebel group door-to-door exposing women to rape and looting property.” Moreover, the Amharic comments on the post stated that: “[o]ur only option is to stand together for revenge” and “are you ready, brothers and sisters, to settle this matter?” Meta confirmed in its response to the Board’s question that Ethiopia is “designated as a Tier 1 At-Risk Country.” According to Meta, this is the highest risk level. Meta noted that it has designated Ethiopia as “a crisis location” for its content policy and integrity work. As such, it established “a top-level” Integrity Product Operations Center (IPOC) for Ethiopia’s June 2021 elections and another IPOC to monitor post-election developments in September. Both IPOCs ended in the same month that they were set up. Meta also stated that it has been treating Ethiopia as a “top-level crisis” by its Operations, Policy and Product teams. Meta stated that its “Crisis Response Cross-functional team” that focuses on Ethiopia convenes weekly to understand and mitigate ongoing risk. Meta added that this work does not change the way it reviews content that does not pose such a risk, and that the work did not affect its determination in this case. In response to the Board’s question, Meta explained that its Trusted Partner(s) did not escalate the post for additional review for violations of misinformation and harms policies. Meta also noted that as there was no third-party fact check there “was no evidence to suggest that the claims made in the post were false or unverifiable rumor(s).” 7. Third-party submissions The Oversight Board received 23 public comments related to this case. Six of the comments were from Sub-Saharan Africa, specifically Ethiopia, one from the Middle East and North Africa, one was from Asia Pacific and Oceania, five were from Europe and 10 were from the United States and Canada. The Board received comments from stakeholders including academia, private individuals and civil society organizations focusing on freedom of expression and hate speech in Ethiopia. The submissions covered themes including whether the content should stay on the platform, difficulties in distinguishing criticism of the TPLF from hate speech against the Tigrayan people, and Meta’s lack of content moderators who speak Ethiopian languages. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The case concerns allegations made during an ongoing civil and ethnic war in a region with a history of lethal ethnic conflict. There is a tension between protecting freedom of expression and reducing the threat of sectarian conflict. This tension can only be resolved through attention to the specifics of a given conflict. The Board is aware of civilian involvement in the atrocities in various parts of Ethiopia, though not in Raya Kobo (see relevant context of the conflict in Ethiopia in Section 2 above). Meta stated that it had no evidence that the content was false or unverifiable rumor. The Board notes that at the time of the posting Meta could not and did not proactively verify the allegations. It was not possible to verify the allegations given the communication blackout in Amhara region. 
The situation in Amhara was beyond access for international observers and journalists. The Board is also aware that true reports on atrocities can be lifesaving in conflict zones by putting potential victims on notice of potential perpetrators. However, in an ongoing heated conflict, unsubstantiated claims regarding civilian perpetrators are likely to pose heightened risks of near-term violence. 8.1. Compliance with Community Standards Meta restored the content because it found that the content is not hate speech (see Section 6 above). The Board finds that explanation to be lacking detail and incorrect. The Board finds the Violence and Incitement Community Standard relevant to this case. The Board concludes that the post violates the Violence and Incitement policy’s prohibition on “misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm” (see Section 4, for a definition of an “unverifiable rumor”). The content falls within Meta’s Internal Implementation Standards’ definition of an unverifiable rumor (see Section 4, for a definition of an “unverifiable rumor”). The Board finds that rumors alleging the complicity of an ethnic group in mass atrocities are dangerous and significantly increase the risk of imminent violence during an ongoing violent conflict such as presently in Ethiopia. The Board understands that Tigrayans in Ethiopia, like other ethnic groups, are already subject to imminent, and, in some instances actual, violence and physical harm. 8.2. Compliance with Meta’s values The Board finds that Meta’s decision to restore and allow the content is inconsistent with its values of “Dignity” and “Safety.” The Board recognizes that “Voice” is Meta’s paramount value, but the company allows for expression to be limited to prevent abuse and other forms of online and offline harm. In the context of this case, “Voice” that exposes human rights violations is of utmost importance. However, the form that an expression takes in the midst of a violent conflict is also important. Speech that seemingly seeks to bring attention to alleged human rights violations while making unverified claims during an ongoing violent conflict that an ethnic group is complicit in atrocities runs the risk that it will justify or generate retaliatory violence. This is particularly pertinent in Ethiopia in the current crisis. 8.3. Compliance with Meta’s human rights responsibilities The Board finds that removing the content in this case is consistent with Meta’s human rights responsibilities as a business under UNGP Principle 13, which requires companies to ""avoid causing or contributing to adverse human rights impacts through their own activities, and address such impacts when they occur.” In a heated and ongoing conflict, unverifiable rumors may lead to grave atrocities, which the experience in Myanmar has indicated. To mitigate such a risk a transparent system of moderating content in conflict zones, including a policy regarding unverifiable rumors, is a necessity. Freedom of Expression and Article 19 of the ICCPR Article 19 of the ICCPR provides broad protection for freedom of expression through any media and regardless of frontiers. However, the Article allows this right to be restricted under certain narrow and limited conditions, known as the three-part test of legality (clarity), legitimacy, and necessity, which also includes an assessment of proportionality. 
Although the ICCPR does not create obligations for Meta as it does for states, Meta has committed to respecting human rights as set out in the UNGPs. This commitment encompasses internationally recognized human rights as defined, among other instruments, by the ICCPR. The UN Special Rapporteur on freedom of opinion and expression has suggested that Article 19, para. 3 of the ICCPR provides a useful framework to guide platforms’ content moderation practices (A/HRC/38/35, para. 6). I. Legality (clarity and accessibility of the rules) The requirement of legality demands that any restriction on freedom of expression is: (a) adequately accessible, so that individuals have a sufficient indication of how the law limits their rights; and (b) formulated with sufficient precision, so that individuals can regulate their conduct. Further, a law may not confer unfettered discretion for the restriction of freedom of expression on those charged with its execution (General Comment 34, para. 25). The term “unverifiable rumor” is not defined in the public-facing Community Standards. When Meta fails to explain key terms and how its policies are applied, users may find it difficult to understand if their content violates Facebook’s Community Standards. However, as applied to the facts of this case, in which an unverified allegation was made in the midst of an ongoing violent conflict, the Board finds that the term “unverifiable rumor” provides sufficient clarity. The rumor was not verifiable for Meta, nor for the user, who was not present in Raya Kobo. International observers and journalists also could not verify the rumor given the ongoing conflict and the communications blackout. In such circumstances it is foreseeable for users that such a post falls within the prohibition. II. Legitimate aim Restrictions on freedom of expression must pursue a legitimate aim, which includes the protection of the rights of others, among other aims. The Human Rights Committee interpreted the term “rights” to include human rights as recognized in the ICCPR and more generally in international human rights law (General Comment 34, para. 28). The Facebook Community Standard on Violence and Incitement exists in part to prevent offline harm that may be related to content on Facebook. Restrictions based on this policy thus serve the legitimate aim of protecting the right to life. III. Necessity and proportionality The principle of necessity and proportionality under international human rights law requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). The principle of proportionality demands consideration of the form of expression at issue (General Comment 34, para. 34). In assessing whether the restriction on the user’s speech served its aim as required by Meta’s human rights responsibilities, the Board considered what Meta has done to prevent and mitigate the risk to life from the spread of unverifiable rumors about parties to the Ethiopian conflict (see Section 6 above for a description of Meta’s work in Ethiopia). The user alleged that Tigrayan civilians were accomplices in grave atrocities committed by Tigrayan forces.
The user’s sources for this claim are previous unnamed reports and sources on-the-ground, but he did not provide even circumstantial evidence supporting the allegations which he could have added without putting at risk his sources. The Board is aware of the importance of shedding light on human rights violations in a conflict situation. Reporting on atrocities is an important activity serving the right of others to be informed. “Journalism is a function shared by a wide range of actors… [including] bloggers and others who engage in forms of self-publication … on the internet…”) ( General Comment 34 , para. 44). Those who engage in forms of self-publication share the responsibilities related to the watchdog function and when reporting on human rights they must meet standards of accuracy. Moreover, information on atrocities may save lives, especially where social media are the ultimate source of information. However, the above qualities are absent in the user’s post as it does not contain information about actual threat to life and it does not contain specific information that can be used in the documentation of human rights violation. As formulated, the content could contribute to ethnic hatred. Ethnic conflict situations call for heightened scrutiny over how people should report on and discuss human rights violations committed by parties to the conflict. These considerations apply to Facebook posts, which can be used to spread unverifiable rumors at great speed. The Board notes that some Ethiopian government officials have instigated or spread hate speech targeting Tigrayans (see, for example, Amnesty International, Ethiopia: Sweeping emergency powers and alarming rise in online hate speech as Tigray conflict escalates , DW 2020 report ). There is no evidence that this post formed part of such deliberate efforts to fan discord, but the present content has to be considered in view of reports that some Ethiopian government officials and public figures instigated or spread hate speech. Good faith postings or information on matters of public concern can enable vulnerable populations to better protect themselves. Additionally, a better understanding of important events may help in the pursuit of accountability. However, unverified rumors can feed into hateful narratives and contribute to their acceptance, especially in the absence of counter-speech efforts. The Board finds that in a country where there is an ongoing armed conflict and an assessed inability of governmental institutions to meet their human rights obligations under international law, Meta may restrict freedom of expression that it otherwise would not (see ICCPR Article 4 on derogations in times of public emergencies). The principle of proportionality must take account of “the form of expression at issue as well as the means of its dissemination” ( General Comment 34 , para. 34). On its own, an unverifiable rumor may not directly and immediately cause harm. However, when such content appears on an important, influential and popular social media platform during an ongoing conflict, the risk and likelihood of harm become more pronounced. The Board came to a similar conclusion in decision 2020-003-FB-UA . 
There, the Board found that “in situations of armed conflict in particular, the risk of hateful, dehumanizing expressions accumulating and spreading on a platform, leading to offline action impacting the right to security of person and potentially life, is especially pronounced.” Furthermore, cumulative impact can amount to causation through a “gradual build-up of effect,” as happened in Rwanda where calls to genocide were repeated (see the Nahimana case, Case No. ICTR-99-52-T , paras 436, 478, and 484-485). A direct call for violence is absent from the post in this case, although there is a reference to “our struggle.” Moreover, the content has been viewed by thousands of Amharan speakers in the 24 hours that it remained online. Some of them left comments that include calls for vengeance (see Section 2 above). The right to life entails a due diligence responsibility to undertake reasonable positive measures, which do not impose disproportionate burdens on freedom of expression, in response to foreseeable threats to life originating from private persons and entities, whose conduct is not attributable to the state. Specific measures of protection shall be taken towards persons in situations of vulnerability (members of ethnic and religious minorities) whose lives have been placed at particular risk because of specific threats or pre-existing patterns of violence ( General Comment 36 , paras 21 and 23). With the respective differences having been considered, these considerations are relevant to Meta’s responsibilities to protect human rights, because business is required to “seek to prevent or mitigate adverse human rights impacts that are directly linked to [its] operations, products or services” (UNGPs, Principle 13). The United Nations Working Group on the issue of human rights and transnational corporations and other business enterprises declared that the UNGPs impose on businesses a heightened responsibility to undertake due diligence in a conflict setting (“Business, human rights and conflict-affected regions: towards heightened action,” A/75/212 , paras 41-54). Consequently, the legitimate aim to protect the right to life of others means that Meta has a heightened responsibility in the present conflict setting. The Board will consider this in its proportionality analysis. The Board notes the steps Meta has taken so far in Ethiopia. The content was originally removed. However, Meta’s automated system determined that this content had “a low violating score,” thus Meta did not automatically remove the post. Even if the specific content was removed originally, Meta ultimately determined that the removal was an enforcement error. Meta also told the Board that the treatment of Ethiopia as Tier 1 At-Risk Country does not impact classifier performance or its ability to identify the content as potentially violating. The Board therefore concludes that without additional measures, Meta cannot properly fulfill its human rights responsibilities. The fact that Meta restored the content corroborates this concern. In the present case the content was posted during an armed conflict. In such situations Meta has to exercise heightened due diligence to protect the right to life. Unverified rumors are directly connected to an imminent threat to life and Meta must prove that its policies and conflict-specific measures that it took in Ethiopia are likely to protect life and prevent atrocities (see Section 6 for Meta’s response to the Ethiopian conflict). 
In the absence of such measures, the Board has to conclude that the content must be removed. To prevent innumerable posts feeding into that narrative through unverified rumors, removal is the required measure in this case during an ongoing violent ethnic conflict. A minority of the Board highlighted its understanding of the limited nature of this decision. In the context of an ongoing violent conflict, a post constituting an unverified rumor of ethnically-motivated violence by civilians against other civilians poses serious risks of escalating an already violent situation, particularly where Meta cannot verify the rumor in real time. Such increased risks triggered Meta’s human rights responsibility to engage in heightened due diligence with respect to content moderation involving the conflict. While it had various types of high alerts in place, Meta confirmed that such systems did not affect its determination in this case, which is difficult to understand given the risks of near-term violence. As noted in a previous decision of the Board ( 2021-001-FB-FBR ), it is difficult to assess if measures short of content removal would constitute the least burden on a user’s speech to achieve a legitimate aim when Meta does not provide relevant information about whether its own design decisions and policies have amplified potentially harmful speech. 9. Oversight Board decision The Oversight Board upholds Meta’s original decision to remove the content. Given that Meta subsequently restored the content after the user’s appeal to the Board, it must now remove the content once again from the platform. 10. Policy advisory statement Content policy 1. Meta should rewrite Meta’s value of “Safety” to reflect that online speech may pose risk to the physical security of persons and the right to life, in addition to the risks of intimidation, exclusion and silencing. 2. Facebook’s Community Standards should reflect that in the contexts of war and violent conflict, unverified rumors pose higher risk to the rights of life and security of persons. This should be reflected at all levels of the moderation process. Transparency 3. Meta should commission an independent human rights due diligence assessment on how Facebook and Instagram have been used to spread hate speech and unverified rumors that heighten the risk of violence in Ethiopia. The assessment should review the success of measures Meta took to prevent the misuse of its products and services in Ethiopia. The assessment should also review the success of measures Meta took to allow for corroborated and public interest reporting on human rights atrocities in Ethiopia. The assessment should review Meta’s language capabilities in Ethiopia and if they are adequate to protect the rights of its users. The assessment should cover a period from June 1, 2020, to the present. The company should complete the assessment within six months from the moment it responds to these recommendations. The assessment should be published in full. *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. 
An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Duco Advisers, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology, also provided research. Return to Case Decisions and Policy Advisory Opinions" fb-nel6n8kl,Revolutionary Armed Forces of Colombia (FARC) Dissidents Video,https://www.oversightboard.com/decision/fb-nel6n8kl/,"May 15, 2025",2025,,"Violence, War and conflict",Dangerous individuals and organizations,Overturned,Colombia,"A user appealed Meta’s decision to leave up a video posted on Facebook that depicts Estado Mayor Central (EMC), a conglomerate of dissident factions of the Revolutionary Armed Forces of Colombia (FARC).",6460,957,"Overturned May 15, 2025 A user appealed Meta’s decision to leave up a video posted on Facebook that depicts Estado Mayor Central (EMC), a conglomerate of dissident factions of the Revolutionary Armed Forces of Colombia (FARC). Summary Topic Violence, War and conflict Community Standard Dangerous individuals and organizations Location Colombia Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to leave up a video posted on Facebook that depicts Estado Mayor Central (EMC), a conglomerate of dissident factions of the Revolutionary Armed Forces of Colombia (FARC, after the Spanish acronym). After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the post. About the Case In September 2024, a Facebook user posted a video depicting Estado Mayor Central (EMC), a conglomerate of dissident factions from the Revolutionary Armed Forces of Colombia (FARC), a rebel group that fought against the Colombian government from 1964 to 2016. The video contains footage of military training, active military operations and a text overlay referencing killings attributed to the group. An overlay image displays FARC's logo. After a peace deal with the Colombian government in 2016, FARC reformed into a legal political party. Despite this, dissidents from FARC’s new political leadership, including factions that are part of EMC, continue to engage in violence, including fighting the government.
Under its Dangerous Organizations and Individuals policy , Meta removes content that glorifies, supports, represents or positively references dangerous organizations that “proclaim a violent mission or are engaged in violence.” The policy allows for “neutral discussions,” such as “factual statements, commentary, questions, and other information that do not express positive judgment around the designated dangerous organization.” In such instances, the company requires a clear indication of intent and defaults to removing content in case a user's intention is ambiguous or unclear. After the Board brought this case to Meta’s attention, the company found the video appears to be official propaganda for FARC dissident factions that rejected the peace process, as it displays the logo of factions that continue to engage in violent activities. Dissident groups from FARC are designated under Meta’s Dangerous Organizations and Individuals policy. The depiction of combatants training and carrying the wounded suggests the group produced this imagery. Sharing propaganda materials produced by designated groups outside of an allowable context such as “social and political discourse”, that includes users “reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities,” can be understood as a means of support for FARC dissident factions. Hence, the company removed the content from Facebook. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case highlights an instance of Meta underenforcing its Dangerous Organizations and Individuals policy, specifically, a video promoting FARC dissidents engaged in violence in Colombia. The Board has already expressed concern in relation to Meta’s automated detection failing to flag content associated with the Rapid Support Forces (RSF), an entity not allowed to have a presence on the company’s platforms, in the Sudan’s Rapid Support Forces Video Captive decision. In that case, the Board issued a recommendation regarding the enforcement's accuracy of Meta’s Dangerous Organizations and Individuals policy. It called on the company “to enhance its automated detection and prioritization of content potentially violating the Dangerous Organizations and Individuals policy for human review, Meta should audit the training data used in its video content understanding classifier to evaluate whether it has sufficiently diverse examples of content supporting designated organizations in the context of armed conflicts, including different languages, dialects, regions and conflicts” (recommendation no. 2). Meta reported progress on this recommendation. Separately, the Board has recommended that “to improve the transparency of its designated entities and events list, Meta should explain in more detail the procedure by which entities and events are designated. 
It should also publish aggregated information on its designation list on a regular basis, including the total number of entities within each tier of its list, as well as how many were added and removed from each tier in the past year,” in Referring to Designated Dangerous Individuals as “Shaheed” (recommendation no. 4). The company stated it is working to update the Transparency Center to provide a more detailed explanation of its process for designating and de-designating entities and events. The Board believes that full implementation of both recommendations would, respectively, contribute to decreasing the number of enforcement errors and to providing users with greater clarity about potential violations involving their content under the Dangerous Organizations and Individuals policy. Decision The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention." fb-nusyob3z,Footage of Massacres in Syria,https://www.oversightboard.com/decision/fb-nusyob3z/,"May 13, 2025",2025,,"News events, Violence, War and conflict",Violent and graphic content,Overturned,Syria,A user appealed Meta’s decision to remove a Facebook post of a video containing graphic footage of violence in Syria.,7668,1178,"Overturned May 13, 2025 A user appealed Meta’s decision to remove a Facebook post of a video containing graphic footage of violence in Syria. Summary Topic News events, Violence, War and conflict Community Standard Violent and graphic content Location Syria Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to remove a Facebook post of a video containing graphic footage of violence in Syria. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post with a warning screen. About the Case In December 2024, a user posted a video on Facebook featuring violent scenes, including beatings and stabbings, individuals being lit on fire, and injured and deceased people, including children. The caption above the video, written in Arabic, describes how the content shows scenes of massacres [participated in] by the “criminals of the Party of Satan in Syria,” and claims that the people have not forgotten and will not forget the crimes that were committed. According to established news reporting, “Party of Satan” appears to be a reference to Hezbollah. Under Meta’s Violent and Graphic Content Community Standard, Meta removes “the most graphic content and add[s] warning labels to other graphic content so that people are aware it may be sensitive or disturbing before they click through.” “Videos of people, living or deceased, in non-medical contexts” depicting “dismemberment,” “visible innards,” “burning or charred persons” or “throat-slitting” are, therefore, not allowed.
Meta also notes that for “Imagery (both videos and still images) depicting a person’s violent death (including their moment of death or the aftermath) or a person experiencing a life-threatening event,” the company applies a warning screen so that people are aware that the content may be disturbing. In these instances, the company also limits the ability to view the content to adults aged 18 and older. Moreover, when provided with additional context, Meta may allow graphic content “in order to shed light on or condemn acts such as human rights abuses or armed conflict” to “allow room for discussion and awareness raising.” After the Board brought this case to Meta’s attention, the company determined that the content should not have been removed under the Violent and Graphic Content policy. The company then restored the content to Facebook with a warning screen. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case illustrates continuing issues with Meta’s ability to moderate content that raises awareness of and documents grave human rights violations. Despite language in Meta’s Violent and Graphic Content policy acknowledging that users may share content to shed light on or condemn acts such as “human rights abuses or armed conflict”, the company continues to remove content from its platforms that aims to accomplish precisely this. The Board has issued recommendations to guide Meta in its enforcement practices for graphic or violent content that is shared in the context of condemnation or to raise awareness with the intent of making these allowances enforceable at scale and not only in exceptional circumstances. For example, in the Sudan Graphic Video case decision, the Board recommended that “Meta should amend the Violent and Graphic Content Community Standard to allow videos of people or dead bodies when shared for the purposes of raising awareness or documenting human rights abuses. This content should be allowed with a warning screen so that people are aware that content may be disturbing.” ( Sudan Graphic Video , recommendation no. 1). The recommendation was declined after a feasibility assessment. Meta reported undertaking policy development on the subject and ultimately decided to keep the status quo to “remove content by default, but allow content with a warning label when there is additional context”( Meta Q4 2023 Quarterly Update on the Oversight Board). Additionally, the Board previously recommended that “Meta should add to the public-facing language of its Violent and Graphic Content Community Standard detail from its internal guidelines about how the company determines whether an image “shows the violent death of a person or people by accident or murder”. ( Russian Poem , recommendation no. 2). 
Meta demonstrated partial implementation of this recommendation through published information: the company updated the language in its Violent and Graphic Content Community Standard by including language in parentheticals to clarify what the company means by “violent death.” The language now reads, “Imagery (both videos and still images) depicting a person's violent death (including their moment of death or the aftermath).” Meta's extended definition did not, however, sufficiently explain how the company determines whether an image “shows the violent death of a person or people by accident or murder.” Furthermore, the Board has recommended that Meta “improve its transparency reporting to increase public information on error rates by making this information viewable by country and language for each Community Standard” (Punjabi Concern over the RSS in India, recommendation no. 3). The Board underscored, in this recommendation, that “more detailed transparency reports will help the public spot areas where errors are more common, including potential specific impacts on minority groups.” The implementation of this recommendation is currently in progress. In its last update on this recommendation, Meta explained that the company is “in the process of compiling an overview of enforcement data to confidentially share with the Board.” The document will outline data points that provide indicators of enforcement accuracy across various policies. Meta stated that the company “remain[s] committed to compiling an overview that addresses the Board’s overarching call for increased transparency on enforcement accuracy across policies” (Meta’s H2 2024 Bi-Annual Report on the Oversight Board – Appendix). The Board stresses the importance of Meta continuing to improve its ability to accurately detect content that seeks to raise awareness about or to condemn human rights abuses and to keep such content, with a warning screen, on the platform under the company’s Violent and Graphic Content policy. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention." fb-o78k5lg3,Dehumanizing Comments About People in Gaza,https://www.oversightboard.com/decision/fb-o78k5lg3/,"April 18, 2024",2024,,"Marginalized communities, Race and ethnicity, War and conflict",Hate speech,Overturned,"Israel, Palestinian Territories","A user appealed Meta’s decision to leave up a Facebook post claiming that Hamas originated from the population of Gaza, comparing them to a “savage horde.” After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the post.",5343,819,"Overturned April 18, 2024 A user appealed Meta’s decision to leave up a Facebook post claiming that Hamas originated from the population of Gaza, comparing them to a “savage horde.” After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the post. Summary Topic Marginalized communities, Race and ethnicity, War and conflict Community Standard Hate speech Location Israel, Palestinian Territories Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors.
They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to leave up a Facebook post claiming that Hamas originated from the population of Gaza and reflects their “innermost desires,” comparing them to a “savage horde.” After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the post. About the Case In December 2023, a user reposted an image on Facebook featuring text, alongside the image of an unnamed man, expressing the view that the “general public” in Gaza is not the “victim” of Hamas but rather, that the militant group emerged as a “true reflection” of “the innermost desires of a savage horde.” The reposted image contained an endorsing caption that included the words, “the truth.” The post was viewed fewer than 500 times. Under Meta’s Hate Speech policy, Meta prohibits content targeting a person or group of people on the basis of their protected characteristics, specifically mentioning comparisons to “sub humanity” and including “savages” as an example. In this content, the reference to “the general public of Gaza” is an implicit reference to Palestinians in Gaza, thus targeting the protected characteristics of ethnicity and nationality. In a statement appealing this case to the Board, the user noted that the post “constituted dehumanizing speech,” by generalizing about the people of Gaza. After the Board brought this case to Meta’s attention, the company determined that the content did violate Meta’s Hate Speech policy and its original decision to leave the content up was incorrect. The company then removed the content from Facebook. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user who reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process involved, reduce errors and increase fairness for Facebook and Instagram users. Significance of Case This case highlights errors in Meta’s enforcement of its Hate Speech policy, specifically relating to content that attacks people based on their protected characteristics. Moderation errors are especially harmful in times of ongoing armed conflict. As such, there should have been more robust content moderation practices in place. The Knin Cartoon case similarly contained hate speech targeted at a protected characteristic – an ethnicity – referring to one ethnic group as rats without explicitly naming them. However, the Knin Cartoon case required historical and cultural context to interpret the symbolic portrayal of an ethnic group, whereas the content in this case more directly ties dehumanizing comments to an entire population, which should reasonably be understood as referring to people by protected characteristic. 
In the Knin Cartoon decision, the Board recommended that Meta should “clarify the Hate Speech Community Standard and the guidance provided to reviewers, explaining that even implicit references to protected groups are prohibited by the policy when the reference would reasonably be understood” (Knin Cartoon decision, recommendation no. 1), which Meta has reported as partially implemented. In Q4 of 2022, Meta reported that it “added language to the Community Standards and reviewers’ policy guidance clarifying that implicit hate speech will be removed if it is escalated by at-scale reviewers to expert review where Meta can reasonably understand the user’s intent.” The Board considers this recommendation partially implemented because updates were made only to the general introduction of the Community Standards, not to the Hate Speech Community Standard itself. The Board believes that full implementation of this recommendation would reduce the number of enforcement errors under Meta’s Hate Speech policy. Decision The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention." fb-ofs963dz,Politician’s Comments on Demographic Changes,https://www.oversightboard.com/decision/fb-ofs963dz/,"March 12, 2024",2024,,Freedom of expression,Freedom of expression,Upheld,"Belgium, France, Germany",The Oversight Board has upheld Meta’s decision to leave up a video clip in which French politician Éric Zemmour discusses demographic changes in Europe and Africa.,52256,8047,"Upheld March 12, 2024 The Oversight Board has upheld Meta’s decision to leave up a video clip in which French politician Éric Zemmour discusses demographic changes in Europe and Africa. Standard Topic Freedom of expression Location Belgium, France, Germany Platform Facebook The Oversight Board has upheld Meta’s decision to leave up a video clip in which French politician Éric Zemmour discusses demographic changes in Europe and Africa. The content does not violate the Hate Speech Community Standard since there is no direct attack on people based on a protected characteristic such as race, ethnicity or national origin. The majority of the Board find that leaving up the content is consistent with Meta’s human rights responsibilities. However, the Board recommends that Meta should publicly clarify how it distinguishes immigration-related discussions from harmful speech, including hateful conspiracy theories, targeting people based on their migratory status. About the Case In July 2023, a video clip in which French politician Éric Zemmour discusses demographic changes in Europe and Africa was posted on his official Facebook page by a user who is the page’s administrator. The clip is part of a longer video interview with the politician.
In the video, Zemmour states: “Since the start of the 20th century, there has been a population explosion in Africa.” He goes on to say that while the European population has stayed roughly the same at around 400 million people, the African population has increased to 1.5 billion people, “so the power balance has shifted.” The post’s caption, in French, says that in the 1900s, “when there were four Europeans for one African, [Europe] colonized Africa,” and now “there are four Africans for one European and Africa colonizes Europe.” Zemmour’s Facebook page has about 300,000 followers while this post had been viewed about 40,000 times as of January 2024. Zemmour has been the subject of multiple legal proceedings, with more than one conviction in France for inciting racial hatred and making racially insulting comments about Muslims, Africans and Black people. He ran for president in 2022 but did not progress beyond the first round. Central to his electoral campaigning is the Great Replacement Theory, which argues that white European populations are being deliberately replaced ethnically and culturally through migration and the growth of minority communities. Linguistic experts note the theory and terms associated with it “incite racism, hatred and violence targeting the immigrants, non-white Europeans and target Muslims specifically.” The video in the post does not specifically mention the theory. Two users reported the content for violating Meta’s Hate Speech policy but since the reports were not prioritized for review in a 48-hour period, they were both automatically closed. Reports are prioritized by Meta’s automated systems according to the severity of the predicted violation, the content’s virality (number of views) and likelihood of a violation. One of the users then appealed to Meta, which led to one of the company’s human reviewers deciding the content did not violate Meta’s rules. The user then appealed to the Board. Key Findings The majority of the Board conclude the content does not violate Meta’s Hate Speech Community Standard. The video clip contains an example of protected (albeit controversial) expression of opinion on immigration and does not contain any call for violence, nor does it direct dehumanizing or hateful language towards vulnerable groups. While Zemmour has been prosecuted for use of hateful language in the past, and themes in this video are similar to the Great Replacement Theory, these facts do not justify removal of a post that does not violate Meta’s standards. For there to have been a violation, the post would have had to include a “direct attack,” specifically calling for the “exclusion or segregation” of a “protected characteristic” group. Since Zemmour’s comments do not contain any direct attack, and there is neither an explicit call to exclude any group from Europe nor any statement about Africans tantamount to a harmful stereotype, slur or any other direct attack, they do not break Meta’s Hate Speech rules. The policy rationale also makes it clear that Meta allows “commentary on and criticism of immigration policies,” although what is not shared publicly is that the company allows calls for exclusion when immigration policies are being discussed. However, the Board does find it concerning that Meta does not consider Africans a protected characteristic group, given the fact that national origin, race and religion are protected both under Meta’s policies and international human rights law. 
Africans are mentioned throughout the content and, in this video, serve as a proxy for non-white Africans. The Board also considered the relevance of the Dangerous Organizations and Individuals policy to this case. However, the majority find the post does not violate this policy because there are not enough elements to review it as part of a wider Violence-Inducing Conspiracy Network. Meta defines these networks as non-state actors who share the same mission statement, promote unfounded theories claiming that secret plots by powerful actors are behind social and political problems, and who are directly linked to a pattern of offline harm. A minority of Board Members find that Meta’s approach to content spreading harmful conspiracy theories is inconsistent with the aims of the policies it has designed to prevent an environment of exclusion affecting protected minorities, both online and offline. Under these policies, content involving certain other conspiracy narratives is moderated to protect threatened minority groups. While these Board Members believe that criticism of issues like immigration should be allowed, it is precisely because evidence-based discussion on this topic is so relevant that the spread of conspiracy theories such as the Great Replacement Theory can be harmful. It is not individual pieces of content but the combined effects of such content shared on a large scale and at high speeds that pose the greatest challenge to social media companies. Therefore, Meta needs to reformulate its policies so that its services are not misused by those who promote conspiracy theories causing online and offline harm. Meta has undertaken research into a policy line that could address hateful conspiracy theories but the company decided this would ultimately lead to removal of too much political speech. The Board is concerned about the lack of information that Meta shared on this process. The Oversight Board's Decision The Oversight Board has upheld Meta’s decision to leave up the post. The Board recommends that Meta publicly clarify how it handles content spreading hateful conspiracy theories, given the need to protect speech about immigration while addressing the potential offline harms of such theories. *Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board upholds Meta’s decision to leave up a post on French politician Éric Zemmour’s official Facebook page that contains a video of Mr. Zemmour being interviewed, in which he discusses demographic changes in Europe and Africa. The Board finds the content does not violate Meta’s Hate Speech Community Standard because it does not directly attack people on the basis of protected characteristics, including race, ethnicity and national origin. The majority of the Board find that Meta’s decision to keep the content on Facebook is consistent with its human rights responsibilities. A minority of Board Members find that Meta’s policies are inadequate to meet Meta’s human rights responsibilities to address the significant threat posed by harmful and exclusionary conspiracy theories such as the Great Replacement Theory. The Board recommends that Meta publicly clarify how it handles content spreading hateful conspiracy theories given the need to protect speech about immigration while addressing the potential offline harms of such harmful conspiracy theories. 2. Case Description and Background On July 7, 2023, a user posted a video on the official, verified Facebook page of French politician Éric Zemmour. In the video, which is a 50-second clip of a longer interview conducted in French, Zemmour discusses demographic changes in Europe and Africa.
The user who posted the video was an administrator of the page, which has about 300,000 followers. Zemmour was a candidate in the 2022 French presidential election and won around 7% of the votes in the first round, but did not advance any further. Before running for office, Zemmour was a regular columnist at Le Figaro and other newspapers, as well as an outspoken TV commentator famed for his provocations on Islam, immigration and women. As explained in greater detail below, he has been involved in multiple legal proceedings and convicted in some of them on account of these comments. Although Meta did not consider the user who posted the video to be a public figure, the company did consider Zemmour a public figure. In the video, Zemmour states that: “Since the start of the 20th century, there has been a population explosion in Africa.” He states that while the European population has stayed roughly the same at around 400 million people, the African population has increased to 1.5 billion people, “so the power balance has shifted.” The caption in French repeats the claims in the video, stating that in the 1900s, “when there were four Europeans for one African, [Europe] colonized Africa,” and now “there are four Africans for one European and Africa colonizes Europe.” These figures are compared to figures available from United Nations bodies provided below. Additionally, the Board’s majority position on these numbers is described in greater detail in Section 8.2 below. When this case was announced by the Board on November 28, 2023, the content had been viewed around 20,000 times. As of January 2024, the content had been viewed about 40,000 times and had fewer than 1,000 reactions, the majority of which were “likes,” followed by “love” and “Haha.” On July 9, 2023, two users separately reported the content as violating Meta’s Hate Speech policy. The company automatically closed both reports because they were not prioritized for review in a 48-hour period. Meta explained that reports are dynamically prioritized for review based on factors such as the severity of the predicted violation, the content’s virality (number of views the content has had) and the likelihood that the content will violate the company’s policies. The content was not removed and stayed on the platform. On July 11, 2023, the first person who reported the content appealed Meta’s decision. The appeal was assessed by a human reviewer who upheld Meta’s original decision to keep the content up. The reporting user then appealed to the Oversight Board. Ten days before the content was posted, the fatal shooting of Nahel Merzouk , a French 17-year-old of Moroccan and Algerian descent, who died after two police officers shot him at point-blank range in a suburb of Paris on June 27, 2023, sparked widespread riots and violent protests in France. The protests, which were ongoing when the content was posted, were directed at police violence and the perceived systemic racial discrimination of policing in France. These protests were the most recent in a long series of protests about police violence that, it is claimed, often targets immigrants of African origin and other marginalized communities in France. According to the European Commission against Racism and Intolerance (ECRI), the main victims of racism in France are immigrants, especially those of African origin and their descendants. 
In 2022, the Committee on the Elimination of Racial Discrimination (CERD) urged France to redouble its efforts to effectively prevent and combat racist hate speech and said that “despite the State party’s efforts… the Committee remains concerned at how persistent and widespread racist and discriminatory discourse is, especially in the media and on the Internet.” It is also “concerned at some political leaders’ racist remarks with regard to certain ethnic minorities, in particular Roma, Travellers, Africans, persons of African descent, persons of Arab origin and non-citizens,” (CERD/C/FRA/CO/22-23, para. 11). According to a 1999 report from the Department of Social and Economic Affairs of the United Nations, the estimated population of Africa in the year 1900 was 133 million. Data from the United Nations in 2022 estimates that the 2021 population of Africa was around 1.4 billion. It also projects that by 2050, the estimated population of Africa could be close to 2.5 billion. According to the same 1999 report, the population of Europe in 1900 was approximately 408 million. The Department of Economic and Social Affairs of the United Nations estimates that the population of Europe in 2021 was approximately 745 million and will decline to approximately 716 million by 2050. In France, as in many parts of the world, the arrival of large numbers of migrants from other countries has become one of the most salient topics of political debate. As of September 2023, approximately 700,000 refugees and asylum seekers were located in France, making it the third-largest host country in the European Union. As the Board reviewed this case, France experienced large-scale protests and heated public debate on migration amidst the Parliamentary passage of a new immigration bill that, among other things, sets migration quotas and tightens the rules around family reunification and access to social benefits. Zemmour and his party have been very active in these discussions, advocating for immigration restrictions. Zemmour has been the subject of multiple legal proceedings and has been convicted several times by French courts for inciting racial hatred and making racially insulting comments in recent years, as a result of his statements about Muslims, Africans, Black people and LGBTQIA+ people. Zemmour was convicted of incitement to racial hatred for comments he made in 2011 on television in which he said “most dealers are blacks and Arabs. That's a fact.” More recently, a court found Zemmour guilty of inciting racial hatred in 2020 and fined him 10,000 euros for stating that child migrants are “thieves, killers, they’re rapists. That’s all they are. We should send them back.” He has exhausted his right to appeal in another case in which he was convicted and fined by the correctional court for inciting discrimination and religious hatred against the French Muslim community. The conviction was based on statements he made on a television show in 2016 that Muslims should be given “the choice between Islam and France” and that “for thirty years we have been experiencing an invasion, a colonization (…) it is also the fight to Islamize a territory which is not, which is normally a non-Islamized land.” In December 2022, the European Court of Human Rights held that the conviction did not violate Zemmour’s right to freedom of expression. 
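As an illustrative aside (not part of the Board’s decision text), the short Python sketch below computes the Africa-to-Europe population ratios implied by the figures quoted in this section, so that readers can compare the “four to one” framing in the video and caption with the UN estimates; it uses only the numbers already cited above.

```python
# Illustrative only: ratios implied by population figures (in millions)
# quoted in this section. The 1900 values come from the 1999 UN DESA report
# cited above, the 2021 values from the UN's 2022 estimates, and the last
# entry uses the figures given in the video clip itself.
figures = {
    "1900 (UN DESA, 1999 report)": {"Africa": 133, "Europe": 408},
    "2021 (UN, 2022 estimates)": {"Africa": 1400, "Europe": 745},
    "Figures cited in the video": {"Africa": 1500, "Europe": 400},
}

for label, pop in figures.items():
    ratio = pop["Africa"] / pop["Europe"]
    print(f"{label}: Africa-to-Europe population ratio of about {ratio:.2f} to 1")
```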
Although the content in this case does not explicitly mention the Great Replacement Theory, the concept is central to Zemmour’s political ideology and featured heavily in his presidential campaign, during which he promised, if elected, to create a “Ministry of Remigration.” He also stated that he would “send back a million” foreigners in five years. According to independent research commissioned by the Board, which was conducted by experts in conspiracy theories and French politics, social media trends, and linguistics, proponents of the Great Replacement Theory argue that white European populations are being deliberately replaced ethnically and culturally through migration and the growth of minority communities. It insists that contemporary migration of non-white (and predominantly Muslim) people from non-European countries (mostly, in Africa and Asia) to Europe is a form of demographic warfare. The Board’s experts emphasized that migration and the increase in migration is not factually disputed. Rather, it is the insistence that there is an actual plot or conspiracy to bring non-whites into Europe in order to replace or reduce the proportion of white populations that marks the Great Replacement Theory as conspiratorial. Linguistic experts consulted by the Board explained that the Great Replacement Theory and terms associated with it “incite racism, hatred and violence targeting the immigrants, non-white Europeans and target Muslims specifically.” A report by the European Union’s Radicalization Awareness Network notes that the anti-Semitic, anti-Muslim and overall anti-immigration sentiment spread by people advancing the Great Replacement Theory has informed the selection of targets by several high-profile solo attackers in Europe in recent years. The Board’s commissioned research also indicated that the theory has inspired myriad violent incidents around the world in recent years, including the mass shooting in Christchurch , New Zealand, in which 51 Muslims were killed. A minority of the Board also consider the fact that violent far-right protests have been on the rise in France over the past year as important context. Following the fatal stabbing of a teenager during a festive gathering on November 18 in Crépol, a rural community in France, activists and right-wing parties led violent protests in which protestors physically clashed with the police. They alleged that immigrants and minorities were responsible, despite the fact that of the nine people arrested in connection with the stabbing, eight were French and one Italian. Interior Minister Gérald Darmanin said that militia members “seek to attack Arabs, people with different skin colors, speak of their nostalgia for the Third Reich.” French Green Party politician Sandrine Rousseau compared these protests to ratonnades, physical violence carried out against an ethnic minority or a social group, predominantly against people of North African origin. The most notable ratonnade , often associated with the popularization of the term, occurred on October 17, 1961 during peaceful protests by Algerians in which 200 Algerians were killed during an outburst of police violence. The word has come up repeatedly in different contexts in France since then. For example, in December 2022 French politicians joined social media users in denouncing street violence, comparing it to ratonnades after the France-Morocco World Cup match. 3. 
Oversight Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person who previously reported the content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying the Board’s decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions II. Meta’s Content Policies Hate Speech The policy rationale for the Hate Speech Community Standard explains that hate speech, defined as a direct attack against people on the basis of protected characteristics, is not allowed on the platform “because it creates an environment of intimidation and exclusion and, in some cases, may promote real-world violence.” The policy lists as protected characteristics, among others, race, ethnicity, national origin and religious affiliation. The policy explains that “attacks are separated into two tiers of severity,” with Tier 1 attacks being more severe. The rationale for the Hate Speech Community Standard also explains that the policy “protect(s) refugees, migrants, immigrants and asylum seekers from most severe attacks,” but that Meta allows “commentary on and criticism of immigration policies.” Meta’s internal guidance to content moderators elaborates on this, explaining that it considers migrants, immigrants, refugees and asylum-seekers status as quasi-protected. That means Meta protects them from Tier 1 attacks but not from Tier 2 attacks under the Hate Speech policy. The policy, which previously had three different attack tiers but now only has two of them, currently prohibits as a Tier 2 attack, among other types of content, “exclusion or segregation in the form of calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting defined as: [...] explicit exclusion, which means things like expelling certain groups or saying they are not allowed.” Meta declined the Board’s request to publish further information about its internal guidance to content reviewers on this point. 
In a 2017 Newsroom post entitled “Hard Questions: Who Should Decide What Is Hate Speech in an Online Global Community?,” which is linked at the bottom of the rationale for the Hate Speech Community Standard with the text, “Learn more about our approach to hate speech,” Meta recognized that policy debates on immigration often become “a debate over hate speech, as two sides adopt inflammatory language.” The company said that after reviewing posts on Facebook about the migration debate globally, it “decided to develop new guidelines to remove calls for violence against migrants or dehumanizing references to them — such as comparisons to animals, to filth or to trash.” The company left in place, however, “the ability for people to express their views on immigration itself,” given it is “deeply committed to making sure Facebook remains a place for legitimate debate.” Dangerous Organizations and Individuals The rationale for the Dangerous Organizations and Individuals Community Standard states that Meta does not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Meta’s platforms. It also explains that Meta assesses these entities “based on their behavior both online and offline – most significantly, their ties to violence.” The Dangerous Organizations and Individuals Community Standard explains that Meta prohibits the presence of Violence-Inducing Conspiracy Networks, currently defined as non-state actors that are: (i) “identified by a name, mission statement, symbol or shared lexicon”; (ii) “promote unfounded theories that attempt to explain the ultimate causes of significant social and political problems, events and circumstances with claims of secret plots by two or more powerful actors”; and (iii) “have explicitly advocated for or have been directly linked to a pattern of offline physical harm by adherents motivated by the desire to draw attention to or redress the supposed harms identified in the unfounded theories promoted by the network.” The Board’s analysis of the content policies was also informed by Meta’s commitment to voice, which the company describes as “paramount” as well as its values of safety and dignity. III. Meta’s Human Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User Submissions The Board received a submission from the user who reported the content and appealed Meta’s decision to keep it up, as part of their appeal to the Board. In the submission, the appealing user says that Zemmour is explaining both colonization and migration in terms of overpopulation only, which the user classified as “fake news.” 6. Meta’s Submissions After the Board selected this case, Meta reviewed the post against the Hate Speech policy with subject-matter experts and determined that its original decision to leave up the content was correct. Meta did not provide further information on the specific remit or knowledge areas of the experts conducting this additional review. 
Meta emphasized that, for a piece of content to be considered as violating, the policy requires both a protected characteristic group and a direct attack – and that the claims about population changes and colonization lacked those elements. Meta explained that it does not consider the allegation that one group is “colonizing” a place to be an attack in and of itself so long as it does not amount to a call for exclusion, and emphasized that it “want[s] to allow citizens to discuss the laws and policies of their nations so long as this discussion does not constitute attacks against vulnerable groups who may be the subject of those laws.” Finally, Meta explained that the content does not identify a protected characteristic group because Zemmour refers to “Africa,” a continent and its countries, and that the “Hate Speech policy does not protect countries or institutions from attacks.” Meta refused to lift confidentiality related to the company’s policy development process on harmful conspiracy theories. Meta instead stated: “We have considered policy options specific to content discussing conspiracy theories that does not otherwise violate our existing policies. However, we have concluded that, for the time being, implementing any of the options would risk removing a significant amount of political speech.” The Board asked Meta eight questions in writing. Questions covered Meta’s policy development in relation to the Great Replacement Theory; the applicability of various Hate Speech and Dangerous Organizations and Individuals policy lines; and the violation history for the Facebook page and posting user. Meta answered six of the Board’s questions, with two not answered satisfactorily. After Meta did not provide sufficient detail in response to the Board’s initial question about policy development in relation to the Great Replacement Theory, the Board asked a follow-up question to which Meta provided additional but still less than comprehensive information. 7. Public Comments The Oversight Board received 15 public comments. Seven of the comments were submitted from the United States and Canada, three from Europe, two from Central and South Asia, one from the Middle East and North Africa, one from Asia Pacific and Oceania, and one from Sub-Saharan Africa. This total includes public comments that were either duplicates or were submitted with consent to publish but did not meet the Board’s conditions for publication. Public comments can be submitted to the Board with or without consent to publish and with or without consent to attribute (i.e., anonymously). The submissions mainly covered two themes. First, several comments emphasized that removing the content under review in this case would be tantamount to censorship, and could even “serve to increase the anger of the citizens who feel their voices will not be heard,” (PC-22009). Second, two organizations submitted comments emphasizing the negative offline impact of this type of content and, specifically, the Great Replacement Theory. Both of these comments argued that there is a link between the Christchurch massacre and the theory (PC-22013, Digital Rights Foundation; PC-22014, Global Project Against Hate and Extremism). To read public comments submitted for this case, please click here . 8. Oversight Board Analysis The Board analyzed Meta’s content policies, human rights responsibilities and values to determine whether the content in this case should be removed. 
The Board also assessed the implications of this case for Meta’s broader approach to content governance. The Board selected this case as an opportunity to review Meta’s approach to content targeting migrants in the context of increasingly global anti-immigrant rhetoric and heated public debates about immigration policies; especially given the challenges associated with distinguishing, at-scale, harmful content from political speech discussing immigration policies. 8.1 Compliance With Meta’s Content Policies The Board concludes that the content does not violate Meta’s policies. Thus, Meta’s decision to leave the content on Facebook was correct. A minority of the Board believe, however, that Meta’s policies could clearly distinguish even the harshest criticisms of immigration policies from speech engaging with conspiracy theories that are harmful toward protected characteristic groups. I. Content Rules Hate Speech The majority of the Board conclude that the content in this case does not violate Meta’s Hate Speech Community Standard, and is in fact an example of protected, though controversial, expression of opinion on the topic of immigration. The 50-second clip of Zemmour’s interview posted by the user contains no call for violence, nor does it direct dehumanizing or hateful language toward vulnerable groups. The fact that Zemmour has in the past been prosecuted and convicted for use of hateful language, or that the themes of the post bear resemblance to those of the Great Replacement Theory – which many believe to have sparked violence against migrants and members of minority groups – is not a proper justification for removing a post that does not violate Meta’s standards. The policy requires two elements to be present for the content to be considered as violating: (i) a “direct attack” and (ii) a “protected characteristic” group at which the direct attack is aimed. Meta defines “direct attacks” as, among other types of speech, “exclusion or segregation in the form of calls for action,” as explained in more detail under Section 4 above. Moreover, the policy rationale makes clear that Meta allows “commentary on and criticism of immigration policies.” For the majority, Zemmour’s comments in the video focus mainly on supposed demographical information he presents on Africa, Europe and “colonization.” The video contains, among other assertions, the statements, “So the balance of power has reversed,” and “When there are now four Africans for one European, what happens? Africa colonizes Europe, and in particular, France.” Zemmour’s comments do not contain any direct attack, and in fact he does not use the phrase “The Great Replacement” or refer directly to the theory. There is no explicit call to exclude any group from Europe, nor any statement about Africans tantamount to a harmful stereotype, slur or any other direct attack. The Board does, however, find it concerning that Meta does not consider Africans a protected characteristic group given the fact that national origin, race and religion are protected characteristics both under Meta’s policies and international human rights law. Africans are mentioned throughout the content. Africa is a collective of nations – thus “Africans” refer to people who are nationals of African countries. Second, in the context of Zemmour’s previous comments and discussions about migration in France, the term “Africans” serves as a proxy for non-white Africans, in particular Black and Muslim Africans. 
Dangerous Organizations and Individuals The majority of the Board also conclude that the content does not violate Meta’s Dangerous Organizations and Individuals policy, given the lack of elements required to assess this particular piece of content as part of a wider Violence-Inducing Conspiracy Network. As explained under Section 6 above, Meta considered policy options specific to content discussing conspiracy theories that does not otherwise violate any policies but concluded that, for the time being, implementing any of the options would risk removing a significant amount of political speech. The Board expresses its concern about the lack of information provided by Meta in response to the Board’s questions on this policy development process. The Board notes the company did not provide any specific information about the research it conducted, the information it gathered, the scope of its outreach, types of experts consulted nor the different policy options it analyzed. The Board is also concerned that Meta chose not to share information about the policy development process and its outcome with the public. A minority of the Board understand that despite the content implicitly targeting several overlapping protected characteristic groups (Black people, Arabs and Muslims), as currently worded, the rules included in Meta’s Hate Speech and Dangerous Organizations and Individuals policies do not prohibit content such as this. The fact that the post only repeats the more “palatable” parts of the Great Replacement Theory is, however, not decisive. In the Board’s decision on the Former President Trump’s Suspension case, the Board highlighted that Meta “must assess posts by influential users in context according to the way they are likely to be understood, even if their incendiary message is couched in language designed to avoid responsibility.” Nonetheless, as will be explained in more detail in Section 8.3, for a minority of the Board, Meta’s approach to content spreading harmful conspiracy theories, such as the Great Replacement Theory, is inconsistent with the aims of the different policies the company has designed to prevent the creation of an environment of intimidation and exclusion that affects protected minorities from online and offline harm. Though a minority of the Board strongly agree that Meta’s policies should allow criticism and discussions of all issues (like immigration) that are relevant in democratic societies, they should also establish clear guardrails to prevent the spread of implicit or explicit attacks against vulnerable groups, taking into account the offline harm of certain conspiratorial narratives, such as the Great Replacement Theory. II. Transparency Meta provides some insight on how it handles immigration-related content in its Transparency Center under the Hate Speech Community Standard in which the company explains that refugees, migrants, immigrants and asylum seekers are protected against the most severe attacks. In a 2017 Newsroom post , linked from the Hate Speech policy’s rationale, Meta provides some additional detail. However, the company does not explicitly explain how it handles Great Replacement Theory-related content. The 2017 post has information relevant to the topic but was not updated after the 2021 policy development process mentioned under Section 6 above. Meta also does not explicitly explain in its public-facing policy that calls for exclusion are allowed in the context of discussions on immigration. 
It is also not clear how implicit or veiled attacks in this context are addressed. 8.2 Compliance With Meta’s Human Rights Responsibilities The majority of the Board find that leaving the content up is consistent with Meta’s human rights responsibilities. A minority believe that, in order to be consistent with its human rights responsibilities, Meta needs to reformulate its policies so that its services are not misused by those who promote conspiracy theories that cause online and offline harm. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of the right to freedom of expression, including “freedom to seek, receive and impart information and ideas of all kinds,” including “political discourse” and commentary on “public affairs,” (General Comment No. 34, para. 11). The Human Rights Committee has said that the scope of this right “embraces even expression that may be regarded as deeply offensive, although such expression may be restricted in accordance with the provisions of article 19, paragraph 3 and article 20” to protect the rights or reputations of others or to prohibit incitement to discrimination, hostility or violence (General Comment No. 34, para. 11). In the context of public debates about migration, the UN General Assembly noted its commitment to “protect freedom of expression in accordance with international law, recognizing that an open and free debate contributes to a comprehensive understanding of all aspects of migration.” It further committed to “promote an open and evidence-based public discourse on migration and migrants in partnership with all parts of society that generates a more realistic, humane and constructive perception in this regard,” (A/RES/73/195 , para 33). Immigration and related policies – highly disputed and relevant to political processes not only in France but at a global level – are legitimate topics for debate on Meta’s platforms. For the majority, given the potential implications for the public debate, banning this kind of speech on Meta's platforms would be a clear infringement of freedom of expression and a dangerous precedent. For a minority of the Board, it is precisely because open and evidence-based discussions on immigration are so relevant to a democratic society, that the spread of conspiracy theories, such as the Great Replacement Theory, in social media platforms can be so harmful. As reported by the Institute for Strategic Dialogue, the methods used to broadcast the theory “include dehumanizing racist memes, distort[ing] and misrepresent[ing] demographic data and us[ing] debunked science.” When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” ( A/74/486 , para. 41). I. 
Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” ( Ibid ). Applied to rules that govern online speech, the UN Special Rapporteur on freedom of expression has stated they should be clear and specific ( A/HRC/38/35 , para. 46). People using Meta’s platforms should be able to access and understand the rules and content reviewers should have clear guidance regarding their enforcement. None of Meta’s current policies “specifically and clearly” prohibit the content in this case. For the majority of the Board, an ordinary user, reading the Hate Speech Community Standard or Meta’s 2017 “Hard Questions” blog post (linked from the Community Standard) would likely get the impression that only the most severe attacks against immigrants and migrants would be removed, as Meta clearly indicates that it wants to allow commentary and criticism of immigration policies on its platforms. The majority of the Board find that this commitment is in line with Meta’s human rights responsibilities. For a minority, the Hate Speech policy aims to prevent the creation of an environment of exclusion or segregation to which hateful conspiracy theories such as the Great Replacement Theory contribute. Given that content engaging with such theories usually targets vulnerable and minority groups and constitutes an attack on their dignity, an ordinary user could expect protection from this type of content under Meta’s Hate Speech policy. Meta’s current Dangerous Organization and Individuals policy has no provisions prohibiting the content in this case. For the majority, even if Meta specifically and clearly prohibited content engaging with the Great Replacement Theory on its platforms, the content in this case does not go so far as to name the theory or elaborate on elements of the theory in ways that could be considered conspiratorial and harmful. The post does not allege that migratory flows to Europe involving specific groups of people are part of a secret plot involving actors with hidden agendas. II. Legitimate Aim Any restriction on freedom of expression should also pursue at least one of the legitimate aims listed in the ICCPR, which includes protecting the “rights of others.” “The term ‘rights’ includes human rights as recognized in the Covenant and more generally in international human rights law,” ( General Comment No. 34 , para. 28). In several decisions, the Board has found that Meta’s Hate Speech policy, which aims to protect people from harm caused by hate speech, pursues a legitimate aim that is recognized by international human rights law standards (see the Knin Cartoon decision ). It protects the right to life (Article 6, para. 1, ICCPR) as well as the rights to equality and non-discrimination, including based on race, ethnicity and national origin (Article 2, para. 1, ICCPR; Article 2, ICERD). 
The Board has also previously found that Meta’s Dangerous Organizations and Individuals policy seeks to prevent and disrupt real-world harm with the legitimate aim of protecting the rights of others (see the Shared Al Jazeera Post decision). Conversely, the Board has repeatedly noted that it is not a legitimate aim to restrict expression for the sole purpose of protecting individuals from offense (see Depiction of Zwarte Piet, citing UN Special Rapporteur on freedom of expression, report A/74/486, para. 24, and Former President Trump’s Suspension), as the value that international human rights law places on uninhibited expression is high (General Comment No. 34, para. 38). III. Necessity and Proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The nature and range of responses available to a company like Meta are different to those available to a State, and often represent less severe infringements on rights than, for example, criminal penalties. As part of their human rights responsibilities, social media companies should consider a range of possible responses to problematic content beyond deletion to ensure restrictions are narrowly tailored (A/74/486, para. 51). When analyzing the risks posed by potentially violent content, the Board is guided by the six-part test described in the Rabat Plan of Action, which addresses incitement to discrimination, hostility or violence (OHCHR, A/HRC/22/17/Add.4, 2013). The test considers context, speaker, intent, content and form, the extent of the expression’s dissemination and the likelihood of imminent harm. For the majority of the Board, removal of the content in this case is neither necessary nor proportionate. The Rabat test emphasizes the content and form of speech as “a critical element of incitement.” In the content under review in this case, Zemmour’s comments, as reproduced in the 50-second clip posted by the user, do not directly engage with the conspiratorial elements of the Great Replacement Theory and the video does not contain inflammatory elements, such as violent or inciting imagery. The comments and the caption also do not contain any direct calls for violence or exclusion. The majority believe it would violate freedom of expression to exclude politically controversial content on the basis of statements made by the speaker elsewhere. The majority view the numbers that Zemmour cites as only slightly exaggerated. The majority also note that the main subject of Zemmour’s statements in the video is immigration, perhaps one of today’s most salient political issues. For a minority of Board Members, the content in this case does not violate Meta’s current policies (see Section 8.1). However, the company has designed a set of policies aimed at preventing the creation of an environment of exclusion and intimidation that not only affects protected minorities online (impacting the voices of excluded groups) but also offline. Under these policies, antisemitic and white supremacist narratives, as well as content from Violence-Inducing Conspiracy Networks, are moderated. Removing such content is in line with Meta’s human rights responsibilities.
As explained in Section 2 above, the Great Replacement Theory argues that there is a deliberate plot to achieve the replacement of white populations in Europe with migrant populations predominantly from Africa and Asia. The spread of Great Replacement Theory narratives has contributed to the incitement of racism, hatred and violence targeting immigrants, non-white Europeans and Muslims. A minority of the Board emphasize it is not simply an abstract idea or a controversial opinion but rather a typical conspiracy theory that leads to online and offline harm. It undoubtedly contributes to the creation of an atmosphere of exclusion and intimidation of certain minorities. The evidence of the harm produced by the aggregate or cumulative, scaled and high-speed circulation of antisemitic content on Meta’s platforms, as discussed in the Holocaust Denial case, is similar to the evidence of harm produced by the Great Replacement Theory, indicated under Section 2. For these reasons, a minority find it is inconsistent with the principle of non-discrimination and Meta’s values of safety and dignity that Meta has decided to protect certain threatened minority groups from exclusion and discrimination caused by conspiratorial narratives, while keeping others who are in a similar situation of risk unprotected. A minority of the Board find no compelling reason to differentiate Meta's approach to the Great Replacement Theory from the company’s approach to other conspiratorial narratives mentioned above, which Meta moderates in line with its human rights responsibilities. Related to the above, for a minority of the Board, the greater challenge faced by social media companies is not in individual pieces of content, but rather in the accumulation of harmful content that is shared on a large scale and at a high speed. The Board has explained that “moderating content to address the cumulative harms of hate speech, even when the expression does not directly incite violence or discrimination can be consistent with Meta’s human rights responsibilities in certain circumstances,” (see the Depiction of Zwarte Piet and Communal Violence in Indian State of Odisha decisions). In 2022, the CERD expressed its concern “at how persistent and widespread racist and discriminatory discourse is [in France], especially in the media and on the Internet.” For a minority, the accumulation of Great Replacement Theory-related content “creates an environment where acts of violence are more likely to be tolerated and reproduce discrimination in a society,” (see the Depiction of Zwarte Piet and Communal Violence in Indian State of Odisha decisions). A minority highlight that under the UNGPs “business enterprises should pay special attention to any particular human rights impacts on individuals from groups and populations that may be at a heightened risk of vulnerability and marginalization,” (UNGPs Principles 18 and 20). As stated in Section 2 above, the main victims of racism in France are immigrants, especially those of African origin and their descendants. In a 2023 interview, the Director General of Internal Security for France shared his belief that extremist groups, including those that think they have to take action to stop the “Great Replacement,” represent a serious threat in the country. Even though Meta stated that moderating conspiracy theory-related content would risk removing “an unacceptable amount of political speech,” a minority of the Board note the company did not provide any evidence or data to support that assertion.
Moreover, Meta did not explain why this is the case with Great Replacement Theory content but not with, for instance, white supremacist or antisemitic content, since these could also be understood as spreading conspiracy theories. Given the reasons above, for a minority, Meta needs to review its policies to address content that promotes the Great Replacement Theory, unless the company has sufficient evidence: (i) to rule out the harm resulting from the spread of this type of content, as discussed in this decision; or (ii) to demonstrate that the impact of moderating this type of content on protected political speech would be disproportionate. For a proportionate response, among other options, Meta could consider creating an escalation-only policy to allow for the takedown of content openly expressing support for the Great Replacement Theory, without impacting protected political speech, or consider designating actors explicitly engaging with the Great Replacement Theory as part of a Violence-Inducing Conspiracy Network under Meta’s Dangerous Organizations and Individuals policy. The majority are skeptical that any policy under which this content would be violating could be devised that would satisfy the demands of legality, necessity and proportionality, particularly given that the words “Great Replacement,” or any variation of them, do not appear in the content. An attempt to remove such content, even if it were taken as a coded reference, would result in the removal of significant amounts of protected political expression. Content that is protected on its face should not suffer “guilt by association,” either because of the identity of the speaker or because of its resemblance to hateful ideologies. 9. Oversight Board Decision The Oversight Board upholds Meta’s decision to leave up the content. 10. Recommendations Transparency 1. Meta should provide greater detail in the language of its Hate Speech Community Standard about how it distinguishes immigration-related discussions from harmful speech targeting people on the basis of their migratory status. This includes explaining how the company handles content spreading hateful conspiracy theories. This is necessary for users to understand how Meta protects political speech on immigration while addressing the potential offline harms of hateful conspiracy theories. The Board will consider this implemented when Meta publishes an update explaining how it is approaching immigration debates in the context of the Great Replacement Theory, and links to the update prominently in its Transparency Center. *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by the majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world.
Return to Case Decisions and Policy Advisory Opinions" fb-onl5yqve,Human Trafficking in Thailand,https://www.oversightboard.com/decision/fb-onl5yqve/,"November 22, 2023",2023,,"TopicFreedom of expression, SafetyCommunity StandardHuman exploitation",Human exploitation,Overturned,Thailand,A user appealed Meta’s decision to remove a Facebook post calling attention to human trafficking practices in Thailand.,5120,782,"Overturned November 22, 2023 A user appealed Meta’s decision to remove a Facebook post calling attention to human trafficking practices in Thailand. Summary Topic Freedom of expression, Safety Community Standard Human exploitation Location Thailand Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors, and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not involve a public comment process, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post calling attention to human trafficking practices in Thailand. The appeal underlines the importance of designing moderation systems that are sensitive to contexts of awareness-raising, irony, sarcasm, and satire. After the Board brought the appeal to Meta’s attention, the company reversed its earlier decision and restored the post. Case Description and Background A Facebook user posted in Thai about a human trafficking business targeting Thais and transporting them for sale in Myanmar. The post discusses what the user believes are common practices that the business employs, such as pressuring victims to recruit others into the business. It also makes ironic statements, such as “if you want to be a victim of human trafficking, don't wait.” The content also contains screenshots of what appears to be messages from the business attempting to recruit victims, and of content promoting the business. Meta originally removed the post from Facebook, citing its Human Exploitation policy , under which the company removes “[c]ontent that recruits people for, facilitates or exploits people through any of the following forms of human trafficking,” such as “labor exploitation (including bonded labor).” The policy defines human trafficking as “the business of depriving someone of liberty for profit.” It allows “content condemning or raising awareness about human trafficking or smuggling issues.” After the Board brought this case to Meta’s attention, the company determined that its removal was incorrect and restored the content to Facebook. The company told the Board that, while the images in isolation would violate the Human Exploitation policy, the overall context is clear, making the content non-violating. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case that is under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). 
The Board reviews the original decision to increase understanding of the content moderation processes involved, to reduce errors and increase fairness for Facebook and Instagram users. Case Significance The case underlines the importance of designing moderation systems that are sensitive to contexts of awareness-raising, irony, sarcasm, and satire. These are important forms of commentary, and should not be removed as a result of overly-literal interpretations. In terms of automation, the Board has urged Meta to implement an internal audit procedure to continually analyze a statistically representative sample of automated removal decisions to reverse and learn from enforcement mistakes (“ Breast Cancer Symptoms and Nudity , ” recommendation no. 5). In terms of human moderation, the Board asked Meta to ensure that it has adequate procedures in place to assess satirical content and relevant context properly, and that appeals based on policy exceptions be prioritized for human review ("" Two Buttons Meme , "" recommendations nos. 3 and 5). Meta has reported implementing the first two of these recommendations but has not published information to demonstrate complete implementation; for the first recommendation, as of Q4 2022, Meta reported having ""completed the global roll out of new, more specific messaging that lets people know whether automation or human review led to the removal of their content from Facebook"" but did not furnish information evidencing this. For the third recommendation, Meta reported it had ""nearly completed work on ways to allow users to indicate if their appeal falls under a policy exception""; once this is complete, Meta will begin to assess if taking into account policy exceptions is beneficial to the overall prioritization workflow. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-ouuwkhko,Homophobic Violence in West Africa,https://www.oversightboard.com/decision/fb-ouuwkhko/,"October 15, 2024",2024,,"TopicLGBT, Sex and gender equality, Violence","Policies and TopicsTopicLGBT, Sex and gender equality, Violence",Overturned,Nigeria,"The Oversight Board is seriously concerned about Meta’s failure to take down a video showing two men who appear to have been beaten for allegedly being gay. In overturning the company’s original decision, the Board notes that by leaving the video on Facebook for five months, there was a risk of immediate harm by exposing the men’s identities, given the hostile environment for LGBTQIA+ people in Nigeria.",42827,6575,"Overturned October 15, 2024 The Oversight Board is seriously concerned about Meta’s failure to take down a video showing two men who appear to have been beaten for allegedly being gay. In overturning the company’s original decision, the Board notes that by leaving the video on Facebook for five months, there was a risk of immediate harm by exposing the men’s identities, given the hostile environment for LGBTQIA+ people in Nigeria. Standard Topic LGBT, Sex and gender equality, Violence Location Nigeria Platform Facebook Homophobic Violence in West Africa Decision PDF Igbo Translation To read the full decision in Igbo , click here . Iji gụọ mkpebi ahụ n'uju n'asụsụ Igbo, pịa ebe a . 
The Oversight Board is seriously concerned about Meta’s failure to take down a video showing two bleeding men who appear to have been beaten for allegedly being gay. The content was posted in Nigeria, which criminalizes same-sex relationships. In overturning the company’s original decision, the Board notes that by leaving the video on Facebook for five months, there was a risk of immediate harm to the men by exposing their identities, given the hostile environment for LGBTQIA+ people in Nigeria. Such damage is immediate and impossible to undo. The content, which shared and mocked violence and discrimination, violated four different Community Standards, was reported multiple times and reviewed by three human moderators. This case reveals systemic failings around enforcement. The Board’s recommendations include a call for Meta to assess enforcement of the relevant rule under the Coordinating Harm and Promoting Crime Community Standard. They also address the failings likely to have arisen from Meta identifying the wrong language being spoken in the video and how the company handles languages it does not support for at-scale content review. About the Case A Facebook user in Nigeria posted a video that shows two bleeding men who look like they could have been tied up and beaten. People around the frightened men ask them questions in one of Nigeria’s major languages, Igbo. In response, one of the men responds with his name and explains, seemingly under coercion, that he was beaten for having sex with another man. The user who posted this content included an English caption mocking the men, stating they were caught having sex and that this is “funny” because they are married. The video was viewed more than 3.6 million times. Between December 2023 when it was posted and February 2024, 92 users reported the content, the majority for violence and incitement or hate speech. Two human reviewers decided it did not violate any of the Community Standards so should remain on Facebook. One user appealed to Meta but, after another human review, the company decided again there were no violations. The user then appealed to the Board. After the Board brought the case to Meta’s attention, the company removed the post under its Coordinating Harm and Promoting Crime policy. Nigeria criminalizes same-sex relationships, with LGBTQIA+ people facing discrimination and severe restrictions on their human rights. Key Findings The Board finds the content violated four separate Community Standards, including the Coordinating Harm and Promoting Crime rule that does not allow individuals alleged to be members of an outing-risk group to be identified. The man’s admission in the video of having sex with another man is forced, while the caption explicitly alleges the men are gay. The content also broke rules on hate speech, bullying and harassment, and violent and graphic content. There are two rules on outing under the Coordinating Harm and Promoting Crime policy. The first is relevant here and applied at-scale. It prohibits: “outing: exposing the identity or locations affiliated with anyone who is alleged to be a member of an outing-risk group.” There is a similar rule applied only when content is escalated to Meta’s experts. The Board is concerned that Meta does not adequately explain the differences between the two outing rules and that the rule applied at-scale does not publicly state that “outing” applies to identifying people as LGBTQIA+ in countries where there is higher risk of offline harm, such as Nigeria. 
Currently, this information is only available in internal guidance. This ambiguity could lead to confusion, preventing users from complying with the rules, and hindering people targeted by such abusive content from getting these posts removed. Meta needs to update its public rule and provide examples of outing-risk groups. This content was left up for about five months, despite breaking four different rules and featuring violence and discrimination. Human moderators reviewed the content and failed to identify that it broke the rules. With the video left up, the odds of someone identifying the men and of the post encouraging users to harm other LGBTQIA+ people in Nigeria increased. The video was eventually taken down, but by this time it had gone viral. Even after it was removed, the Board’s research shows there were still sequences of the same video remaining on Facebook. When the Board asked Meta about its enforcement actions, the company admitted two errors. First, its automated systems that detect language identified the content as English before sending it to human review, while Meta’s teams then misidentified the language spoken in the video as Swahili. The correct language is Igbo, spoken by millions in Nigeria, but it is not supported by Meta for content moderation at-scale. If the language is not supported, as in this case, then content is sent instead to human reviewers who work across multiple languages and rely on translations provided by Meta’s technologies. This raises concerns about how content in unsupported languages is treated, the choice of languages the company supports for at-scale review and the accuracy of translations provided to reviewers working across multiple languages. The Oversight Board’s Decision The Oversight Board overturns Meta’s original decision to leave up the content. The Board recommends that Meta: * Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background In December 2023, a Facebook user in Nigeria posted a video showing two men who are clearly visible and appear to have been beaten. They are sitting on the ground, near a pole and a rope, suggesting they may have been tied up, and are heavily bleeding. Several people ask the men questions in Igbo, one of the major languages in Nigeria. One of the men responds with his name and explains, seemingly under coercion, that he was beaten because he was having sex with another man. Both men appear frightened and one of them is kicked by a bystander. The user who posted the video added a caption in English mocking the men, saying that they were caught having sex and that this is “funny” because they are both married. The content was viewed over 3.6 million times, received about 9,000 reactions and 8,000 comments, and was shared about 5,000 times. Between December 2023 and February 2024, 92 users reported the content 112 times, the majority of these reports under Meta’s Violence and Incitement and Hate Speech policies. Several of the reports were reviewed by two human moderators who decided the content did not violate any of the Community Standards and therefore should remain on Facebook. One of the users then appealed Meta’s decision to keep the content up. Following another human review, the company again decided the content did not violate any of its rules. The user then appealed to the Board.
After the Board brought the case to Meta’s attention, in May 2024, the company reviewed the post under its Coordinating Harm and Promoting Crime policy , removing it from Facebook. Following Meta’s removal of the original video, upon further research, the Board identified multiple instances of the same video left on the platform dating back to December 2023, including in Facebook Groups. After the Board flagged instances of the same video remaining on the platform, Meta removed them and added the video to a Media Matching Service (MMS) bank, which automatically identifies and removes content that has already been classified as violating. While this type of violation can result in a standard strike against the user who posted the content, Meta did not apply it in this case because the video was posted more than 90 days before any enforcement action was taken. Meta’s policy states that it does not apply standard strikes to accounts of users whose content violations are older than 90 days. The Board considered the following context in reaching its decision in this case: LGBTQIA+ people in Nigeria and in several other parts of the world face violence, torture, imprisonment and even death because of their sexual orientation or gender identity, with anti-LGBTQIA+ sentiment on the increase (see public comment by Outright International, PC-29658). Discrimination against people based on their sexual orientation or gender identity limits everyday life, impacting basic human rights and freedoms. Amnesty International reports that in Africa, 31 countries criminalize same-sex relationships. Sanctions range from imprisonment to corporal punishment. Nigeria’s Same Sex Marriage Prohibition Act not only criminalizes same-sex relationships but also prohibits public displays of affection and restricts the work of organizations defending LGBTQIA+ rights. In addition, colonial-era and other morality laws on sodomy , adultery and indecency are still enforced to restrict the rights of LGBTQIA+ people, with devastating outcomes. In a 2024 report, the UN Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity emphasized: “States in all regions of the world have enforced existing laws and policies or imposed new, and sometimes extreme, measures to curb freedoms of expression, peaceful assembly and association specifically targeting people based on sexual orientation and gender identity,” (Report A/HRC/56/49 , July 2024, at para. 2). Activists and organizations supporting LGBTQIA+ communities can be subject to legal restrictions , harassment, arbitrary arrests , police raids and shutdowns, with threats of violence discouraging public support for LGBTQIA+ rights (see public comment by Pan-African Human Rights Defenders Network, PC-29657). Human rights organizations can struggle to document cases of abuse and discrimination due to fear of retaliation from public authorities and non-state actors, such as vigilantes and militias. Journalists covering LGBTQIA+ issues can also be targeted . Social media is an essential tool for human rights organizations documenting LGBTQIA+ rights violations and abuses, and advocating for stronger protections. People share videos, testimonials and reports to raise awareness and advocate for governments to uphold human rights standards (see public comment by Human Rights Watch, PC-29659). Additionally, platforms can act as information hubs, providing people with updates on legal developments as well as access to legal support. 
Independent research commissioned by the Board indicates that social media platforms play a crucial role for LGBTQIA+ people in countries with restrictive legal frameworks. The research indicates that Facebook, for example, allows users to connect, including anonymously and in closed groups, to share resources in a safer environment than offline spaces. Experts consulted by the Board noted that state authorities in some African countries also use social media to monitor and curtail the activities of users posting LGBTQIA+ content. The experts reported that in Nigeria, authorities have restricted access to online content about LGBTQIA+ issues. According to Freedom House, Nigeria has introduced legislation to regulate social media platforms more broadly, which could impact LGBTQIA+ rights online. Similarly, Access Now – a digital rights organization – reports that cybercrime laws in Ghana provide authorities with the ability to issue takedown requests or content bans that could restrict public discourse around LGBTQIA+ issues , and block documentation of human rights abuses as well as vital information for the community. Non-state actors , including vigilantes, also target LGBTQIA+ people with physical assaults, mob violence, public humiliation and ostracization. For example, in August 2024, a transgender Tik-Tok user known as “Abuja Area Mama” was found dead after allegedly being beaten to death in Nigeria’s capital Abuja. LGBTQIA+ people can be targets of blackmail by other community members who discover their sexual orientation or gender identity. According to Human Rights Watch, Nigeria’s legal framework encourages violence against LGBTQIA+ people, creating an environment of impunity for those carrying out this violence. 2. User Submissions In their statement to the Board, the user who reported the content claimed the men in the video were beaten solely for being gay. The user stated that, by not removing the video, Meta is allowing its platform to become a breeding ground for hate and homophobia and that if the video was of an incident in a Western country, it would have been removed. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Coordinating Harm and Promoting Crime policy The Coordinating Harm and Promoting Crime policy aims to “prevent and disrupt offline harm and copycat behavior” by prohibiting “facilitating, organizing, promoting, or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals.” Two policy lines in the Community Standards address “outing.” The first is applied at-scale, and the second requires “additional context to enforce” (which means that the policy line is only enforced following escalation). The first policy line applies to this case. It specifically prohibits “outing: exposing the identity or locations affiliated with anyone who is alleged to be a member of an outing-risk group.” This policy line does not explain which groups are considered to be “outing-risk groups.” The second policy line, which is only enforced on escalation and was not applied in this case, also prohibits “outing: exposing the identity of a person and putting them at risk of harm” for a specific list of vulnerable groups, including LGBTQIA+ members, unveiled women, activists and prisoners of war. According to Meta’s internal guidance to content reviewers on the first policy line, identity exposure can occur through the use of personal information such as a person’s name or image. 
Meta’s internal guidelines list “outing-risk groups,” including LGBTQIA+ people in countries where affiliation with such a group may carry an associated risk to the personal safety of its members. The guidelines also provide that “outing” must be involuntary: a person cannot out themselves (for example, by declaring themselves to be a member of an outing-risk group). Violent and Graphic Content policy The Violent and Graphic Content policy provides that certain disturbing imagery of people will be placed behind a warning screen. This includes: “Imagery depicting acts of brutality (e.g., acts of violence or lethal threats on forcibly restrained subjects) committed against a person or group of people.” However, if such content is accompanied by “sadistic remarks,” the post will be removed. Sadistic remarks are defined in the public-facing rules as “commentary – such as captions or comments – expressing joy or pleasure from the suffering or humiliation of people or animals.” Bullying and Harassment policy The Bullying and Harassment Community Standard aims to prevent individuals from being targeted on Meta’s platforms through threats and different forms of malicious contact, noting that such behaviour “prevents people from feeling safe and respected.” The policy prohibits content that targets people with “celebration or mocking of [their] death or medical condition.” Meta’s internal guidelines explain that medical condition includes a serious disease, illness or injury. Hate Speech policy Meta’s Hate Speech policy rationale defines hate speech as a direct attack against people on the basis of protected characteristics, including sexual orientation. It prohibits content targeting people in written or visual form, such as: “Mocking the concept, events or victims of hate crimes even if no real person is depicted in an image.” Meta’s internal guidelines define a hate crime as a criminal act “committed with a prejudiced motive targeting people based on their [protected characteristics].” II. Meta’s Submissions After the Board selected this case, Meta found that the content violated the Coordinating Harm and Promoting Crime policy for identifying alleged members of an “outing-risk group” in a country where the affiliation with such a group may carry an associated risk to the personal safety of its members. Meta noted that the user’s caption alleged that the men were gay, and the admission from one of the men in the video was potentially coerced, demonstrating that the “outing,” by exposing their identity, was involuntary. Meta recognized that reviewers were wrong in finding that the post did not violate any Community Standards and investigated why these errors occurred. In this case, it appears reviewers only focused on the Bullying and Harassment policy related to “claims about romantic involvement, sexual orientation or gender identity” against private adults and found that the policy had not been violated, without considering other potential violations. This policy requires that the name and face of the user reporting the content match the person depicted in the content for it to be removed. Since the users reporting the content in this case were not depicted in the content, the reviewers assessed it as non-violating. Following its investigation, Meta’s human review teams took additional steps to improve accuracy in applying the Coordinating Harm and Promoting Crime policy, sending policy reminders and conducting knowledge tests on the outing of high-risk individuals policy.
In response to the Board’s questions, Meta confirmed the post also violated three other Community Standards. The post violated the Violent and Graphic Content policy as the video included sadistic remarks about a depicted act of “brutality,” with the men subjected to excessive force while in a position of being dominated. Without the sadistic remarks, the content would only have been marked as disturbing under this policy. It violated the Bullying and Harassment policy because the caption mocks both men by referring to their situation as “funny” while showing their serious injuries. Lastly, it violated the Hate Speech Community Standard, since the caption mocked victims of a hate crime, particularly the assault and battery motivated by prejudice against two men based on their perceived sexual orientation. In response to Board questions, Meta confirmed that it conducted additional investigations that led to removals of other instances of the same video. The video was added to Meta’s MMS banks to prevent future uploads of the content. Meta also informed the Board it leverages its language detection and machine translation systems to provide support for content in Igbo through agnostic review at-scale. Meta has a few Igbo speakers who provide language expertise and content review for Igbo upon escalation (not at-scale). The company requires its human reviewers to have proficiency in English and “their relevant market language.” Before confirming that the language spoken in the video was Igbo, Meta misidentified the language of the video as Swahili in its engagement with the Board. Finally, Meta explained that because the user’s caption for the video was in English, the company’s automated systems identified the language of the content as English, routing it to English-speaking human reviewers. The Board asked Meta 24 questions on enforcement of the Coordinating Harm and Promoting Crime Community Standard and other content policies, Meta’s enforcement actions in Nigeria, Meta’s detection of content languages and human review assignments, as well as governmental requests and mitigation measures the company has undertaken to prevent harm. Meta responded to all the questions. 4. Public Comments The Oversight Board received seven public comments that met the terms for submission. Four of the comments were submitted from the United States and Canada, two from Sub-Saharan Africa and one from Europe. To read public comments submitted with consent to publish, click here . The submissions covered the following themes: violence against LGBTQIA+ people in West Africa by state and non-state actors, and risks associated with the exposure of people’s sexual orientation and/or gender identity; the impact of the criminalization of same-sex relationships on LGBTQIA+ people; the impact of this criminalization and other local laws in Nigeria, and West Africa more broadly, on the work conducted by human rights organizations, advocacy groups and journalists; and the importance of Meta’s platforms, and social media more broadly, to communication, mobilization and awareness-raising among LGBTQIA+ people in Nigeria and West Africa. 5. Oversight Board Analysis The Board analyzed Meta’s decision in this case against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of this case for Meta’s broader approach to content governance. 5.1 Compliance With Meta’s Content Policies I. 
Content Rules The Board finds the content violates four Community Standards: Coordinating Harm and Promoting Crime, Hate Speech, Violent and Graphic Content, and Bullying and Harassment. The content violates the Coordinating Harm and Promoting Crime policy prohibiting identifying individuals alleged to be members of an outing-risk group. The Board agrees with Meta that the video exposes the identity of the two men against their will, as they appear to have been beaten and are visibly frightened. The admission of one to having sex with another man is therefore forced and involuntary. Additionally, the caption to the video explicitly alleges the men are gay. It also violates the Hate Speech policy prohibiting content mocking victims of hate crimes. The video captures the aftermath of violence against two men, which continues in the video, with their injuries clearly visible. One of the men explains they were beaten because they had sex with each other, with the video’s caption further demonstrating the criminal battery and assault was motivated by the men’s perceived sexual orientation. Because the post’s caption ridicules the victims of this hate crime by saying it is “funny” they are apparently married, the Board believes it meets Meta’s definition of “mocking.” The post’s caption also violates the Bullying and Harassment policy, for mocking their visible injuries (a “medical condition”) by referring to the situation as “funny.” Finally, the Board finds that the content violates the Violent and Graphic Content Community Standard too, since it includes “sadistic remarks” made about acts of brutality against the two men in a context of suffering and humiliation. In itself, and without other policy violations being present, this would warrant the application of a warning screen. However, as the caption contains “sadistic remarks” ridiculing the acts of violence and assault against the men, the policy requires content removal. II. Enforcement Action The Board is especially concerned that content depicting such severe violence and discrimination, and violating four Community Standards, was left up for about five months, and that sequences of the same video remained on the platform even after the original video was removed. After it was posted in December 2023, the video was reported 112 times by 92 different users, by which time this single instance had amassed millions of views and thousands of reactions. Three human moderators independently reviewed the reports and subsequent appeals. All three concluded there were no violations, seemingly because they were not reviewing the posts against all Community Standards. Additionally, these reviewers may not have been familiar with the Igbo language or able to perform an agnostic review, given that Meta’s automated systems wrongly identified the language of the content as English and routed it to English-speaking reviewers. 5.2 Compliance With Meta’s Human Rights Responsibilities The Board finds that leaving the content on the platform was not consistent with Meta’s human rights responsibilities in light of the UN Guiding Principles on Business and Human Rights (UNGPs). In 2021, Meta announced its Corporate Human Rights Policy , in which the company reaffirmed its commitment to respecting human rights in accordance with the UNGPs. 
Under Guiding Principle 13, companies should “avoid causing or contributing to adverse human rights impacts through their own activities” and “prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services” even if they have not contributed to those impacts. In interpreting the UNGPs, the Board has drawn from the UN Special Rapporteur on Freedom of Expression and Opinion’s recommendation that social media companies should consider the global freedom of expression standards set forth in the International Covenant on Civil and Political Rights (ICCPR) Articles 19 and 20, (see paras. 44-48 of the 2018 report of the UN Special Rapporteur on freedom of expression, A/HRC/38/35 and para. 41 of the 2019 report of the UN Special Rapporteur on freedom of expression, A/74/486 ). Article 20, para. 2 of the ICCPR provides that “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence is to be prohibited by law.” This prohibition is “fully compatible with the right to freedom of expression as contained in article 19 [ICCPR], the exercise of which carries with it special duties and responsibilities,” ( General Comment No. 11 , (1983), para. 2). The Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence is an important road map for interpreting Article 20, para 2 ( A/HRC/22/17/Add.4, 2013, para. 29). It sets out six relevant factors for states to determine whether to prohibit speech: “Context of statement; speaker’s status; intent to incite the audience against the target group; content of statement; extent of dissemination, and likelihood of harm, including imminence.” The Board has been using these factors to determine the necessity and proportionality of speech restrictions by Meta. In this case, the Board is considering the same factors when assessing whether Meta should remove the content given its human rights responsibilities. The Board finds that Meta’s original decision to leave the content on the platform created a risk of immediate harm to the men in the video, thereby warranting removal. In countries like Nigeria, where societal attitudes and the criminalization of same-sex relationships fuel homophobic violence, LGBTQIA+ people who are outed online may be subjected to offline violence and discrimination. Meta’s failures to take timely action on this video, allowing it to be shared so extensively, likely contributed to that hostile environment, creating risks for others (see public comment by Human Rights Watch, PC-29659). The Board also notes that the post amassed a great number of views (over 3.6 million), which increased the odds of someone identifying the men depicted in the video and of the post instigating users to harm LGBTQIA+ people, more broadly. Moreover, the Board highlights the sadistic remarks accompanying the video, which indicate the user’s intention of exposing and humiliating the men, inciting others to discriminate and harm them. The great number of reactions (about 9,000), comments (about 8,000) and shares (about 5,000) indicate that the user managed to engage their audience, further increasing the likelihood of harm, both to the men depicted in the video and to LGBTQIA+ people in Nigeria. Speech restrictions based on Article 20, para. 2 ICCPR should also meet ICCPR Article 19’s three-part test ( General Comment No. 34 , para. 50). 
The analysis that follows finds that removal of the post was consistent with Article 19. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including political expression and discussion of human rights, as well as expression that may be considered “deeply offensive,” (General Comment No. 34, (2011), para. 11, see also para. 17 of the 2019 report of the UN Special Rapporteur on freedom of expression, A/74/486 ). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” ( A/74/486 , para. 41). While the Board notes that multiple content policies are applicable to this case, its three-part analysis is focused on Meta’s Coordinating Harm and Promoting Crime Community Standard, given this is the policy under which the company eventually removed the content. I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” (Ibid). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific ( A/HRC/38/35 , para. 46). People using Meta’s platforms should be able to access and understand the rules and content reviewers should have clear guidance regarding their enforcement. The Board finds that Meta’s prohibition on “outing” individuals by exposing the identity or locations affiliated with anyone who is alleged to be a member of an outing-risk group is not sufficiently clear and accessible to users. The Coordinating Harm and Promoting Crime Community Standard does not offer sufficient explanation for users to understand and distinguish between the two similar “outing” rules. The Board is particularly concerned that the Community Standard does not clearly explain that the at-scale rule prohibiting “outing” applies to identifying people as LGBTQIA+ in countries where the local context indicates higher risks of offline harm. Currently this information is only available in internal guidance to reviewers, making it impossible for users to know that persons alleged to belong to “at-risk” outing groups include LGBTQIA+ people in specific countries. 
The Board is concerned that the ambiguity surrounding Meta’s policies on content outing LGBTQIA+ individuals may result in user confusion and prevent them from complying with the platform’s rules. It also creates obstacles to people targeted by abusive content who are seeking the removal of such posts. Meta should, therefore, update its Coordinating Harm and Promoting Crime policy line that prohibits “outing,” and which the company enforces at-scale, to include illustrative examples of outing-risk groups, including LGBTQIA+ people in specific countries. II. Legitimate Aim Any restriction on freedom of expression should also pursue one or more of the legitimate aims listed in the ICCPR, which includes protecting the rights of others (Article 19, para. 3, ICCPR). The Coordinating Harm and Promoting Crime policy serves the legitimate aim of “prevent[ing] and disrupt[ing] offline harm,” including by protecting the rights of LGBTQIA+ people and those perceived as such in countries around the world where “outing” creates safety risks. Those rights include the right to non-discrimination (Articles 2 and 26, ICCPR), including in the exercise of their rights to freedom of expression and assembly (Articles 19 and 21, ICCPR), to privacy (Article 17, ICCPR), as well as to life (Articles 6, ICCPR), and liberty and security (Article 9 ICCPR). III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The Board finds that Meta’s eventual decision to remove the content from the platform was necessary and proportionate. Research commissioned by the Board indicates that LGBTQIA+ people in Nigeria are continuously exposed to violence, arbitrary arrests, harassment, blackmail and discrimination, and risks of legal sanctions. The content itself depicts the aftermath of what appears to be corporal punishment for an alleged same-sex relationship. Under these circumstances, the Board determines that accurate enforcement of policies meant to protect LGBTQIA+ people is critical, especially in countries criminalizing same-sex relationships. Given these risks, the Board finds that content removal is the least intrusive means to provide protection to persons “outed” in this context. The damage from “outing” is immediate and impossible to undo; such measures can only be effective if implemented in a timely way. The Board is concerned that Meta was not able to swiftly identify and remove clearly harmful content that involuntarily exposes the identities of persons alleged to be gay, which in turn perpetuates an atmosphere of fear for LGBTQIA+ people and fosters an environment where the targeting of marginalized groups is further accepted and normalized (see public comment by GLAAD, PC-29655). Even though the content violated four different Community Standards, was reported 112 times and reviewed by three different moderators, it was only after the Board selected the case for review that Meta removed the post and ensured similar content containing the video was taken down. The Board is particularly alarmed by the virality of the video, which was viewed over 3.6 million times, received about 9,000 reactions and 8,000 comments, and was shared about 5,000 times in a five-month period. 
The Board understands that enforcement errors are to be expected in content moderation at scale; however, Meta’s explanations in this case reveal systemic failings. While Meta has taken additional steps to improve accuracy when enforcing the Coordinating Harm and Promoting Crime policy, and has sought to prevent similar errors through additional training, it did not provide details on measures implemented to ensure human reviewers assess content against all of Meta's policies. This is particularly relevant in this case, in which the content was reviewed by three moderators who made the same mistake, failing to assess the post against other relevant Community Standards. This indicates that Meta’s enforcement systems were inadequate. The Board finds Meta’s enforcement error particularly alarming given the context in Nigeria, which criminalizes same-sex relationships. In order to improve implementation of its policies, and in addition to the measures the company has already deployed, Meta should conduct an assessment of the enforcement accuracy of the Coordinating Harm and Promoting Crime rule that prohibits content outing individuals by exposing their identity or locations. Based on the results of this assessment, Meta then needs to improve the accuracy of the policy’s enforcement, including through updated training for content reviewers, given there should be no tolerance for this type of content. The Board also examined Meta’s enforcement practices in multilingual regions. In its exchanges with the Board, Meta initially misidentified the language of the video as Swahili, when it was actually Igbo. In response to a question from the Board, Meta noted that Igbo is not a language supported for content moderation at-scale for the Nigerian market, even though the company provides support for moderation of content in Igbo through agnostic review. According to Meta, the language is not supported because the demand for content moderation in Igbo is low. However, Meta informed the Board that when the content is in a language unsupported by the company’s at-scale reviewers, such as Igbo, it is routed to language-agnostic reviewers (reviewers who work with content in multiple languages), who assess the content based on translations provided by Meta’s machine translation systems. Meta also informed the Board that it has a few Igbo speakers who provide language expertise and content review for Igbo for the company, although not at-scale. The Board acknowledges that Meta has in place mechanisms to allow for moderation in unsupported languages, such as language-agnostic review and a few specialists with Igbo expertise. However, the Board is concerned that by not engaging human reviewers who speak Igbo in the at-scale moderation of content in this language, which is spoken by tens of millions of people in Nigeria and globally, the company reduces its ability to effectively moderate content and mitigate potential risks. This could result in potential harm to user rights and safety, such as that experienced by the men shown in the video in this case. In light of its human rights commitments, Meta should reassess its criteria for selecting languages for support by the company’s at-scale reviewers in order to be in a better position to prevent and mitigate harms associated with the use of its platforms. Furthermore, Meta informed the Board that its automated systems detected the language as English before routing the content for human review.
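For illustration only, the routing flow described above (detect a language, send supported languages to dedicated at-scale reviewers, and send unsupported languages to language-agnostic reviewers working from machine translations) can be sketched as a minimal, hypothetical Python example. The function, queue names and language sets below are assumptions made for illustration and are not Meta's actual systems; the sketch also reproduces the failure mode in this case, where detection based on an English caption alone routes an Igbo-language video to English-speaking reviewers.

# Illustrative sketch only: hypothetical routing logic loosely modelled on the
# behaviour described in this decision; names and language sets are assumptions.
from dataclasses import dataclass

SUPPORTED_AT_SCALE = {"en", "fr", "ar", "sw"}  # assumed languages with dedicated at-scale reviewers

@dataclass
class Post:
    caption_lang: str  # language detected from the text caption
    media_lang: str    # language actually spoken in an attached video

def route_for_review(post: Post, detect_from_media: bool = False) -> str:
    """Return a hypothetical review queue for a post."""
    detected = post.media_lang if detect_from_media else post.caption_lang
    if detected in SUPPORTED_AT_SCALE:
        # Supported language: send to reviewers working in that language.
        return f"at-scale review ({detected})"
    # Unsupported language: machine-translate and send to language-agnostic review.
    return "language-agnostic review (machine translation)"

post = Post(caption_lang="en", media_lang="ig")
print(route_for_review(post))                          # at-scale review (en) - the misrouting
print(route_for_review(post, detect_from_media=True))  # language-agnostic review (machine translation)

In this toy example, only detection that considers the media track, rather than the caption alone, sends the post to the translation-backed, language-agnostic queue.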
According to Meta, this happened because the user’s caption for the video was in English. While the caption was in English, the video is entirely in Igbo. Meta acknowledged that it wrongly identified the language of the content. The Board is concerned that bilingual content is being wrongly routed, potentially causing inaccurate enforcement. In order to increase the efficiency and accuracy of content review in unsupported languages, Meta should make sure its language detection systems can precisely identify content in unsupported languages and provide accurate translations of that content to language-agnostic reviewers. Meta should also ensure that this type of content is always routed to language-agnostic reviewers, even if it contains a mix of supported and unsupported languages. The company should also provide reviewers with the option to re-route content containing an unsupported language to agnostic review. The Board is very concerned that even after Meta removed the content in this case, the Board’s research unearthed further instances of the same video dating back to December 2023, including in Facebook Groups, which had not been removed. This indicates that Meta must take much more seriously its due diligence responsibilities to respect human rights under the UNGPs. The Board welcomes the fact that this video was added to a MMS bank to prevent further uploads, after the Board flagged to Meta the remaining sequences of the video on Facebook. Given the severity of human rights harms that can result from Meta’s platforms being used to distribute videos of this kind, Meta should make full use of automated enforcement to proactively remove similar violating content, in addition to using MMS banks to prevent new uploads. 6. The Oversight Board’s Decision The Oversight Board overturns Meta’s original decision to leave up the content. 7. Recommendations Content Policy 1. Meta should update the Coordinating Harm and Promoting Crime policy’s at-scale prohibition on “outing” to include illustrative examples of “outing-risk groups,” including LGBTQIA+ people in countries where same-sex relations are forbidden and/or such disclosures create significant safety risks. The Board will consider this recommendation implemented when the public-facing language of the Coordinating Harm and Promoting Crime policy reflects the proposed change. Enforcement 2. To improve implementation of its policy, Meta should conduct an assessment of the enforcement accuracy of the at-scale prohibition on exposing the identity or locations of anyone alleged to be a member of an outing-risk group, under the Coordinating Harm and Promoting Crime Community Standard. The Board will consider this recommendation implemented when Meta publicly shares the results of the assessment and explains how the company intends to improve enforcement accuracy of this policy. 3. To increase the efficiency and accuracy of content review in unsupported languages, Meta should ensure its language detection systems precisely identify content in unsupported languages and provide accurate translations of that content to language-agnostic reviewers. The Board will consider this recommendation implemented when Meta shares data signaling increased accuracy in the routing and review of content in unsupported languages. 4. Meta should ensure that content containing an unsupported language, even if mixed with supported languages, is routed to agnostic review. 
This includes providing reviewers with the option to re-route content containing an unsupported language to agnostic review. The Board will consider this recommendation implemented when Meta provides the Board with data on the successful implementation of this routing option for reviewers. *Procedural Note: Return to Case Decisions and Policy Advisory Opinions" fb-p93jpx02,Shared Al Jazeera post,https://www.oversightboard.com/decision/fb-p93jpx02/,"September 14, 2021",2021,,"Journalism, News events, War and conflict","Dangerous individuals and organizations",Overturned,"Egypt, Israel, Palestinian Territories","The Oversight Board agrees that Facebook was correct to reverse its original decision to remove content on Facebook that shared a news post about a threat of violence from the Izz al-Din al-Qassam Brigades, the military wing of the Palestinian group Hamas.",33620,5178,"Overturned September 14, 2021 The Oversight Board agrees that Facebook was correct to reverse its original decision to remove content on Facebook that shared a news post about a threat of violence from the Izz al-Din al-Qassam Brigades, the military wing of the Palestinian group Hamas. Standard Topic Journalism, News events, War and conflict Community Standard Dangerous individuals and organizations Location Egypt, Israel, Palestinian Territories Platform Facebook Public Comments 2021-009-FB-UA Hebrew translation of decision Please note that this decision is available in both Arabic (via the ‘language’ tab accessed through the menu at the top of this screen) and Hebrew (via this link). To read the decision in full, click here. The Oversight Board agrees that Facebook was correct to reverse its original decision to remove content on Facebook that shared a news post about a threat of violence from the Izz al-Din al-Qassam Brigades, the military wing of the Palestinian group Hamas. Facebook originally removed the content under the Dangerous Individuals and Organizations Community Standard, and restored it after the Board selected this case for review. The Board concludes that removing the content did not reduce offline harm and restricted freedom of expression on an issue of public interest. About the case On May 10, 2021, a Facebook user in Egypt with more than 15,000 followers shared a post by the verified Al Jazeera Arabic page consisting of text in Arabic and a photo. The photo portrays two men in camouflage fatigues with faces covered, wearing headbands with the insignia of the Al-Qassam Brigades. The text states ""The resistance leadership in the common room gives the occupation a respite until 18:00 to withdraw its soldiers from Al-Aqsa Mosque and Sheikh Jarrah neighborhood otherwise he who warns is excused. Abu Ubaida – Al-Qassam Brigades military spokesman."" The user shared Al Jazeera’s post and added a single-word caption “Ooh” in Arabic. The Al-Qassam Brigades and their spokesperson Abu Ubaida are both designated as dangerous under Facebook’s Dangerous Organizations and Individuals Community Standard. Facebook removed the content for violating this policy, and the user appealed the case to the Board. As a result of the Board selecting this case, Facebook concluded it had removed the content in error and restored it. 
Key findings After the Board selected this case, Facebook found that the content did not violate its rules on Dangerous Individuals and Organizations, as it did not contain praise, support or representation of the Al-Qassam Brigades or Hamas. Facebook was unable to explain why two human reviewers originally judged the content to violate this policy, noting that moderators are not required to record their reasoning for individual content decisions. The Board notes that the content consists of republication of a news item from a legitimate news outlet on a matter of urgent public concern. The original Al Jazeera post it shared was never removed and the Al-Qassam Brigades’ threat of violence was widely reported elsewhere. In general, individuals have as much right to repost news stories as media organizations have to publish them in the first place. The user in this case explained that their purpose was to update their followers on a matter of current importance, and their addition of the expression “Ooh” appears to be neutral. As such, the Board finds that removing the user’s content did not materially reduce offline harm. Reacting to allegations that Facebook has censored Palestinian content due to Israeli government demands, the Board asked Facebook questions including whether the company had received official and unofficial requests from Israel to remove content related to the April-May conflict. Facebook responded that it had not received a valid legal request from a government authority related to the user’s content in this case, but declined to provide the remaining information requested by the Board. Public comments submitted for this case included allegations that Facebook has disproportionately removed or demoted content from Palestinian users and content in Arabic, especially in comparison to its treatment of posts threatening anti-Arab or anti-Palestinian violence within Israel. At the same time, Facebook has been criticized for not doing enough to remove content that incites violence against Israeli civilians. The Board recommends an independent review of these important issues, as well as greater transparency with regard to its treatment of government requests. The Oversight Board’s decision The Oversight Board affirms Facebook’s decision to restore the content, noting that its original decision to remove the content was not warranted. In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board agrees that Facebook was correct to reverse its original decision to remove content on Facebook that shared a news post about a threat of violence from the Izz al-Din al-Qassam Brigades, the military wing of the Palestinian group Hamas, made on May 10, 2021. The Al-Qassam Brigades are designated as a terrorist organization by many states, either as part of Hamas or on their own account. After the user appealed and the Board selected the case for review, Facebook concluded that the content was removed in error and restored the post to the platform. The Dangerous Individuals and Organizations policy states that sharing the official communications of a dangerous organization designated by Facebook is a form of substantive support. The policy, however, includes news reporting and neutral discussion exceptions. 
The company applied the news reporting exception to Al Jazeera’s post and erroneously failed to apply the neutral discussion exception, which it later corrected. The Board concludes that removing the content in this case was not necessary as it did not reduce offline harm and instead resulted in an unjustified restriction on freedom of expression on a public interest issue. 2. Case description On May 10, a Facebook user in Egypt (the user) with more than 15,000 followers shared a post by the verified Al Jazeera Arabic page consisting of text in Arabic and a photo. The photo portrays two men in camouflage fatigues with faces covered, wearing headbands with the insignia of the Al-Qassam Brigades, a Palestinian armed group and the militant wing of Hamas. The Board notes that the Al-Qassam Brigades have been accused of committing war crimes (Report of the UN Independent Commission of Inquiry of the 2014 Gaza Conflict, A/HRC/29/CRP.4, and Human Rights Watch, Gaza: Apparent War Crimes During May Fighting (2021)). The text in the photo states: ""The resistance leadership in the common room gives the occupation a respite until 18:00 to withdraw its soldiers from Al-Aqsa Mosque and Sheikh Jarrah neighborhood in Jerusalem, otherwise he who warns is excused. Abu Ubaida – Al-Qassam Brigades military spokesman."" Al Jazeera’s caption read: ""'He Who Warns is Excused'. Al-Qassam Brigades military spokesman threatens the occupation forces if they do not withdraw from Al-Aqsa Mosque."" The user shared the Al Jazeera post and added a single-word caption “Ooh” in Arabic. The Al-Qassam Brigades and their spokesperson Abu Ubaida are both designated as dangerous under Facebook’s Dangerous Organizations and Individuals Community Standard. On the same day, a different user in Egypt reported the post, selecting “terrorism” from the fixed list of reasons Facebook gives people who report content. The content was assessed by an Arabic-speaking moderator in North Africa who removed the post for violating the Dangerous Individuals and Organizations policy. The user appealed and the content was reviewed by a different reviewer in Southeast Asia who did not speak Arabic but had access to an automated translation of the content. Facebook explained that this was due to a routing error that it is working on resolving. The second reviewer also found a breach of the Dangerous Individuals and Organizations policy and the user received a notification explaining that the initial decision was upheld by a second review. Due to the violation, the user received a three-day read-only restriction on their account. Facebook also restricted the user’s ability to broadcast livestreamed content and use advertising products on the platform for 30 days. The user then appealed to the Oversight Board. As a consequence of the Board selecting the case for review, Facebook determined that the content was removed in error and restored it. Facebook later confirmed to the Board that the original Al Jazeera post remained on the platform and had never been taken down. The content in this case relates to the May 2021 armed conflict between Israeli forces and Palestinian militant groups in Israel and Gaza, a Palestinian territory governed by Hamas. The conflict broke out after weeks of rising tensions and protests in Jerusalem tied to a dispute over ownership of homes in the Sheikh Jarrah neighborhood of East Jerusalem and to an Israeli Supreme Court ruling concerning the planned expulsion of four Palestinian families from the disputed properties. 
These tensions had escalated into a series of sectarian assaults by both Arab and Jewish mobs. On May 10, Israeli forces raided the Al-Aqsa Mosque, injuring hundreds of worshippers during Ramadan prayers (Communication from UN Independent Experts to the Government of Israel, UA ISR 3.2021 ). After this raid the Al-Qassam Brigades issued an ultimatum, demanding that Israeli soldiers withdraw from both the Mosque and Sheikh Jarrah by 6pm. After the deadline expired, Al-Qassam and other Palestinian militant groups in Gaza launched rockets at the civilian center of Jerusalem, which began 11 days of armed conflict. 3. Authority and scope The Board has the power to review Facebook's decision following an appeal from the user whose post was removed (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). The Board may uphold or reverse that decision (Charter Article 3, Section 5). In line with case decision 2020-004-IG-UA, Facebook reversing a decision that a user appealed to the Board does not exclude the case from review. The Board's decisions are binding and may include policy advisory statements with recommendations. These recommendations are non-binding, but Facebook must respond to them (Charter Article 3, Section 4). 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards: The Community Standard on Dangerous Individuals and Organizations states that Facebook does “not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook.” Facebook carries out its own process of designating entities as dangerous under this policy, with its designations often based on national terrorist lists. On June 22, Facebook updated the policy to divide these designations into three tiers. The update explains that the three tiers “indicate the level of content enforcement, with Tier 1 resulting in the most extensive enforcement because we believe these entities have the most direct ties to offline harm.” Tier 1 designations are focused on “entities that engage in serious offline harms” such as terrorist groups and result in the highest level of content enforcement. Facebook removes praise, substantive support, and representation of Tier 1 entities as well as their leaders, founders, or prominent members. II. Facebook values: The value of ""Voice"" is described as ""paramount"": The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits ""Voice"" in the service of four values . “Safety” is the most relevant in this case: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. III. Human Rights Standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In March 2021, Facebook announced its Corporate Human Rights Policy , where it recommitted to respecting human rights in accordance with the UNGPs. The Board's analysis in this case was informed by the following human rights standards: 5. 
User statement In their appeal to the Board, the user explained that they shared the Al Jazeera post to update their followers on the developing crisis and that it was an important issue that more people should be aware of. The user stressed that their post simply shared content from an Al Jazeera page and that their caption was simply “ooh.” 6. Explanation of Facebook’s decision In response to Board inquiry, Facebook stated that it was unable to explain why the two human reviewers judged the content to violate the Dangerous Individuals and Organizations policy, noting that moderators are not required to record their reasoning for individual content decisions. The company clarified that “in this case, the content reviewers had access to the entire piece of content, which includes the caption and image of the original root post and the additional caption the content creator placed on the shared version of the post.” The company added that “generally, content reviewers are trained to look at the entire piece of content.” As a consequence of the Board selecting this case for review, Facebook reexamined its decision and found that the content did not contain praise, substantive support, or representation of the Al-Qassam Brigades or Hamas, their activities, or their members. Facebook explained that it reversed its decision since Al Jazeera’s post was non-violating and the user shared it using a neutral caption. According to the Dangerous Individuals and Organizations policy, channeling information or resources, including official communications, on behalf of a designated entity or event is a form of prohibited substantive support for a dangerous organization and entity. However, the policy specifically provides an exception for content published as part of news reporting, though it does not define what constitutes news reporting. The policy also provides a neutral discussion exception. The original Al Jazeera post appeared, and still appears, on the Al Jazeera Arabic Facebook page. It was never removed by Facebook. Facebook explained that Al Jazeera’s page is subject to the cross-check system , an additional layer of review which Facebook applies to some high profile accounts to minimize the risk of errors in enforcement. However, cross-checking is not performed on content that is shared by a third party, unless that third party is also a high profile account subject to cross-check. Thus, in this case, although the root post by Al Jazeera was subject to cross-checking, the post by the user in Egypt was not. The company stated that its restoration of the post is consistent with its responsibility to respect the right to seek, receive, and impart information. Facebook concluded that the user’s caption was neutral and did not fit within the definitions of praise, substantive support, or representation. Reacting to allegations that Facebook has censored Palestinian content due to the Israeli government’s demands, the Board asked Facebook: Has Facebook received official and unofficial requests from Israel to take down content related to the April-May conflict? How many requests has Facebook received? How many has it complied with? Did any requests concern information posted by Al Jazeera Arabic or its journalists? Facebook responded by saying: ""Facebook has not received a valid legal request from a government authority related to the content the user posted in this case. Facebook declines to provide the remaining requested information. 
See Oversight Board Bylaws, Section 2.2.2."" Under the Oversight Board Bylaws, Section 2.2.2, Facebook may “decline such requests where Facebook determines that the information is not reasonably required for decision-making in accordance with the intent of the charter, is not technically feasible to provide, is covered by attorney/client privilege, and/or cannot or should not be provided because of legal, privacy, safety, or data protection restrictions or concerns.” The company did not indicate the specific reasons for the refusal under the Bylaws. 7. Third-party submissions The Oversight Board received 26 public comments related to this case. Fifteen were from the United States and Canada, seven from Europe, three from the Middle East and North Africa, and one from Latin America and the Caribbean. The submissions covered themes including the importance of social media to Palestinians, concerns about Facebook’s potential bias against and over-moderation of Palestinian or pro-Palestinian content, concerns about the alleged opaque relationship between Israel and Facebook, and concerns that messages from designated terrorist organizations were allowed on the platform. Additionally, the Board received several public comments arguing that the reporting of such threats may also warn of attacks by armed groups, thus allowing those targeted to take measures to protect themselves. To read public comments submitted for this case, please click here. 8. Oversight Board analysis 8.1 Compliance with Community Standards The Board concludes that Facebook’s original decision to remove the content was not in line with the company’s Community Standards. Facebook’s reversal of the decision following the Board’s identification of this case was therefore correct. According to Facebook, Al Jazeera’s root post, which is not the subject of this appeal, did not violate the Community Standards and was never removed from the platform. While sharing official communications from a designated entity is a prohibited form of substantive support, the Dangerous Individuals and Organizations policy allows such content to be posted for condemnation, neutral discussion, or news reporting purposes. Facebook updated the Dangerous Individuals and Organizations policy on June 22, 2021, making public previously confidential definitions of “substantive support,” “praise,” and “representation.” The user shared Al Jazeera’s post with a single-word caption “ooh.” Facebook concluded that the term was a neutral form of expression. Arabic-language experts consulted by the Board explained that the meaning of “ooh” varies depending on its usage, with neutral exclamation being one interpretation. In the updated Dangerous Individuals and Organizations policy, Facebook requires “people to clearly indicate their intent” to neutrally discuss dangerous individuals or organizations, and “if the intention is unclear, we may remove content” (emphasis added). This is a change from a previous policy indicated in Facebook’s responses to the Board’s question in case decision 2020-005-FB-UA. There, Facebook explained that it “treated content that quotes, or attributes quotes (regardless of their accuracy), to a designated dangerous individual as an expression of support for that individual unless the user provides additional context to make their intent explicit” (emphasis added). 
It follows that prior to the June 22 policy update of the public facing community standards, content moderators had less discretion to retain content with unclear intent while users were not aware of the importance of making their intent clear. It is understandable why content moderators, operating under time pressure, might treat the post as violating, especially under the version of the Community Standard in effect at the time. The post conveys a direct threat from a spokesman for a designated dangerous organization, and the user’s addition of the expression “ooh” did not make “explicit” the user’s intent to engage in neutral discussion. However, the salient fact is that this was a republication of a news item from a legitimate news outlet on a matter of urgent public concern. The root post, from Al Jazeera, has never been found to be violating and has remained on its page throughout. In general, individuals have no less right to repost news than news media organizations have to publish it in the first place. Although in some contexts the republication of material from a news source might be violating, in this case the user has explained that their purpose was to update their followers on a matter of current importance, and Facebook’s conclusion (on reexamination) that the user’s addition of the expression “ooh” was most likely neutral is confirmed by the Board’s language experts. Under the new version of the relevant Community Standard, announced on June 22, the post was not clearly violating, and Facebook did not err in restoring it. 8.2 Compliance with Facebook’s values The Board concludes that the decision to restore this content complies with Facebook’s value of “Voice,” and is not inconsistent with its value of “Safety.” The Board is aware that Facebook’s values play a role in the company’s development of policies and are not used by moderators to decide whether content is permissible. Facebook states that the value of “Voice” is “paramount.” In the Board's view, this is especially true in the context of a conflict where the ability of many people, including Palestinians and their supporters, to express themselves is highly restricted. As numerous public comments submitted to the Board stress, Facebook and other social media are the primary means that Palestinians have to communicate news and opinion, and to express themselves freely. There are severe limitations on the freedom of expression in territories governed by the Palestinian Authority and Hamas (A/75/532, para. 25). Additionally, the Israeli government has been accused of unduly restricting expression in the name of national security (Working Group on the Universal Periodic Review, A/HRC/WG.6/29/ISR/2, para. 36-37; Oxford Handbook on the Israeli Constitution, Freedom of Expression in Israel: Origins, Evolution, Revolution and Regression , (2021)). Furthermore, for people in the region more broadly, the ability to receive and share news about these events is a crucial aspect of “Voice.” The Board only selects a limited number of appeals to review, but notes the removal in this case was among several appeals that concerned content relating to the conflict. On the other hand, the value of “Safety” is also a vital concern in Israel and the Occupied Palestinian Territories, and other countries in the region. 
The user shared a post from a media organization that contained an explicit threat of violence from the Al-Qassam Brigades, implicating the value of “Safety.” However, the content that the user shared was broadly available around the world on and off Facebook. The root post, a news media report of the threat, was not removed from Facebook, and it still remains on Al Jazeera’s page. It also was widely reported elsewhere. The Board finds that sharing the post did not pose any additional threat to the value of “Safety.” 8.3 Compliance with Facebook’s human rights responsibilities Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR states that everyone has the right to freedom of expression, which includes freedom to seek, receive and impart information. The enjoyment of this right is intrinsically tied to access to free, uncensored and unhindered press or other media (General Comment 34, para. 13). The Board agrees that the media “plays a crucial role in informing the public about acts of terrorism and its capacity to operate should not be unduly restricted” (General Comment 34, para. 46). The Board is also aware that terrorist groups may exploit the media’s duty and interest to report on their activities. However, counter-terrorism and counter-extremism efforts should not be used to repress media freedom (A/HRC/RES/45/18). Indeed, the media has an essential role to play during the first moments of a terrorist act, as it is “often the first source of information for citizens, well before the public authorities are able to take up the communication” (UNESCO, Handbook on Terrorism and the Media, p. 27, 2017). Social media contributes to this mission by supporting the dissemination of information about threats or acts of terrorism published in traditional media and non-media sources. While the right to freedom of expression is fundamental, it is not absolute. It may be restricted, but restrictions should meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). I. Legality (clarity and accessibility of the rules) To meet the test of legality, a rule must be (a) formulated with sufficient precision so that individuals can regulate their conduct accordingly, and (b) made accessible to the public. A lack of specificity can lead to subjective interpretation of rules and their arbitrary enforcement (General Comment No. 34, para. 25). The Board has criticized the vagueness of the Dangerous Individuals and Organizations Community Standard in several cases and called on the company to define praise, support and representation. Facebook has since revised the policy, releasing an update on June 22. It defined or gave examples of some key terms in the policy, organized its rules around three tiers of enforcement according to the connection between a designated entity and offline harm, and further stressed the importance of users making their intent clear when posting content related to dangerous individuals or organizations. The policy, however, remains unclear on how users can make their intentions clear and does not provide examples of the ‘news reporting,’ ‘neutral discussion,’ and ‘condemnation’ exceptions. Moreover, the updated policy seemingly increases Facebook’s discretion in cases where the user’s intent is unclear, now providing that Facebook “may” remove the content without offering any guidance to users about the criteria that will inform the use of that discretion. 
The Board believes that criteria for assessing these exceptions, including illustrative examples, would help users understand what posts are permissible. Additional examples will also give clearer guidance to reviewers. In addition, the Board is concerned that this revision to the Community Standards was not translated into languages other than US English for close to two months, thus limiting access to the rules to users outside the US English market. Facebook explained that it applies changes to policies globally, even when translations are delayed. The Board is concerned that these translation delays leave the rules inaccessible for too many users for too long. This is not acceptable for a company with Facebook's resources. II. Legitimate aim Restrictions on freedom of expression must pursue a legitimate aim, which includes the protection of national security, public order and the rights of others, among other aims. The Dangerous Individuals and Organizations policy seeks to prevent and disrupt real-world harm with the legitimate aim of protecting the rights of others, which in this case include the right to life and the security of persons. III. Necessity and proportionality Restrictions must be necessary and proportionate to achieve their legitimate aim, in this case protecting the rights of others. Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). In resolving questions of necessity and proportionality, context plays a key role. The Board further stresses that Facebook has a responsibility to identify, prevent, mitigate and account for adverse human rights impacts (UNGPs, Principle 17). This due diligence responsibility is heightened in conflict-affected regions (A/75/212, para. 13). The Board notes that Facebook has taken some steps to ensure that content is not removed unnecessarily and disproportionately, as illustrated by the news reporting exception and its commitment to allow for the discussion of human rights concerns discussed in case decision 2021-006-IG-UA. The Board concludes that removal of the content in this case was not necessary. The Board recognizes that journalists face a challenge in balancing the potential harm of reporting on the statements of a terrorist organization against the need to keep the public informed on evolving and dangerous situations. Some Board members expressed concern that the reporting in this instance provided little or no editorial context for Al-Qassam’s statements, and thus could be seen as a conduit for Al-Qassam’s threat of violence. However, the content posted by Al Jazeera was also widely reported by other outlets and widely available globally, accompanied by further context as developments became available. The Board thus concludes that removal of this user’s republication of the Al Jazeera report did not materially reduce the terroristic impact the group presumably intended to induce, but instead affected the ability of this user, in a nearby country, to communicate the importance of these events to their readers and followers. As already noted in connection with the value of “Voice,” in reviewing the necessity of the removal, the Board considers significant the broader media and information environment in this region. 
The Israeli government, the Palestinian Authority, and Hamas unduly restrict free speech, which negatively impacts Palestinian and other voices. Restrictions on freedom of expression must be non-discriminatory, including on the basis of nationality, ethnicity, religion or belief, or political or other opinion (Article 2, para. 1, and Article 26, ICCPR). Discriminatory enforcement of the Community Standards violates this fundamental aspect of freedom of expression. The Board has received public comments and reviewed publicly available information alleging that Facebook has disproportionately removed or demoted content from Palestinian users and content in the Arabic language, especially in comparison to its treatment of posts threatening or inciting anti-Arab or anti-Palestinian violence within Israel. At the same time, Facebook has been criticized for not doing enough to remove content that incites violence against Israeli civilians. Below, the Board recommends an independent review of these important issues. 9. Oversight Board decision The Oversight Board affirms Facebook's decision to restore the content, agreeing that the original decision to take down the post was in error. 10. Policy advisory statement Content Policy To clarify its rules to users, Facebook should: 1. Add criteria and illustrative examples to its Dangerous Individuals and Organizations policy to increase understanding of the exceptions for neutral discussion, condemnation and news reporting. 2. Ensure swift translation of updates to the Community Standards into all available languages. Transparency To address public concerns regarding potential bias in content moderation, including in respect of actual or perceived government involvement, Facebook should: 3. Engage an independent entity not associated with either side of the Israeli-Palestinian conflict to conduct a thorough examination to determine whether Facebook’s content moderation in Arabic and Hebrew, including its use of automation, has been applied without bias. This examination should review not only the treatment of Palestinian or pro-Palestinian content, but also content that incites violence against any potential targets, no matter their nationality, ethnicity, religion or belief, or political opinion. The review should look at content posted by Facebook users located in and outside of Israel and the Occupied Palestinian Territories. The report and its conclusions should be made public. 4. Formalize a transparent process for how it receives and responds to all government requests for content removal, and ensure that they are included in transparency reporting. The transparency reporting should distinguish government requests that led to removals for violations of the Community Standards from requests that led to removal or geo-blocking for violating local law, in addition to requests that led to no action. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. 
The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" fb-p9pr9rsa,Swedish journalist reporting sexual violence against minors,https://www.oversightboard.com/decision/fb-p9pr9rsa/,"February 1, 2022",2022,,"Children / Children's rights, Safety","Adult nudity and sexual activity",Overturned,Sweden,The Oversight Board has overturned Meta's decision to remove a post describing incidents of sexual violence against two minors.,30928,4863,"Overturned February 1, 2022 The Oversight Board has overturned Meta's decision to remove a post describing incidents of sexual violence against two minors. Standard Topic Children / Children's rights, Safety Community Standard Adult nudity and sexual activity Location Sweden Platform Facebook Public Comments 2021-016-FB-FBR Note: Please be aware before reading that the following decision includes potentially sensitive material relating to content about sexual violence against minors. The Oversight Board has overturned Meta’s decision to remove a post describing incidents of sexual violence against two minors. The Board found that the post did not violate the Community Standard on Child Sexual Exploitation, Abuse and Nudity. The broader context of the post makes it clear that the user was reporting on an issue of public interest and condemning the sexual exploitation of a minor. About the case In August 2019, a user in Sweden posted on their Facebook page a stock photo of a young girl sitting down with her head in her hands in a way that obscures her face. The photo has a caption in Swedish describing incidents of sexual violence against two minors. The post contains details about the rapes of two unnamed minors, specifying their ages and the municipality in which the first crime occurred. The user also details the convictions that the two unnamed perpetrators received for their crimes. The post argues that the Swedish criminal justice system is too lenient and incentivizes crimes. The user advocates for the establishment of a sex offenders register in the country. They also provide sources in the comments section of the post, identifying the criminal cases by court reference numbers and linking to coverage of the crimes by local media. The post provides graphic details of the harmful impact of the crime on the first victim. It also includes quotes attributed to the perpetrator reportedly bragging to friends about the rape and referring to the minor in sexually explicit terms. While the user posted the content to Facebook in August 2019, Meta removed it two years later, in September 2021, under its rules on child sexual exploitation, abuse and nudity. The Board finds that this post does not violate the Community Standard on Child Sexual Exploitation, Abuse and Nudity. The post’s precise and clinical description of the aftermath of the rape, as well as the inclusion of the perpetrator’s sexually explicit statement, did not constitute language that sexually exploited children or depicted a minor in a “sexualized context.” The Board also concludes that the post was not showing a minor in a “sexualized context” as the broader context of the post makes it clear that the user was reporting on an issue of public interest and condemning the sexual exploitation of a minor. 
The Board notes that Meta does not define key terms such as “depiction” and “sexualization” in its public-facing Community Standards. In addition, while Meta told the Board that it allows “reporting” on rape and sexual exploitation, the company does not state this in its publicly available policies or define the distinction between “depiction” and “reporting.” A recommendation, below, addresses these points. It is troubling that, after two years, Meta removed the post from the platform without an adequate explanation as to what caused the removal. No substantive change to the policies during this period explains the removal. The Oversight Board’s decision The Oversight Board overturns Meta’s decision to remove the content, and requires that the post be restored. As a policy advisory statement, the Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s decision to remove the content from Facebook. The post reports on the rape of two minors and uses explicit language to describe the assault and its impact on one of the survivors. Meta applied the Child Sexual Exploitation, Abuse and Nudity Community Standard to remove the post and referred the case to the Oversight Board. The Board finds the content does not violate the policy against depictions of child sexual exploitation and should be restored. 2. Case description In August 2019, a user in Sweden posted on their Facebook Page a stock photo of a young girl sitting down with her head in her hands in a way that obscures her face with a caption in Swedish describing incidents of sexual violence against two minors using graphic language. The post contains details about the rapes of two unnamed minors, specifying their ages and the municipality in which the first crime had occurred. The user also details the convictions that the two unnamed perpetrators received for those crimes. One of those perpetrators reportedly received a non-custodial sentence as he was a minor when he committed the offence. The perpetrator in the other case was reported as having recently completed a custodial sentence for a violent crime against another woman. The user argues that the Swedish criminal justice system is too lenient and incentivizes crimes. The user advocates for the establishment of a sex offender register in the country. The user provides sources in the comments section of the post, identifying the criminal cases by court reference numbers and linking to coverage of the crimes by the local media. At the time this content was posted, discussions of penalties for child sexual assault were part of the broader criminal justice reform debate in Sweden. The user’s Facebook page is dedicated to posts on child sexual abusers and calls for reforming the existing penalties for sex crimes in Sweden. The post provides extensive and graphic details of the harmful impact of the crime on the first victim, including describing her physical and mental injuries, offline and online harassment she encountered, as well as the psychological support she received. The post also includes quotes attributed to the perpetrator reportedly bragging to friends about the rape and referring to the minor in sexually explicit terms; the post describes that the perpetrator said to his friends that “the girl was ‘tight’ and proudly showed off his bloody hands.” The post received about two million views, 2,000 comments and 20,000 reactions. 
According to Meta, the post was shared on a page with privacy settings set to public, which means that anyone could view the content posted. The page has about 100,000 followers, 95% of whom are located in Sweden. From when it was posted in August 2019 until September 1, 2021, eight users submitted feedback to flag potential Hate Speech, Violence and Incitement, and Bullying and Harassment violations. The processes for users to submit feedback on a post and those for users to report an alleged violation are different; users are given both options. Feedback sends signals to Meta that are considered in the aggregate and can influence how content is prioritized on the specific user’s feed. When a user reports a post as an alleged policy violation, the post is assessed by Meta for compliance with its policies. One user reported the post on September 5, 2019, for violating the Bullying and Harassment policy, leading to an automated review that assessed the post as non-violating and left it up. In August 2021, Meta’s technology identified the post as potentially violating. Following human review, the post was determined to violate the Child Sexual Exploitation, Abuse and Nudity policy and was removed. The content creator’s account incurred a strike resulting in two separate feature limits. One feature limit prevented the user from going live on Facebook, using ad products, and creating or joining Messenger rooms. The other, a 30-day feature limit, prevented the user from creating any new content, except for private messages. After the user appealed the decision and following additional human review, the post was not restored but the strike associated with this removal was reversed. Meta reversed the strike because the company determined that the purpose of the post was to raise awareness. Meta notes in its Transparency Center that whether the platform applies a strike “depends on the severity of the content, the context in which it was shared and when it was posted,” but it does not explicitly mention that a strike can be reversed or withheld if the purpose of posting the content is to raise awareness. According to Meta, in 2021, it removed five pieces of content from this page, all for violating the Child Sexual Exploitation, Abuse and Nudity policy. Three of the removed posts were restored, following additional review which determined that the posts were removed in error. The strikes associated with these removals were reversed when the posts were restored. When this post was removed, Meta also reduced the page’s distribution and removed it from recommendations. Meta explains, through the Transparency Center, that pages or groups that repeatedly violate its policies may be removed from recommendations and have their distribution reduced. The Transparency Center does not state how long this penalty lasts. Meta informed the Board that a page is removed from recommendations for as long as it exceeds the strike threshold. The strike threshold is three strikes for a standard violation and one strike for a severe violation (e.g., a violation involving child sexual exploitation, suicide and self-harm, or terrorism). 3. Authority and scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). 
Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards The policy rationale for the Child Sexual Exploitation, Abuse and Nudity policy states that Meta does not permit content that “sexually exploits or endangers children.” Under this policy, Meta removes content that “threatens, depicts, praises, supports, provides instruction for, makes statements of intent, admits participation in or shares links of the sexual exploitation of children.” Meta also prohibits content “(including photos, videos, real-world art, digital content, and verbal depictions) that shows children in a sexualized context.” This policy also prohibits content that identifies or mocks, by name or image, alleged victims of child sexual exploitation, but does not prohibit functional identification of a minor. II. Meta’s values Meta’s values are outlined in the introduction to Facebook’s Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. [We want] people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits “Voice” in service of four other values, and three are relevant here: “Safety”: Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. “Privacy”: We’re committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, choose how and when to share on Facebook and connect more easily. “Dignity”: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. III. Human Rights Standards The United Nations Guiding Principles on Business and Human Rights (UNGPs) establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it re-committed to respecting human rights in accordance with the UNGPs. The Board’s analysis in this case was informed by the following human rights standards: 5. User statement Following Meta’s referral and the Board’s decision to accept the case, the user was sent a message notifying them of the Board’s review and providing them with an opportunity to submit a statement to the Board. The user did not submit a statement. 6. Explanation of Meta’s decision Meta explained in its rationale that the content was removed because it violated the Community Standard on Child Sexual Exploitation, Abuse and Nudity. 
Meta explained that two lines made the post violative, one describing in detail the physical aftermath of the rape and the second quoting the perpetrator's sexually explicit description of the minor as “tight.” Meta referred to expert findings from a breadth of sources including the Rape, Abuse and Incest National Network (RAINN), the UK’s “2021 Tackling Child Sexual Abuse Strategy” and the EU’s “Strategy for a More Effective Fight Against Child Sexual Abuse,” as well as multiple academic articles, that allowing depictions of rape can harm victims through re-traumatization, invasion of privacy and by facilitating harassment. Meta also explained that, while some of its policies have carve-outs to allow sharing of content that would be otherwise violating when it is posted to raise awareness or to condemn harmful actions, the challenge of “determine[ing] where the risk of [re-traumatization] begins and the benefit of raising awareness ends” led it to prohibit graphic depictions even when shared in good faith and to raise awareness. Meta states in its rationale to the Board that it does allow reporting of rape and sexual assault, without graphic depiction. Meta also explained that it defines “depiction” to include showing an image, audio, describing in words, or broadcasting. Meta explained in its rationale that it determined that the values of ""Privacy,"" ""Safety"" and ""Dignity"" of minors displaced the value of voice because graphic content can revictimize children. Meta also stated that although the post does not name the victim, the information provided in the post could be used to identify the victim and lead to discriminatory treatment. Meta also explained that the Convention on the Rights of the Child (CRC) served as guidance for setting its policies and values, quoting General comment No. 25 (2021) from the UN Committee on the Rights of the Child to implement policies and practices to protect children from “recognized and emerging risks of all forms of violence in the digital environment.” Meta stated to the Board that it is the risk of revictimization that led it to determine that removal was necessary. While Meta considers applying the newsworthiness exception to graphic content when the public interest in the expression is especially strong and the risk of harm is low, in this case, Meta determined that the risk of harm outweighed the public interest value of the expression. According to Meta, Facebook has applied the newsworthiness allowance to violations of the Child Sexual Exploitation policy six times in the past year. 7. Third-party submissions The Board received 10 public comments in this case from stakeholders including academia and civil society organizations focusing on the rights of sexual assault survivors, children’s rights and freedom of expression. Three were from Europe, two from Latin America and the Caribbean and five from the United States and Canada. The submissions cover themes including the importance of protecting the privacy of survivors; the danger of removing speech of survivors or organizations working on prevention of child sexual exploitation and abuse; the role of Meta’s platform design choices in promoting sensationalist posts; and the need for greater transparency and clarity around the platform’s content moderation system. On November 30, 2021, a virtual roundtable took place with seven advocacy groups and organizations whose missions are to represent survivors of domestic and sexual violence against women and children. 
The discussion touched on a number of themes related to the case content, including differentiating between what the general public might find to be graphic descriptions of a rape and actual clinical descriptions of the act and its aftermath; secondary exploitation or victimization of survivors for the purposes of soliciting or raising donations; empowering survivors by asking them what they want and obtaining informed consent when reporting on crimes committed against them; and survivor agency being of paramount importance. To read public comments submitted for this case, please click here. 8. Oversight Board analysis The Board looks at the question of whether content should be restored through three lenses: the Facebook Community Standards; Meta’s publicly stated values; and its human rights responsibilities. The Board concludes that the content does not violate the Facebook Community Standards and should be restored. Meta’s values and human rights responsibilities support restoring the content. The Board recommends changes in Meta’s content policies to provide a clear definition of sexualization, graphic depiction, and reporting. 8.1. Compliance with Community Standards The Board concludes that this post does not violate the Community Standard on Child Sexual Exploitation, Abuse and Nudity, and the content should not have been removed. The Board concludes that the post’s precise and clinical description of the aftermath of the rape, as well as the inclusion of the perpetrator’s sexually explicit statement, did not constitute language that sexually exploited children or depicted a minor in a “sexualized context.” The Board also concludes that the post was not showing a minor in a “sexualized context” because the broader context of the post makes it clear that the user was reporting on an issue of public interest and condemning the sexual exploitation of a minor. The user replicated language used in Swedish news media outlets reporting on the testimony provided in the court cases of the rapes referred to in the post. 8.2. Compliance with Meta’s values The Board finds that Meta’s decision to remove this post is inconsistent with its value of “Voice.” The Board agrees that the values of “Privacy,” “Safety,” and “Dignity” are of great importance when it comes to content that graphically describes the sexual exploitation of a minor. However, the Board finds the two sentences at issue did not rise to the level of content that sexually exploited children. In addition, the public interest in bringing attention to this issue and informing the public, or advocating for legal and policy reforms, is at the core of the value of “Voice.” In weighing the different values implicated in this case, the Board also notes the importance of not silencing advocates for and survivors of child sexual exploitation. The Board also recognizes that some survivors may be less likely to speak out for fear that the graphic details of the attack will go viral on the platform. 8.3. Compliance with Meta’s human rights responsibilities The Board finds that restoring the content in this case is consistent with Meta’s human rights responsibilities. Freedom of Expression and Article 19 of the ICCPR Article 19 of the ICCPR provides broad protection for freedom of expression through any media and regardless of frontiers. However, the right may be restricted under certain narrow and limited conditions, known as the three-part test of legality (clarity), legitimacy, and necessity and proportionality. 
Although the ICCPR does not create the same obligations for Meta as it does for states, Meta has committed to respecting human rights as set out in the UNGPs. This commitment encompasses internationally recognized human rights as defined, among other instruments, by the ICCPR and the CRC. The UN Special Rapporteur on freedom of opinion and expression has suggested that Article 19, para. 3 of the ICCPR provides a useful framework to guide platforms’ content moderation practices (A/HRC/38/35, para. 6). I. Legality (clarity and accessibility of the rules) The requirement of legality in international human rights law provides that any restriction on freedom of expression is: (a) sufficiently accessible, so that individuals have an adequate indication of how the law limits their rights; and (b) formulated with enough precision so that individuals can regulate their conduct. As discussed in Section 8.1 above, the Board concludes that this post did not violate Meta’s policy on child sexual exploitation; therefore, the removal was not pursuant to an applicable rule. The Board also concludes that the policy could benefit from a clear definition of key terms and examples of borderline cases. The terms “depiction” and “sexualization” are not defined in the public-facing Community Standards. When Meta fails to define key terms or disclose relevant exceptions, users are unable to understand how to comply with the rules. The Board notes that Meta’s “Known Questions” and Internal Implementation Standards (IIS), which are guidelines provided to content reviewers to help them assess content that might amount to a violation of one of Facebook’s Community Standards, provide more specific criteria when it comes to what constitutes sexualization of a minor on the platform under the Child Sexual Exploitation, Abuse and Nudity policy. Meta informed the Board through its rationale for this case that it allows “reporting” on rape and sexual exploitation but does not state this in the publicly available policies or define the distinction between “depiction” and “reporting.” The Board notes that neither the public policies nor the Known Questions and IIS address the difference between prohibited graphic depiction or sexualization of a minor and non-violating reporting on the rape and sexual exploitation of a minor. The Board finds it troubling that the case content remained on the platform for two years and was then removed without an adequate explanation as to what triggered the removal. No substantive change to the policies during this period explains the removal. The Board asked whether sending the content for human review was triggered by a change to the classifier. Meta indicated that it was a combination of machine learning/artificial intelligence classifier scores (a prediction an algorithm makes about whether a specific piece of content is likely to be violative of a specific policy) and the number of views the post received over a two-week period that triggered sending the post for human review. In its response to the Board’s questions, Meta did not specify whether its classifiers had changed in a way that would explain why the content was not flagged in 2019 but was identified as potentially violating and sent for human review in 2021. II. Legitimate aim Restrictions on freedom of expression should pursue a legitimate aim, which includes the protection of the rights of others. 
The Board agrees that the Facebook Community Standard on Child Sexual Exploitation, Abuse and Nudity aims to prevent offline harm to the rights of minors that may be related to content on Facebook. Therefore, the restrictions in this policy serve the legitimate aim of protecting the rights of children to physical and mental health (Article 12 ICESCR, Article 19 CRC), consistent with the best interests of the child (Article 3 CRC). III. Necessity and proportionality The principle of necessity and proportionality under international human rights law requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment 34, para. 34). The principle of proportionality demands consideration of the form of expression at issue (General Comment 34, para. 34). As the Board stated in case decision 2020-006-FB-FBR, Section 8.3, Meta must show three things to demonstrate that it has selected the least intrusive instrument to address the legitimate aim: (1) the best interests of the child could not be addressed through measures that do not infringe on speech, (2) among the measures that infringe on speech, Meta has selected the least intrusive measure, and (3) the selected measure actually helps achieve the goal and is not ineffective or counterproductive (A/74/486, para. 52). Analyzing whether the aims could be achieved through measures that do not infringe on freedom of expression requires understanding the full breadth of choices Meta has made and the options available for addressing the harm. This requires transparency to the Board on amplification and on how Meta’s platform design may incentivize sensationalist content. The Board asked Meta for information or internal research on how its design choices for the Facebook platform, including its decisions or processes affecting which posts to amplify, incentivize sensationalist reporting on issues impacting children. Meta did not provide the Board with a clear answer to the question or any research on the subject. Transparency is essential to ensure public scrutiny of Meta’s actions. The lack of detail in Meta’s response to the Board’s question, and the absence of public disclosure of how the platform’s design choices on amplification impact speech, frustrate the Board’s ability to fully determine the least restrictive instrument for respecting the rights of the child in accordance with their best interests. The Board concludes that removing this content discussing sex crimes against minors, an issue of public interest and a subject of public debate, does not constitute the least intrusive instrument for promoting the rights of the child. General Comment No. 34 highlights the importance of political expression in Article 19 of the ICCPR, including the right to freedom of expression in “political discourse,” “commentary on one’s own and on public affairs,” and “discussion of human rights,” all of which would encompass the discussion of a country’s criminal justice system and reporting on its operations in specific cases. The Board is aware of the off-platform harm to survivors of child sexual exploitation from depictions of that exploitation being available on the platform.
However, the Board draws a distinction between the perpetrator's language sexualizing the child and the user’s post quoting the perpetrator for the purpose of raising awareness of an issue of public interest. The Board agrees with the input from organizations working for and with survivors of sexual exploitation on the importance of protecting survivor testimonies and other content aimed at informing the public and advocating for reform of the legal, social and cultural barriers to preventing child sexual exploitation. The Board considered whether the use of a warning screen may be the least intrusive measure for protecting the best interests of the child. For example, the Adult Sexual Exploitation Community Standard states that warning screens are applied to content that includes narratives or statements about adult sexual exploitation, shared either by the victim or by a third party (other than the victim), that are 1) in support of the victim, 2) in condemnation of the act, or 3) for general awareness, as determined by the context or caption. According to a blog post on Meta’s newsroom about tackling misinformation, the company stated that when a warning screen is applied to a piece of content, 95% of users do not click to view it. Because the Board does not have information on the baseline level of engagement, the Board cannot reach a conclusion about the impact of warning screens, especially as applied to content reporting on child sexual exploitation. Finally, the Board also considered the potential for offline harm when reporting includes information sufficient to identify a child. Content that may lead to functional or “jigsaw” identification of a minor who has been the victim of child sexual exploitation implicates children's rights to freedom of expression (ICCPR, Art. 19), privacy (CRC, Art. 16) and safety (CRC, Art. 19). Functional identification may occur when content provides or compiles enough discrete pieces of information to identify an individual without naming them. In this case, the Board is unable to determine whether the pieces of information provided, along with links to media reports, could increase the possibility that the victims will be identified. Some Board Members, however, emphasized that when there is doubt about whether a specific piece of content may lead to functional identification of a child victim, Meta should err on the side of protecting the privacy and physical and mental health of the child in accordance with international human rights principles. For these Board Members, the platform’s power to amplify is a key factor in assessing whether the minor can be identified and, therefore, in determining the protections afforded to children who are victims of sexual abuse. The current Child Sexual Exploitation, Abuse and Nudity Community Standard prohibits “content that identifies or mocks alleged victims of child sexual exploitation by name or image.” Other policies that deal with preventing the identification of a minor or a victim of a crime (e.g., the Additional Protection of Minors Community Standard; the Coordinating Harm and Publicizing Crime Community Standard) leave significant gaps in addressing functional identification of minors who are victims of sexual exploitation. 9. Oversight Board decision The Oversight Board overturns Meta’s decision to remove the content and requires the post to be restored. 10.
Policy advisory statement Content Policy *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology, also provided research. Return to Case Decisions and Policy Advisory Opinions" fb-q72fd6yl,Asking for Adderall®,https://www.oversightboard.com/decision/fb-q72fd6yl/,"February 1, 2022",2022,,"Discrimination, Health, Safety","Regulated goods",Overturned,United States,The Oversight Board has overturned Meta's original decision to remove a Facebook post that asked for advice on how to talk to a doctor about the prescription medication Adderall®.,29523,4576,"Overturned February 1, 2022 The Oversight Board has overturned Meta's original decision to remove a Facebook post that asked for advice on how to talk to a doctor about the prescription medication Adderall®. Standard Topic Discrimination, Health, Safety Community Standard Regulated goods Location United States Platform Facebook Public Comments 2021-015-FB-UA The Oversight Board has overturned Meta’s original decision to remove a Facebook post which asked for advice on how to talk to a doctor about the prescription medication Adderall®. The Board did not find any direct or immediate connection between the content and the possibility of harm. About the case In June 2021, a Facebook user in the United States posted in a private group that claims to be for adults with attention deficit hyperactivity disorder (ADHD). The user identifies themselves as someone with ADHD and asks the group how to approach talking to a doctor about specific medication. The user states that they were given a Xanax prescription but that the medication Adderall has worked for them in the past, while other medications “zombie me out.” They are concerned about presenting as someone with drug-seeking behavior if they directly ask their doctor for a prescription. The post had comments from group members providing advice on how to explain the situation to a doctor. In August 2021, Meta removed the content under Facebook’s Restricted Goods and Services Community Standard. Following the removal, Meta restricted the user’s account for 30 days. As a result of the Board selecting this case, Meta identified its removal as an “enforcement error” and restored the content. Key findings The Board finds that Meta’s original decision to remove the post did not comply with the Facebook Community Standards. The Restricted Goods and Services Community Standard does not prohibit content which seeks advice on pharmaceutical drugs in the context of medical conditions. The Board finds that the definitions of substances under the Facebook Community Standard on Restricted Goods and Services are not sufficiently transparent to users.
The rules in this case are particularly opaque because, according to the internal definitions shared with the Board, Adderall and Xanax could fall under either non-medical drugs or pharmaceutical drugs depending on the circumstances. Meta does not currently define non-medical drugs or pharmaceutical drugs in a public-facing document. The Board considers Meta’s decision to remove the post to be unnecessary and disproportionate. There was no direct or immediate connection between the content and the possibility of harm. The user clearly expressed that their intent was to seek health information and included the content warning “CW: Medication, addiction,” on the risks associated with the drugs their post discussed. Meta’s removal of the post also generated a strike against the user which, in combination with previous strikes they had received, resulted in their account being restricted for 30 days. This violation of the user’s freedom of expression was not reversed before the end of this period, and Meta failed in its responsibility to provide the user with an effective remedy. In the future, the company should review user appeals in a timely fashion when content-level enforcement measures trigger account-level penalties. The Oversight Board’s decision The Oversight Board overturns Meta’s original decision to remove the content. As a policy advisory statement, the Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decision to remove a Facebook post asking for advice on how to talk to a doctor about requesting the prescription medication dextroamphetamine and amphetamine – commonly known by its brand name Adderall® – for treatment of ADHD. The Board concludes that the content did not violate Facebook’s Community Standards. The Board also recommends that Meta publish its internal definitions of what constitutes “pharmaceutical drugs” and “non-medical drugs” and clarify its Restricted Goods and Services policy. The company should also study the consequences and trade-offs of implementing a dynamic prioritization system that orders appeals for human review, conduct regular reviewer accuracy assessments and share this data with the Board. 2. Case description In June 2021, a Facebook user in the United States posted in a private group that states in its bio that it is for adults with attention deficit hyperactivity disorder (ADHD). The post consists of text in English, with the user beginning the post by stating ""CW"" (indicating a content warning) on ""Medication, addiction."" The user identifies themselves as someone with ADHD and asks the group how to approach talking to a doctor about specific medication. The user states that they were given a Xanax prescription but that the medication Adderall has worked for them in the past, while other medications ""zombie me out."" They were concerned about presenting as someone with drug-seeking behaviour if they directly asked their doctor for a prescription. The post had comments from group members describing their own experiences and providing advice on how to explain the situation to a doctor. The group administrators are based in Canada and New Zealand. No users reported the content. Meta states that when the content was initially posted, its classifier technology gave this content a low score, meaning that the technology determined that it was unlikely to be violating.
This low score, combined with no other signal that may trigger content review (such as virality), meant the content was not sent for human review. Almost two months later, in August 2021, the content was selected as part of a random sample to be used for training Meta’s classifier technology. A human reviewer labelled the content as violating the Restricted Goods and Services Community Standard. Meta told the Board that “while the primary purpose of this review is to develop training sets for classifiers, when a reviewer labels content as violating, Meta removes it in accordance with the Community Standards.” Meta therefore removed the content under Facebook's Restricted Goods and Services Community Standard. Under this policy, Meta takes down content that ""attempts to buy, sell or trade pharmaceutical drugs…[or] asks for pharmaceutical drugs except when content discusses the affordability, accessibility or efficacy of pharmaceutical drugs in a medical context."" The user appealed the decision to remove the content. Following human review, Meta upheld the original decision to remove the content. The user then submitted an appeal to the Board. As a result of the Board selecting this case, Meta identified its removal as an ""enforcement error"" and restored the content in September 2021. Meta states that when the content was restored, the classifier was also updated to reflect that the correct label for this content is non-violating. At the time of removal, the content had been viewed over 700 times, and it had not been shared. Following the removal, the user’s account was restricted for 30 days, preventing them from creating new content on the platform, interacting with groups (e.g., posting or commenting in groups, creating new groups), and creating or joining Messenger rooms. The user was still able to utilize Facebook Messenger to communicate with other users. 3. Authority and scope According to its Charter, the Oversight Board is an independent body designed to protect free expression by making principled, independent decisions about important pieces of content. It operates transparently, exercising neutral, independent judgement and rendering decisions impartially. The Board has the power to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or reverse that decision (Charter Article 3, Section 5), and its decision is binding on Meta (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with recommendations. These recommendations are non-binding, but Meta must respond to them (Charter Article 3, Section 4; Article 4). 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards After the content had first been posted in June 2021, the United States English version of Facebook’s Restricted Goods and Services Community Standard, previously named the Regulated Goods Community Standard, was updated three times before the case was assigned to the Board, and once in November 2021 after the assignment. The current version has separate categories for “non-medical” drugs and “pharmaceutical” drugs, with different rules for what type of content is permitted in relation to each category. 
The Board notes that other versions of the Restricted Goods and Services Community Standards (for example, the UK one) have not been updated yet to reflect some of the most recent changes such as in the name and rationale of the policy. In relation to “pharmaceutical drugs,” the current Standard prohibits content which: “Attempts to buy, sell or trade pharmaceutical drugs except when: Attempts to donate or gift pharmaceutical drugs. Asks for pharmaceutical drugs except when content discusses the affordability, accessibility or efficacy of pharmaceutical drugs in a medical context.” At the time the content was posted in June 2021, “pharmaceutical” drugs were not in a separate category. Facebook prohibited: “ Content that attempts to buy, sell, trade, donate, gift, or solicit marijuana or pharmaceutical drugs.” For ""non-medical drugs,"" the current Standard prohibits content which: ""Attempts to buy, sell, trade, co-ordinate the trade of, donate, gift or asks for non-medical drugs. Admits to buying, trading or co-ordinating the trade of non-medical drugs by the poster of the content by themselves or through others. Admits to personal use without acknowledgment of or reference to recovery, treatment, or other assistance to combat usage. This content may not speak positively about, encourage use of, coordinate or provide instructions to make or use non-medical drugs. Coordinates or promotes (by which we mean speaks positively about, encourages the use of, or provides instructions to use or make) non-medical drugs."" Regarding ""non-medical drugs,"" at the time the content was posted in June 2021, the Community Standard prohibited content which: ""Attempts to buy, sell, trade, donate, gift, or solicit non-medical drugs. Admits to buying or trading non-medical drugs by the poster of the content by themselves or through others. Admits to personal use without acknowledgment of or reference to recovery, treatment, or other assistance to combat usage. Speaks positively, encourages, coordinates or provides instructions for use or make of non-medical drugs."" II. Meta’s values Meta’s values are outlined in the introduction to Facebook’s Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits “Voice” in service of four values, the relevant ones in this case being “Safety” and “Dignity”: “Safety”: We’re committed to making Facebook a safe place. Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. “Dignity”: We believe that all people are equal in dignity and rights. III. Human rights standards The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. Meta’s Corporate Human Rights Policy , announced March 16, 2021 , reflects the company’s commitment to respect rights as reflected in the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards. 5. 
User statement The user states that they are an ADHD patient and posted in order to ask other patients how they talked to their doctor about these medications, as they did not want their doctor to think that they were selling or abusing them. They were nervous about the conversation and wanted to know how to ask appropriately and cautiously. They state that the post made it clear that they have no intention of abusing, selling or illegally obtaining the medication. 6. Explanation of Meta’s decision Following the Board’s selection of the case, Meta decided that the original content removal had been in error, and restored the content. Meta states that its Restricted Goods and Services policy distinguishes between “non-medical drugs” and “pharmaceutical drugs” to strike a balance between its values of “Voice” and “Safety.” It notes that certain drugs that ordinarily fall within the definition of “pharmaceutical drug” pose a risk of abuse and may, if used for a non-medical purpose, be treated as “non-medical drugs.” For example, it treats drugs like Oxycontin, Xanax, or Adderall as “pharmaceutical drugs” when used as intended but considers them “non-medical drugs” when content discusses using them “to achieve a ‘high’ or altered mental state.” It states that posts “concerning these types of drugs pose a particular challenge for review, as there is no way to assess user intent, and users suffering from addiction or seeking to deal drugs may infiltrate groups focused on medical discussions in an attempt to circumvent enforcement.” Meta states that the user in this case discussed Adderall and Xanax in the context of treatment for their medical condition, ADHD, and did not indicate anything suggesting that they used the drugs to achieve a high or altered mental state. The content therefore relates to “pharmaceutical drugs” as opposed to “non-medical drugs.” Meta explains that as the content concerned access to pharmaceutical drugs, it did not violate the Restricted Goods and Services policy, which prohibits content that “[a]sks for pharmaceutical drugs except when content discusses the affordability, accessibility or efficacy of pharmaceutical drugs in a medical context.” The content here did not contain a request asking for drugs, as the user was in fact asking for advice about how to ask a doctor for drugs. Further, “even if the user had asked for the drugs for medical use, the content related to the accessibility and efficacy of the drugs in question.” Meta concludes that the content should therefore not have been removed. With regards to its system for review of content, Meta has a system for prioritizing initial review of content, but explained to the Board that there is no prioritization framework for appeals – appeals are reviewed on a “first-in, first-out” basis. Additionally, the company explained it ""has been exploring options to adopt a prioritization framework"" for appeals. 7. Third-party submissions The Oversight Board considered 16 public comments related to this case. Four of the comments were submitted from Central and South Asia, four were from Europe, two were from Middle East and North Africa, and six were from United States and Canada. The submissions covered the following themes: classification of controlled substances in international standards and laws from various countries, Adderall abuse, and content related to Adderall sales on Facebook. To read public comments submitted for this case, please click here . 8. 
Oversight Board analysis 8.1 Compliance with Community Standards The Board agrees with Meta that its original decision to remove the post did not comply with the Facebook Community Standards. Meta distinguishes between “pharmaceutical” and “non-medical” drugs, and notes that certain drugs that ordinarily fall within the definition of “pharmaceutical” drug but pose a risk of abuse may, if used for a non-medical purpose, be treated as “non-medical” drugs. The Board agrees with Meta that in this case, the user was discussing Adderall and Xanax in a medical context and they are therefore “pharmaceutical” drugs for the purposes of applying the Restricted Goods and Services Community Standard. The Restricted Goods and Services Community Standard does not prohibit content which seeks advice on pharmaceutical drugs in the context of medical conditions. The user in this case was not attempting to buy, sell, trade, donate or ask for pharmaceutical drugs. As Meta itself notes in its rationale, “even if the user had asked for the drugs for medical use, the content related to the accessibility and efficacy of the drugs in question.” 8.2 Compliance with Meta’s values Meta’s original decision to remove the content was not consistent with the company’s values. Meta states that the Restricted Goods and Services Community Standard aims to strike a balance between “Voice” and “Safety.” The Standard does not seek to prohibit content such as the post at issue in this case, where a user was asking for advice related to the accessibility of a pharmaceutical drug for treatment of a medical condition. Meta’s policy is correct to permit this type of content – and its removal of this post in error was not in line with its values. The Board also notes the relevance of “Dignity” – people with ADHD or other health conditions who seek advice on pharmaceutical drugs may be disproportionately impacted by enforcement errors which restrict “Voice.” 8.3 Compliance with Meta’s human rights responsibilities The Board finds that Meta’s decision to remove the post was not consistent with international human rights standards. Meta has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights (UNGPs). Its Corporate Human Rights Policy states that this commitment includes the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social and Cultural Rights (ICESCR) and the Convention on the Rights of Persons with Disabilities (CRPD). Freedom of expression and access to information Article 19, para. 2 of the ICCPR provides broad protection for expression. This right includes “freedom to seek, receive and impart information and ideas of all kinds.” Article 21 of the CRPD applies this protection to persons with disabilities, who according to Article 1 include “those who have long-term physical, mental, intellectual or sensory impairments which in interaction with various barriers may hinder their full and effective participation in society on an equal basis with others.” The CRPD ensures that they can exercise this freedom “on an equal basis with others and through all forms of communication of their choice” (Article 21, CRPD). The UN Committee on Economic, Social and Cultural Rights makes clear that “access to health-related education and information” is a critical part of the right to health enshrined in Article 12 of the ICESCR (General Comment No. 14, para. 11). In this case, the user states that they are an ADHD patient.
ADHD may be considered as a disability under the definition in Article 1 of the CRPD, and the user here discussed and sought health-related information through sharing past experiences with Adderall and Xanax. They were asking for advice on how to obtain these medications from a doctor for their condition. While the right to freedom of expression is fundamental, it is not absolute. Where restrictions on expression are imposed by a state, they should meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). As stated above, Meta has voluntarily committed itself to respecting human rights standards. Meta’s removal of the post failed the first and third parts of this test. I. Legality (clarity and accessibility of the rules) Any rules restricting expression must be clear, precise, and publicly accessible (General Comment 34, para. 25). Individuals must have enough information to determine if and how their speech may be limited, so that they can adjust their behavior accordingly. The Board finds that the definitions of substances under the Facebook Community Standard on Restricted Goods and Services are not sufficiently comprehensible and transparent to users. This policy prohibits content related to certain goods, including guns, marijuana, pharmaceutical drugs, non-medical drugs, alcohol and tobacco. Meta does not define non-medical drugs or pharmaceutical drugs in a public-facing document, but explained in response to the Board’s questions that it maintains internal definitions for moderators, as well as confidential, non-exhaustive lists of non-medical drugs and pharmaceutical drugs. The applicable rules in this case are particularly opaque because, according to the internal definitions shared with the Board, Adderall and Xanax could fall under either non-medical drugs or pharmaceutical drugs, depending on factors such as the intended use of the drug in the circumstances of each particular case. This classification is not provided in the Community Standards and therefore would not be apparent to users. If the post were considered as involving non-medical drugs, although the user's admission to personal use could be non-violating as it refers to treatment, the post could still be construed as speaking positively about the drugs. This would violate the Restricted Goods and Services Standard and lead to a different outcome from classifying the drugs as pharmaceutical in this case. The Board has taken notice of instances where content attempting to sell the very same drugs has remained on Facebook. This information is derived from public comment PC-10281 from the National Association of Boards of Pharmacy – a US-based non-profit organization whose members include the 50 US state pharmacy boards, as well as pharmacy regulators in the District of Columbia, Guam, Puerto Rico, the Virgin Islands, Bahamas, and 10 Canadian provinces. The Board observes that, to users who could see such pieces of content, the inconsistency in enforcement could result in confusion as to what is permitted on Facebook. Given these problems, the Board finds that Meta did not meet its responsibility to make its rules on Restricted Goods and Services clear and accessible to users. As recommended below, Meta should therefore include and explain the above definitions in the language of this policy. Additionally, improving consistency of enforcement could contribute to better understanding of what content is permitted on the platform. 
At the same time, Meta should make sure its training for content reviewers is adequate to ensure enforcement accuracy and consistency. It should also regularly assess reviewer accuracy rates under the Restricted Goods and Services policy, and share the results of these assessments with the Board and the public. II. Legitimate aim Any restriction on expression should pursue one of the legitimate aims listed in Article 19, para. 3 of the ICCPR. Meta has a responsibility to ensure its rules comply with the principle of legitimacy (A/HRC/38/35, para. 45). In this regard, the Board finds that, by addressing the risks of drug abuse among Facebook users, the policy and the restriction pursued the legitimate aims of protecting public health and protecting the rights of others to health. III. Necessity and proportionality Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). Meta’s interference with the user’s freedom of expression was unnecessary. The Board does not find any direct or immediate connection between the content and the possibility of harm. Based on the post itself and the statement provided by the user, they simply wanted advice about how to communicate with their doctor about treatment, and had no intention of abusing, selling or illegally obtaining the medicine. Meta’s removal of the post was also disproportionate. Not only did the user clearly express their intent to seek health information, they also included, at the beginning of the post, the words “CW: Medication, addiction,” which the Board takes to mean a “content warning” on the risks associated with the drugs discussed. The Board finds that the clear intention of the user, coupled with the content warning, was sufficient to address risks of abuse and potential harms to public health. The adverse consequences of removal, on the other hand, could be dire. Poor moderation of health-related content can hinder access to information for the great number of users who rely on Facebook to learn more about their condition and engage in discussions about potential treatments. Access to effective remedy Article 2 of the ICCPR guarantees an effective remedy for anyone whose rights enshrined in the ICCPR have been violated. According to the UN Human Rights Committee, “cessation of an ongoing violation is an essential element of the right to an effective remedy” (General Comment No. 31, para. 15). Access to remedy is a key component of the UNGPs’ “Protect, Respect and Remedy” Framework (Principles 22, 29 and 31), and the UN Special Rapporteur on freedom of opinion and expression has stated that companies’ provision of appropriate remediation for adverse human rights impacts is a minimum requirement for adherence (report A/HRC/38/35, at para. 11(f), para. 38, para. 59, para. 72). This is reflected in Meta’s voluntary commitments (Corporate Human Rights Policy, section 3: “providing remedies for human rights impacts”). In this case, Meta’s action on the post generated a strike against the user which, in combination with the previous strikes they had already received, resulted in a 30-day feature limit on the user’s account. The Board is concerned that, because the content was only restored 30 days after its removal, the user was subject to the feature limit for its entire duration.
The user was punished for seeking information on how to speak with medical professionals about their medical condition. The interference with the user’s freedom of expression and related rights was not reversed until the case was brought to Meta’s attention following the Board’s selection, and it was not otherwise remedied. Meta failed in its responsibility to provide an effective remedy – in the future, it should make sure user appeals are reviewed in a timely fashion when content-level enforcement measures also trigger account-level enforcement measures. 9. Oversight Board decision The Oversight Board overturns Meta’s original decision to take down the content. 10. Policy advisory statement Content policy 1. Meta should publish its internal definitions for “non-medical drugs” and “pharmaceutical drugs” in the Facebook Community Standard on Restricted Goods and Services. The published definitions should: (a) make clear that certain substances may fall under either “non-medical drugs” or “pharmaceutical drugs” and (b) explain the circumstances under which a substance would fall into each of these categories. The Board will consider this recommendation implemented when these changes are made in the Community Standard. Enforcement 2. Meta should study the consequences and trade-offs of implementing a dynamic prioritization system that orders appeals for human review, and consider whether the fact that an enforcement decision resulted in an account restriction should be a criterion within this system. The Board will consider this recommendation implemented when Meta shares the results of these investigations with the Board and in its quarterly Board transparency report. 3. Meta should conduct regular assessments of reviewer accuracy rates focused on the Restricted Goods and Services policy. The Board will consider this recommendation implemented when Meta shares the results of these assessments with the Board, including how these results will inform improvements to enforcement operations and policy development, and summarizes the results in its quarterly Board transparency reports. Meta may consider whether these assessments should be extended to reviewer accuracy rates under other Community Standards. *Note on trademarks: Adderall® is a registered trademark of Takeda Pharmaceuticals U.S.A. Inc. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Return to Case Decisions and Policy Advisory Opinions" fb-q98qpzb1,Journalist Recounting Meeting in Gaza,https://www.oversightboard.com/decision/fb-q98qpzb1/,"April 4, 2024",2024,,"Freedom of expression, News events, War and conflict",Dangerous individuals and organizations,Overturned,"Palestinian Territories, Spain","A journalist appealed Meta’s decision to remove a Facebook post recounting his personal experience of interviewing Abdel Aziz Al-Rantisi, a co-founder of Hamas.
This case highlights a recurring issue in the over-enforcement of the company’s Dangerous Organizations and Individuals policy, specifically regarding neutral posts. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.",6670,960,"Overturned April 4, 2024 A journalist appealed Meta’s decision to remove a Facebook post recounting his personal experience of interviewing Abdel Aziz Al-Rantisi, a co-founder of Hamas. This case highlights a recurring issue in the over-enforcement of the company’s Dangerous Organizations and Individuals policy, specifically regarding neutral posts. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Summary Topic Freedom of expression, News events, War and conflict Community Standard Dangerous individuals and organizations Location Palestinian Territories, Spain Platform Facebook This is a summary decision . Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board , do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Case Summary A journalist appealed Meta’s decision to remove a Facebook post recounting his personal experience of interviewing Abdel Aziz Al-Rantisi, a co-founder of Hamas. This case highlights a recurring issue in the over-enforcement of the company’s Dangerous Organizations and Individuals policy, specifically regarding neutral posts. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background Soon after the October 7, 2023, terrorist attacks on Israel, a journalist posted on Facebook their written recollection of conducting an interview with Abdel Aziz Al-Rantisi, a co-founder of Hamas, which is a designated Tier 1 organization under Meta’s Dangerous Organizations and Individuals policy. The post describes the journalist’s trip to Gaza , their encounters with Hamas members and local residents, as well as the experience of finding and interviewing al-Rantisi. The post contains four photographs, including of Al-Rantisi, the interviewer and masked Hamas militants. In their appeal to the Board, the user clarified that the intention of the post was to inform the public about their experience in Gaza and interview with one of the original Hamas founders. Meta removed the post from Facebook, citing its Dangerous Organizations and Individuals policy , under which the company removes from its platforms certain content about individuals and organizations it designates as dangerous. However, the policy recognizes that “users may share content that includes references to designated dangerous organizations and individuals in the context of social and political discourse. 
This includes content reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities.” After the Board brought this case to Meta’s attention, the company determined that the “content aimed to increase the situational awareness” and therefore did not violate the Dangerous Organizations and Individuals Community Standard. Meta cited the social and political discourse allowance in the context of “neutral and informative descriptions of Dangerous Organizations and Individuals activity or behavior.” Furthermore, Meta said that “the social and political discourse context is explicitly mentioned in the content so there is no ambiguity [about] the intent of the user” in this case. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Case Significance The case highlights over-enforcement of Meta’s Dangerous Organizations and Individuals policy, specifically against news reporting on entities the company designates as dangerous. This is a recurring problem, which has been particularly frequent during the Israel-Hamas conflict, in which one of the parties is a designated organization. The Board has issued several recommendations relating to the news reporting allowance under the Dangerous Organizations and Individuals policy. Continued errors in applying this important allowance can significantly limit users’ free expression and the public’s access to information, and impair public discourse. In a previous decision, the Board recommended that Meta “add criteria and illustrative examples to Meta’s Dangerous Organizations and Individuals policy to increase understanding of exceptions, specifically around neutral discussion and news reporting” (Shared Al Jazeera Post, recommendation no. 1). Meta reported implementing this recommendation and demonstrated this through published information. In an update to the Dangerous Organizations and Individuals policy dated December 29, 2023, Meta modified its explanations and now uses the term “glorification” instead of “praise” in its Community Standard. Furthermore, the Board has recommended that Meta “assess the accuracy of reviewers enforcing the reporting allowance under the Dangerous Organizations and Individuals policy in order to identify systemic issues causing enforcement errors” (Mention of Taliban in News Reporting, recommendation no. 5). Meta reported implementation of this recommendation but has not published any information to demonstrate this. In cases of automated moderation, the Board has urged Meta to implement an internal audit procedure to continually analyze a statistically representative sample of automated removal decisions to reverse and learn from enforcement mistakes (Breast Cancer Symptoms and Nudity, recommendation no. 5), which Meta has reported implementing. The Board believes that full implementation of these recommendations could reduce the number of enforcement errors under Meta’s Dangerous Organizations and Individuals policy.
Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-qbjdascv,Armenians in Azerbaijan,https://www.oversightboard.com/decision/fb-qbjdascv/,"January 28, 2021",2021,January,"Culture, Discrimination, Religion","Hate speech",Upheld,"Armenia, Azerbaijan",The Oversight Board has upheld Facebook's decision to remove a post containing a demeaning slur which violated Facebook's Community Standard on hate speech.,22853,3511,"Upheld January 28, 2021 The Oversight Board has upheld Facebook’s decision to remove a post containing a demeaning slur which violated Facebook’s Community Standard on hate speech. Standard Topic Culture, Discrimination, Religion Community Standard Hate speech Location Armenia, Azerbaijan Platform Facebook This decision is also available in Armenian, Azerbaijani and Russian. The Oversight Board has upheld Facebook’s decision to remove a post containing a demeaning slur which violated Facebook’s Community Standard on Hate Speech. About the case In November 2020, a user posted content which included historical photos described as showing churches in Baku, Azerbaijan. The accompanying text in Russian claimed that Armenians built Baku and that this heritage, including the churches, has been destroyed. The user used the term “тазики” (“taziks”) to describe Azerbaijanis, who the user claimed are nomads and have no history compared to Armenians. The user included hashtags in the post calling for an end to Azerbaijani aggression and vandalism. Another hashtag called for the recognition of Artsakh, the Armenian name for the Nagorno-Karabakh region, which is at the center of the conflict between Armenia and Azerbaijan. The post received more than 45,000 views and was posted during the recent armed conflict between the two countries. Key findings Facebook removed the post for violating its Community Standard on Hate Speech, claiming the post used a slur to describe a group of people based on a protected characteristic (national origin). The post used the term ""тазики"" (“taziks”) to describe Azerbaijanis. While this can be translated literally from Russian as “wash bowl,” it can also be understood as wordplay on the Russian word “азики” (“aziks”), a derogatory term for Azerbaijanis which features on Facebook’s internal list of slur terms. Independent linguistic analysis commissioned on behalf of the Board confirms Facebook’s understanding of ""тазики"" as a dehumanizing slur attacking national origin. The context in which the term was used makes clear it was meant to dehumanize its target. As such, the Board believes that the post violated Facebook’s Community Standards. The Board also found that Facebook’s decision to remove the content complied with the company’s values.
While Facebook takes “Voice” as a paramount value, the company’s values also include “Safety” and “Dignity.” From September to November 2020, fighting over the disputed territory of Nagorno-Karabakh resulted in the deaths of several thousand people, with the content in question being posted shortly before a ceasefire. In light of the dehumanizing nature of the slur and the danger that such slurs can escalate into physical violence, Facebook was permitted in this instance to prioritize people's ""Safety"" and ""Dignity"" over the user's ""Voice”. A majority of the Board found that the removal of this post was consistent with international human rights standards on limiting freedom of expression. The Board believed it is apparent to users that using the term “тазики” to describe Azerbaijanis would be classed as a dehumanizing label for a group belonging to a certain nationality, and that Facebook had a legitimate aim in removing the post. The majority of the Board also viewed Facebook’s removal of the post as necessary and proportionate to protect the rights of others. Dehumanizing slurs can create an environment of discrimination and violence which can silence other users. During an armed conflict, the risks to people’s rights to equality, security of person and, potentially, life are especially pronounced. While the majority of the Board found that these risks made Facebook’s response proportionate, a minority believed that Facebook’s action did not meet international standards and was not proportionate. A minority thought Facebook should have considered other enforcement measures besides removal. The Oversight Board’s decision The Board upholds Facebook’s decision to remove the content. In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision Summary The Oversight Board has upheld Facebook’s decision to remove a user’s post about the alleged destruction of churches in Azerbaijan for violating the Community Standard on Hate Speech. Independent analysis commissioned by the Board confirms Facebook’s assessment that the post contained a slur demeaning Azerbaijani national origin, which violates the Community Standards. Although the post contains political speech, Facebook was permitted to protect the safety and dignity of users by removing the post, especially in the context of an ongoing armed conflict between Armenia and Azerbaijan. Removing the post was also consistent with international human rights standards, which permit certain tailored restrictions on expression aimed at protecting the rights of others. The Board also advises Facebook to offer more detail on why posts have been removed to provide greater clarity and notice to users. 2. Case Description In November 2020, a user posted content which included historical photos described as showing churches in Baku, Azerbaijan. The accompanying text, in Russian, claimed that Armenians built Baku and that this heritage, including the churches, has been destroyed. The user used the term “т.а.з.и.к.и” (“taziks”) to describe Azerbaijanis, who the user claimed are nomads and have no history compared to Armenians. “Tazik,” which means “wash bowl” in Russian, appears to have been used in the post as a play on “azik,” a derogatory term for Azerbaijanis. The user included hashtags in the post calling for an end to Azerbaijani aggression and vandalism. 
Another hashtag called for the recognition of Artsakh, the Armenian name for the Nagorno-Karabakh region, which is at the center of the conflict between Armenia and Azerbaijan. The post received more than 45,000 views and was posted during the recent armed conflict between the two countries. Facebook removed the post for violating its Community Standard on Hate Speech. The user submitted a request for review to the Oversight Board. 3. Authority and Scope The Board has the authority to review Facebook’s decision under Article 2 (Authority to Review) of the Board’s Charter and may uphold or reverse that decision under Article 3, Section 5 (Procedures for Review: Resolution) of the Charter. Facebook has not presented reasons for the content to be excluded in accordance with Article 2, Section 1.2.1 (Content Not Available for Board Review) of the Board’s Bylaws, nor has Facebook indicated that it considers the case to be ineligible under Article 2, Section 1.2.2 (Legal Obligations) of the Bylaws. 4. Relevant Standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards: Facebook’s Community Standard on Hate Speech defines this as “a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability”. Prohibited content includes “content that describes or negatively targets people with slurs, where slurs are defined as words commonly used as insulting labels”. Facebook’s policy rationale says that such speech is not allowed “because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence”. II. Facebook’s Values: The Facebook values relevant to this case are outlined in the introduction to the Community Standards. The first is ""Voice”, which is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits ""Voice” in service of four other values. The Board considers that two of these values are relevant to this decision: Safety: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. Dignity : We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. III. Relevant Human Rights Standards considered by the Board: The UN Guiding Principles on Business and Human Rights ( UNGPs ), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. The UN Working Group on Human Rights and Transnational Corporations, tasked with monitoring the implementation of the UNGPs, has addressed their applicability in conflict situations ( A/75/212 , 2020). Drawing upon the UNGPs, the following international human rights standards were considered in this case: 5. User Statement In their statement to the Board, the user claimed that their post was not hate speech but was intended to demonstrate the destruction of Baku’s cultural and religious heritage. 
They also claimed that the post was only removed because Azerbaijani users who have “hate towards Armenia and Armenians” are reporting content posted by Armenians. 6. Explanation of Facebook’s Decision Facebook removed the post for violating its Community Standard on Hate Speech, claiming the post used a slur to describe a person or group of people on the basis of a protected characteristic (national origin). Facebook stated that it removed the content for using the term ""тазики"" (“taziks”) to describe Azerbaijanis. This word can be translated literally from Russian as “wash bowl,” but can also be understood as wordplay on the Russian word “азики” (“aziks”) – Facebook explained to the Board that this word is on its internal list of slur terms, which it compiles after consultation with regional experts and civil society organizations. After assessing the whole post and the context in which it was made, Facebook determined that the user posted the slur to insult Azerbaijanis. 7. Third party submissions The Oversight Board considered 35 public comments related to this case. Two of the comments submitted were from Central and South Asia, six from Europe, and 24 from the United States and Canada region. The submissions covered the following themes: the use of slurs and derogatory language which violate the Community Standards; the factual accuracy of the post’s claims; whether the post constitutes legitimate political or historical discussion; and the importance of assessing the background situation and context, including the conflict in Nagorno-Karabakh. 8. Oversight Board Analysis 8.1 Compliance with Community Standards The user’s post violated Facebook’s Community Standard on Hate Speech. This Community Standard explicitly prohibits the use of slurs based on ethnicity or national origin. The Board commissioned independent linguistic analysis which supports Facebook’s understanding of this term as a slur. The linguistic report confirms that the post implies a connection between “тазики,” or “wash basin,” and “азики,” a term often used to describe Azerbaijanis in a derogatory manner. There may be instances in which words that are demeaning in one context might be more benign, or even empowering, in another. Facebook’s Community Standard on Hate Speech acknowledges that, in some cases, “words or terms that might otherwise violate [its] standards are used self-referentially or in an empowering way.” The context in which “тазики” was used in this post makes clear, however, that, in linking Azerbaijanis to wash bowls, it was meant to dehumanize its target. 8.2 Compliance with Facebook Values The Board finds that the removal was consistent with Facebook’s values of “Safety” and “Dignity,” which in this case displaced the value of “Voice”. Facebook’s values place a priority on “Voice” as users of the platform must be able to express themselves freely. Facebook’s values also, however, include “Safety” and “Dignity.” Speech that is otherwise protected may be restricted when leaving this content and other posts like it on the platform makes Facebook less safe, and, relatedly, undermines the dignity and equality of people. Facebook’s prohibition on the use of slurs targeting national origin is intended to prevent users from posting content meant to silence, exclude, harass, or degrade other users. Left up, an accumulation of such content may create an environment in which acts of discrimination and violence are more likely. 
In this case, Facebook was permitted to treat the use of a slur as a serious interference with the values of “Safety” and “Dignity.” The conflict between Armenia and Azerbaijan, neighbors in the Southeast Caucasus, is of long standing. Most recently, from September to November 2020, fighting over the disputed territory of Nagorno-Karabakh resulted in the deaths of several thousand people. The content in question was posted to Facebook shortly before a ceasefire went into effect. This context was especially relevant for the Board. While pointed language may be a part of human interactions, particularly in conflict situations, the danger of dehumanizing slurs proliferating in a way that escalates into acts of violence is one that Facebook should take seriously. 8.3 Compliance with Human Rights Standards Facebook has recognized its responsibilities to respect human rights under the UN Guiding Principles on Business and Human Rights and indicated that it looks to authorities like the ICCPR and the Rabat Plan of Action when making content decisions, including in situations of armed conflict. The Board agrees with the UN Special Rapporteur on freedom of expression that although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, para. 41). Clarifying the nature of those questions and adjudicating whether Facebook’s answers fall within the zone of what the UN Guiding Principles require, is the principal task facing this Board. The Board’s starting point is that the scope of the right to freedom of expression is broad. Indeed, Article 19, para. 2, of the ICCPR gives heightened protection to expression on political issues, and discussion of historical claims, including as they relate to religious sites and peoples’ cultural heritage. That protection remains even where those claims may be inaccurate or contested and even when they may cause offense. Article 19, para. 3, of the ICCPR requires limits on freedom of expression to satisfy the three-part test of legality, legitimacy, and necessity and proportionality. A majority of the Board found that Facebook’s removal of this post from the platform met that test. a. Legality To satisfy the requirement of “legality,” any rule setting out a restriction on expression must be clear and accessible. Individuals must have enough information to determine if and how their speech may be limited, so that they can adjust their behavior accordingly. This requirement guards against arbitrary censorship (General Comment No. 34, para. 25). Facebook’s Community Standard on Hate Speech specifies that “slurs” are prohibited, and that these are defined as “words that are inherently offensive and used as insulting labels” in relation to a number of ""protected characteristics,” including ethnicity and national origin. In this case, the Board considered the legality requirement to be satisfied. There may be situations where a slur has multiple meanings or can be deployed in ways that would not be considered an “attack.” In more contested situations, concepts of “inherently offensive” and “insulting” may be considered too subjective and raise concerns for legality (A/74/486, para. 46). The application of the rule in this case does not present that concern. The user’s choice of words fell squarely within the prohibition on dehumanizing speech, which the Board views as clearly stated and easily available to users. 
The use of “т.а.з.и.к.и”, connecting a national identity to an inanimate unclean object, plainly qualifies as an “insulting label.” While the user’s subjective understanding of the rules is not determinative of legality, the Board notes that the user attempted to conceal the slur from Facebook’s automated detection tools by placing punctuation between each letter. This tends to confirm that the user was aware that they were using language that Facebook prohibits. b. Legitimacy Any restriction on freedom of expression should also pursue a “legitimate aim.” These aims are listed in the ICCPR, and include the aim of protecting “the rights of others” (General Comment No. 34, para. 28). Facebook’s prohibition on slurs seeks to protect people’s rights to equality and non-discrimination (Article 2, para. 1, ICCPR), to exercise their freedom of expression on the platform without being harassed or threatened (Article 19 ICCPR), to protect the right to security of person from foreseeable and intentional injury (Article 9, ICCPR, General Comment No. 35, para. 9), and even the right to life (Article 6 ICCPR). c. Necessity and Proportionality Necessity and proportionality require Facebook to show that its restriction on freedom of expression was necessary to address the threat, in this case the threat to the rights of others, and that it was not overly broad (General Comment No. 34, para. 34). The Board notes that international human rights law allows prohibitions on “insults, ridicule or slander of persons or groups or justification of hatred, contempt or discrimination” if such expression “clearly amounts to incitement to hatred or discrimination” on the grounds of race, colour, descent or national or ethnic origin (A/74/486, para. 17; GR35, para. 13). Facebook’s Hate Speech Community Standard prohibits some discriminatory expression, including slurs, absent any requirement that the expression incite violent or discriminatory acts. While such prohibitions would raise concerns if imposed by a Government at a broader level (A/74/486, para. 48), particularly if enforced through criminal or civil sanctions, the Special Rapporteur indicates that entities engaged in content moderation like Facebook can regulate such speech: The scale and complexity of addressing hateful expression presents long-term challenges and may lead companies to restrict such expression even if it is not clearly linked to adverse outcomes (as hateful advocacy is connected to incitement in Article 20(2) of the ICCPR). Companies should articulate the bases for such restrictions, however, and demonstrate the necessity and proportionality of any content actions. (A/HRC/38/35, para. 28) A majority of the Board found the slur used in this case hateful and dehumanizing. While it did not constitute incitement, the potential for adverse outcomes was nevertheless present. Context is key. The Board welcomes Facebook’s explanation that its designation of this term as a slur followed consultations with local experts and civil society organizations aware of its contextual usage. The majority noted that the post, when read as a whole, made clear the user’s choice of slur was not incidental but central to the user’s argument that the target group was inferior. Moreover, the post in question was widely disseminated at the height of an armed conflict between the user’s State and the State whose nationals the post attacked. 
The use of dehumanizing language in this context may have online effects, including creating a discriminatory environment that undermines the freedom of others to express themselves. In situations of armed conflict in particular, the risk of hateful, dehumanizing expressions accumulating and spreading on a platform, leading to offline action impacting the right to security of person and potentially life, is especially pronounced. In this particular case, for a majority of the Board, the presence of these risks and Facebook’s human rights responsibility to avoid contributing to them meant it was permitted to remove the slur. Furthermore, the Board found the removal proportionate. Less severe interventions, such as labels, warning screens, or other measures to reduce dissemination, would not have provided the same protection. Notably, Facebook did not take more severe measures also available to it, such as suspending the user’s account, despite the user seemingly re-posting offending content several times. This illustrates that notwithstanding the removal of this specific piece of content, the user remained free to engage in discussions on the same issues within the boundaries of the Community Standards. A minority found Facebook’s deletion of the post was not proportionate, on the basis that the risks cited by the majority were too remote and not foreseeable. Alternative, less-intrusive enforcement options should therefore have been considered. Examples include affixing a warning or sensitivity screen to the content, reducing its virality, promoting counter-messaging, or other techniques. For this minority view, removal of the whole post because it used the slur led to the removal of speech on a matter of public concern, and the necessity and proportionality of that restriction has not been made out. Another minority view was that the reference to an inanimate object was offensive but not dehumanizing. This view considered that the slur would not contribute to military or other violent action. 9. Oversight Board Decision 9.1 Content Decision The Board upholds Facebook’s decision to remove the user’s post. 9.2 Policy Advisory Statement The Board recommends that Facebook: *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and must be agreed by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. 
Return to Case Decisions and Policy Advisory Opinions" fb-qzf1vy4b,Cartoon About Rape,https://www.oversightboard.com/decision/fb-qzf1vy4b/,"September 12, 2024",2024,,"Safety, Sex and gender equality, Violence",Sexual exploitation of adults,Overturned,Mexico,A user appealed Meta’s decision to leave up a Facebook post which contained a cartoon that depicts an individual drugging another person with the implication of impending rape.,5747,878,"Overturned September 12, 2024 A user appealed Meta’s decision to leave up a Facebook post which contained a cartoon that depicts an individual drugging another person with the implication of impending rape. Summary Topic Safety, Sex and gender equality, Violence Community Standard Sexual exploitation of adults Location Mexico Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to leave up a Facebook post which contained a cartoon that depicts an individual drugging another person with the implication of impending rape. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the post. About the Case In April 2024, a user in Mexico reshared a Facebook post that contained a cartoon about rape. The cartoon depicts two people, who appear to be men, entering a home. The resident of the home apologizes that their home is a mess but when they enter, it is perfectly clean. The other individual states that they only clean when they intend to engage in sexual intercourse. The resident of the home states “me too, my friend” while covering the other individual’s face with a cloth as they struggle. The caption accompanying the post states, “I’m sorry my friend” accompanied by a sad emoji. The user who reported this post explains that jokes about rape are “not funny” and that “men are less likely to report they have been raped and it’s because of these kinds of images.” Meta’s Adult Sexual Exploitation policy explicitly prohibits “content depicting, advocating for or mocking non-consensual sexual touching” including “[c]ontent mocking survivors or the concept of non-consensual sexual touching.” Lack of consent is determined by Meta through context, including verbal expressions, physical gestures, or incapacitation. After the Board brought this case to Meta’s attention, the company determined that the content violated the Adult Sexual Exploitation policy and that its original decision to leave up the content was incorrect. The company then removed the content from Facebook. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). 
The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case illustrates shortcomings in the enforcement of Meta’s Adult Sexual Exploitation policy. Over many years, civil society groups have repeatedly raised concerns about under-enforcement of Meta’s policies as applied to material that jokes about rape or mocks victims and survivors of sexual violence. The Board has previously addressed the difficulties inherent in accurately moderating jokes and attempts at humor. In the Two Buttons Meme decision, the Board stressed the importance of carefully evaluating the content and context of apparent jokes when Meta assesses posts. While the Two Buttons Meme decision dealt with satirical content that was wrongly removed, this case illustrates the mistakes made when posts expressed as jokes are not taken sufficiently seriously and are wrongly left up on the platform. As Meta now agrees, this post, which relies on a homophobic premise and mocks violent sexual assault, clearly violates the Adult Sexual Exploitation policy. Given the high likelihood of mistakes in these types of cases, the Board has recommended that Meta ensure that its processes appropriately include sufficient opportunities for “investigation or escalation where a content moderator is not sure if a meme is satirical or not” (Two Buttons Meme decision, recommendation no. 3). Meta reported implementation of this recommendation but has not published information to demonstrate it. The Board has issued recommendations aimed at reducing the number of enforcement errors made by Meta. The Board urged Meta to “implement an internal audit procedure to continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes,” (Breast Cancer Symptoms and Nudity decision, recommendation no. 5). Meta reframed the recommendation in their response and implementation and did not address the goal of the Board’s recommendation. The Board has also repeatedly stressed the importance of Meta devoting extra resources to improve its ability to accurately assess potentially harmful content that either criticizes or legitimizes systemic problems including gendered and sexual violence in cases such as those addressed in the India Sexual Harassment and Image of Gender-Based Violence decisions. Decision The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-r9k87402,Protest in India against France,https://www.oversightboard.com/decision/fb-r9k87402/,"February 12, 2021",2021,,"Religion, Violence",Violence and incitement,Overturned,"France, India",The Oversight Board has overturned Facebook's decision to remove a post under its Community Standard on violence and incitement.,22856,3526,"Overturned February 12, 2021 The Oversight Board has overturned Facebook's decision to remove a post under its Community Standard on violence and incitement. 
Standard Topic Religion, Violence Community Standard Violence and incitement Location France, India Platform Facebook The Oversight Board has overturned Facebook’s decision to remove a post under its Violence and Incitement Community Standard. While the company considered that the post contained a veiled threat, a majority of the Board believed it should be restored. This decision should only be implemented pending user notification and consent. About the case In late October 2020, a Facebook user posted in a public group described as a forum for Indian Muslims. The post contained a meme featuring an image from the Turkish television show “Diriliş: Ertuğrul” depicting one of the show’s characters in leather armor holding a sheathed sword. The meme had a text overlay in Hindi. Facebook’s translation of the text into English reads: “if the tongue of the kafir starts against the Prophet, then the sword should be taken out of the sheath.” The post also included hashtags referring to President Emmanuel Macron of France as the devil and calling for the boycott of French products. In its referral, Facebook noted that this content highlighted the tension between what it considered religious speech and a possible threat of violence, even if not made explicit. Key findings Facebook removed the post under its Violence and Incitement Community Standard, which states that users should not post coded statements where “the threat is veiled or implicit.” Facebook identified “the sword should be taken out of the sheath” as a veiled threat against “kafirs,” a term which the company interpreted as having a retaliatory tone against non-Muslims. Considering the circumstances of the case, the majority of the Board did not believe that this post was likely to cause harm. They questioned Facebook’s rationale, which indicated that threats of violence against Muslims increased Facebook’s sensitivity to such threats, but also increased sensitivity when moderating content from this group. While a minority viewed the post as threatening some form of violent response to blasphemy, the majority considered the references to President Macron and the boycott of French products as calls to action that are not necessarily violent. Although the television show character holds a sword, the majority interpreted the post as criticizing Macron’s response to religiously motivated violence, rather than threatening violence itself. The Board notes that its decision to restore this post does not imply endorsement of its content. Under international human rights standards, people have the right to seek, receive and impart ideas and opinions of all kinds, including those that may be controversial or deeply offensive. As such, a majority considered that just as people have the right to criticize religions or religious figures, religious people also have the right to express offense at such expression. Restrictions on expression must be easily understood and accessible. In this case, the Board noted that Facebook’s process and criteria for determining veiled threats is not explained to users in the Community Standards. In conclusion, a majority found that, for this specific post, Facebook did not accurately assess all contextual information and that international human rights standards on expression justify the Board’s decision to restore the content. 
The Oversight Board’s decision The Board overturns Facebook’s decision to take down the content, requiring the post to be restored. As a policy advisory statement, the Board recommends that: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision Summary The Oversight Board has overturned Facebook’s decision to remove content it considered a veiled threat under its Violence and Incitement Community Standard. A majority of the Board found that restoring the content would comply with Facebook’s Community Standards, its values, and international human rights standards. 2. Case Description In late October 2020, a Facebook user posted in a public group that describes itself as a forum for providing information for Indian Muslims. The post contained a meme featuring an image from the Turkish television show “Diriliş: Ertuğrul” depicting a character from the show in leather armor holding a sheathed sword. The meme had a text overlay in Hindi. Facebook’s translation of the text into English reads: “if the tongue of the kafir starts against the Prophet, then the sword should be taken out of the sheath.” The accompanying text in the post, also in English, stated that the Prophet is the user’s identity, dignity, honor and life, and contained the acronym “PBUH” (peace be upon him). This was followed by hashtags referring to President Emmanuel Macron of France as the devil and calling for the boycott of French products. The post was viewed about 30,000 times, received less than 1,000 comments and was shared fewer than 1,000 times. In early November 2020, Facebook removed the post for violating its policy on Violence and Incitement. Facebook interpreted “kafir” as a pejorative term referring to nonbelievers in this context. Analyzing the photo and text, Facebook concluded that the post was a veiled threat of violence against “kafirs” and removed it. Two Facebook users had previously reported the post; one for Hate Speech and the other for Violence and Incitement, and Facebook did not remove the content. Facebook then received information from a third-party partner that this content had the potential to contribute to violence. Facebook confirmed that this third-party partner is a member of its trusted partner network and is not linked to any state. Facebook described this network as a way for the company to obtain additional local context. According to Facebook, the network consists of non-governmental organizations, humanitarian organizations, non-profit organizations, and other international organizations. After the post was flagged by the third-party partner, Facebook sought additional contextual information from its local public policy team, which agreed with the third-party partner that the post was potentially threatening. Facebook referred the case to the Oversight Board on November 19, 2020. In its referral, Facebook stated that it considered its decision to be challenging because the content highlighted tensions between what it considered religious speech and a possible threat of violence, even if not made explicit. 3. Authority and Scope The Oversight Board has the authority to review Facebook’s decision under the Board’s Charter Article 2.1 and may uphold or overturn that decision under Article 3.5. 
This post is within the Oversight Board’s scope of review: it does not fit within any excluded category of content set forth in Article 2, Section 1.2.1 of the Board’s Bylaws and it does not conflict with Facebook’s legal obligations under Article 2, Section 1.2.2 of the Bylaws. 4. Relevant Standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards The Community Standard on Violence and Incitement states that Facebook “aim[s] to prevent potential offline harm that may be related to content on Facebook” and that Facebook restricts expression “when [it] believe[s] there is a genuine risk of physical harm or direct threats to public safety.” Specifically, the standard indicates users should not post coded statements “where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit.” Facebook also notes that it requires additional context to enforce this section of the standard. II. Facebook’s Values The Facebook values relevant to this case are outlined in the introduction to the Community Standards. The first is ""Voice,” which is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits ""Voice” in service of four other values : “Authenticity,” “Safety,” “Privacy” and “Dignity.” The Board considers that the value of “Safety” is relevant to this decision: Safety: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. III. Relevant Human Rights Standards Considered by the Board The UN Guiding Principles on Business and Human Rights ( UNGPs ), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. Drawing upon the UNGPs, the following international human rights standards were considered in this case: 5. User Statement Facebook notified the user it had referred the case to the Oversight Board, and gave the user the opportunity to share further context about the post with the Board. The user was given a 15-day window to submit their statement from the time of the referral. The Board received no statement from the user. 6. Explanation of Facebook’s Decision Facebook first evaluated the post for a possible Hate Speech violation and did not remove the content. Facebook did not indicate that the term ""kafir"" appears on a list of banned slurs or that the post otherwise violated the Hate Speech policy. Facebook then removed this content based on its Violence and Incitement Community Standard. Under that standard, Facebook prohibits content that creates a “genuine risk of physical harm or direct threats to public safety,” including coded statements “where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit.” Facebook explained that, in its view, veiled threats “can be as dangerous to users as more explicit threats of violence.” According to Facebook, veiled threats are removed when certain non-public criteria are met. Based on these criteria, Facebook determined “the sword should be taken out of the sheath” was a veiled threat against “kafirs” generally. 
In this case, Facebook interpreted the term “kafir” as pejorative with a retaliatory tone against non-Muslims; the reference to the sword as a threatening call to action; and also found it to be an “implied reference to historical violence.” Facebook stated that it was crucial to consider the context in which the content was posted. According to Facebook, the content was posted at a time of religious tensions in India related to the Charlie Hebdo trials in France and elections in the Indian state of Bihar. Facebook noted rising violence against Muslims, such as the attack in Christchurch, New Zealand, against a mosque. It also noted the possibility of retaliatory violence by Muslims as leading to increased sensitivity in addressing potential threats both against and by Muslims. Facebook further stated that its Violence and Incitement policy aligns with international human rights standards. According to Facebook, its policy is “narrowly framed to uphold the rights of others and to preserve the ‘necessity and proportionality’ elements required for permissible restriction of freedom of expression.” 7. Third party submissions The Board received six public comments related to this case. The regional breakdown of the comments was: one from Asia Pacific and Oceania, one from Latin America and Caribbean and four from the United States and Canada. The submissions covered various themes, including: the importance of knowing the identity and influence of the user, including where it was posted and in what group; the importance of recognizing who the target is; whether the post targeted public figures or private individuals; whether the user intended to encourage the harmful stereotype of Indian Muslims as violent; whether the content met the standard of veiled threat under Facebook’s Community Standards; whether the Violence and Incitement policy was applicable in this case; whether the post could be deemed as violent speech under Facebook’s Hate Speech policy; as well as feedback for improving the Board’s public comment process. To read public comments submitted for this case, please click here . 8. Oversight Board Analysis 8.1 Compliance with Community Standards A majority of the Board found that restoring this content would comply with Facebook’s Community Standards. Facebook indicated that the content was a veiled threat, prohibited by the Violence and Incitement Community Standard. The standard states users should not post coded statements “where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit.” Facebook stated in its rationale to the Board that it focuses on “imminent physical harm” in interpreting this provision of the standard. Board Members unanimously considered it important to address veiled threats of violence, and expressed concern around users employing veiled threats to evade detection of Community Standards violation. Members also acknowledged the challenges Facebook faces in removing such threats at scale, given that they require contextual analysis. Board Members differed in their views on how clearly the target was defined, the tone of the post, and the risk of physical harm or violence posed by this content globally and in India. A majority of the Board considered that the use of the hashtag to call for a boycott of French products was a call to non-violent protest and part of discourse on current political events. 
The use of a meme from a popular television show within this context, while referring to violence, was not considered by a majority as a call to physical harm. In relation to Facebook’s explanation, the Board noted that Facebook justified its decision by referring to ongoing tensions in India. However, the examples cited were not related to this context. For example, the protests that occurred in India in reaction to President Macron’s statement following the killings in France in response to cartoon depictions of the Prophet Muhammad were not reported to be violent. Facebook also cited the November 7, 2020 elections in the Indian state of Bihar, yet the Board’s research indicates that these elections were not marked by violence against persons based on their religion. The Board unanimously found that analysis of context is essential to understand veiled threats, yet a majority did not find Facebook’s contextual rationale in relation to possible violence in India in this particular case compelling. A minority found that Facebook’s internal process, which relied upon a third party partner assessment, was commendable, and would defer to Facebook’s determination that the post presented an unacceptable risk of promoting violence. This view acknowledged that Facebook consulted regional and linguistic experts, and shared the assessment that the term “kafir” was pejorative. The minority did not consider the Board had strong basis to overturn. That said, a majority found that the Board’s independent analysis supported restoring the post under the Violence and Incitement Community Standard. 8.2 Compliance with Facebook Values A majority of the Board found that restoring the content would comply with the company’s values. Although Facebook’s value of “Safety” is important, particularly given heightened religious tensions in India, this content did not pose a risk to “Safety” that justified displacing “Voice.” The Board also recognized the challenges Facebook faces in balancing these values when dealing with veiled threats. A minority considered these circumstances justified displacing “Voice” to err on the side of “Safety.” 8.3 Compliance with Human Rights Standards A majority of the Board found that restoring this content would be consistent with international human rights standards. According to Article 19 of the ICCPR individuals have the right to seek, receive and impart ideas and opinions of all kinds, including those that may be controversial or deeply offensive (General Comment No. 34, para. 11). The right to freedom of expression includes the dissemination of ideas that may be considered blasphemous, as well as opposition to such speech. In this regard, freedom of expression includes freedom to criticize religions, religious doctrines, and religious figures (General Comment No. 34, para. 48). Political expression is particularly important and receives heightened protection under international human rights law (General Comment No. 34, at para. 34 and 38) and includes calls for boycotts and criticism of public figures. At the same time, the Board recognizes that the right to freedom of expression is not absolute and can exceptionally be subject to limitations under international human rights law. In this case, after discussing the factors in the Rabat Plan of Action, the Board did not consider the post to be advocacy of religious hatred reaching the threshold of incitement to discrimination, hostility or violence, which states are required to prohibit under ICCPR Article 20, para. 2. 
ICCPR Article 19, para. 3 requires restrictions on expression to be easily understood and accessible (legality requirement), to have the purpose of advancing one of several listed objectives (legitimate aim requirement), and to be necessary and narrowly tailored to the specific objective (necessity and proportionality requirement). The Board discussed Facebook’s removal decision against these criteria. I. Legality On legality, the Board noted that Facebook’s process and criteria for determining veiled threats is not explained to users in the Community Standards, making it unclear what “additional context” is required to enforce the policy. II. Legitimate aim The Board further considered that the restriction on expression in this case would serve a legitimate aim: the protection of the rights of others (the rights to life and integrity of those targeted by the post). III. Necessity and proportionality A majority of the Board considered that the removal of the post was not necessary, emphasizing the importance of assessing the post in its particular context. They considered that just as people have the right to criticize religion and religious figures, adherents of religions also have the right to express their offense at such expression. The Board recognized the serious nature of discrimination and violence against Muslims in India. The majority also considered the references to President Macron and the boycott of French products as non-violent calls to action. In this respect, although the post referenced a sword, the majority interpreted the post to criticize Macron’s response to religiously motivated violence, rather than credibly threaten violence. The Board considered a number of factors in determining that harm was improbable. The broad nature of the target (“kafirs”) and the lack of clarity around potential physical harm or violence, which did not appear to be imminent, contributed to the majority’s conclusion. The user not appearing to be a state actor or a public figure or otherwise having particular influence over the conduct of others, was also significant. In addition there was no veiled reference to a particular time or location of any threatened or incited action. The Board’s research indicated that protests in India following Macron’s statements were not reportedly violent. In this respect, some Board Members noted the Facebook group was targeted towards individuals in India and partly in Hindi, which suggests the scope of impact may have been more limited to an area that did not see violent reactions. Additionally, some Board Members considered that the examples cited by Facebook largely related to violence against the Muslim minority in India, which Board Members considered to be a pressing concern, and not retaliatory violence by Muslims. Therefore, the majority concluded that as well as not being imminent, these factors meant physical harm was unlikely to result from this post. A minority interpreted the post as threatening or legitimizing some form of violent response to blasphemy. Although the “sword” is a reference to nonspecific violence, the minority considered that the Charlie Hebdo killings and recent beheadings in France related to blasphemy mean this threat cannot be dismissed as unrealistic. The hashtags referencing events in France support this interpretation. 
In this case, the minority expressed that Facebook should not wait for violence to be imminent before removing content that threatens or intimidates those exercising their right to freedom of expression, and would have upheld Facebook’s decision. The majority, however, found that Facebook did not accurately assess all contextual information. The Board emphasized that restoring the content does not imply agreement with this content, and noted the complexities in assessing veiled or coded threats. Nonetheless, for this specific piece of content, international human rights standards on expression justify the Board’s decision to restore the content. 9. Oversight Board Decision 9.1 Content Decision The Oversight Board overturns Facebook’s decision to take down the content, requiring the post to be restored. 9.2 Policy Advisory Statement This decision should only be implemented pending user notification and consent. To ensure users have clarity regarding permissible content, the Board recommends that Facebook provide users with additional information regarding the scope and enforcement of this Community Standard. Enforcement criteria should be public and align with Facebook’s Internal Implementation Standards. Specifically, Facebook’s criteria should address intent, the identity of the user and audience, and context. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and must be agreed by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" fb-rzl57qhj,“Two buttons” meme,https://www.oversightboard.com/decision/fb-rzl57qhj/,"May 20, 2021",2021,,"Freedom of expression, Humor, Politics",Cruel and insensitive,Overturned,United States,The Oversight Board has overturned Facebook's decision to remove a comment under its Hate Speech Community Standard.,36359,5653,"Overturned May 20, 2021 The Oversight Board has overturned Facebook's decision to remove a comment under its Hate Speech Community Standard. Standard Topic Freedom of expression, Humor, Politics Community Standard Cruel and insensitive Location United States Platform Facebook The Oversight Board has overturned Facebook’s decision to remove a comment under its Hate Speech Community Standard. A majority of the Board found it fell into Facebook’s exception for content condemning or raising awareness of hatred. 
About the case On December 24, 2020, a Facebook user in the United States posted a comment with an adaptation of the ‘daily struggle’ or ‘two buttons’ meme. This featured the split-screen cartoon from the original ‘two buttons’ meme, but with a Turkish flag substituted for the cartoon character’s face. The cartoon character has its right hand on its head and appears to be sweating. Above the character, in the other half of the split-screen, are two red buttons with corresponding statements in English: “The Armenian Genocide is a lie” and “The Armenians were terrorists that deserved it.” While one content moderator found that the meme violated Facebook’s Hate Speech Community Standard, another found it violated its Cruel and Insensitive Community Standard. Facebook removed the comment under the Cruel and Insensitive Community Standard and informed the user of this. After the user’s appeal, however, Facebook found that the content should have been removed under its Hate Speech Community Standard. The company did not tell the user that it upheld its decision under a different Community Standard. Key findings Facebook stated that it removed the comment as the phrase “The Armenians were terrorists that deserved it,” contained claims that Armenians were criminals based on their nationality and ethnicity. According to Facebook, this violated its Hate Speech Community Standard. Facebook also stated that the meme was not covered by an exception which allows users to share hateful content to condemn it or raise awareness. The company claimed that the cartoon character could be reasonably viewed as either condemning or embracing the two statements featured in the meme. The majority of the Board, however, believed that the content was covered by this exception. The ‘two buttons’ meme contrasts two different options not to show support for them, but to highlight potential contradictions. As such, they found that the user shared the meme to raise awareness of and condemn the Turkish government’s efforts to deny the Armenian genocide while, at the same time, justifying these same historic atrocities. The majority noted a public comment which suggested that the meme, “does not mock victims of genocide, but mocks the denialism common in contemporary Turkey, that simultaneously says the genocide did not happen and that victims deserved it.” The majority also believed that the content could be covered by Facebook’s satire exception, which is not included in the Community Standards. The minority of the Board, however, found that it was not sufficiently clear that the user shared the content to criticize the Turkish government. As the content included a harmful generalization about Armenians, the minority of the Board found that it violated the Hate Speech Community Standard. In this case, the Board noted that Facebook told the user that they violated the Cruel and Insensitive Community Standard when the company based its enforcement on the Hate Speech Community Standard. The Board was also concerned about whether Facebook’s moderators had the necessary time and resources to review content containing satire. The Oversight Board’s decision The Oversight Board overturns Facebook’s decision to remove the content and requires that the comment be restored. In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. 
Decision summary The Oversight Board has overturned Facebook’s decision to remove content under its Hate Speech Community Standard. A majority of the Board found that the cartoon, in the form of a satirical meme, fell into the Hate Speech Community Standard’s exception for content that condemns hatred or raises awareness of it. 2. Case description On December 24, 2020, a Facebook user in the United States posted a comment with an adaptation of the “daily struggle” or “two buttons” meme. A meme is a piece of media, which is often humorous, that spreads quickly across the internet. This featured the same split-screen cartoon from the original meme, but with the Turkish flag substituted for the cartoon character’s face. The cartoon character has its right hand on its head and appears to be sweating. Above the character, in the other half of the split-screen, there are two red buttons with corresponding statements in English: “The Armenian Genocide is a lie” and “The Armenians were terrorists that deserved it.” The meme was preceded by a ""thinking face"" emoji. The comment was shared on a public Facebook page that describes itself as a forum for discussing religious matters from a secular perspective. It responded to a post containing an image of a person wearing a niqab with overlay text in English: “Not all prisoners are behind bars.” At the time the comment was removed, the original post it responded to had 260 views, 423 reactions and 149 comments. A Facebook user in Sri Lanka reported the comment for violating the Hate Speech Community Standard. Facebook removed the meme on December 24, 2020. Within a short period of time, two content moderators reviewed the comment against the company’s policies and reached different conclusions. While the first concluded that the meme violated Facebook’s Hate Speech policy, the second determined that the meme violated the Cruel and Insensitive policy. The content was removed and logged in Facebook’s systems based on the second review. On this basis, Facebook notified the user that their comment “goes against our Community Standard on cruel insensitive content.” After the user’s appeal, Facebook upheld its decision but found that the content should have been removed under its Hate Speech policy. For Facebook, the statement “The Armenians were terrorists that deserved it” specifically violated the prohibition on content claiming that all members of a protected characteristic are criminals, including terrorists. No other parts of the content, such as the claim that the Armenian genocide was a lie, were deemed to be violating. Facebook did not inform the user that it upheld the decision to remove their content under a different Community Standard. The user submitted their appeal to the Oversight Board on December 24, 2020. Lastly, in this decision, the Board referred to the atrocities committed against the Armenian people from 1915 onwards as genocide, as this term is commonly used to describe the massacres and mass deportations suffered by Armenians and it is also referred to in the content under review. The Board does not have the authority to legally qualify such atrocities and this qualification is not the subject of this decision. 3. Authority and scope The Board has authority to review Facebook’s decision following an appeal from the user whose post was removed (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). The Board may uphold or reverse that decision (Charter Article 3, Section 5). 
The Board’s decisions are binding and may include policy advisory statements with recommendations. These recommendations are non-binding, but Facebook must respond to them (Charter Article 3, Section 4). The Board is an independent grievance mechanism to address disputes in a transparent and principled manner. 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards Facebook's Community Standards define hate speech as “a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” Under “Tier 1,” prohibited content (“do not post”) includes content targeting a person or group of people on the basis of a protected characteristic with: However, Facebook allows “content that includes someone else’s hate speech to condemn it or raise awareness.” According to the Hate Speech Community Standard’s policy rationale, “speech that might otherwise violate our standards can be used self-referentially or in an empowering way. Our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent. If intention is unclear, we may remove content.” Additionally, the Board noted Facebook’s Cruel and Insensitive Community Standard, which forbids content that targets “victims of serious physical or emotional harm,” including “attempts to mock victims […] many of which take the form of memes and GIFs.” This policy prohibits content (“do not post”) that “contains sadistic remarks and any visual or written depiction of real people experiencing premature death.” II. Facebook’s values Facebook’s values are outlined in the introduction to the Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits “Voice” in service of four values, and two are relevant here: “Safety”: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. “Dignity”: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. III. Human rights standards The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Facebook announced its Corporate Human Rights Policy, where it committed to respecting rights in accordance with the UNGPs. The Board's analysis in this case was informed by the following human rights standards: 5. 
User statement The user stated in their appeal to the Board that “historical events should not be censored.” They noted that their comment was not meant to offend but to point out “the irony of a particular historical event.” The user noted that “perhaps Facebook misinterpreted this as an attack.” The user further stated that even if the content invokes “religion and war,” it is not a “hot button issue.” The user found Facebook and its policies overly restrictive and argued that “[h]umor like many things is subjective and something offensive to one person may be funny to another.” 6. Explanation of Facebook’s decision Facebook explained that it removed the comment as a Tier 1 attack under the Hate Speech Community Standard, specifically for violating its policy prohibiting content alleging that all members of a protected characteristic are criminals, including terrorists. According to Facebook, while the first statement in the meme “The Armenian Genocide is a lie” is a negative generalization, it did not directly attack Armenians and thus did not violate the company’s Community Standards. Facebook found that the second statement “The Armenians were terrorists that deserved it” directly attacked Armenians by alleging that they are criminals based on their ethnicity and nationality. This violated the company’s Hate Speech policy. In its decision rationale, Facebook assessed whether the exception for content that shares hate speech to condemn it or raise awareness of it should apply in this case. Facebook argued that the meme did not fall into this exception, as the user was not clear they intended to condemn hate speech. Specifically, Facebook explained to the Board that the sweating cartoon character in the meme could be reasonably viewed as either condemning or embracing the statements. Facebook also explained that its Hate Speech policy previously included an exception for humor. The company clarified that it removed this exception in response to a Civil Rights Audit report (July 2020) and as part of its policy development. In its response to the Board, Facebook claimed that “creating a definition for what is perceived to be funny was not operational for Facebook’s at-scale enforcement.” However, in the Civil Rights Audit report, the company disclosed it maintained a narrower exception for satire which Facebook defines as content that “includes the use of irony, exaggeration, mockery and/or absurdity with the intent to expose or critique people, behaviors, or opinions, particularly in the context of political, religious, or social issues. Its purpose is to draw attention to and voice criticism about wider societal issues.” This exception is not included in its Community Standards. It appears to be separate from the exception for content that includes hate speech to condemn it or raise awareness of it. Facebook also clarified that the content did not violate the Cruel and Insensitive policy, which prohibits “explicit attempts to mock victims,” including through memes, because it did not depict or name a real victim. Facebook also stated that its removal of the content was consistent with its values of “Dignity” and “Safety,” when balanced against the value of “Voice.” According to Facebook, content that calls the Armenian people terrorists “is an affront to their dignity, can be experienced as demeaning or dehumanizing, and can even create risks of offline persecution and violence.” Facebook argued that its decision was consistent with international human rights standards. 
Facebook stated that (a) its policy was “easily accessible” in the Community Standards, (b) the decision to remove the content was legitimate to protect “the rights of others from harm and discrimination,” and (c) its decision to remove the content was “necessary and proportionate to limit harm against Armenians.” To ensure that limits on expression were proportionate, Facebook argued that its Hate Speech policy applied to “a narrow set of generalizations.” 7. Third-party submissions The Oversight Board received 23 public comments related to this case. Four of the comments were from Europe, one from Middle East and North Africa and 18 from United States and Canada. The Board received comments from parties directly connected to issues of interest for this case. These included a descendant of victims of the Armenian genocide, organizations that study the nature, causes and consequences of genocide, as well as a former content moderator. The submissions covered themes including: the meaning and use of the “daily struggle” or “two buttons” meme as adapted by the user in this case, whether the content was intended as a political critique of the Turkish government and its denial of the Armenian genocide, whether the content was mocking the victims of the Armenian genocide, and how Facebook’s Hate Speech and Cruel and Insensitive Community Standards relate to this case. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board looked at the question of whether this content should be restored through three lenses: Facebook’s Community Standards; the company’s values; and its human rights responsibilities. 8.1 Compliance with Community Standards The Board analyzed each of the two statements against Facebook’s Community Standards, before examining the effect of juxtaposing these statements in this version of the “daily struggle” or “two buttons” meme. 8.1.1. Analysis of the statement “The Armenian Genocide is a lie” The Board noted that Facebook did not find this statement to violate its Hate Speech Community Standard. Facebook enforces its Hate Speech Community Standard by identifying (i) a “direct attack,” and (ii) a “protected characteristic” the direct attack was based upon. The policy rationale lists “dehumanizing speech” as an example of an attack. Ethnicity and national origin are included among the list of protected characteristics. Under the “do not post” section of its Hate Speech policy, Facebook prohibits speech “[m]ocking the concept, events or victims of hate crimes even if no real person is depicted in an image.” A majority of the Board noted, however, that the user’s intent was not to mock the victims of the events referred to in the statement, but to use the meme, in the form of satire, to criticize the statement itself. For the minority, the user’s intent was not sufficiently clear. The user could be sharing the content to embrace the statement rather than to refute it. In this case, Facebook notified the user that their content violated the Cruel and Insensitive Community Standard. Under this policy, Facebook prohibits “attempts to mock victims [of serious physical or emotional harm],” including content that “contains sadistic remarks and any visual or written depiction of real people experiencing premature death.” The Board noted but did not consider Facebook’s explanation that this policy is not applicable to this case because the meme does not depict or name the victims of the events referred to in the statement. 
Under the “do not post” section of its Hate Speech policy, Facebook also prohibits speech “[d]enying or distorting information about the Holocaust.” The Board noted the company’s explanation that this policy does not apply to the Armenian genocide or other genocides, and that this policy was based on the company’s “consultation with external experts, the well-documented rise in anti-Semitism globally, and the alarming level of ignorance about the Holocaust.” 8.1.2. Analysis of the statement “The Armenians were terrorists that deserved it” The Board noted that Facebook found this statement to violate its Hate Speech Community Standard. The “do not post” section of this Hate Speech Community Standard prohibits “[d]ehumanizing speech or imagery in the form of comparisons, generalizations, or unqualified behavioral statements (in written or visual form).” The policy includes speech that portrays the targeted group as “criminals.” The Board believed the term “terrorists” fell into this category. 8.1.3 Analysis of the combined statements in the meme The Board is of the view that one should evaluate the content as a whole, including the effect of juxtaposing these statements in a well-known meme. A common purpose of the “daily struggle” or “two buttons” meme is to contrast two different options to highlight potential contradictions or other connotations, rather than to indicate support for the options presented. For the majority, the exception to the Hate Speech policy is crucial. This exception allows people to “share content that includes someone else’s hate speech to condemn it or raise awareness.” It also states: “our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent. If intention is unclear, we may remove content.” The majority noted that the content could also fall under the company’s satire exception, which is not publicly available. Assessing the content as a whole, the majority found that the user’s intent was clear. They shared the meme as satire to raise awareness about and condemn the Turkish government’s efforts to deny the Armenian genocide while, at the same time, justifying the same historic atrocities. The user’s intent was not to mock the victims of these events, nor to claim those victims were criminals or that the atrocity was justified. The majority took into account the Turkish government’s position on genocide suffered by Armenians from 1915 onwards (Republic of Turkey, Ministry of Foreign Affairs) as well as the history between Turkey and Armenia. In this context, they found that the cartoon character’s sweating face being replaced with a Turkish flag, together with the content’s direct link to the Armenian genocide, meant that the user shared the meme to criticize the Turkish government’s position on this issue. The use of the “thinking face” emoji, which is commonly used sarcastically, alongside the meme supports this conclusion. The majority noted public comment “PC-10007” (made available under section 7 above), which suggested that “this meme, as described, does not mock victims of genocide, but mocks the denialism common in contemporary Turkey, that simultaneously says the genocide did not happen and that victims deserved it.” It would thus be wrong to remove this comment in the name of protecting Armenians, when the post is a criticism of the Turkish government, in support of Armenians. 
As such, the majority found that, taken as a whole, the content fell within the policy exception in Facebook’s Hate Speech Community Standard. For the minority, in the absence of specific context, the user’s intent was not sufficiently clear to conclude that the content was shared as satire criticizing the Turkish government. Additionally, the minority found that the user was not able to properly articulate what the alleged humor intended to express. Given the content includes a harmful generalization against Armenians, the minority found that it violated the Hate Speech Community Standard. 8.2 Compliance with Facebook’s values A majority of the Board believed that restoring this content is consistent with Facebook’s values. The Board recognized the Armenian community’s sensitivity to statements concerning the mass-scale atrocities suffered by Armenians from 1915 onwards, as well as the community’s long struggle to seek recognition of the genocide and justice for these atrocities. However, the majority does not find any evidence that the meme in this case posed a risk to “Dignity” and “Safety” that would justify displacing “Voice.” The majority also noted Facebook’s broad reference to “Safety,” without explaining how this value was applied in this case. The minority found that while satire should be protected, as the majority rightly stated, the statements in the comment damage the self-respect of people whose ancestors suffered genocide. The minority also found the statements to be disrespectful of the honor of those who were massacred and harmful, as it could increase the risk of discrimination and violence against Armenians. This justified displacing “Voice” to protect “Safety” and “Dignity.” 8.3 Compliance with Facebook’s human rights responsibilities Freedom of expression (Article 19 ICCPR) Article 19, para. 2 of the ICCPR provides broad protection for expression of “all kinds,” including written and non-verbal “political discourse,” as well as “cultural and artistic expression.” The UN Human Rights Committee has made clear the protection of Article 19 extends to expression that may be considered “deeply offensive” (General Comment No. 34, paras. 11, 12). In this case, the Board found that the cartoon, in the form of a satirical meme, took a position on a political issue: the Turkish government’s stance on the Armenian genocide. The Board noted that “cartoons that clarify political positions” and “memes that mock public figures” may be considered forms of artistic expression protected under international human rights law (UN Special Rapporteur on freedom of expression, report A/HRC/44/49/Add.2, at para. 5). The Board further emphasized that the value placed by the ICCPR upon uninhibited expression concerning public figures in the political domain and public institutions “is particularly high” (General Comment No. 34, para. 38). The Board also noted that laws establishing general prohibitions of expressions with incorrect opinions or interpretations of historical facts, often justified through references to hate speech, are incompatible with Article 19 of the ICCPR, unless they amount to incitement of hostility, discrimination or violence under Article 20 of the ICCPR (General Comment 34, para. 29; UN Special Rapporteur on freedom of expression, report A/74/486, at para. 22). While the right to freedom of expression is fundamental, it is not absolute. 
It may be restricted, but restrictions should meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). Facebook should seek to align its content moderation policies on hate speech with these principles (UN Special Rapporteur on freedom of expression, report A/74/486, at para. 58(b)). I. Legality Any rules restricting expression must be clear, precise, and publicly accessible (General Comment 34, para. 25). Individuals must have enough information to determine if and how their speech may be limited, so that they can adjust their behavior accordingly. Facebook’s Community Standards “permit content that includes someone else’s hate speech to condemn it or raise awareness,” but ask users to “clearly indicate their intent.” In addition, the Board noted that Facebook removed an exception for humor from its Hate Speech policy following a Civil Rights Audit concluded in July 2020. While this exception was removed, the company kept a narrower exception for satire that is currently not communicated to users in its Hate Speech Community Standard. The Board also noted that Facebook wrongfully reported to the user that they violated the Cruel and Insensitive Community Standard, when Facebook based its enforcement on the Hate Speech policy. The Board found that it is not clear enough to users that the Cruel and Insensitive Community Standard only applies to content that depicts or names victims of harm. Additionally, the Board found that properly notifying users of the reasons for enforcement action against them would help users follow Facebook’s rules. This relates to the legality issue, as the lack of relevant information for users subject to content removal “creates an environment of secretive norms, inconsistent with the standards of clarity, specificity and predictability” which may interfere with “the individual’s ability to challenge content actions or follow up on content-related complaints.” (UN Special Rapporteur on freedom of expression, report A/HRC/38/35, at para. 58). Facebook’s approach to user notice in this case therefore failed the legality test. II. Legitimate aim Any restriction on freedom of expression should also pursue a “legitimate aim.” The Board agreed the restriction pursued the legitimate aim of protecting the rights of others (General Comment No. 34, para. 28). These include the rights to equality and non-discrimination, including based on ethnicity and national origin (Article 2, para. 1, ICCPR; Articles 1 and 2, ICERD). The Board also reaffirmed its finding in case decision 2021-002-FB-UA that “it is not a legitimate aim to restrict expression for the sole purpose of protecting individuals from offense (UN Special Rapporteur on freedom of expression, report A/74/486, para. 24), as the value international human rights law placed on uninhibited expression is high (General Comment No. 34, para. 38).” III. Necessity and proportionality Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). The Board assessed whether the content removal was necessary to protect the rights of Armenians to equality and non-discrimination. 
The Board noted that freedom of expression currently faces substantial restrictions in Turkey, with disproportionate effects on ethnic minorities living in the country, including Armenians. In a report on his mission to Turkey in 2016, the UN Special Rapporteur on freedom of expression found censorship to be operating in “all the places that are fundamental to democratic life: the media, educational institutions, the judiciary and the bar, government bureaucracy, political space and the vast online expanses of the digital age” (UN Special Rapporteur on freedom of expression, report A/HRC/35/22/Add.3, at para. 7). In the follow-up report published in 2019, the UN Special Rapporteur mentioned that the situation had not improved (UN Special Rapporteur on freedom of expression, report A/HRC/41/35/Add.2, at para. 26). Turkish authorities have specifically targeted expression denouncing the atrocities committed by the Turkish Ottoman Empire against Armenians from 1915 onwards. In a joint allegation letter, a number of UN special procedures mentioned that Article 301 of the Turkish Criminal Code appears to constitute “a deliberate effort to obstruct access to the truth about what appears to be policy of violence directed against the Turkish Armenian community” and “the right of victims to justice and reparation.” The Board also noted the assassination, in 2007, of Hrant Dink, a journalist of Armenian origin who published a number of articles on the identity of Turkish citizens of Armenian origin. In one of these articles, Dink discussed the lack of recognition of the genocide and how this affects the identity of Armenians. Turkish courts had previously found Dink guilty of demeaning the “Turkish identity” through his writing. In 2010, the European Court of Human Rights concluded that the verdict against Dink and the failure of Turkish authorities to take the appropriate measures to protect his life amounted to a violation of his freedom of expression (see European Court of Human Rights, Dink v Turkey, para. 139). A majority of the Board concluded that Facebook’s interference with the user’s freedom of expression was mistaken. The removal of the comment would not protect the rights of Armenians to equality and non-discrimination. The user was not endorsing the statements contrasted in the meme, but rather attributing them to the Turkish government. They did this to condemn and raise awareness of the government’s contradictory and self-serving position. The majority found that the effects of satire, such as this meme, would be lessened if people had to explicitly declare their intent. The fact that the “two buttons” or “daily struggle” meme is usually intended to be humorous, even though the subject matter here was serious, also contributed to the majority’s decision. The majority also noted that the content was shared in English on a Facebook page with followers based in several countries. While the meme could be misinterpreted by some Facebook users, the majority found that it does not increase the risk of Armenians being subjected to discrimination and violence, especially as the content is aimed at an international audience. They found that bringing this important issue to an international audience is in the public interest. Additionally, the Board found that removing information without cause cannot be proportionate. Removing content that serves the public on a matter of public interest requires particularly weighty reasons to be proportionate. 
In this regard, the Board was concerned with Facebook content moderators’ capacity to review this meme and similar pieces of content containing satire. Contractors should follow adequate procedures and be provided with time, resources and support to assess satirical content and relevant context properly. While supporting the majority’s views on protecting satire on the platform, the minority did not believe that the content was satire. The minority found that the user could be embracing the statements contained in the meme, and thus engaging in discrimination against Armenians. Therefore, the minority held that the requirements of necessity and proportionality had been met in this case. In case decision 2021-002-FB-UA, the Board noted Facebook’s position that the content depicting blackface would be removed unless the user clearly indicated their intent to condemn the practice or raise awareness of it. The minority found that, similarly, where the satirical nature of the content is not obvious, as in this case, the user’s intent should be made explicit. The minority concluded that, while satire is about ambiguity, it should not be ambiguous regarding the target of the attack, i.e., the Turkish government or the Armenian people. Right to be informed (Article 14, para. 3(a), ICCPR) The Board found that the incorrect notice given to the user about which content rule had been violated implicates the right to be informed in the context of access to justice (Article 14, para. 3(a) ICCPR). When limiting a user’s right to expression, Facebook must respect due process and inform the user accurately of the basis of their decision, including by revising that notice where the reason is changed (General Comment No. 32, para. 31). Facebook failed that responsibility in this case. 9. Oversight Board decision The Oversight Board overturns Facebook’s decision to remove the content and requires the content to be restored. 10. Policy advisory statement The following recommendations are numbered, and the Board requests that Facebook provides an individual response to each as drafted: Providing clear and accurate notice to users To make its policies and their enforcement clearer for users, Facebook should: 1. Make technical arrangements to ensure that notice to users refers to the Community Standard enforced by the company. If Facebook determines that (i) the content does not violate the Community Standard notified to the user, and (ii) the content violates a different Community Standard, the user should be properly notified about it and given another opportunity to appeal. They should always have access to the correct information before coming to the Board. 2. Include the satire exception, which is currently not communicated to users, in the public language of the Hate Speech Community Standard. Having adequate tools in place to deal with issues of satire To improve the accuracy of the enforcement of its content policies for the benefit of users, Facebook should: 3. Make sure that it has adequate procedures in place to assess satirical content and relevant context properly. This includes providing content moderators with: (i) access to Facebook’s local operation teams to gather relevant cultural and background information; and (ii) sufficient time to consult with Facebook’s local operation teams and to make the assessment. Facebook should ensure that its policies for content moderators incentivize further investigation or escalation where a content moderator is not sure if a meme is satirical or not. 
Allowing users to communicate that their content falls within policy exceptions To improve the accuracy of Facebook’s review in the appeals stage, the company should: 4. Let users indicate in their appeal that their content falls into one of the exceptions to the Hate Speech policy. This includes exceptions for satirical content and where users share hateful content to condemn it or raise awareness. 5. Ensure appeals based on policy exceptions are prioritized for human review. *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" fb-s6nrtdaj,Depiction of Zwarte Piet,https://www.oversightboard.com/decision/fb-s6nrtdaj/,"April 13, 2021",2021,,"TopicChildren / Children's rights, Culture, PhotographyCommunity StandardHate speech","Type of DecisionStandardPolicies and TopicsTopicChildren / Children's rights, Culture, PhotographyCommunity StandardHate speechRegion/CountriesLocationNetherlandsPlatformPlatformFacebook",Upheld,Netherlands,"The Oversight Board has upheld Facebook's decision to remove specific content that violated the express prohibition on posting caricatures of Black people in the form of blackface, contained in its Hate Speech Community Standard.",36550,5656,"Upheld April 13, 2021 The Oversight Board has upheld Facebook's decision to remove specific content that violated the express prohibition on posting caricatures of Black people in the form of blackface, contained in its Hate Speech Community Standard. Standard Topic Children / Children's rights, Culture, Photography Community Standard Hate speech Location Netherlands Platform Facebook To read this decision in Dutch click here. Als u deze beslissing in het Nederlands wilt lezen, klikt u hier . The Oversight Board has upheld Facebook’s decision to remove specific content that violated the express prohibition on posting caricatures of Black people in the form of blackface, contained in its Hate Speech Community Standard. About the case On December 5, 2020, a Facebook user in the Netherlands shared a post including text in Dutch and a 17-second-long video on their timeline. The video showed a young child meeting three adults, one dressed to portray “Sinterklaas” and two portraying “Zwarte Piet,” also referred to as “Black Pete.” The two adults portraying Zwarte Piets had their faces painted black and wore Afro wigs under hats and colorful renaissance-style clothes. All the people in the video appear to be white, including those with their faces painted black. In the video, festive music plays and one Zwarte Piet says to the child, “[l]ook here, and I found your hat. Do you want to put it on? You’ll be looking like an actual Pete!” Facebook removed the post for violating its Hate Speech Community Standard. 
Key findings While Zwarte Piet represents a cultural tradition shared by many Dutch people without apparent racist intent, it includes the use of blackface which is widely recognized as a harmful racial stereotype. Since August 2020, Facebook has explicitly prohibited caricatures of Black people in the form of blackface as part of its Hate Speech Community Standard. As such, the Board found that Facebook made it sufficiently clear to users that content featuring blackface would be removed unless shared to condemn the practice or raise awareness. A majority of the Board saw sufficient evidence of harm to justify removing the content. They argued the content included caricatures that are inextricably linked to negative and racist stereotypes, and are considered by parts of Dutch society to sustain systemic racism in the Netherlands. They took note of documented cases of Black people experiencing racial discrimination and violence in the Netherlands linked to Zwarte Piet. These included reports that during the Sinterklaas festival Black children felt scared and unsafe in their homes and were afraid to go to school. A majority found that allowing such posts to accumulate on Facebook would help create a discriminatory environment for Black people that would be degrading and harassing. They believed that the impacts of blackface justified Facebook’s policy and that removing the content was consistent with the company’s human rights responsibilities. A minority of the Board, however, saw insufficient evidence to directly link this piece of content to the harm supposedly being reduced by removing it. They noted that Facebook’s value of “Voice” specifically protects disagreeable content and that, while blackface is offensive, depictions on Facebook will not always cause harm to others. They also argued that restricting expression based on cumulative harm can be hard to distinguish from attempts to protect people from subjective feelings of offense. The Board found that removing content without providing an adequate explanation could be perceived as unfair by the user. In this regard, it noted that the user was not told that their content was specifically removed under Facebook’s blackface policy. The Oversight Board’s decision The Oversight Board upholds Facebook’s decision to remove the content. In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board has upheld Facebook’s decision to remove specific content that violated the express prohibition on posting caricatures of Black people in the form of blackface, contained in its Hate Speech Community Standard. A majority of the Board found that removing the content complied with Facebook’s Community Standards, its values and its international human rights responsibilities. 2. Case Description On December 5, 2020, a Facebook user in the Netherlands shared a post including text in Dutch and a 17-second-long-video on their timeline. The caption of the post, as translated into English, states “happy child!” and thanks Sinterklaas and Zwarte Piets. The video showed a young child meeting three adults, one dressed to portray “Sinterklaas” and two portraying “Zwarte Piet,” also referred to as “Black Pete.” The two adults portraying Zwarte Piets had their faces painted black, wore Afro wigs under hats and colorful renaissance-style clothes. 
All the adults and the child in the video appear to be white, including those with their faces painted black. In the video, festive music plays in the background as the child shakes hands with Sinterklaas and one Zwarte Piet. The other Zwarte Piet places a hat on the child’s head and says to the child in Dutch: “[l]ook here, and I found your hat. Do you want to put it on? You’ll be looking like an actual Pete! Let me see. Look....” The post was viewed fewer than 1,000 times. While the majority of users who viewed the post were from the Netherlands, including the island of Curaçao, there were also views by users from Belgium, Germany and Turkey. The post received fewer than 10 comments and had fewer than 50 reactions, the majority of which were “likes” followed by “loves.” The content was not shared by other users. The post was reported by a Facebook user in the Netherlands for violating Facebook’s Hate Speech Community Standard. On December 6, 2020, Facebook removed the post for violating its Hate Speech Community Standard. Facebook determined that the portrayals of Zwarte Piet in the video violated its policy prohibiting caricatures of Black people in the form of blackface. Facebook notified the user that their post “goes against our Community Standards on Hate Speech.” After Facebook rejected the user’s appeal against their decision to remove the content, the user submitted their appeal to the Oversight Board on December 7, 2020. 3. Authority and scope The Board has authority to review Facebook's decision under Article 2 (Authority to review) of the Board's Charter and may uphold or reverse that decision under Article 3, Section 5 (Procedures for review: Resolution of the Charter). Facebook has not presented reasons for the content to be excluded in accordance with Article 2, Section 1.2.1 (Content not available for Board review) of the Board's Bylaws, nor has Facebook indicated that it considers the case to be ineligible under Article 2, Section 1.2.2 (Legal obligations) of the Bylaws. Under Article 3, Section 4 (Procedures for review: Decisions) of the Board's Charter, the final decision may include a policy advisory statement, which will be taken into consideration by Facebook to guide its future policy development. 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards Facebook's Community Standards define hate speech as “a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” Direct attacks include “dehumanizing speech” and “harmful stereotypes.” Under “Tier 1,” prohibited content (“do not post”) includes content targeting a person or group of people on the basis of a protected characteristic with “designated dehumanizing comparisons, generalizations, or behavioral statements (in written or visual form).” “Caricatures of Black people in the form of blackface” is specifically listed as an example of violating content. In Facebook’s Hate Speech Community Standard, the company states that hate speech is not allowed on the platform ""because it creates an environment of intimidation and exclusion and, in some cases, may promote real-world violence."" II. Facebook’s values Facebook’s values are outlined in the introduction to the Community Standards. 
The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits “Voice” in service of four values, and two are relevant here: “Safety”: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. “Dignity” : We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. III. Human rights standards The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. The Board's analysis in this case was informed by the following human rights standards: 5. User statement The user stated in their appeal to the Board that the post was meant for their child, who was happy with it, and that they want the content back up on Facebook. The user also stated that “the color does not matter” in this case because, in their view, Zwarte Piet is important to children. 6. Explanation of Facebook’s decision Facebook removed this post as a Tier 1 attack under the Hate Speech Community Standard, specifically for violating its rule prohibiting harmful stereotypes and dehumanizing generalizations in visual form, which includes caricatures of Black people in the form of blackface. Facebook announced the “blackface” policy via its Newsroom and through the news media in August 2020. At the same time, the company updated its Hate Speech Community Standard to include the blackface policy. In November 2020, Facebook released a video in Dutch explaining the potential effects of this policy on the portrayal of Zwarte Piet on the platform. Facebook also noted that the policy is the outcome of extensive research and external stakeholder engagement. As a result, Facebook concluded that the portrayals of Zwarte Piet “insult, discriminate, exclude, and dehumanize Black people by representing them as inferior and even subhuman” because the figure’s characteristics are “exaggerated and unreal.” Moreover, Facebook stated that Zwarte Piet is “a servile character whose typical behavior includes clumsiness, buffoonery, and speaking poorly.” Facebook submitted that because “[t]he two people in the video were dressed in the typical Black Pete costume -- their faces were painted in blackface and they wore Afro-wigs,” its decision to remove the content was consistent with its blackface policy. Facebook also noted there was no indication the content was shared to condemn or raise awareness about the use of blackface, which is a general exception built into the Hate Speech Community Standard. Facebook also submitted that its removal of the content was consistent with its values of “Dignity” and “Safety,” when balanced against the value of “Voice.” According to Facebook, the harms caused by portrayals of Zwarte Piet on its platform “even if intended to do no harm by the user, cause such extreme harm and negative experience that they must be removed.” Facebook further stated that its decision to remove the content was consistent with international human rights standards. 
Facebook stated that (a) its policy was clearly and easily accessible, (b) the decision to remove the content was legitimate to protect the rights of others from harm and discrimination, and (c) its decision was “necessary to prevent harm to the dignity and self-esteem of children and adults of African descent.” In order to meet the requirement of proportionality for restrictions on expression, Facebook argued its policy applied to a narrow set of “the most egregious stereotypes.” 7. Third-party submissions The Oversight Board received 22 public comments related to this case. Seven of the comments were submitted from Europe and 15 from the United States and Canada. The submissions covered themes including: the history of Zwarte Piet, whether the character's portrayal is harmful to Black people, especially Black children, and how Facebook's Hate Speech Community Standard relates to this case and its compliance with international human rights standards. To read public comments submitted for this case, please click here . 8. Oversight Board analysis This case presents several tensions for the Board to grapple with, because it involves a longstanding cultural tradition shared and enjoyed by many Dutch people without apparent racist intent. The tradition, however, includes people in blackface, which is widely recognized around the globe, and even increasingly in the Netherlands, as a harmful racial stereotype. In this case, a user objects to Facebook’s removal of a family video shared with a relatively small audience, celebrating a festive tradition with a child. It features the character Zwarte Piet in blackface. This is a form of expression Facebook recently chose to prohibit based on its values of “Voice,” “Safety” and “Dignity.” There is no suggestion that the user intended to cause harm and they do not feel this was hate speech. At the same time, many people, including academics, social and cultural experts, public authorities as well as a growing number of national actors in the Netherlands, believe that the practice is discriminatory and can cause harm (evidence supporting this view is set out in section 8.3 below). Numerous human rights are implicated in this case beyond expression, including cultural rights, equality and non-discrimination, mental health, and the rights of children. The Board seeks to evaluate whether this content should be restored to Facebook through three lenses: Facebook’s Community Standards; the company’s values; and its human rights responsibilities. The complexity of these issues allows reasonable people to reach different conclusions, and the Board was divided on this case. 8.1 Compliance with Community Standards Facebook enforces its Community Standard on Hate Speech by identifying (i) a “direct attack” and (ii) a “protected characteristic” the direct attack was based upon. In this case, the Board agrees with Facebook that both elements required for enforcing the Community Standard were satisfied. The policy rationale for the Hate Speech Community Standard lists “dehumanizing speech” and “harmful stereotypes” as examples of an attack. Under the “do not post” section, “designated dehumanizing comparisons, generalizations, or behavioral statements (in written or visual form)” are prohibited, expressly including “caricatures of Black people in the form of blackface.” The Hate Speech Community Standard includes race and ethnicity among the list of protected characteristics. 
In this case, Facebook notified the user that their content violated the Hate Speech Community Standard. However, the user was not informed that the post was specifically removed under the blackface policy. The Board notes the user claimed their intent was to share a celebration of a festive tradition. The Board has no reason to believe this view was not sincerely held. However, the Hate Speech Community Standard, including the rule on blackface, does not require a user to intend to attack people based on a protected category. Facebook’s rule is structured to presume that any use of blackface is inherently a discriminatory attack. On this basis, Facebook’s action to remove this content was consistent with its content policies. The Board notes that the Hate Speech Community Standard provides a general exception to allow people to “share content that includes someone else’s hate speech to condemn it or raise awareness.” They further state: “our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent. If intention is unclear, we may remove content.” The Board agreed that this exception did not apply in this case. A majority of the Board noted that the two adults in the video had their whole faces painted black, wore Afro wigs, colorful renaissance-style clothes and acted as servants of Sinterklaas. The majority also found that the content included potentially harmful stereotypes, such as servitude and inferiority. In light of this, as well as of the analysis of Facebook’s values and human rights responsibilities below, the majority affirms that removing the content was in line with Facebook’s Hate Speech Community Standard. For a minority, however, Facebook’s general rule that blackface intimidates, excludes or promotes violence, raised concerns addressed below. 8.2 Compliance with Facebook’s values For a majority of the Board, the decision to remove this content, and the prohibition on blackface, complied with Facebook’s values of “Voice,” “Safety” and “Dignity.” The use of blackface, including portrayals of Zwarte Piet, is widely agreed to be degrading towards Black people. In this regard, the Board references reports by international human rights mechanisms as well as regional and national authorities, which are discussed in more detail under section 8.3(III.). The user’s content included caricatures that are inextricably linked to negative and racist stereotypes originating in the enslavement of Black people. In relation to the value of “Voice”: the user’s video is not political speech or a matter of public concern and is, on its own, purely private. These caricatures are considered by parts of Dutch society to sustain systemic racism in the Netherlands today. For the majority, it cannot be decisive that the user shared this content without malicious intent or hatred towards Black people. Allowing the accumulation of such posts on Facebook would create a discriminatory environment for Black people that would be degrading and harassing. At scale, the policy is clear and ensures Black people’s dignity, safety and voice on the platform. Restricting the voice of people who share depictions of blackface in contexts where it is not condemning racism is acceptable to achieve this objective. A minority of the Board found that Facebook should have given greater weight to the user’s voice in this case, even if it is of a private nature. 
They recall that Facebook’s value of “Voice” specifically protects disagreeable and objectionable content. While blackface may offend, the minority believed that depictions on Facebook will not always cause harm to others, and exceptions to the Hate Speech Community Standard are too narrow to allow for these situations. In this case, the minority believed that during an apparently private occasion, the child was encouraged to identify themselves with Zwarte Piet and that the interaction could be regarded as positive. The minority therefore believes Facebook has presented insufficient evidence of harm to justify the suppression of “Voice.” In their view, the removal of this post, without notice of the specific rule violated, caused confusion for the user who posted it and did not advance the values of “Dignity” or “Safety.” 8.3 Compliance with Facebook’s human rights responsibilities A majority found the removal of the user’s content under the Community Standard on Hate Speech was consistent with Facebook’s human rights responsibilities, in particular to address negative human rights impacts that can arise from its operations (UNGPs, Principles 11 and 13). Human rights due diligence (UNGPs) Facebook’s rule on blackface was the outcome of a wider process set up to build a policy on harmful stereotypes. This process involved extensive research and engagement with more than 60 stakeholders, including experts in a variety of fields, civil society groups, and groups affected by discrimination and harmful stereotypes. For the majority, this was in line with international standards for on-going human rights due diligence to evolve the company’s operations and policies (Principles 17(c) and 18(b), UNGPs; UN Special Rapporteur on freedom of expression, report A/74/486, paras. 44 and 58(e)). For the minority, Facebook provided insufficient information on the extent of research and stakeholder engagement in countries where the Sinterklaas tradition is present, such as the Netherlands. Freedom of expression (Article 19 ICCPR) Article 19, para. 2 of the ICCPR provides broad protection for expression of “all kinds.” The UN Human Rights Committee has made clear the protection of Article 19 extends to expression that may be considered “deeply offensive” (General Comment No. 34, paras. 11, 12). The Board noted that the right to participate in cultural life, protected under Article 15 of the ICESCR, is also relevant. Participating in the Sinterklaas festival and posting related content on Facebook – including images of Zwarte Piet in blackface – could be understood as taking part in the cultural life of the Netherlands. Both the right to freedom of expression and the right to participate in cultural life should be enjoyed by all without discrimination on grounds of race or ethnicity (Article 2, para. 1, ICCPR; Article 2, para. 2, ICESCR). While the right to freedom of expression is fundamental, it is not absolute. It may be restricted, but restrictions should meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). Facebook should seek to align its content moderation policies on hate speech with these principles (UN Special Rapporteur on freedom of expression, report A/74/486, at para. 58(b)). Likewise, the right to participate in cultural life may be subject to similar restrictions in order to protect other human rights (General Comment No. 21, para. 19). I. 
Legality The Board found that Facebook’s Hate Speech Community Standard was sufficiently clear and precise to put users on notice that content featuring blackface would be removed unless a relevant exception was engaged (General Comment No. 34, para. 25). Facebook further sought to raise awareness of the potential effects of this policy change in the Netherlands by releasing a video in Dutch ahead of the Sinterklaas festival in November 2020. This explained the reasons why portrayals of Zwarte Piet are not permitted on the platform. II. Legitimate aim The Board agreed the restriction pursued the legitimate aim of protecting the rights of others (General Comment No. 34, para. 28). These include the rights to equality and non-discrimination, including based on race and ethnicity (Article 2, para. 1, ICCPR; Article 2, ICERD). Facebook sought the legitimate aim of preventing discrimination in equal access to a platform for expression (Article 19 ICCPR), and to protect against discrimination in other fields, which in turn is important to protect the right to health of persons targeted by discrimination (Article 12, ICESCR), especially for children, who under the CRC receive additional protection against discrimination, and guarantees for their right to development (Articles 2 and 6 CRC). The Board further agreed that it is not a legitimate aim to restrict expression for the sole purpose of protecting individuals from offense (UN Special Rapporteur on freedom of expression, report A/74/486, para. 24), as the value international human rights law placed on uninhibited expression is high (General Comment No. 34, para. 38). III. Necessity and proportionality Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). A majority of the Board considered Facebook’s Hate Speech Community Standard, and whether the application of the rule on blackface in this case was necessary to protect the rights of Black people to equality and non-discrimination, in particular for children. This was consistent with Facebook’s responsibility to adopt policies to avoid causing or contributing to adverse human rights impacts (UNGPs, Principle 13). As the Board also identified in case decision 2020-003-FB-UA , moderating content to address the cumulative harms of hate speech, even where the expression does not directly incite violence or discrimination, can be consistent with Facebook’s human rights responsibilities in certain circumstances. For the majority, the accumulation of degrading caricatures of Black people on Facebook creates an environment where acts of violence are more likely to be tolerated and reproduce discrimination in a society. As with degrading slurs, context will always be important even for the enforcement of a general rule. Here the experience of discrimination against Black people in the Netherlands, and the connection of Zwarte Piet and blackface to that experience, was crucial. As the UN Special Rapporteur on freedom of expression has observed, “the scale and complexity of [social media companies] addressing hateful expression presents long-term challenges and may lead companies to restrict such expression even if it is not clearly linked to adverse outcomes (as hateful advocacy is connected to incitement in article 20 of the [ICCPR]).” (report A/HRC/38/35, para. 
28). The Special Rapporteur has also indicated companies may remove hate speech that falls below the threshold of incitement to discrimination or violence; when departing from the high standard states must meet to justify restrictions in the criminal or civil law on expression, companies must provide a reasoned explanation of the policy difference in advance, clarified in accordance with human rights standards (A/74/486, paras. 47–48). The Board notes international human rights law would not allow a state to impose a general prohibition on blackface through criminal or civil sanctions, except under the conditions foreseen in ICCPR Article 20, para. 2 and Article 19, para. 3 (e.g., advocacy of hatred constituting incitement to violence) (A/74/486, para. 48). Expression that does not reach this threshold may still raise concern in terms of tolerance, civility and respect for others, but would not be necessary or proportionate for a state to restrict (Rabat Plan of Action, paras. 12, 20). In the Board’s view, the individual post in this case would fall within this category of protection from state restriction. The majority found Facebook followed international guidance and met its human rights responsibilities in this case. Numerous human rights mechanisms have found the portrayal of Zwarte Piet to be a harmful stereotype, connecting it to structural racism in the Netherlands, with severe harms at a societal and individual level. For the majority, this justified Facebook adopting a policy that departs from the human rights standards binding states, where the intent of the person sharing content featuring blackface is only material if they are condemning its use or raising awareness. The CERD Committee observed in its ‘Concluding Observations on the Netherlands’ that Zwarte Piet “is experienced by many people of African descent as a vestige of slavery” and is connected to structural racism in the country (CERD/C/NLD/CO/19-21, paras. 15 and 17). The majority noted that the UN Working Group of Experts on People of African Descent has also reached similar conclusions (A/HRC/30/56/Add.1, para. 106). The Board agrees with the CERD Committee that “even a deeply rooted cultural tradition does not justify discriminatory practices and stereotypes” (CERD/C/NLD/CO/19-21, para. 18; see also UN ESCR Committee, General Comment No. 21, paras. 18 and 51). The majority was also persuaded by the documented experiences of Black people in the Netherlands of racial discrimination and violence that were often linked to, and exacerbated by, the cultural practice of Zwarte Piet. The Dutch Ombudsman for Children’s finding that “portrayals of Zwarte Piet can contribute to bullying, exclusion and discrimination against Black children,” along with reports that during the Sinterklaas festival Black children felt scared and unsafe in their homes and were afraid to go to school, was persuasive. Additionally, the Board noted reported episodes of intimidation and violence against people peacefully protesting Zwarte Piet (CERD/C/NLD/CO/19-21, para. 17). The Board also noted the work of the European Commission against Racism and Intolerance (ECRI report on the Netherlands, paras. 30-31), the Netherlands Institute for Human Rights and the European Commission’s network of legal experts in gender equality and non-discrimination (European Commission Country Report Netherlands 2020, page 24, footnote 89). 
The majority of the Board further noted that repeated negative stereotypes about an already marginalized minority, including in the form of images shared on social media, have a psychological impact on individuals with societal consequences. Repeated exposure to this particular stereotype may nurture in people who are not Black ideas of racial supremacy that may lead individuals to justification and even incitement of discrimination and violence. For Black people, the cumulative effect of repeated exposure to such images, as well as being on the receiving end of violence and discrimination, may impact self-esteem and health, in particular for children (Article 12, ICESCR; Articles 2 and 6, CRC). The Board notes the work of Izalina Tavares, “ Black Pete: Analyzing a Racialized Dutch Tradition Through the History of Western Creations of Stereotypes of Black Peoples ”, in this regard. Other academic studies have also drawn a causal connection between portrayals of Zwarte Piet and harm, several of which Facebook also included in its decision rationale to the Board. These include Judi Mesman, Sofie Janssen and Lenny van Rosmalen, “ Black Pete through the Eyes of Dutch Children ,” and Yvon van der Pijl and Karina Gourlordava, “ Black Pete, “Smug Ignorance,” and the Value of the Black Body in Postcolonial Netherlands .” This fits within a broader literature on this topic, including John F. Dovidio, Miles Hewstone, Peter Glick, and Victoria M. Esses, “ Prejudice, Stereotyping and Discrimination: Theoretical and Empirical Overview .” According to the majority, there is sufficient evidence of objective harm to individuals’ rights to distinguish this rule from one that seeks to insulate people from subjective offense. The majority also found the removal to be proportionate. Less severe interventions, such as labels, warning screens, or other measures to reduce dissemination, would not have provided adequate protection against the cumulative effects of leaving this content of this nature on the platform. The challenge of assessing intent when enforcing against content at scale should also be considered. It would require a case-by-case examination that would give rise to a risk of significant uncertainty, weighing in favor of a general rule that can more easily be enforced (see, for a comparative perspective: European Court of Human Rights, Case of Animal Defenders International v. the United Kingdom , para. 108). The majority further noted that the prohibition Facebook imposed is not blanket in nature, and that the availability of human review will be essential for accurate enforcement. There is an exception under the Hate Speech Community Standard that also applies to the blackface policy, allowing depictions of blackface to condemn or raise awareness about hate speech. The newsworthiness allowance further allows Facebook to permit violating content on the platform where the public interest in the expression outweighs the risk of harm (for example, if pictures or footage of a public figure in blackface were to become a topic of national news coverage). Modified “Piet” traditions that have abandoned the use of blackface are also not affected by the Hate Speech Community Standard, and this was significant for the majority. The user can therefore adapt their tradition if they wish to share footage of it through their Facebook account. 
A growing number of national actors in the Netherlands have distanced themselves from and/or promoted alternative and inclusive forms of the tradition (European Race and Imagery Foundation report, pages 7, 24 and 56-58). Against the backdrop of a global reckoning with racism and white supremacy, it is consistent with Facebook’s human rights responsibilities to adopt operational rules and procedures that promote equality and non-discrimination. While appreciating the arguments of the majority, the minority did not believe the requirements of necessity and proportionality had been met. They noted that the rule is unduly broad, and a more nuanced policy would allow Facebook to address well-placed concerns relating to discrimination, while avoiding collateral damage to expression that does not intend or directly cause harm. The minority believed that, while certainly relevant, the evidence presented was insufficient to demonstrate in precise terms a causative link between the expression under review and the harm being prevented or reduced by limiting it (General Comment 34, para. 34): the policy should allow for the possibility that such expression will not always intend or contribute to harm. The minority noted that the excessive enforcement of the current policy is likely to have a chilling effect on freedom of expression. They also found that predicating content removal on the notion of cumulative harm makes restrictions of this sort difficult to distinguish from rules that seek to protect people from subjective feelings of offense. Likewise, the minority believed that a general negative psychological impact on individuals with societal consequences was not sufficiently demonstrated and would not justify interference with speech, unless it reaches the threshold of incitement (Article 20, para. 2, ICCPR), under international human rights law. They also expressed concern that Facebook’s power may be exercised in a way that interferes with a matter under national discussion and may distort or even supplant processes in a democratic society that would counter discrimination. For the minority, removing potentially discriminatory content at scale where the user does not intend harm and where harm is unlikely to result will not effectively address racial discrimination. They agreed with the majority that removing content without providing the user with an adequate explanation could be perceived as unfair. The confusion that may result from being accused of “attack” and “hate speech” where no harm was intended could undermine efforts on and off Facebook to bring awareness and clarity about Facebook’s content policies to people. For the majority, this would be addressed, and the platform made more inclusive, if content removal notices provided more information to the user on the justification for the rule enforced, including access to resources explaining the potential harms Facebook is seeking to mitigate. 9. Oversight Board decision The Oversight Board upholds Facebook’s decision to remove the content. 10. Policy advisory statement The following recommendations are numbered, and the Board requests that Facebook provides an individual response to each as drafted. Explaining the blackface policy on Facebook to users *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. 
For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. Return to Case Decisions and Policy Advisory Opinions" fb-si0clwax,Federal Constituency in Nigeria,https://www.oversightboard.com/decision/fb-si0clwax/,"December 8, 2023",2023,December,"TopicFreedom of expression, Marginalized communities, PoliticsCommunity StandardDangerous individuals and organizations",Dangerous individuals and organizations,Overturned,Nigeria,A user appealed Meta’s decision to remove a Facebook post containing an image of Nigerian politician Yusuf Gagdi with a caption referring to a federal constituency in Nigeria.,4891,725,"Overturned December 8, 2023 A user appealed Meta’s decision to remove a Facebook post containing an image of Nigerian politician Yusuf Gagdi with a caption referring to a federal constituency in Nigeria. Summary Topic Freedom of expression, Marginalized communities, Politics Community Standard Dangerous individuals and organizations Location Nigeria Platform Facebook This is a summary decision . Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not involve a public comment process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove a Facebook post containing an image of Nigerian politician Yusuf Gagdi with a caption referring to a federal constituency in Nigeria. Removal was apparently based on the fact that the Nigerian constituency goes by the same initials (PKK) that are used to designate a terrorist organization in Turkey, though the two entities are completely unrelated. This case highlights the company’s overenforcement of the Dangerous Organizations and Individuals policy. This can have a negative impact on users’ ability to make and share political commentary, resulting in an infringement of users’ freedom of expression. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In July 2023, a Facebook user posted a photograph of Nigerian politician Yusuf Gagdi with the caption “Rt Hon Yusuf Gagdi OON member of the house of reps PKK.” Mr. Gagdi is a representative in the Nigerian Federal House of Representatives from the Pankshin/Kanam/Kanke Federal Constituency in Plateau state. The constituency encompasses three areas, which the user refers to by abbreviating their full names to PKK. However, PKK is also an alias of the Kurdistan Workers’ Party, a designated dangerous organization. Meta initially removed the post from Facebook, citing its Dangerous Organizations and Individuals policy , under which the company removes content that “praises,” “substantively supports” or “represents” individuals and organizations it designates as dangerous. 
In their appeal to the Board, the user stated the post contains a picture of a democratically elected representative of a Nigerian federal constituency presenting a motion in the house, and does not violate Meta's community standards. After the Board brought this case to Meta’s attention, the company determined that the post’s removal was incorrect because it does not contain any reference to a designated organization or individual, and it restored the content. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights an error in Meta’s enforcement of its Dangerous Organizations and Individuals policy. These errors can result in an infringement of users’ freedom of expression. The Board has issued several recommendations about the Dangerous Organizations and Individuals policy. This includes a recommendation to “evaluate automated moderation processes for enforcement of the Dangerous Individuals and Organizations policy,” which Meta declined to implement (Öcalan’s Isolation decision, recommendation no. 2). The Board has also recommended Meta “implement an internal audit procedure to continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes” (Breast Cancer Symptoms and Nudity decision, recommendation no. 5). Meta described this recommendation as work it already does but did not publish information to demonstrate implementation. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors." fb-t8jdddjv,Political dispute ahead of Turkish elections,https://www.oversightboard.com/decision/fb-t8jdddjv/,"August 23, 2023",2023,,"Elections,Journalism,Natural disasters",Hate speech,Overturned,Turkey,"The Oversight Board has overturned Meta’s original decisions to remove the posts of three Turkish media organizations, all containing a similar video of a politician confronting another in public.",61879,9677,"Overturned August 23, 2023 The Oversight Board has overturned Meta’s original decisions to remove the posts of three Turkish media organizations, all containing a similar video of a politician confronting another in public.
Standard Topic Elections, Journalism, Natural disasters Community Standard Hate speech Location Turkey Platform Facebook English-Language PDF of Political Dispute Ahead of Turkish Elections Decision Political Dispute Ahead of Turkish Elections Public Comments Appendix The Oversight Board has overturned Meta’s original decisions to remove the posts of three Turkish media organizations, all containing a similar video of a politician confronting another in public, using the term “İngiliz uşağı,” which translates as “servant of the British.” The Board finds that the term is not hate speech under Meta’s policies. Furthermore, Meta’s failure to qualify the content as permissible “reporting,” or to apply the public newsworthiness allowance, made it difficult for the outlets to freely report on issues of public interest. The Board recommends that Meta make public an exception for permissible reporting on slurs. About the cases For these decisions, the Board considers three posts – two on Facebook, one on Instagram – from three different Turkish media organizations, all independently owned. They contain a similar video featuring a former Member of Parliament (MP) of the ruling party confronting a member of the main opposition party in the aftermath of the Turkish earthquakes in February 2023. In the run-up to the Turkish elections, the earthquakes were expected to significantly impact voting patterns. The video shows Istanbul’s Mayor Ekrem İmamoğlu, a key opposition figure, visiting one of the most heavily impacted cities when he is confronted by a former MP, who shouts that he is “showing off,” calls him a “servant of the British,” and tells him to return to “his own” city. Both the public and expert commentators confirm the phrase “İngiliz uşağı” is understood by Turkish speakers to mean “a person who acts for the interests and benefits” of Britain or the West in general. Meta removed all three posts for violating its Hate Speech policy rule against slurs. Although several of Meta’s mistake-prevention systems had been engaged, including cross-check, which led to the posts in each case undergoing several rounds of human review, this did not result in the content being restored. In total, the posts were viewed across the three accounts more than 1,100,000 times before being removed. While the three users were notified they had violated the Hate Speech Community Standard, they were not told the specific rule they had broken. Additionally, feature limits to the accounts of two of the media organizations were applied, which prevented one from being able to create new content for 24 hours, and another losing its ability to livestream video for three days. After the Board identified the cases, Meta decided that its original decisions were wrong because the term “İngiliz uşağı” should not have been on its slur lists, and it restored the content. Separately, Meta had been conducting an annual audit of its slur lists for Turkey ahead of the elections, which led to the term “İngiliz uşağı” being removed in April 2023. Key findings The role of the media in reporting information across the digital ecosystem is critical. The Board concludes that removing the three posts was an unnecessary and disproportionate restriction on the rights of individuals in the Turkish media organizations and on access to information for their audience. 
Furthermore, Meta’s measures in these cases made it difficult for two of the three organizations to freely share their reporting for the duration of the feature limits on their accounts. This had real impact since the earthquakes and run-up to the elections made access to independent local news especially important. The Board finds that the term “İngiliz uşağı” is not hate speech under Meta’s policies because it does not attack people on the basis of “a protected characteristic.” The public confrontation in the videos involves politicians from competing political parties. Since the term used has historically functioned as political criticism in Türkiye (Turkey), it is political speech on a matter of significant public interest in the context of elections. Even if Meta had designated the term correctly as a slur, the content should nevertheless have been allowed because of its public interest value. The Board is concerned the three posts were not escalated for an assessment under the newsworthiness allowance by Meta’s Core Policy Team. Meta’s policies also allow people to share hate speech and slurs to raise awareness of them, provided the user’s intent is clear. In responses to these cases, Meta has explained that in order to “qualify as reporting that is awareness raising, it is not enough to restate that someone else used hate speech or a slur. Instead, we [Meta] need specific additional context.” None of the media organizations in these cases would have qualified because the content was shared with a neutral caption, which would not have been considered sufficient context. The politician’s use of the term in the video was not the main story being told, so a caption focused on explaining or condemning it would not have made sense. Rather, the main news story was the disagreement between politicians in the context of the earthquake response. Finally, the Board finds that Meta should make public that reporting on hate speech is permitted, ideally in a standalone exception that distinguishes journalistic “reporting” from “raising awareness.” Meta’s internal guidance seems to permit broader exceptions than those communicated publicly to users at present. This information would be especially important to help media organizations to report on incidents during which a slur has been used by third parties in a matter of public interest, including when it is not the main point of the news story. The framing of this information should recognize that media outlets and others engaged in journalism may not always state intent for “raising awareness,” in order to impartially report on current events. The Oversight Board’s decision The Oversight Board overturns Meta’s original decisions to remove three posts. The Board recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Board overturns Meta’s original decisions to remove the posts of three Turkish media organizations – BirGün Gazetesi, Bolu Gündem, and Komedya Haber – which all contained a similar video. The videos all featured Ms. 
Nursel Reyhanlıoğlu, a former Member of Parliament of President Erdoğan’s Justice and Development Party (AKP party), referring to Istanbul Mayor Ekrem İmamoğlu, a member of the largest opposition party in Türkiye (Turkey), as an “İngiliz uşağı,” translated as “servant of the British.” In all three cases, Meta removed the video for violating its Hate Speech Community Standard, which prohibits “slurs that are used to attack people on the basis of their protected characteristics.” After the Board identified these cases, Meta reversed each of its decisions to remove the posts, deciding the term “İngiliz uşağı” should not be on its internal slur list. 2. Case description and background On February 6, 2023, a series of powerful earthquakes struck southern Türkiye (Turkey) near the northern border of Syria. The disaster killed over 50,000 people in Türkiye (Turkey) alone, injured more than 100,000, and triggered the displacement of three million people in the provinces most affected by the tremors. On February 8, 2023, Istanbul Municipality Mayor Ekrem İmamoğlu, a member of the main opposition party, the Republican People’s Party (CHP), visited Kahramanmaraş, one of the cities impacted by the disaster. During his visit, a former Member of Parliament (MP) from the ruling Justice and Development Party (AKP), Nursel Reyhanlıoğlu, confronted him. In the recorded confrontation, former MP Reyhanlıoğlu shouted at Mayor İmamoğlu that he was “showing off” with his visit, calling him a “British servant” (Turkish: İngiliz uşağı), and that he should “get out” and return to “his own” Istanbul. Public comments and experts the Board consulted confirmed that the phrase “İngiliz uşağı” is understood by Turkish speakers to mean “a person who acts for the interests and benefits of the British nation or government officials or the West in general.” External experts underlined that implying that someone is betraying their own country by serving the interests of foreign powers can be a serious and damaging accusation as it questions a person’s loyalty and commitment to their own country, particularly in a political context. The three media organizations in these cases do not have ties to the Turkish government and are independently owned. External experts noted that BirGün Gazetesi has had the most contentious relationship with the government. One of its columnists, Turkish-Armenian journalist Hrant Dink, was assassinated in 2007 and the paper has also been subject repeatedly to criminal prosecution. In the immediate aftermath of the February earthquakes, there was significant attention on the presidential and parliamentary elections due to take place in May. Meta announced in an April 2023 blog post that it was ready to combat “misinformation” and “false news” in the upcoming Turkish election. Experts the Board consulted described how election observers had expected the earthquakes to impact voting patterns. One of the main points of criticism centered on the government’s legislation to provide amnesty to construction companies for erecting buildings that failed to meet safety codes, a law that Reyhanlıoğlu supported as an MP in 2018. Public criticism of disaster management agency Afet ve Acil Durum Yönetimi Başkanlığı (AFAD) for failures in its earthquake response became an election issue. In the first case that the Board accepted on appeal, the Turkish news site page Bolu Gündem posted the video of the confrontation to its Facebook page. Users reported the post, and it was queued for moderator review. 
At the time of review, Meta had enabled a mistake prevention system known as Dynamic Multi Review, which allows for jobs to be assessed by multiple reviewers in order to get a majority outcome. Two out of three reviewers found that the content violated Meta’s Hate Speech policy, and one reviewer found it did not. Due to the Early Response Secondary Review (ERSR) protocol, which is a form of cross-check , the content was escalated for secondary review rather than being immediately removed (The various mistake prevention systems engaged in these cases are further explained in Section 8.1). During this secondary review, two reviewers found that the content violated the Hate Speech policy, and it was removed. Meta applied a strike and 24-hour feature limit to this case’s content creator’s account (and not to the page), which prevented the user from creating new content on the platform (including any pages they administer) and creating or joining Facebook messenger rooms. Before being removed, the post was viewed more than one million times. In the second case, the Turkish media outlet BirGün Gazetesi posted a longer video including the same confrontation as the other two shorter videos as a live stream on its Facebook page. After the live stream ended, it became a permanent post on the page. Distinct from the other two videos, it included further footage of Mayor İmamoğlu and CHP leader and presidential candidate Kemal Kılıçdaroğlu speaking to two members of the public. In the conversation that followed the confrontation, the two members of the public requested more aid to rescue people trapped under the rubble and expressed frustration at the government’s emergency response. A user reported the Facebook post for violating Meta’s policies. At the time of review, Meta had enabled Dynamic Multi-Review (see section 8.1), and two out of three reviewers found the content violated the Hate Speech policy, while one reviewer found it did not. The content was sent for additional review due to the General Secondary Review (GSR) ranker, which is another cross-check protocol running alongside ERSR. The GSR algorithm ranks content for additional review based on criteria such as topic sensitivity, enforcement severity, false-positive probability, predicted reach, and entity sensitivity (see further explanation of this protocol in Section 8.1 and cross-check policy advisory opinion , para 42). A reviewer in Meta’s regional market team determined the post violated the Hate Speech policy and it was removed. Meta applied a standard strike to both the content creator’s profile and the Facebook page, but it did not apply any feature limits (such as restricting the ability to post) because the number of strikes did not reach the necessary threshold. Before being removed, the post was viewed more than 60,000 times. In the third case, a digital media outlet called Komedya Haber posted the video to Instagram. A classifier designed to identify the “most viral and potentially violating content” detected the content as potentially violating the Hate Speech policy, lining it up for moderator review. The reviewer found that the content violated the Hate Speech policy. Later, a user reported the content for violating Meta’s policies. At the time of review, Meta had enabled Dynamic Multi-Review, and two reviewers found that the content violated the Hate Speech policy. The GSR ranker prioritized the content for additional review, so the post was sent to another moderator. 
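The review pipeline described above (several independent reviewers whose votes are combined into a majority outcome, followed by a ranker that queues some removal decisions for secondary review) can be pictured with a short, purely hypothetical sketch. Nothing below describes Meta’s actual systems: the class, function names, weights and example values are invented, and the scoring criteria are simply those named in this decision, namely topic sensitivity, enforcement severity, false-positive probability, predicted reach and entity sensitivity.

```python
from dataclasses import dataclass

# Hypothetical sketch of a multi-reviewer majority decision followed by a
# cross-check-style prioritization score. All names, weights and values are
# illustrative only and are not drawn from Meta's systems.

@dataclass
class ReviewJob:
    reviewer_decisions: list[bool]      # True = "violates", one entry per reviewer
    topic_sensitivity: float            # 0..1, higher = more sensitive topic
    enforcement_severity: float         # 0..1, higher = harsher penalty if removed
    false_positive_probability: float   # 0..1, estimated chance removal is wrong
    predicted_reach: float              # 0..1, normalized expected views
    entity_sensitivity: float           # 0..1, e.g. media organization or public figure

def majority_outcome(job: ReviewJob) -> bool:
    """Multi-review aggregation: a violation stands only if most reviewers agree."""
    votes_to_remove = sum(job.reviewer_decisions)
    return votes_to_remove > len(job.reviewer_decisions) / 2

def secondary_review_priority(job: ReviewJob) -> float:
    """Ranker-style score: higher scores are queued first for an additional review."""
    weights = {
        "topic_sensitivity": 0.25,
        "enforcement_severity": 0.20,
        "false_positive_probability": 0.30,
        "predicted_reach": 0.15,
        "entity_sensitivity": 0.10,
    }
    return (
        weights["topic_sensitivity"] * job.topic_sensitivity
        + weights["enforcement_severity"] * job.enforcement_severity
        + weights["false_positive_probability"] * job.false_positive_probability
        + weights["predicted_reach"] * job.predicted_reach
        + weights["entity_sensitivity"] * job.entity_sensitivity
    )

if __name__ == "__main__":
    # Mirrors the pattern in these cases: two of three reviewers found a violation,
    # and the post was then prioritized for an extra round of review before removal.
    job = ReviewJob(
        reviewer_decisions=[True, True, False],
        topic_sensitivity=0.9,
        enforcement_severity=0.6,
        false_positive_probability=0.7,
        predicted_reach=0.95,
        entity_sensitivity=0.8,
    )
    if majority_outcome(job):
        print(f"Marked for removal; secondary review priority {secondary_review_priority(job):.2f}")
```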
Based on information from Meta in the cross-check policy advisory opinion, the GSR review is conducted by either an employee or a contractor on Meta’s Regional Market Team ( cross-check policy advisory opinion , page 21). Through GSR, a reviewer assessed the content as violating the Hate Speech policy and it was removed. Meta applied a standard strike resulting in a three-day feature limit preventing the Instagram account from using live video. Before being removed, the post was viewed more than 40,000 times. After each post was removed, all of the three users were notified that they violated Meta’s Hate Speech Community Standard, but not the specific rule within that policy they had broken. The notifications the two Facebook users received stated that hate speech includes “attacks on people because of their race, ethnicity, religion, caste, physical or mental ability, gender, or sexual orientation” and lists several examples, but do not mention slurs. The Instagram user received a shorter notification, stating that the content was removed “because it goes against our [Instagram] Community Guidelines, on hate speech or symbols.” Though Meta applied a 24-hour feature limit on the content creator who posted the content to Bolu Gündem’s Facebook page, the user notification did not alert the user to the restriction. On Instagram, Komedya Haber received a notification that its Instagram account was temporarily restricted from creating live videos. All three users then appealed Meta’s decision to remove the content, and Meta’s reviewers again concluded that each post violated the Hate Speech policy. Each user was notified that the content had been reviewed once more but that the content violated Facebook’s Community Standards or Instagram’s Community Guidelines. The appeal messages did not tell them which policy was violated. As a result of the Board selecting these three appeals, Meta identified that all three of its original decisions were wrong, and restored the content on each account on March 28, 2023, reversing the applicable strikes. By this point, the feature limits applied to two of the cases had already expired. Meta explained to the Board that the phrase was not used as a slur and therefore the three posts did not violate the Hate Speech policy. Between January and April 2023, Meta was conducting an annual audit of its slurs list for the Turkish market, which eventually led to the phrase “İngiliz uşağı” being removed from the list. This took place in parallel to the Board selecting these three cases, which, in line with regular process, led to Meta reviewing its original decisions. Through that review, the company also determined “İngiliz uşağı” was not a slur. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). The Board monitors implementation of recommendations Meta has committed to act on, and may follow-up on any prior recommendation in its case decisions. 
When the Board selects cases like these, in which Meta subsequently acknowledges that it made an error, the Board reviews the original decisions to increase understanding of the content moderation process and to make recommendations to reduce errors and increase fairness for people who use Facebook and Instagram. The Board further notes that Meta’s reversal of its original decisions in each of these cases was partly based on a change to its internal guidance after each post was made, removing the phrase “İngiliz uşağı” from its non-public slur list in April 2023. The Board understands that at the time of the company’s original decisions, Meta’s at-scale reviewers applied the policy and internal guidance that were in force at the time. When the Board identifies cases in which the appeals give rise to similar or overlapping issues, including related to content policies or their enforcement, or Meta’s human rights responsibilities, they may be joined and assigned to a panel to deliberate the appeals together. A binding decision will be made in respect of each post. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions: The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies: The Instagram Community Guidelines state that content containing hate speech will be removed. Under the heading “Respect other members of the Instagram community,” the guidelines state that it is “never OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases.” The Instagram Community Guidelines do not mention any specific rule on slurs, but the words “hate speech” link to the Facebook Community Standard on Hate Speech . In the rationale for its Hate Speech policy , Meta prohibits “the usage of slurs that are used to attack people on the basis of their protected characteristics.” Protected characteristics in Meta’s policy include, for example, national origin, religious affiliation, race, and ethnicity. At the time each of the three posts were created, removed and appealed to the Board, and when Meta reversed its original decisions in all three cases, “slurs” were defined as “words that are inherently offensive and used as insulting labels for the above characteristics.” Following a policy update on May 25, 2023, Meta now defines “slurs” as “words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic, often because these words are tied to historical discrimination, oppression, and violence” and adds that “they do this even when targeting someone who is not a member of the [protected characteristic] group that the slur inherently targets.” The policy rationale also outlines several exceptions that allow the use of a slur “to condemn it or raise awareness” or to be used “self-referentially or in an empowering way.” However, Meta may still remove the content “if the intention is unclear.” The May 25 revisions to the Hate Speech policy did not alter this language. 
In addition to the exceptions set out in the Hate Speech policy, the newsworthiness allowance allows “content that may violate [the] Facebook Community Standards or Instagram Community Guidelines, if it’s newsworthy and if keeping it visible is in the public interest.” Meta only grants newsworthiness allowances “after conducting a thorough review that weighs the public interest against the risk of harm” and looks to “international human rights standards, as reflected in [its] Corporate Human Rights Policy , to help make these judgments.” Meta states it assesses whether content raises “an imminent threat to public health or safety, or gives voice to perspectives currently being debated as part of a political process.” This assessment takes into account country circumstances such as whether an election or conflict is under way, whether there is a free press, and whether Meta’s products are banned. Meta states there is “no presumption that content is inherently in the public interest solely on the basis of the speaker’s identity, for example their identity as a politician.” The Board’s analysis was informed by the Meta’s commitment to “ Voice ” which the company describes as “paramount”, and its values of “Safety,” “Privacy” and “Dignity.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions All three media outlets separately appealed Meta’s removal decisions to the Board. In its appeal to the Board, Bolu Gündem pointed out that it paid a news agency for the video and that other news organizations had shared the video on Facebook without it being removed. BirGün Gazetesi emphasized the public’s right to receive information, while Komedya Haber’s appeal contested that the content included hate speech. 6. Meta’s submissions Meta removed all three posts under its Hate Speech Community Standard, because the phrase “İngiliz uşağı” was, from the time the videos were posted to when they were reinstated, a designated slur in Meta’s Turkish market, translated as “servant of the British.” At the time the three posts were removed, the Community Standard defined slurs consistent with the public-facing policy as “words that are inherently offensive and used as insulting labels for [...] protected characteristics” including national origin. Meta shared with the Board that, following an internal Policy Forum , it decided to move away from the concept of “inherently offensive” as its basis for describing slurs towards “a research-based definition focused on the word’s connection to historical discrimination, oppression, and violence against protected characteristic groups.” Meta has shared with the Board that this definitional change did not impact operational guidance to reviewers on how to implement the policy. The only change that would have impacted the outcome of these cases was the removal of “İngiliz uşağı” from the slur list. Meta explained that its policies “allow people to share hate speech and slurs to condemn, to raise awareness, self-referentially, or in an empowering way. 
However, the user’s intent must be clear. In order to qualify as reporting that is awareness raising, it is not enough to restate that someone else used hate speech or a slur. Instead, we [Meta] need specific additional context.” In response to the Board’s questions, Meta clarified that it allows slurs in a “reporting” context only when shared to raise awareness about the use of the slur with “specific additional context” and that “a neutral caption is not enough.” Meta explained that it didn’t apply this exception in these posts because the videos did not include clear awareness-raising or condemning context. In its response to the Board’s questions, Meta stated that the newsworthiness allowance was not necessary to apply in these cases because the content did not contain a violating slur. However, at the time of the original removals, Meta did consider the phrase to be a slur. For that scenario, Meta added that it would find that the public interest value of the content in the context of an election to outweigh any risk of harm, so it would also have restored the content. For the Board’s assessment of newsworthiness, see Section 8.1. In November 2022, Meta staff identified the need to update the Turkish slur list as part of the company’s preparations for the May 2023 presidential and parliamentary elections in Türkiye (Turkey). The annual audit of the country’s market slur list began in January 2023 and the company’s regional team submitted its proposed changes in mid-March 2023. In its audit, Meta decided that the phrase “İngiliz uşağı” did not constitute a slur and removed it, effective April 12, 2023, two weeks after the content was restored in all three cases. At the same time, Meta removed from its slur list other terms that combined the use of “uşak” (servant) with specific nationalities. In response to the Board’s questions, Meta stated it does not have documentation on when and why the phrase was originally designated as a slur, but it now recognizes it does not attack people based on a protected characteristic. The company also added that “İngiliz uşağı” was still on the slur list for the Turkish market at the time the three posts in this case were reviewed and therefore moderators acted in accordance with internal guidance by removing the content. Meta audits its slur lists through a process led by regional market teams “with the goal of de-designating any slurs that should not be on the lists” in January each year. Meta used a new auditing process that was trialled in the 2023 annual audit of the Turkish market slur list. The new process involves two steps: first, a qualitative analysis to determine the history and use of the term; and second, a quantitative analysis, to determine key data questions such as how much of the sample falls within policy exceptions. Meta explained that because “İngiliz uşağı” did not qualitatively meet its “slurs” definition (step one), it was removed from the list without progressing to a quantitative analysis of its use (step two). The Board asked Meta 23 questions in writing. The questions addressed issues related to the criteria and processes for slur designation; the internal guidance on slurs and application of policy exceptions; how mistake prevention systems operated differently in the reviews of the three posts, and evaluation of account level enforcements resulting from each content decision. Of the 23 questions, 22 were answered and one partially. 
The partial response was about when and why the phrase “İngiliz uşağı” was designated as a slur, with the company explaining that it lacked documentation. Meta also provided the Board with an oral briefing on the changes to its slurs definition and designation process. 7. Public comments The Oversight Board received 11 public comments relevant to these three cases. One of the comments was submitted from Central and South Asia; nine from Europe; and one from the United States and Canada. The submissions covered the following themes: the importance of a contextual approach to moderating slurs; proper user notice of the reasons for content removals; the effects of erroneous removals of content on news outlets; the relevance of newsworthiness allowance to the content; and calls for a public list of slur examples. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board examined whether to uphold or overturn Meta’s original decisions in these three cases by analyzing Meta’s content policies, human rights responsibilities and values. Taking these decisions together also provides the Board with a greater opportunity to assess their implications for Meta’s broader approach to content governance, particularly in the context of elections. 8.1 Compliance with Meta’s content policies I. Content rules Hate Speech The Board finds that the term “İngiliz uşağı” in these three cases is not hate speech under Meta’s Community Standards. Whether assessed against the definition of slurs prior to or following the May 25, 2023 policy changes, the term “İngiliz uşağı” does not attack individuals on the basis of a protected characteristic. The removal of content containing this term in all three cases is inconsistent with the rationale of the Hate Speech policy, as it does not attack people on the basis of a protected characteristic. The term “İngiliz uşağı” has a long history functioning as political criticism in Türkiye (Turkey). According to experts consulted by the Board, the use of the phrase preceded the founding of modern Türkiye (Turkey), when the term was used to criticize leaders in the Ottoman Empire for serving the interests of Britain, and the term is not discriminatory in nature. The confrontation in these three cases involves politicians from competing political parties. The AKP, MP Reyhanlıoğlu’s party, has faced criticism and public anger over the government’s handling of the earthquake response and its legislation granting amnesty to building developers for constructing buildings that did not adhere to earthquake safety codes. She directed the slur at Mayor İmamoğlu, a key figure of the CHP, the country’s largest opposition party. The tense relationship between the AKP and CHP leading up to the election, including the importance of the earthquake as an electoral topic, played out publicly during Mayor İmamoğlu and CHP presidential candidate Kemal Kılıçdaroğlu’s visit to Kahramanmaraş. The content in each of the three cases is therefore political speech on a matter of significant public interest in the electoral context. As Meta has explained, in order “to qualify as reporting that is awareness raising, it is not enough to restate that someone else used hate speech or a slur. In other words, a neutral caption is not enough.” If the content had included a slur, none of the media organizations would have qualified as “discussing” or “reporting” hate speech because the content was shared with a neutral caption in all three cases. 
In the Board’s view, even if this slur was appropriately designated on the list, the content in all three cases should nevertheless have been protected as “reporting.” The Board finds that the phrase “İngiliz uşağı” should not have been added to Meta’s confidential slur list, as it is not a form of hate speech. In other contexts, accusations of being a “foreign agent” may amount to a credible threat to individuals’ safety, but these can be addressed under other policies (for example, under Violence and Incitement). Even in those situations, Meta should distinguish threats from a speaker in an influential position from media reporting on those threats. Given the facts of these cases and the internal guidance in place at the time, content reviewers, who are moderating content at scale, acted in accordance with that guidance to remove content containing terms on Meta’s slur lists. At the time, that list included “İngiliz uşağı.” The reason for the errors in these cases was the policy decision to add the term to the slur list and the inappropriately narrow and confidential guidance on how reviewers should apply the “raising awareness” exception to posts “reporting” on slur usage. Newsworthiness allowance The Board expresses its concern that, at a time when Meta’s internal policies categorized “İngiliz uşağı” as a violating slur, the three posts were not escalated for a newsworthiness allowance assessment by Meta’s Core Policy Team (previously known within the company as the “Content Policy Team”). Turkish freedom of expression organization İfade Özgürlüğü Derneği (İFÖD) argued in its public comment that because of its public interest value, the content in all three cases should have qualified for a newsworthiness allowance. If the content contained a slur properly designated in accordance with Meta’s Hate Speech policy, the Board would agree. The three posts concern reporting on speech by one (former) politician, targeting a current politician, in a way that is within the boundaries of (even offensive) criticism that a politician should be expected to tolerate, including insulting epithets. That assessment could be different, for example, if a term was used in its particular context as a discriminatory slur. The video emerged at a moment of significant political and social importance after a series of devastating earthquakes had struck Türkiye (Turkey). The earthquakes, as well as discussions related to the government response and preparation for them, were important topics for President Erdoğan and CHP Presidential Candidate Kemal Kılıçdaroğlu in the campaign period prior to the May 2023 elections. In the aftermath of the earthquakes, the Turkish government also temporarily restricted access to Twitter and other social media sites as criticism of the government’s earthquake response spread. Since this footage was in the public interest and its removal would not reduce any risk of harm, Meta should have allowed the term to be used for public interest reporting, even if it had properly qualified as a slur.
The Board has previously insisted that Meta leave up content containing discriminatory slurs when the content otherwise related to significant moments in a country’s history (see Colombia protests case). II. Systemic challenges for enforcement and error prevention Slur list designation and audit processes Meta could not provide the Board with information on when or why it originally designated “İngiliz uşağı” as a slur because of insufficient documentation, a concern it seeks to address with its new slur designation and audit processes. Under the previous auditing process, the company’s regional teams with the support of policy and operations experts would conduct qualitative and quantitative analysis on the language and culture of the related region or market to create slur lists. This process would include reviewing the word’s associated meaning, its prevalence in Meta’s platforms, and its local and colloquial usage. Meta had required collecting and assessing at least 50 pieces of content containing that term in this process. However, Meta noted in its recent Policy Forum that the previous slur designation process had a number of issues, including indexing on offensiveness, lack of documentation, and subjective criteria; and as Meta noted to the Board, this was “inconsistently applied” with removal criteria not fixed or weighted. When Meta was trialing its new designation process in 2023 for the Turkish market, “İngiliz uşağı” was removed from the slur list. By coincidence, that audit was ongoing at the time the Board selected these three cases. The term had been on Meta’s slur list since at least 2021. According to Meta, the new process intends to better quantify alternative meanings and usages of a term for removing a slur designation, a process that focuses on better accounting for the changing meanings of words over time. These governance changes are generally positive, and if effectively implemented should reduce over-enforcement of the slurs policy. However, the new process would be enhanced if it specifically aimed to identify terms that were incorrectly added to the slur list. Meta should also ensure it updates and makes more comprehensive its explanation of slurs designation and auditing in the Transparency Center, aligning this with its new definition of slurs and its revised approach to slur lists audits. Mistake prevention measures and escalation challenges Reviewing these three cases together allowed the Board to assess how a variety of Meta’s mistake-prevention systems worked with respect to similar content and revisit a broader systematic challenge it has also noted in prior decisions. The Board is concerned that while various mistake-prevention systems were engaged in the review of each post, it appears they did not operate consistently for the benefit of media organizations or their audiences. In addition, the measures did not empower reviewers to escalate any of the three posts for further contextual review. Such escalations could have either led to the content being left up (e.g., for newsworthiness), and/or the error in adding this term to the slur list being identified earlier, outside of the annual audit. Cross-check was engaged in all three cases, but operated differently in the decision for each post. Only Bolu Gündem was listed as a media organization for the purpose of Early Response Secondary Review (ERSR), whereas BirGün Gazetesi and Komedya Haber were not. 
ERSR is the entity-based form of cross-check, for which any post from a listed entity receives additional review if marked for removal (see cross-check policy advisory opinion , paras 27-28). Of the three media organizations in these cases, only Bolu Gündem had a partner manager. According to Meta, a media organization must have a “partner manager” to be eligible for ERSR. According to Meta, partner managers “act as the link between external organizations and individuals who use Meta’s platforms and services” and they help account holders “optimize their presence and maximize the value they generate from Meta’s platforms and services.” The Board notes that the posts from BirGün Gazetesi and Komedya Haber both received cross-check review under General Secondary Review (GSR), which prioritizes content based on the “cross-check ranker.” Nevertheless, the Board is concerned that local or smaller media entities are not systematically included as ERSR listed entities as they do not have a partner manager. This reinforces concerns the Board expressed in its policy advisory opinion on cross-check about the program’s lack of transparency, and lack of objective criteria for inclusion in ERSR. Entities engaged in public interest journalism ought to have access to clear information on how their accounts can benefit from cross-check protection; if having a partner manager is a necessary condition for inclusion, there should be clear instructions on applying for a partner manager. In addition, the Board is concerned that the fact that all three posts were reviewed through cross-check did not lead to closer consideration of whether a policy exception should have applied, and/or an escalation to be made for a newsworthiness assessment. Moreover, Dynamic Multi-Review (DMR) was also “turned on” for the applicable review queue at the time the three posts were sent for initial moderator review. For the purpose of DMR, automation identified all three posts for multiple moderator reviews prior to removal, for the accuracy of human review and to mitigate the risk of incorrect decisions based on several factors such as virality and number of views. Out of a total of eight reviews across the three similar posts, which all preceded the additional cross-check reviews, only two reviewers (of one post each) determined those posts did not violate the Hate Speech policy. The Board is concerned that reviewers are not prompted when automation is identifying a higher risk of enforcement error, as this might encourage them to examine the content more closely, either to consider potentially applicable policy exceptions and/or to escalate the content for closer contextual analysis. Meta’s current mistake-prevention measures, in both DMR and cross-check, appear to be almost entirely geared towards ensuring moderators enforce the policies in line with internal guidance. They do not contain, it seems, additional mechanisms for reviewers to identify when strict adherence to Meta’s internal guidance is leading to the wrong decision, because the policy itself is wrong (as Meta later admitted was the case with respect to all three of its initial decisions). While automation correctly identified that the posts in all three cases were at risk of false-positive enforcement, the additional reviews by moderators did not lead to escalations for applying a newsworthiness allowance. 
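One way to picture the safeguard the Board has in mind here, surfacing automation’s own false-positive signal to the reviewer together with an escalation option, is sketched below. This is a hypothetical illustration of the Board’s suggestion, not a description of Meta’s tooling; the threshold, message text and function name are invented.

```python
# Hypothetical sketch of the prompt the Board describes: when automation estimates a
# high risk that removal would be a false positive, the review tool nudges the
# moderator to check policy exceptions or escalate for contextual review.

ESCALATION_THRESHOLD = 0.6  # illustrative value only

def reviewer_prompt(false_positive_probability: float, proposed_action: str) -> str:
    if proposed_action == "remove" and false_positive_probability >= ESCALATION_THRESHOLD:
        return (
            "High false-positive risk detected. Before confirming removal, check whether a "
            "policy exception applies (e.g. reporting or raising awareness), or escalate for "
            "contextual review, including a possible newsworthiness assessment."
        )
    return "No additional prompt."

print(reviewer_prompt(0.7, "remove"))
```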
Given the challenges of false positives in at-scale review, escalations should be more systematic and frequent for content relating to public interest debates, in particular in the context of elections. The fact that Meta applied its resource-intensive mistake-prevention systems to these cases, but still reached incorrect outcomes in all three, shows that they require further review. Meta previously dismissed similar concerns the Board raised about escalation pathways for newsworthiness assessments in the “ Colombia protests ” decision (see Meta’s response to “Colombia protests” recommendation no. 3) as it felt the work it was already doing was sufficient. The Board finds that these three cases demonstrate this issue requires re-examination. 8.2 Compliance with Meta’s human rights responsibilities The Board finds that Meta’s decision to remove the content in all three cases was inconsistent with Meta’s human rights responsibilities. Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including the “freedom to seek, receive, and impart information and ideas of all kinds.” The scope of the protection includes expression that “may be regarded as deeply offensive” ( General Comment 34 , para. 11). The protection of expression is also “particularly high” when public debate concerns “figures in the public and political domain” ( General Comment 34 , para. 34). The role of the media in reporting information across the digital ecosystem is critical. The Human Rights Committee has stressed that a “free, uncensored and unhindered press or other media is essential” with press or other media being able to “comment on public issues without censorship or restraint and to inform public opinion” ( General Comment 34 , para. 13). The expression at issue in each of these three cases deserves “particularly high” protection because the political dispute came during a significant political debate concerning the government’s earthquake response in the lead up to presidential and parliamentary elections in Türkiye (Turkey). Public anger and criticism after the earthquakes came as President Erdoğan and CHP presidential candidate Kemal Kılıçdaroğlu were campaigning in the months before the May 2023 presidential and parliamentary elections. In the Joint Declaration on Media Freedom and Democracy , UN and regional freedom of expression mandate holders advise that “large online platforms should privilege independent quality media and public interest content on their services in order to facilitate democratic discourse” and “swiftly and adequately remedy wrongful removals of independent quality media and public interest content, including through expedited human review” (Recommendations for social media platforms, page 8). Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. 
As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, para. 41). I. Legality (clarity and accessibility of the rules) The principle of legality requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, para. 25). The Human Rights Committee has further noted that rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” (Ibid.). In the context of online speech, the UN Special Rapporteur on freedom of expression has stated that rules should be specific and clear (A/HRC/38/35, para. 46). Meta’s hate speech prohibition on slurs is not sufficiently clear to users. Meta’s slurs definition prior to May 25, 2023, focused on offensiveness, which was excessively subjective and much broader than Meta’s definition of hate speech as framed in the policy rationale. Prior to the changes, Facebook and Instagram users were likely to have different interpretations of what “offensive” meant, creating confusion between circumstances in which there is an attack on a protected characteristic and those in which there is not. The May 25 changes have clarified Meta’s policy position to some extent, moving away from the vague concept of “offense.” The notifications in each of the three cases did not inform the respective users that the posts were removed because of slur usage, only that the content was removed for violating Meta’s Hate Speech policy. In its Q2 2022 update on the Oversight Board, Meta stated it is “planning on assessing the feasibility of further increasing the depth by adding additional granularity to which aspect of the policy has been violated at scale (e.g., violating the slurs prohibition within the Hate Speech Community Standard).” Meta noted in this report that its review systems are most accurate at the policy level and accordingly prioritize “correct, broader messaging” over “specific, yet inaccurate messaging.” For example, Meta has greater confidence it can accurately inform users they have violated the Hate Speech policy, but has less confidence it can accurately inform users of the specific rule within that policy (e.g., the prohibition on slurs) they have violated. In Meta’s response to the “South Africa slurs” case recommendation, however, the company said it is “building new capabilities to provide more detailed notifications,” a capability now offered in English, with testing in Arabic, Spanish, and Portuguese notifications on Facebook. This would not have benefited the users in these cases because the Board understands the notifications the users received were in Turkish. The Board urges Meta to provide this level of detail for non-English users. Meta’s list of exceptions to the prohibition on slurs, and hate speech more broadly, could be explained more clearly to users and content reviewers. Though the Board has reservations about requiring clear statements of intent in order to benefit from exceptions, to the extent intent should be a necessary consideration, Meta needs to more clearly specify to users how they can demonstrate intent for each of the policy exceptions listed. In addition, internal guidance for reviewers seems to permit broader exceptions than those communicated publicly to users, creating accessibility and clarity concerns.
Meta’s policy guidance states that “reporting” is permitted under the Hate Speech policy when it is raising awareness. The Board has previously criticized Meta’s public-facing Hate Speech policy for failing to explain rules that are contained in internal guidance to reviewers (see, e.g., Two buttons meme case). Meta should make public that reporting on hate speech is permitted, ideally in a standalone exception that distinguishes journalistic “reporting” from “raising awareness.” This information is particularly important to aid media organizations and others who wish to report on incidents during which a slur has been used by third parties in a matter of public interest, including when the slur is incidental to or not the main point of the news story, in ways that do not create an atmosphere of exclusion and/or intimidation. It should be framed in such a way that recognizes that media outlets and others engaged in journalism, in order to impartially report on current events, may not always state intent for “awareness raising” and that this may need to be inferred from other contextual cues. II. Legitimate aim Any restriction on expression should pursue one of the legitimate aims listed in the ICCPR, which include the “rights of others.” In several decisions, the Board has found that Meta’s Hate Speech policy, including the slurs prohibition, pursues the legitimate aim of protecting the rights of others, namely not to be discriminated against (see, for example, “Armenians in Azerbaijan” decision). The Board notes that Meta’s May 25 update to its slurs definition has made this aim clearer. Prior references to slurs as “inherently offensive” may have been read to imply a right of individuals to protection from offensive speech per se. This would not be a legitimate aim, as no right to be protected from offensive speech exists under international human rights law. Meta’s new definition, which replaces the concept of offensiveness with a more objective definition covering terms that “inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic,” more closely aligns with the legitimate aim of protecting the rights of others. III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment 34, para. 34). The Board finds that it was not necessary to remove the content in these three cases. When combined with systemic failures to apply relevant exceptions, Meta’s internal list of slurs can amount to a near-absolute ban, raising both necessity and proportionality concerns in the context of journalistic reporting. In relation to necessity, the inclusion of “İngiliz uşağı” on the slurs list was not necessary to protect people from hate speech because it is not used to attack persons on the basis of a protected characteristic. Meta’s slurs list also appears to include terms that do not meet the company’s own definition of slurs, prior to or following the May 25 policy revisions.
The Board has been given full access to slur lists last updated in the first quarter of 2023, and many of the terms listed across markets are questionable as to whether they are hate speech or would be better understood as offensive insults that are not discriminatory in nature. Some Board Members have also expressed concern that the lists omit many hate speech terms one would expect to see on them; whereas for some markets or languages the list of designated terms runs for several pages, for other markets the lists are much shorter. At the time of Meta’s original decisions in these cases, Meta’s prior definition of slurs, which then hinged on the concept of offensiveness, was overbroad and led to disproportionate restrictions on expression when three media organizations reported on events of political importance involving slur usage by public figures. Even if the phrase had been properly designated as a slur, when reporting about events that included its use by third parties in ways that would not incite violence or discrimination, the content should have been qualified as permissible “reporting.” Meta’s undisclosed policy guidance requiring that the reporting of slurs be accompanied by additional context to be considered “awareness raising” interfered with each of the news outlets’ editorial discretion and attempts to inform the Turkish public. The media entities in these three cases shared the video without the additional context that would indicate an intent to condemn or raise awareness (see above for the Board’s analysis of Meta’s exceptions in section 8.1). In the “Mention of the Taliban in news reporting” decision, the Board examined the challenges of requiring clear user intent “even where contextual clues make clear the post is, in fact, reporting.” While that case concerned the Dangerous Organizations and Individuals policy (where there is a public exception for reporting on designated entities), the observations on intent there apply to these cases on hate speech too. It is often considered good practice in journalism to report facts neutrally or impartially, without value judgment, a practice that is in tension with Meta’s qualifications for reporting, which require clear intent to condemn or raise awareness. These cases bring an additional facet to that critique. While Meta’s “raising awareness” exception addresses reporting on slur usage, that narrow application is underinclusive of circumstances, such as those in these cases, in which slur use was largely incidental to the main topic being reported. In these cases, removing the three posts was an unnecessary and disproportionate restriction on the freedom of expression rights of the individuals in the media and on the access to information rights of their audience. The Board is concerned about Meta mechanically enforcing its hate speech policy on slurs and failing to account for situations in which a public figure is present and is the target of criticism. The Human Rights Committee has observed that public officials are “legitimately subject to criticism and political opposition” (General Comment 34, para. 38). The Board has raised this concern before in its “Colombia protests” decision. In that case, the Board said context should be carefully considered, not only the political context where a slur is used, but also whether a slur is used as part of criticism of political leaders.
The Board’s “Iran protests slogan” decision addressed hypothetical threats against political leaders, emphasizing the importance of protecting rhetorical political speech while also ensuring all people, including public figures, are protected from credible threats. Criticism of public figures can take a variety of forms, including forms that involve offensive language, but Meta’s current enforcement approach does not provide the space necessary to balance these competing factors thoughtfully, under either the undisclosed rules for reporting or the parallel and more generally applicable newsworthiness allowance. A policy that better accommodates news reporting would allow for more thoughtful assessment of context during at-scale review, without requiring escalation. As the Board stressed above (Section 8.1: mistake prevention measures and escalation challenges), and in its “Colombia protests” decision, potentially newsworthy posts that merit closer contextual assessment appear not to be escalated to Meta’s policy team as systematically or frequently as they should be. Although Meta presents the newsworthiness allowance as something of a fail-safe for protecting public interest expression, the company’s own transparency reporting reveals the allowance was applied only 68 times in the year from June 2021 to May 2022. As the Board previously noted in its “Colombia protests” decision, the “newsworthiness exception should not be construed as a broad permission for hate speech to remain up.” However, there need to be stronger mechanisms to protect public interest expression, which can too easily be wrongly removed. In two of the cases, Meta’s strikes and penalty systems compounded necessity and proportionality concerns, with the wrongful removals resulting in further limitations on user expression and media freedom. These measures made it more difficult for both media organizations to share their reporting freely for the duration of those feature limits. The chilling effect of likely future, and graver, sanctions had a real impact at a time when the earthquakes and the pre-electoral period made access to independent local news particularly important. The Board also encourages Meta to experiment with proactive in-house procedures to avoid false positives, and with less intrusive means of regulating the use of slurs than removals that can result in strikes and feature limits. Given that freedom of expression, reflected in Meta’s paramount value of “voice,” is the rule and Meta’s prohibition on slurs the exception, Meta’s internal guidance to moderators should establish a presumption that journalistic reporting (including citizen journalism) should not be removed. While the Board emphasized in the “Colombia protests” decision that the “newsworthiness exception should not be construed as a broad permission for hate speech to remain up,” Meta’s internal rules should encourage full consideration of the specific circumstances, to ensure that public interest reporting, which is not hate speech, is not incorrectly removed. The Board also recalls its decision in the Wampum Belt case, in which it emphasized the importance of Meta assessing content as a whole, rather than making assessments based on isolated parts of the content. In addition, revising user notifications to include behavioral nudges, for example informing users when their posts appear to contain prohibited slurs and inviting them to edit those posts, may increase compliance with the company’s policies. 
Media organizations need additional resources to understand how to report on stories that include slur usage in ways that will not lead to content removal. Advice to users on how to edit broadcast video to obscure slurs while still reporting on current events may also reduce the number of media organizations that find their accounts restricted as a result of covering public interest issues. 9. Oversight Board decision The Oversight Board overturns Meta’s original decisions to take down the content in each of these three cases. 10. Recommendations Content policy 1. To ensure media organizations can more freely report on topics of public interest, Meta should revise the Hate Speech Community Standard to explicitly protect journalistic reporting on slurs, when such reporting, in particular in electoral contexts, does not create an atmosphere of exclusion and/or intimidation. This exception should be made public and be separate from the “raising awareness” and “condemning” exceptions. There should be appropriate training for moderators, especially those working in languages other than English, to ensure respect for journalism, including local media. The reporting exception should make clear to users, in particular those in the media, how such content should be contextualized, and internal guidance for reviewers should be consistent with this. The Board will consider this recommendation implemented when the Community Standards are updated, and internal guidelines for Meta’s human reviewers are updated to reflect these changes. 2. To ensure greater clarity about when slur use is permitted, Meta should ensure the Hate Speech Community Standard explains each exception more clearly, with illustrative examples. Situational examples can be described in the abstract, to avoid repeating hate speech terms. The Board will consider this implemented when Meta restructures its Hate Speech Community Standard and adds illustrative examples. Enforcement 3. To ensure fewer errors in the enforcement of its Hate Speech policy, Meta should expedite audits of its slur lists in countries with elections in the second half of 2023 and early 2024, with the goal of identifying and removing terms mistakenly added to those lists. The Board will consider this implemented when Meta provides an updated list of designated slurs and a list of de-designated terms, per market, following the new audits. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. 
Return to Case Decisions and Policy Advisory Opinions" fb-ttxibh8s,Fictional Assault on Gay Couple,https://www.oversightboard.com/decision/fb-ttxibh8s/,"December 18, 2023",2023,December,"TopicDiscrimination, LGBT, ViolenceCommunity StandardHate speech",Hate speech,Overturned,United Kingdom,"A user appealed Meta’s decision to leave up a Facebook post that depicts a fictional physical assault on a gay couple who are holding hands, followed by a caption containing calls to violence.",5009,771,"Overturned December 18, 2023 A user appealed Meta’s decision to leave up a Facebook post that depicts a fictional physical assault on a gay couple who are holding hands, followed by a caption containing calls to violence. Summary Topic Discrimination, LGBT, Violence Community Standard Hate speech Location United Kingdom Platform Facebook This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not involve a public comments process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to leave up a Facebook post that depicts a fictional physical assault on a gay couple who are holding hands, followed by a caption containing calls to violence. This case highlights errors in Meta’s enforcement of its Hate Speech policy. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and removed the post. Case Description and Background In July 2023, a Facebook user posted a 30-second video clip, which appears to be scripted and produced with actors, showing a gay couple being beaten and kicked by people. The video then shows another group of individuals dressed in religious attire approaching the fight. After a few seconds, this group joins in, also assaulting the couple. The video ends with the sentence in English: “Do your part this pride month.” The accompanying caption, also in English, states, “Together we can change the world.” The post was viewed approximately 200,000 times and reported fewer than 50 times. According to Meta: “Our Hate Speech policy prohibits calls to action and statements supporting or advocating harm against people based on a protected characteristic, including sexual orientation.” The post’s video and caption endorse violence against a protected characteristic, which is clearly depicted through visuals of two men holding hands and references to Pride month. Therefore, the content violates Meta’s Hate Speech policy. Meta initially left the content on Facebook. After the Board brought this case to Meta’s attention, the company determined that the content did violate its Community Standards and removed the content. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the person who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). 
The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights errors in Meta’s enforcement of its Hate Speech policy. The content in this case contained multiple indicators that the user was advocating the use of violence against a protected-characteristic group, in its visual depictions of a physical assault against two men holding hands. The text at the end of the video encourages users to “do their part” during Pride month. Moderation errors like this one can negatively impact the protected-characteristic group. The Board notes that this content was reported multiple times during a month that is meant to celebrate LGBTQIA+ people and, as such, there should have been heightened awareness and more robust content-moderation processes in place. Previously, the Board has issued a recommendation on improving the enforcement of Meta’s Hate Speech policy. Specifically, the Board recommended that “Meta should clarify the Hate Speech Community Standard and the guidance provided to reviewers, explaining that even implicit references to protected groups are prohibited by the policy when the reference would reasonably be understood,” ( Knin Cartoon decision, recommendation no. 1). Partial implementation of this recommendation by Meta has been demonstrated through published information. The Board highlights this recommendation again and urges Meta to address these concerns to reduce the error rate in moderating hate speech content. Decision The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. The Board also urges Meta to speed up the implementation of still-open recommendations to reduce such errors. Return to Case Decisions and Policy Advisory Opinions" fb-tye2766g,South Africa slurs,https://www.oversightboard.com/decision/fb-tye2766g/,"September 28, 2021",2021,,"TopicGovernments, Marginalized communities, PoliticsCommunity StandardHate speech","Type of DecisionStandardPolicies and TopicsTopicGovernments, Marginalized communities, PoliticsCommunity StandardHate speechRegion/CountriesLocationSouth AfricaPlatformPlatformFacebookAttachmentsPublic Comments 2021-011-FB-UA",Upheld,South Africa,The Oversight Board has upheld Facebook's decision to remove a post discussing South African society under its Hate Speech Community Standard.,30924,4785,"Upheld September 28, 2021 The Oversight Board has upheld Facebook's decision to remove a post discussing South African society under its Hate Speech Community Standard. Standard Topic Governments, Marginalized communities, Politics Community Standard Hate speech Location South Africa Platform Facebook Public Comments 2021-011-FB-UA The Oversight Board has upheld Facebook’s decision to remove a post discussing South African society under its Hate Speech Community Standard. The Board found that the post contained a slur which, in the South African context, was degrading, excluding and harmful to the people it targeted. About the case In May 2021, a Facebook user posted in English in a public group that described itself as focused on unlocking minds. The user’s Facebook profile picture and banner photo each depict a black person. 
The post discussed “multi-racialism” in South Africa, and argued that poverty, homelessness, and landlessness have increased for black people in the country since 1994. It stated that white people hold and control the majority of the wealth, and that wealthy black people may have ownership of some companies, but not control. It also stated that if “you think” sharing neighborhoods, language, and schools with white people makes you “deputy-white” then “you need to have your head examined.” The post then concluded with “[y]ou are” a “sophisticated slave,” “a clever black,” “’n goeie kaffir” or “House nigger” (hereafter redacted as “k***ir” and “n***er”). Key findings Facebook removed the content under its Hate Speech Community Standard for violating its policy prohibiting the use of slurs targeted at people based on their race, ethnicity and/or national origin. The company noted that both “k***ir” and “n***er” are on Facebook’s list of prohibited slurs for the Sub-Saharan market. The Board found removing this content to be consistent with Facebook’s Community Standards. The Board evaluated public comments and expert research in finding that both “k***ir” and “n***er” have discriminatory uses, and that “k***ir” is a particularly hateful and harmful word in the South African context. The Board agreed with Facebook that the content did not condemn or raise awareness of the use of “k***ir,” and did not use the word in a self-referential or empowering manner. As such, no exception to the company’s Hate Speech Community Standard applied in this case. While the user’s post discussed relevant and challenging socio-economic and political issues in South Africa, the user racialized this critique by choosing the most severe terminology possible in the country. In the South African context, the slur “k***ir” is degrading, excluding and harmful to the people it targets. Particularly in a country still dealing with the legacy of apartheid, the use of racial slurs on the platform should be taken seriously by Facebook. The Board supports greater transparency around Facebook’s slur list. The company should provide more information about the list, including how it is enforced in different markets and why it remains confidential. The Board also urged Facebook to improve procedural fairness in enforcing its Hate Speech policy, issuing the recommendation below. This would help users understand why Facebook removed their content and allow them to change their behavior in the future. The Oversight Board’s decision The Oversight Board upholds Facebook’s decision to remove the post. In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board has upheld Facebook’s decision to remove a post discussing South African society under its Hate Speech Community Standard which prohibits the use of slurs. 2. Case description In May 2021, a Facebook user posted in English in a public group that described itself as focused on unlocking minds. The user’s Facebook profile picture and banner photo each depict a black person. The post discussed “multi-racialism” in South Africa, and argued that poverty, homelessness, and landlessness have increased for black people in South Africa since 1994. It stated that white people hold and control the majority of wealth, and that wealthy black people may have ownership of some companies, but not control. 
It also stated that if “you think” sharing neighborhoods, language, and schools with white people makes you “deputy-white” then “you need to have your head examined.” The post then concluded with “[y]ou are” a “sophisticated slave,” “a clever black,” “’n goeie kaffir” or “House nigger” (hereafter redacted as “k***ir” and “n***er”). The post was viewed more than 1,000 times, receiving fewer than five comments and more than 10 reactions. It was shared over 40 times. The post was reported by a Facebook user for violating Facebook’s Hate Speech Community Standard . According to Facebook, the user who posted the content, the user who reported the content, and “all users who reacted to, commented on and/or shared the content” have accounts located in South Africa. The post remained on the platform for approximately one day. Following review by a moderator, Facebook removed the post under its Hate Speech policy. Facebook’s Hate Speech Community Standard prohibits content that “describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels” based on their race, ethnicity and/or national origin. Facebook noted that while its prohibition against slurs is global, the designation of slurs on its internal slurs list is market oriented. Both “k***ir” and “n***er” are on Facebook’s list of prohibited slurs for the Sub-Saharan market. Facebook notified the user that their post violated Facebook’s Hate Speech Community Standard. Facebook stated that the notice to the user explained that this Standard prohibits, for example, hateful language, slurs, and claims about the coronavirus. The user appealed the decision to Facebook, and, following a second review by a moderator, Facebook confirmed the post was violating. The user then submitted an appeal to the Oversight Board. 3. Authority and scope The Board has authority to review Facebook’s decision following an appeal from the user whose post was removed (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). The Board may uphold or reverse that decision, and its decision is binding on Facebook (Charter Article 3, Section 5). The Board’s decisions may include policy advisory statements with non-binding recommendations that Facebook must respond to (Charter Article 3, Section 4). The Board is an independent grievance mechanism to address disputes in a transparent and principled manner. 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards Facebook's Community Standards define hate speech as “a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” Under “Tier 3,” prohibited content includes content that “describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels for the above characteristics.” II. Facebook’s values Facebook’s values are outlined in the introduction to the Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. 
Facebook limits “Voice” in service of four values, and two are relevant here: “Safety”: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. “Dignity” : We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. III. Human rights standards The UN Guiding Principles on Business and Human Rights ( UNGPs ), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Facebook announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis in this case was informed by the following human rights standards: 5. User statement The user stated in their appeal to the Board that people should be allowed to share different views on the platform and “engage in a civil and healthy debate.” The user also stated that they “did not write about any group to be targeted for hatred or for its members to be ill-treated in any way by members of a different group.” The user argued that their post instead “encouraged members of a certain group to do introspection and re-evaluate their priorities and attitudes.” They also stated that there is nothing in the post or “in its spirit or intent” that would promote hate speech, and that it is unfortunate that Facebook is unable to tell them what part of their post is hate speech. 6. Explanation of Facebook’s decision Facebook removed the content under the Hate Speech Community Standard , specifically for violating its policy prohibiting the use of slurs targeted at people based on their race, ethnicity and/or national origin. Facebook noted in its decision rationale that it prohibits content containing slurs, which are inherently offensive and used as insulting labels, unless the user clearly demonstrates that that content “was shared to condemn, to discuss, to raise awareness of the slur, or the slur is used self-referentially or in an empowering way.” Facebook argued that these exceptions did not apply in this case. Facebook argued the post addressed itself to “Clever Blacks” and that this phrase “has been used to criticize Black South Africans who are perceived to be ‘excessively anxious to appear impressively clever or intelligent.’” Facebook also noted that the post used the words “k***ir” and “n***er,” both of which are on its confidential list of prohibited slurs. According to Facebook, the word “k***ir” is deemed as “South Africa’s most charged epithet” and historically used by white people in South Africa “as a derogatory term to refer to black people.” Facebook added that this term “has never been reclaimed by the Black community.” Facebook stated that the word “n***er” is also “highly offensive in South Africa” but that it “has been reclaimed by the Black community for use in a positive sense.” Facebook also noted that, as part of the process for determining whether a word or phrase constitutes a slur, it must be recommended by its internal or external stakeholders. 
Facebook specified that it recently held consultations with stakeholders that confirmed the need for the exception of the Hate Speech policy that allows the use of slurs when “used self-referentially or in an empowering way.” According to Facebook, external stakeholders generally agreed that it is important “to allow people to use a reclaimed slur in an empowering way,” but it is also critical that Facebook does not “guess, decide, or gather data about users’ membership in a protected characteristic” to decide whether the use of a slur violates its policies. Facebook confirmed in its response to the Board that the external stakeholders included seven experts/organizations in North America, 16 from Europe, 30 from Middle East, two from Africa, six in Latin America and one in the Asia Pacific/India region. Facebook concluded that while the user’s profile picture depicts a black person, the user “does not identify themselves with the slurs or argue that they should be reconsidered or reclaimed.” According to Facebook, “the slurs in this post are being used in an offensive manner to attack” black people who live among white people. As such, Facebook stated that the removal of the post was consistent with its Hate Speech Community Standard. Facebook also stated that its removal was consistent with its values of “Dignity” and “Safety,” when balanced against the value of “Voice.” According to Facebook, the slurs in the post were used “to attack other people in a harmful manner antithetical to Facebook’s values.” In this regard, Facebook referred to the Board’s case decision 2020-003-FB-UA . Facebook argued that its decision was consistent with international human rights standards. It stated that its decision complied with the international human rights law requirements that restrictions on freedom of expression respect the principles of legality, legitimate aim, and necessity and proportionality. According to Facebook, its policy was “easily accessible” in the Community Standards and “‘the user’s choice of words fell squarely within the prohibition’ on slurs.” Additionally, the decision to remove the content was legitimate to protect “the rights of others from harm and discrimination,” and consistent with the requirement under Article 20, para. 2 of the ICCPR to prohibit speech that advocates “national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.” Finally, Facebook argued that its decision to remove the content was “necessary and proportionate to limit harm” against members of the black community and “to other viewers of seeing hate speech,” referring to the Israel Democracy Institute and Yad Vashem’s “ Recommendations for Reducing Online Hate Speech ,” and Richard Delgado’s “ Words That Wound: A Tort Action for Racial Insults, Epithets, and Name-Calling .” 7. Third-party submissions The Oversight Board received six public comments related to this case. Three of the comments were from Sub-Saharan Africa, specifically South Africa, one was from Middle East and North Africa, one was from Asia Pacific and Oceania, and one was from the United States and Canada. The Board received comments from stakeholders including academia and civil society organizations focusing on freedom of expression and hate speech in South Africa. 
The submissions covered themes including the analysis of the words “clever blacks,” “n***er” and “k***ir;” whether the words “n***er” and “k***ir” qualify as hate speech; the user’s and reporter’s identity and its impact on how the post was perceived; and the applicability of Facebook’s Hate Speech policy exceptions. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board looked at the question of whether this content should be restored through three lenses: Facebook’s Community Standards; the company’s values; and its human rights responsibilities. 8.1 Compliance with Community Standards The Board finds that removing this content is consistent with Facebook’s Community Standards. The use of the word “k***ir” in the user’s post violated the Hate Speech Community Standard, and no policy exception applied. The Hate Speech Community Standard prohibits attacks based on protected characteristics. This includes “[c]ontent that describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels for the above characteristics.” Facebook considers “k***ir” and “n***er” racial slurs. The Board evaluated public comments and expert research in finding that both slurs have discriminatory uses, and that “k***ir” is a particularly hateful and harmful word in the South African context. The internet is a global network and content that is posted on Facebook by a user in one context may circulate and cause damage in other contexts. At the same time, Facebook’s confidential slur list is divided by markets in recognition that words carry different meaning and may cause different impacts in some situations. The Board notes that it has previously dealt with the use of the word “kafir” in case decision 2020-007-FB-FBR, where the Board ordered the restoration of the content. In that case, Facebook did not treat the term as a slur, but rather meaning “non-believers” as the target group of an alleged “veiled threat” under the Violence and Incitement policy. The term with one “f,” used in that case in India, has the same origins in Arabic as the South African term with two. This demonstrates the difficulty for Facebook of enforcing a blanket prohibition on certain words globally, where similar or identical terms in the same or different languages can hold different meanings and pose different risks depending on their contextual use. The Board notes that the post was targeted at a group of black South Africans. The Board further notes that the user's critique discussed this group’s presumed economic, educational and professional status and privilege. The user argued in their statement to the Board that they were not targeting or inciting hate or discrimination against persons on account of their race. A few Board Members found this argument compelling. However, the user chose the most severe terminology possible in South Africa to racialize this critique. The use of the “k***ir” term, with the prefix “good” in Afrikaans, has a clear historical association that carries significant weight in South Africa. The Board finds that the use of the “k***ir” term in this context cannot be separated from its harmful and discriminatory meaning. Facebook told the Board that it reviews its slur list annually. About the designation of “k***ir” on the list, Facebook shared that in 2019 it held a consultation with civil society organizations in South Africa. 
In that meeting, stakeholders told Facebook that “k***ir” “is used in a way to denigrate and demean a Black person as inferior and worthy of contempt.” To meet its human rights responsibilities when developing and reviewing policies, including the slur list, Facebook should consult potentially affected groups and other relevant stakeholders, including human rights experts. Facebook has four exceptions to its slur policy that are referenced in the policy rationale of the Hate Speech Community Standard: “We recognize that people sometimes share content that includes someone else’s hate speech to condemn it or raise awareness. In other cases, speech that might otherwise violate our standards can be used self-referentially or in an empowering way.” The majority of the Board is of the view that Facebook’s exceptions did not apply in this case. This is because the content did not condemn the use of the word “k***ir,” it did not raise awareness, and the word was not used in an empowering manner. The Board also found this content was not self-referential, despite a few Members considering that this exception should have applied because the post expresses criticism of some privileged members of the targeted group. However, the Board found that nothing in the post suggests the user considers themself to be in that targeted group. Further, the user’s reference to “you” and “your” in the post distanced the user from the targeted group. Therefore, the Board finds that Facebook was acting according to its Community Standard on Hate Speech when it decided to remove this content. 8.2 Compliance with Facebook’s values The Board recognizes that “Voice” is Facebook’s paramount value, and that Facebook wants users of the platform to be able to express themselves freely. However, Facebook’s values also include “Dignity” and “Safety.” The Board finds the value of “Voice” to be of particular importance to political discourse about racial and socio-economic equality in South Africa. Arguments about the distribution of wealth, racial division and inequality are highly relevant, especially in a society that many argue is still undergoing a transition from apartheid towards greater equality. Slurs can also affect the “Voice” of those they target, as their use may silence the people targeted and inhibit their participation on Facebook. The Board also considers the values of “Dignity” and “Safety” to be of vital concern in this context. The Board found that the use of the slur “k***ir” in the context of South Africa can be degrading, excluding and harmful to the people targeted by the slur (see, for example, the 2019 PeaceTech Lab and Media Monitoring Africa’s Lexicon of Hateful Terms, pages 12 and 13). Particularly in a country still dealing with the legacy of apartheid, the mention of racial slurs on the platform should be taken seriously by Facebook. It is relevant that in this context the user opted to deploy a slur term that is particularly incendiary in South Africa. It was possible for the user to engage in political and socio-economic discussions on Facebook in ways that appealed to the emotions of their audience without referencing this slur. This justified displacing the user’s “Voice” to protect the “Voice,” “Dignity” and “Safety” of others. 8.3 Compliance with Facebook’s human rights responsibilities The Board concludes that removing the content is consistent with Facebook’s human rights responsibilities as a business. 
Facebook has committed itself to respecting human rights under the UN Guiding Principles on Business and Human Rights (UNGPs). Its Corporate Human Rights Policy states that this includes the International Covenant on Civil and Political Rights (ICCPR). Article 19 of the ICCPR provides for broad protection of expression, and protection is “particularly high” for political expression and debate (General Comment 34, para. 38). The International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) also protects freedom of expression (Article 5), and the Committee tasked with monitoring states’ compliance has emphasized the importance of that right in assisting “vulnerable groups in redressing the balance of power among the components of society” and in offering “alternative views and counterpoints” in discussions (CERD Committee, General Recommendation 35, para. 29). At the same time, the Board has upheld Facebook’s decisions to restrict content where those restrictions meet the three-part test under Article 19 of the ICCPR of legality, legitimacy, and necessity and proportionality. The Board concluded that Facebook’s actions satisfied its responsibilities under this test. I. Legality (clarity and accessibility of the rules) The principle of legality under international human rights law requires rules used by states to limit expression to be clear and accessible (General Comment 34, para. 25). The Human Rights Committee has further noted that rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” (General Comment 34, para. 25). In some situations, Facebook’s concepts of “inherently offensive” and “insulting” may be too subjective and raise concerns for legality (A/74/486, para. 46; see also A/HRC/38/35, para. 26). Additionally, there may be situations where a slur has multiple meanings or can be used in ways that would not be considered an “attack.” The Board asked Facebook how its market-specific slur list is enforced, and whether a slur’s appearance on any market list means it cannot be used globally. Facebook responded that its “prohibition against slurs is global, but the designation of slurs is market-specific, as Facebook recognizes that cultural and linguistic variations mean that words that are slurs in some places may not be in others.” The Board reiterated its initial question. Facebook then responded that “[i]f a term appears on a market slur list, the hate speech policy prohibits its use in that market. The term could be used elsewhere with a different meaning; therefore, Facebook would independently evaluate whether to add it to the other market’s slur list.” It remains unclear to the Board how Facebook enforces the slur prohibition in practice and at scale. The Board does not know how Facebook’s enforcement processes to identify and remove violating content operate globally for market-specific terms, how markets are defined, and when and how this independent evaluation occurs. In this case, as noted above, the sources consulted by the Board concur that “k***ir” is widely understood as South Africa’s most charged racial epithet. As the expression fell unambiguously within the prohibition, Facebook met its responsibility of legality in this case. The Board notes its decision in case 2021-010-FB-UA and its recommendation that Facebook provide illustrative examples from the slurs policy in the public-facing Community Standards (Recommendation No. 1). 
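To make the market-scoped model Facebook describes above easier to follow, the sketch below illustrates, in schematic Python, how a global prohibition with market-specific designation could work in principle: a term triggers enforcement in a given market only if it appears on that market’s list, while the same term may remain undesignated elsewhere pending a separate evaluation. This is purely an editorial illustration, not Facebook’s actual implementation; the market names, terms, data structures and function names are hypothetical, and real enforcement also involves classifiers, human review and the policy exceptions discussed in this decision, none of which is modelled here.

```python
# Illustrative sketch only: not Facebook's/Meta's actual system.
# Market names, terms and structures below are hypothetical placeholders.

# Designation is market-specific: each market has its own list of terms.
MARKET_SLUR_LISTS: dict[str, set[str]] = {
    "market_a": {"term_x", "term_y"},
    "market_b": {"term_y"},
}

def designated_terms_in_post(text: str, market: str) -> set[str]:
    """Return designated terms from the given market's list found in the post.

    The prohibition is "global" in the sense that every market is checked
    against its own list, but a term only triggers enforcement in markets
    where it has been designated.
    """
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return words & MARKET_SLUR_LISTS.get(market, set())

# Example: "term_x" is designated in market_a but not in market_b,
# so the same post is treated differently depending on the market.
post = "an example post containing term_x"
assert designated_terms_in_post(post, "market_a") == {"term_x"}
assert designated_terms_in_post(post, "market_b") == set()
```

Even under such a simplified model, the contextual step the Board finds opaque remains: before any removal, the exceptions for condemnation, awareness raising, self-referential or empowering use would still need to be assessed.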
The Board supports greater transparency around the slur list and continues to discuss how Facebook could provide users with sufficient clarity while respecting the rights to equality and non-discrimination. A minority of the Board believes Facebook should make its slur list public, so it is available to all users. A majority believes the Board should better understand the procedure and criteria for building the list and how specifically it is enforced, as well as possible risks in publication, including strategic behavior to evade slur violations and whether certain words accumulate with harmful effect. Facebook should contribute to this discussion by publishing more information about the slur list, designation and review processes, its enforcement and application globally and/or by market or language, and why it remains confidential. II. Legitimate aim Any state restriction on expression should pursue one of the legitimate aims listed in the ICCPR. These include the “rights of others.” Previously the Board has stated that the slur prohibition “seeks to protect people’s rights to equality and non-discrimination (Article 2, para. 1, ICCPR [and] to exercise their freedom of expression on the platform without being harassed or threatened (Article 19, ICCPR),” among other rights (case decision 2020-003-FB-UA ). The Board reiterates that these are legitimate aims. III. Necessity and proportionality The principle of necessity and proportionality under international human rights law requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” ( General Comment 34 , para. 34). In this case, the Board decides that removing the content was appropriate to achieve a protective function. The Board also issues a policy recommendation to Facebook on improving the enforcement of its Hate Speech Community Standard. Facebook’s Hate Speech Community Standard prohibits some discriminatory expression including slurs, absent any requirement that the expression incite violence or discriminatory acts. While such prohibitions would raise concerns if imposed by a government at a broader level ( A/74/486 , para. 48), particularly if enforced through criminal or civil sanctions, the Special Rapporteur indicates that entities engaged in content moderation like Facebook can regulate such speech: The scale and complexity of addressing hateful expression presents long-term challenges and may lead companies to restrict such expression even if it is not clearly linked to adverse outcomes (as hateful advocacy is connected to incitement in Article 20(2) of the ICCPR). Companies should articulate the bases for such restrictions, however, and demonstrate the necessity and proportionality of any content actions ( A/HRC/38/35 , para. 28). In this case, the historical and social context was crucial, as the Board notes the use of the word “k***ir” is closely linked with discrimination and the history of apartheid in South Africa. The Board also discussed the status of the speaker and their intent. The Board acknowledges that there may be instances in which the racial identity of the speaker is relevant to analysis of the content’s impact. 
The Board notes the Special Rapporteur’s concerns that inconsistent Hate Speech policy enforcement may “penaliz[e] minorities while reinforcing the status of dominant or powerful groups” to the extent that harassment and abuse remains online while “critiques of racist phenomena and power structures” may be removed ( A/HRC/38/35 , para. 27). While a profile photo may lead to inferences about the user, the Board notes it is generally not possible to confirm if profile photos depict those responsible for content. Additionally, the Board discussed concerns Facebook said stakeholders raised about it attempting to determine users’ racial identities. The Board agreed that Facebook gathering or maintaining data on users’ perceived racial identities presents serious privacy concerns. In relation to intent, while the user stated they wished to encourage introspection, the post invoked a racial slur with charged historical implications to criticize some black South Africans. This was a complex decision for the Board. It results in the removal of expression that discusses relevant and challenging socio-economic and political issues in South Africa. Such discussions are important, and a certain degree of provocation should be tolerated when discussing such matters on Facebook. However, the Board finds that given the information analyzed in the previous paragraphs, Facebook’s decision to remove the content was appropriate. The Board also issues a policy recommendation that Facebook prioritize improving procedural fairness to users about its hate speech policy enforcement, so that users can understand with greater clarity the reasons for content removal where it occurs and have the possibility to consider changing their behavior. 9. Oversight Board decision The Oversight Board upholds Facebook’s decision to remove the content. 10. Policy recommendation Enforcement To ensure procedural fairness for users, Facebook should: *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. 
Return to Case Decisions and Policy Advisory Opinions" fb-u2hha647,Mention of the Taliban in news reporting,https://www.oversightboard.com/decision/fb-u2hha647/,"September 15, 2022",2022,,"Journalism, News events, Politics",Dangerous individuals and organizations,Overturned,Afghanistan,The Oversight Board has overturned Meta’s original decision to remove a Facebook post from a news outlet page reporting a positive announcement from the Taliban regime in Afghanistan on women and girls’ education.,49313,7645,"Overturned September 15, 2022 The Oversight Board has overturned Meta’s original decision to remove a Facebook post from a news outlet page reporting a positive announcement from the Taliban regime in Afghanistan on women and girls’ education. Standard Topic Journalism, News events, Politics Community Standard Dangerous individuals and organizations Location Afghanistan Platform Facebook Mention of the Taliban in news reporting public comments Pashto translation Dari translation This decision is also available in Urdu, Pashto and Dari. The Oversight Board has overturned Meta’s original decision to remove a Facebook post from a news outlet page reporting a positive announcement from the Taliban regime in Afghanistan on women and girls’ education. Removing the post was inconsistent with Facebook’s Dangerous Individuals and Organizations Community Standard, which permits reporting on terrorist groups, and with Meta’s human rights responsibilities. The Board found Meta should better protect users’ freedom of expression when it comes to reporting on terrorist regimes and makes policy recommendations to help achieve this. About the case In January 2022, a popular Urdu-language newspaper based in India posted on its Facebook page. The post reported that Zabiullah Mujahid, a member of the Taliban regime in Afghanistan and its official central spokesperson, had announced that schools and colleges for women and girls would reopen in March 2022. The post linked to an article on the newspaper’s website and was viewed around 300 times. Meta found that the post violated the Dangerous Individuals and Organizations policy, which prohibits “praise” of entities deemed to “engage in serious offline harms,” including terrorist organizations. Meta removed the post, imposed “strikes” against the page administrator who had posted the content and limited their access to certain Facebook features (such as going live on Facebook). The user appealed and, after a second human reviewer assessed the post as violating, it was placed in a queue for the High Impact False Positive Override (HIPO) system. HIPO is a system Meta uses to identify cases where it has acted incorrectly, for example, by wrongly removing content. However, as there were fewer than 50 Urdu-speaking reviewers allocated to HIPO at the time, and the post was not deemed high priority, it was never reviewed in the HIPO system. After the Board selected the case, Meta decided the post should not have been removed as its rules allow “reporting on” terrorist organizations. It restored the content, reversed the strike, and removed the restrictions on the user’s account. 
Key findings The Oversight Board finds that removing this post is not in line with Facebook’s Dangerous Individuals and Organizations Community Standard, Meta’s values, or the company’s human rights responsibilities. The Dangerous Individuals and Organizations Community Standard prohibits “praise” of certain entities, including terrorist organizations. “Praise” is defined broadly in both the Community Standard, and the internal guidance for moderators. As a result, the Board understands why two reviewers interpreted the content as praise. However, the Community Standard permits content that ""reports on” dangerous organizations. The Board finds this allowance applies in this case. The Board also finds that removing the post is inconsistent with Meta’s human rights responsibilities; it unjustifiably restricts freedom of expression, which encompasses the right to impart and receive information, including on terrorist groups. This is particularly important in times of conflict and crisis, including where terrorist groups exercise control of a country. The Board is concerned that Meta’s systems and policies interfere with freedom of expression when it comes to reporting on terrorist regimes. The company’s Community Standards and internal guidance for moderators are not clear on how the praise prohibition and reporting allowance apply, or the relationship between them. The fact that two reviewers found the post was violating suggests that these points are not well understood. The Board is concerned that Meta’s default is to remove content under the Dangerous Individuals and Organizations policy if users have not made it clear that their intention is to “report.” The Board is also concerned that the content was not reviewed within the HIPO system. This case may indicate a wider problem. The Board has considered a number of complaints on errors in enforcing the Dangerous Individuals and Organizations policy, particularly in languages other than English. This raises serious concerns, especially for journalists and human rights defenders. In addition, sanctions for breaching the policy are unclear and severe. The Oversight Board’s decision The Oversight Board overturns Meta’s original decision to remove the post. The Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta's original decision to remove a Facebook post on the page of a popular Urdu language newspaper in India. This post reports an announcement from a prominent member and spokesperson of the Taliban regime in Afghanistan regarding women and girls’ education in Afghanistan. Meta reversed its decision as a result of the Board selecting this case, and reversed sanctions on the administrator’s account. The Board finds that the post does not violate the Dangerous Individuals and Organizations Community Standard because the policy allows “reporting on” designated entities. The Board is concerned by Meta’s broad definition of “praise,” and the lack of clarity to reviewers on how to enforce exceptions to the policy on news reporting of actions taken by designated entities that are exercising control of a country. This interferes with the ability of news outlets to report on the actions and statements of designated entities in situations like this one, where the Taliban regime forcibly removed the recognized government of Afghanistan. 
The Board finds Meta did not meet its responsibilities to prevent or mitigate errors when enforcing these policy exceptions. The decision recommends that Meta change its policy and enforcement processes for the Dangerous Individuals and Organizations Community Standard. 2. Case description and background In January 2022, the Facebook page of a news outlet based in India shared a text post in Urdu containing a link to an article on its own website. Meta states that the post was viewed about 300 times. The post reported that Zabiullah Mujahid, acting as “Culture and Information Minister” and official central spokesman for the Taliban regime in Afghanistan, had announced that schools and colleges for girls and women would open at the start of the Afghan New Year on March 21. The linked article contains a fuller report on the announcement. The news outlet is an Urdu-language newspaper based in Hyderabad, India, a city with a high number of Urdu-speaking residents. It is the largest-circulated Urdu newspaper in the country and claims a daily readership of more than a million people. There are approximately 230 million Urdu speakers around the world. No state has given formal diplomatic recognition to the Taliban regime in Afghanistan since the group seized power in August 2021. Schools and colleges for girls and women did not open at the start of the Afghan New Year as the spokesperson announced they would, and girls aged 12 and older (from the sixth grade on) and women remain barred from attending school at the time of the Board’s decision in this case. On January 20, 2022, a Facebook user clicked “report post” on the content but did not complete their complaint. This triggered a classifier (a machine learning tool trained to identify breaches of Meta’s Community Standards) that assessed the content as potentially violating the Dangerous Individuals and Organizations policy and sent it for human review. An Urdu-speaking reviewer determined that the content violated the Dangerous Individuals and Organizations policy and removed it on the same day it was posted. Meta explained this was because it “praised” a designated organization. The Taliban is a Tier 1 designated terrorist organization under Meta’s Dangerous Individuals and Organizations policy. As a result of the violation, Meta also applied both a severe strike and a standard strike against the page administrator. In general, while content posted to a Facebook page appears to come from the page itself (for example, the news outlet), such posts are authored by page administrators with personal Facebook accounts. Strikes result in Meta imposing temporary restrictions on users’ ability to perform essential functions on the platform (such as sharing content), known as “feature-limits,” or in the disabling of the account. Severe strikes result in stronger penalties. In this case, the strikes meant that a three-day feature-limit and an additional, longer feature-limit were imposed on the page’s administrator. The former prevented the user from creating new public content and from creating or joining Messenger rooms. The latter prevented the user from going live on Facebook, using ad products, and creating or joining Messenger rooms. The news outlet page itself also received one standard strike and one severe strike. On January 21, the administrator of the news outlet page (“the user”) appealed the removal of the content to Meta. 
The content was reviewed by another Urdu-speaking reviewer who also found that the content violated the Community Standard on Dangerous Individuals and Organizations. Though the content was then placed in a queue for identifying and reversing “false positive” mistakes (content wrongly actioned for violating the Community Standards), known as High Impact False Positive Override (HIPO), it received no additional review. According to Meta, this was because of the number of Urdu speaking HIPO reviewers in mid-2022 and because the content in this case, after it was removed, was not given a priority score by Meta’s automated systems as high as other content in the HIPO queue at that time. As a result of the Board’s selection of the user’s appeal for review, Meta determined that its original removal decision was in error because its Community Standards allow “reporting on” designated organizations and individuals. On February 25, 2022, Meta subsequently restored the content. Meta also removed the longer feature-limit it had imposed and reversed the strikes against the administrator’s account and the page. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). 4. Sources of authority The Oversight Board considered the following authorities and standards: I. Oversight Board decisions: The most relevant prior Oversight Board decisions include: II. Meta’s content policies: The Community Standard on Dangerous Individuals and Organizations states that Facebook does ""not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook."" Meta divides its designations of “dangerous” entities into three tiers, explaining these ""indicate the level of content enforcement, with Tier 1 resulting in the most extensive enforcement because Meta states that these entities have the most direct ties to offline harm."" Tier 1 designations are focused on ""entities that engage in serious offline harms,"" including “terrorist, hate and criminal organizations.” Meta removes “praise,” “substantive support,” and “representation” of Tier 1 entities as well as their leaders, founders, or prominent members. Meta designates the Taliban as a Tier 1 entity. The Community Standards define “praise” as any of the following: “speak positively about a designated entity or event”; “give a designated entity or event a sense of achievement”; “legitimiz[e] the cause of a designated entity by making claims that their hateful, violent, or criminal conduct is legally, morally, or otherwise justified or acceptable”; or “align[...] 
Meta recognizes that “users may share content that includes references to designated dangerous organizations and individuals to report on, condemn, or neutrally discuss them or their activities.” Meta says its policies are designed to “allow room for these types of discussions while simultaneously limiting risks of potential offline harm.” However, Meta requires “people to clearly indicate their intent when creating or sharing such content. If a user's intention is ambiguous or unclear, we default to removing content.” III. Meta’s values: The value of “Voice” is described as “paramount”: “The goal of our Community Standards is to create a place for expression and give people a voice. Meta wants people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable.” Facebook limits “Voice” in the service of four values. “Safety” is the most relevant in this case: “We’re committed to making Facebook a safe place. We remove content that could contribute to a risk of harm to the physical security of persons. Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.” IV. International human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards: 5. User submissions In their statement to the Board, the user states that they are a representative of a media organization and do not support extremism. The user says that their articles are based on national and international media sources and that this content was shared to provide information about women and girls’ education in Afghanistan. The user also says they always ensure the content they share is in the public interest and that it is acceptable under Meta’s Community Standards. 6. Meta’s submissions Upon re-examining its original decision, Meta decided that the content in this case should not have been removed as praise of a designated organization under the Dangerous Individuals and Organizations policy. Meta explained that the underlying news context meant the content should have benefitted from the policy allowance for users to report on designated entities. Meta explained that the post and linked article include reporting on school reopening dates and details, an issue of public interest. According to Meta, the Dangerous Individuals and Organizations policy allows news reporting that mentions designated entities. To benefit from the allowances built into the Dangerous Individuals and Organizations policy, Meta clarified that "we require people to clearly indicate their intent.
If a user’s intention is ambiguous or unclear, we default to removing content.” Meta also explained that it prefers its reviewers not to infer intent as this “helps to reduce subjectivity, bias, and inequitable enforcement during content review while maintaining the scalability of our policies.” Meta informed the Board that it was unable to explain why two human reviewers incorrectly removed the content and did not properly apply the allowance for reporting. The company noted that moderators are not required to document the reasons for their decision beyond classifying the content as part of their review — in this case, as a violation of Meta’s Dangerous Individuals and Organizations policy on the grounds of praise. In response to the Board questioning whether praise of dangerous organizations can be disseminated as part of news reporting, Meta stated its policy “allows news reporting where a person or persons may praise a designated dangerous individual or entity.” Answering the Board’s question on the difference between standard and severe strikes, Meta explained that the strike system contains two tracks for Community Standards enforcement: one that applies to all violation types (standard), and one that applies to the most egregious violations (severe). Meta states that all violations of the Dangerous Individuals and Organizations policy are treated as severe. The company explained to the Board that severe strikes are those that apply stricter penalties against more serious harms, and limit access to high-risk services such as Facebook Live Video and ads. Meta also referenced a page in its Transparency Centre on “Restricting Accounts” (updated February 11, 2022) that it says explains its approach to strikes. In response to the Board’s questions, Meta provided further explanation of its systems for correcting enforcement errors and how those impacted this case, leading to the Board asking several follow-up questions. The content in this case was automatically detected and sent to a High Impact False Positive Override (called HIPO by Meta) channel. This is a system designed to correct potential false positive mistakes after action is taken on the content. Meta clarified to the Board that this system contrasts with Meta’s General Secondary Review system (part of the cross-check program), which is designed to prevent false positive mistakes before action is taken on the content. Content sent to the HIPO channel joins a queue for additional review, but review will only occur where capacity allows. The position of content in the queue depends on a priority score automatically assigned to the content. Meta explained that content is prioritized for HIPO review based on factors including: topic sensitivity (if a topic is trending or sensitive); false positive probability; predicted reach (the estimated number of views the content might obtain); and entity sensitivity (the identity of the group or user sharing the content). Meta explained that content can be restored in two ways: either the specific piece of content is reviewed by moderators and found to be non-violating; or Meta’s automated systems find that the content matches other content that has been reviewed and determined to be non-violating. The page of the news outlet was previously subject to cross-check, but as part of a platform wide update to the cross-check system, the page was not subject to cross-check when the case content was reviewed, and cross-check did not impact the review of the case content. 
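Meta’s description of the HIPO ranker above lends itself to a simple illustration. The sketch below is a hypothetical weighted score combining the four factors Meta names (topic sensitivity, false-positive probability, predicted reach and entity sensitivity); the weights, the scaling of predicted views and the function name hipo_priority are assumptions, since Meta has not published the actual ranker.

```python
import math
from dataclasses import dataclass

@dataclass
class RemovedContent:
    topic_sensitivity: float           # 0.0-1.0, e.g. trending or sensitive topic
    false_positive_probability: float  # 0.0-1.0, likelihood the removal was wrong
    predicted_views: int               # estimated reach if the content is restored
    entity_sensitivity: float          # 0.0-1.0, identity of the posting group or user

def hipo_priority(item: RemovedContent,
                  weights=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Higher scores move an item toward the front of the HIPO review queue.
    Weights and scaling are illustrative, not Meta's."""
    reach = min(math.log10(1 + item.predicted_views) / 6, 1.0)  # ~1.0 near 1M views
    w_topic, w_fp, w_reach, w_entity = weights
    return (w_topic * item.topic_sensitivity
            + w_fp * item.false_positive_probability
            + w_reach * reach
            + w_entity * item.entity_sensitivity)

# A low-reach post from a news outlet, roughly like the one in this case.
post = RemovedContent(topic_sensitivity=0.8, false_positive_probability=0.6,
                      predicted_views=300, entity_sensitivity=0.9)
print(round(hipo_priority(post), 3))
```

On a scheme like this, a post viewed only about 300 times reaches the front of the queue only if the remaining factors, particularly entity sensitivity, are weighted heavily, which is the gap the Board identifies in its analysis below.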
According to Meta, cross-check now includes two systems: General Secondary Review applies to all organic content on Facebook and Instagram; Early Response Secondary Review applies to all content posted by specific listed entities, including some news outlets. Meta stated that when content from those specific entities is identified as violating a content policy, instead of being enforced, it is sent for additional review. It is first sent to Meta’s Markets team. If a reviewer on this team finds the content is not violating, the process ends, and the content remains on the platform. However, if a reviewer on this team finds the content is violating, it is escalated to another team. This team, the Early Response team, is made up of specialized Meta content reviewers. A reviewer on this team would need to find the content violating before it can be removed. At the time the case content was identified as violating, the news outlet page was not on the Early Response Secondary Review list in the current cross-check system. Additionally, the case content in question was not reviewed as part of the General Secondary Review system, which would also involve additional review before enforcement. According to Meta, the content was sent to the HIPO channel after it was removed, but it was not prioritized for human review. It did not receive an additional human review “due to the capacity allocated to the market” and because the content in this case was not given a priority score by Meta’s automated systems as high as other content in the HIPO queue at that time. Content prioritized by HIPO is only reviewed by outsourced reviewers after an enforcement action is taken. Meta allocates Urdu reviewers to different workflows based on need. These reviewers are shared across multiple review types, meaning they are not solely dedicated to a single workflow. In mid-2022, Meta’s HIPO workflow had less than 50 Urdu reviewers based on that need at the time. The Board asked Meta 34 questions. Meta responded to 30 fully, three partially and declined to answer one. The partial responses were to questions on: providing the percentage of removals under the Dangerous Individuals and Organizations policy that are restored on appeal or second review; accuracy rates for enforcing the prohibitions on praise and support in at-scale review; and how Meta determines intent for the reporting allowance and applicable contextual factors. Meta left one of the questions on providing data regarding the volume of Dangerous Individuals and Organizations content that is removed through automation versus human review unanswered on the grounds that it was unable to verify the requested data in the time available. 7. Public comments The Oversight Board received and considered six public comments related to this case. One of the comments was submitted from Asia Pacific and Oceania, four were from Europe, and one was from the United States and Canada. The submissions covered the importance of access to social media to people who live in or near Afghanistan, concerns about limitations on the discussion of designated groups, and the public interest in allowing a wider range of media reporting on the Taliban’s actions. Several public comments argued that the reliance of Meta's Dangerous Individuals and Organizations policy on the vague terms praise and support may suppress critical political discussion and disproportionately affect minority communities and the Global South. 
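Before turning to the Board’s analysis, the Early Response Secondary Review workflow Meta describes above can be summarised as a two-stage gate. The following sketch is illustrative only: it assumes each review stage simply reports whether a violation was found, and the function and parameter names are hypothetical.

```python
from typing import Callable

Review = Callable[[str], bool]  # returns True if the reviewer finds a violation

def early_response_secondary_review(content: str,
                                    markets_team: Review,
                                    early_response_team: Review) -> str:
    """Content from listed entities is removed only if BOTH review stages
    independently find it violating; otherwise it stays on the platform."""
    if not markets_team(content):
        return "keep"    # first stage finds no violation: process ends
    if not early_response_team(content):
        return "keep"    # specialized reviewers overrule the first stage
    return "remove"

# Example: the first reviewer flags the post, the second does not -> it stays up.
decision = early_response_secondary_review(
    "news post", markets_team=lambda c: True, early_response_team=lambda c: False)
print(decision)  # keep
```

In this design a listed entity’s content comes down only after two independent findings of violation; content from pages not on the list, like the news outlet’s page in this case, never enters this flow at all.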
Public comments also criticized Meta for using US law as an “excuse” to prohibit “praise” of designated groups, rather than being transparent that it is Meta’s policy choice to restrict more expression than US law requires. To read public comments submitted for this case, please click here. 8. Oversight Board analysis This case is significant because it shows that a lack of clarity in the definition of “praise” appears to be resulting in uncertainty among reviewers and users. The case also raises important issues of content moderation as it applies to gender discrimination and conflict. This case is difficult because there is an interest in ensuring that terrorist groups or their supporters do not use platforms for propaganda and recruitment efforts. However, this interest, when applied too broadly, can lead to censorship of any content that reports on these groups. The Board looked at the question of whether this content should be restored, and the broader implications for Meta’s approach to content moderation, through three lenses: Meta's content policies, the company’s values and its human rights responsibilities. 8.1 Compliance with Meta’s content policies I. Content rules The Board finds that the content falls within the allowance that “users may share content that includes references to designated dangerous organizations (…) to report on (…) them or their activities” and is therefore not violating. The content is not violating despite the broad definition of praise provided in the public-facing Community Standards, even though the content can be understood as speaking positively about an action of a designated entity, the Taliban, and may give them “a sense of achievement.” The Board notes the Known Questions guidance provided to moderators for interpreting praise is even broader. It instructs reviewers to remove content for praise if it “makes people think more positively about” a designated group, making the meaning of “praise” less about the intent of the speaker than about the effects on the audience. Reporting on a designated entity’s claimed intentions to allow women and girls access to education, however dubious those claims, would arguably make people think more positively about that group. Given the instructions they were provided, the Board understands why two human reviewers (in error) would interpret the content as praise. The Board accepts that Meta intends for its Community Standard on Dangerous Individuals and Organizations to make room for reporting on entities Meta has designated as dangerous, even if that reporting also meets the company’s definition of praise. However, the Board does not think the language of the Community Standard or the Known Questions makes that definition clear. Without further specification, “praise” remains overbroad. The Community Standards do not provide any examples of what would constitute acceptable reporting. There is also no internal guidance to moderators on how to interpret this allowance. II. Enforcement action The Board notes that the moderation action in this case occurred after a user began to report the content but never finished the report. An automated system is triggered when this process is initiated, even if the user does not complete the report, and in this case the content was queued for human review. The reporting user was not made aware that their action could trigger consequences even if they decided against finishing the report, whereas a user would be told of the consequences of their report if they submitted it.
Meta argues in its responses that the “automated report is not tied to the [reporting] user,” but the Board finds this noteworthy given that the whole process in this case began with a user initiating a report. The Board is also concerned that the reporting button does not provide users with sufficient information on the consequences of clicking it. The enforcement actions taken in this case (the content removal, strikes and feature-limits) should not have been imposed, as there was no underlying violation of the Community Standards. The Board is concerned that Meta’s systems for preventing enforcement errors of this kind were ineffective, particularly given the severity of the sanctions imposed. The Board notes that in this case, the page of the news outlet was previously subject to cross-check, but as part of a platform-wide update to the cross-check system, the page was not subject to cross-check when the case content was reviewed, and cross-check did not impact the review of the case content. In Meta's current cross-check system, guaranteed secondary human review is provided to users on the Early Response Secondary Review list. While some news outlets’ Facebook pages are on that list, this page is not. Being on an Early Response Secondary Review list also guarantees that Meta employees, and not at-scale reviewers, review the content before it can be removed. The Board finds it unlikely that this content would have been removed if the page had been on the Early Response Secondary Review list at the time. The Board commends Meta for the introduction of its HIPO system but is concerned that it did not lead to secondary review of a post that conformed with Meta’s Community Standards. The content in this case did not receive an additional human review “due to the capacity allocated to the market” and because it was not given a priority score by Meta’s automated systems as high as other content in the HIPO queue at that time. Given the public interest nature of the reporting in this case, and the identity of the page posting the content as a news outlet, it should have scored highly enough for additional review to take place. As explained by Meta, the factor “entity sensitivity” takes the identity of the posting entity into account and can lead to a higher ranking for content from news outlets, especially those reporting on significant world events. For the same reasons, the Board is concerned that the Urdu-language queue had fewer than 50 reviewers in mid-2022. The Board considers that the size of the India market, the number of groups Meta has designated as dangerous in that region, and therefore the heightened importance of independent voices, warrant greater investment from the company in correcting (and ideally preventing) errors on such important issues. 8.2 Compliance with Meta’s values Content containing praise of dangerous groups may threaten the value of “Safety” for Meta’s users and others because of its links to offline violence and its potential to “intimidate, exclude or silence others.” However, there is no significant safety issue in this case, as the content only reports on the announcement of a designated organization. “Voice” is particularly important in relation to media outlets, as they provide their audiences with essential information and play a crucial role in holding governments to account.
Removing the content in this case did not materially contribute to “Safety” and was an unnecessary restriction of “Voice.” 8.3 Compliance with Meta’s human rights responsibilities The Board finds that removing the content from the platform was inconsistent with Meta’s human rights responsibilities, and that Meta should have more effective systems in place for preventing and correcting such errors. Meta adhering to its human rights responsibilities is particularly important in the context of crisis or conflict situations. Following the forceful takeover of a government by a group known for human rights abuses, and given the importance of informing the public of the acts of such designated groups, the company should be particularly attentive to protecting news reporting about that group. Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides protection of the right to freedom of expression and encompasses the right of all individuals to impart information and to receive it. International human rights law places particular value on the role of journalism in providing information that is of interest to the public. The UN Human Rights Committee has stated that “a free, uncensored and unhindered press or other media is essential in any society to ensure freedom of opinion and expression and the enjoyment of other Covenant rights” (General Comment No. 34, at para. 13). Social media platforms like Facebook have become a vehicle for transmitting journalists’ reporting around the world, and Meta has recognized its responsibilities to journalists and human rights defenders in its Corporate Human Rights Policy. The right to freedom of expression encompasses the ability of Meta’s users to access information about events of public interest in Afghanistan, especially when a designated dangerous group has forcibly removed the recognized government. It is imperative that users, including commentators on Afghanistan within and outside the country, and the general public have access to real-time reporting on the situation there. The Taliban’s approach to media freedom in the country makes the role of international reporting even more important. The information in this case would be essential to people concerned about girls’ and women’s equal right to access education. This remains the case even when the Taliban fails to meet its announced commitments. Article 19, para. 3 of the ICCPR, as interpreted by the Human Rights Committee in General Comment No. 34, requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality. The Board applies these international standards to assess whether Meta complied with its human rights responsibilities. I. Legality (clarity and accessibility of the rules) The principle of legality requires laws that states use to limit expression to be clear and accessible, so people understand what is permitted and what is not. Further, it requires laws restricting expression to be specific, to ensure that those charged with their enforcement are not given unfettered discretion (General Comment No. 34, para. 25). The Human Rights Committee has warned that “offences of ‘praising’, ‘glorifying’, or ‘justifying’ terrorism, should be clearly defined to ensure that they do not lead to unnecessary or disproportionate interference with freedom of expression. Excessive restrictions on access to information must also be avoided” (General Comment No. 34, at para. 46; see also the report of the UN Special Rapporteur on counter-terrorism and human rights, A/HRC/40/52, paras 36-37).
Following its approach in previous cases, the Board applies these principles to Meta’s content rules. While the Board welcomes that Meta’s policies on Dangerous Individuals and Organizations contain more detail now than when the Board issued its first recommendations in this area, serious concerns remain. For users reporting on the Taliban, it is unclear whether the Taliban remains a designated dangerous entity now that it has forcibly removed the recognized government of Afghanistan. The Board has previously recommended that Meta disclose either a full list of designated entities, or an illustrative one, to bring users clarity (“Nazi quote” case, “Ocalan’s isolation” case). The Board regrets the lack of progress on this recommendation, and notes that while the company has not disclosed this information proactively, whistleblowers and journalists have sought to inform the public by disclosing a version of the “secret” list publicly. As noted previously in this decision (see section 8.1), the definition of “praise” in the public-facing Community Standards as “speaking positively about” a designated entity is too broad. For people engaged in news reporting, it is unclear how this rule relates to the reporting allowance built into the same policy. According to Meta, this allowance permits news reporting even where a user praises the designated entity in the same post. The Board finds the relationship between the “reporting” allowance in the Dangerous Individuals and Organizations policy and the overarching newsworthiness allowance remains unclear to users. In the “Shared Al Jazeera post” case, the Board recommended that Meta provide criteria and illustrative examples in the Community Standards on what constitutes news reporting. Meta responded in the Q1 2022 implementation report that it was consulting with several teams internally to develop criteria to help users understand what constitutes news reporting, and said it expected to conclude this process by Q4 2022. The Board remains concerned that changes to the relevant Community Standard are not translated into all available languages and that there are inconsistencies across languages. Following the Board’s “Shared Al Jazeera post” decision, the US English version of the Dangerous Individuals and Organizations policy was amended in December 2021 to change the discretionary “we may remove content” to “we default to remove content” when a user’s intentions are unclear. However, other language versions of the Community Standards, including in Urdu and UK English, do not reflect this change. While the company has stated publicly in response to previous recommendations from the Board (Meta Q4 2021 Quarterly Update on the Oversight Board) that it aims to complete translations into all available languages in four to six weeks, it appears the translation of the relevant policy line in this case had still not been completed after five months. Therefore, the policy is not equally accessible to all users, making it difficult for them to understand what is permitted and what is not. The Board is also concerned that Meta has not done enough to clarify to its users how the strikes system works. While a page on “Restricting Accounts” in Meta’s Transparency Centre contains some detail, it does not comprehensively list the feature-limits the company may apply or their durations.
Nor does it list the “set periods of time” for severe strikes as it does for standard strikes. This is especially concerning because severe strikes carry more significant penalties and there is no mechanism in place for appealing account-level sanctions separately from the content decision. Even when the content is restored, feature-limits cannot always be fully reversed. In this case, for instance, the user had already experienced several days of feature-limits that were not fully rectified when Meta reversed its decision. That two Urdu-speaking reviewers assessed this content as violating further indicates that the praise prohibition, and its relationship to the reporting allowances, is unclear to those tasked with enforcing the rules. Content reviewers are provided with internal guidance on how to interpret the rules (the Known Questions and the Implementation Standards). The Known Questions document defines praise as content that “makes people think more positively about” a designated group. This is arguably broader than the public-facing definition in the Community Standards, making the meaning of “praise” less about the intent of the speaker than about the effects on the audience. Moreover, neither the Community Standards nor the Known Questions document constrains reviewers’ discretion in restricting freedom of speech. Standard dictionary definitions of “praise” are not this broad, and as phrased the rule captures statements of fact, including impartial journalistic statements, as well as opinion. In response to the Board’s questions, Meta clarified that the reporting allowance allows anyone, and not only journalists, to speak positively about a designated organization in the context of reporting. However, this clarity is not provided to reviewers in internal guidance. Meta admits this guidance does not provide reviewers with a definition of how to interpret “reporting on.” II. Legitimate aim The Oversight Board has previously recognized that the Dangerous Individuals and Organizations policy pursues the aim of protecting the rights of others, including the right to life, security of person, and equality and non-discrimination (Article 19(3) ICCPR; Oversight Board decision “Punjabi Concern over the RSS in India”). The Board further recognises that propaganda from designated entities, including through proxies presenting themselves as independent media, may pose risks of harm to the rights of others. Seeking to mitigate those harms through this policy is a legitimate aim. III. Necessity and proportionality Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). Meta has acknowledged that the removal of content in this case was not necessary, and therefore additional sanctions should not have been imposed on the user. The Board understands that when moderating content at scale, mistakes will be made. However, the Board does receive complaints about errors in enforcing the Dangerous Individuals and Organizations policy that affect reporting, particularly in languages other than English, which raises serious concerns (see “Shared Al Jazeera post” decision, “Ocalan’s isolation” decision).
The UN Human Rights Committee has emphasized that “the media plays a crucial role in informing the public about acts of terrorism and its capacity to operate should not be unduly restricted. In this regard, journalists should not be penalized for carrying out their legitimate activities” (General Comment No. 34, para. 46). Meta therefore has a responsibility to prevent and mitigate its platforms’ negative human rights impact on news reporting. The Board is concerned that the type of enforcement error in this case may be indicative of broader failures in this regard. Those engaged in regular commentary on the activities of Tier 1 dangerous individuals and organizations face heightened risks of enforcement errors that lead to severe sanctions against their accounts. This may undermine their livelihoods and deny the public access to information at key moments. The Board is concerned that the policy of defaulting to remove content when the intent to report on dangerous entities is not clearly indicated by the user may be leading to over-removal of non-violating content, even where contextual cues make clear the post is, in fact, reporting. Moreover, the system for mistake prevention and correction did not benefit this user as it should have. This indicates problems with how the ranker within the HIPO system prioritized the content decision for additional review, which meant it never reached the front of the queue. It also raises questions about whether the resources allocated to human review of the HIPO queue are sufficient for Urdu-language content. In this case, the enforcement error and the failure to correct it denied a number of Facebook users access to information on issues of global importance and hampered a news outlet in carrying out its journalistic function to inform the public. Journalists may report on events in an impartial manner that avoids the kind of overt condemnation that reviewers may be looking for. To avoid content removals and account sanctions, journalists may engage in self-censorship, and may even be incentivized to depart from their ethical professional responsibilities. Further, there have been reports of anti-Taliban Facebook users avoiding mentioning the Taliban in posts because they are concerned about being subjected to erroneous sanctions. The Board also notes that Meta has issued what it calls “spirit of the policy” exceptions related to the Taliban. This indicates recognition from Meta that, at times, its approach under the Dangerous Individuals and Organizations policy produces results that are inconsistent with the policy’s objectives and therefore do not meet the requirement of necessity. Internal company materials obtained by journalists reveal that in September 2021, the company created an exception “to allow content shared by the [Afghanistan] Ministry of Interior” on matters such as new traffic regulations, and to allow two specific posts from the Ministry of Health in relation to COVID-19. Other exceptions have reportedly been more tailored and shorter-lived. For 12 days in August 2021, “government figures” could reportedly acknowledge the Taliban as the “official gov of Afghanistan [sic]” without risking account sanctions. From late August 2021 to September 3, users could “post the Taliban’s public statements without having to ‘neutrally discuss, report on, or condemn’ these statements.” Meta spokespersons acknowledged that some ad hoc exceptions were issued.
In a Policy Forum on the Crisis Policy Protocol on January 25, 2022, Meta stated that it would deploy “policy levers” in crisis situations and provided the example of allowing “praise of a specific designated org (e.g. a guerrilla group signing a peace treaty).” These exceptions to the general prohibition on praise could cause more uncertainty for reviewers, as well as for users who may not be aware if or when an exception applies. They show that there are situations in which Meta has reportedly recognized that a more nuanced approach to content is warranted when dealing with a designated entity that overthrows a legitimate government and assumes territorial control. The Board finds that removing the content from the platform was an unnecessary and disproportionate measure. The volume of these enforcement errors, their effects on journalistic activity, and the failure of Meta’s error-prevention systems have all contributed to this conclusion. 9. Oversight Board decision The Oversight Board overturns Meta's original decision to remove the content. 10. Policy advisory statement Content policy 1. Meta should investigate why the translations of the December 2021 changes to the Dangerous Individuals and Organizations policy were not completed within the target time of six weeks, and ensure such delays or omissions are not repeated. The Board asks Meta to inform the Board within 60 days of the findings of its investigation, and the measures it has put in place to prevent translation delays in future. 2. Meta should make its public explanation of its two-track strikes system more comprehensive and accessible, especially for “severe strikes.” It should include all policy violations that result in severe strikes, state which account features can be limited as a result, and specify the applicable durations. Policies that result in severe strikes should also be clearly identified in the Community Standards, with a link to the “Restricting Accounts” explanation of the strikes system. The Board asks Meta to inform the Board within 60 days of the updated Transparency Center explanation of the strikes system, and the inclusion of links to that explanation for all content policies that result in severe strikes. Enforcement 3. Meta should narrow the definition of “praise” in the Known Questions guidance for reviewers, by removing the example of content that “seeks to make others think more positively about” a designated entity by attributing to them positive values or endorsing their actions. The Board asks Meta to provide the Board within 60 days with the full version of the updated Known Questions document for Dangerous Individuals and Organizations. 4. Meta should revise its internal Implementation Standards to make clear that the “reporting” allowance in the Dangerous Individuals and Organizations policy allows for positive statements about designated entities as part of the reporting, and how to distinguish this from prohibited “praise.” The Known Questions document should be expanded to make clear the importance of news reporting in situations of conflict or crisis, to provide relevant examples, and to make clear that such reporting may include positive statements about designated entities, like the reporting on the Taliban in this case. The Board asks Meta to share the updated Implementation Standards with the Board within 60 days. 5. Meta should assess the accuracy of reviewers enforcing the reporting allowance under the Dangerous Individuals and Organizations policy in order to identify systemic issues causing enforcement errors.
The Board asks Meta to inform the Board within 60 days of the detailed results of its review of this assessment, or accuracy assessments Meta already conducts for its Dangerous Individuals and Organizations policy, including how the results will inform improvements to enforcement operations, including for HIPO. 6. Meta should conduct a review of the HIPO ranker to examine if it can more effectively prioritize potential errors in the enforcement of allowances to the Dangerous Individuals and Organizations Policy. This should include examining whether the HIPO ranker needs to be more sensitive to news reporting content, where the likelihood of false-positive removals that impacts freedom of expression appears to be high. The Board asks Meta to inform the Board within 60 days of the results of its review and the improvements it will make to avoid errors of this kind in the future. 7. Meta should enhance the capacity allocated to HIPO review across languages to ensure that more content decisions that may be enforcement errors receive additional human review. The Board asks Meta to inform the Board within 60 days of the planned capacity enhancements. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. Return to Case Decisions and Policy Advisory Opinions" fb-u2ytz2aa,Mistreatment by Ecuadorian Forces,https://www.oversightboard.com/decision/fb-u2ytz2aa/,"June 4, 2024",2024,,"TopicFreedom of expression, Governments, PoliticsCommunity StandardHate speech, Violence and incitement, Violent and graphic content","Hate speech, Violence and incitement, Violent and graphic content",Overturned,Ecuador,"A user appealed Meta’s decision to remove a Facebook video from Ecuador showing people being tied up, stepped on and beaten with a baton by individuals dressed in what appears to be military uniforms.",7059,1081,"Overturned June 4, 2024 A user appealed Meta’s decision to remove a Facebook video from Ecuador showing people being tied up, stepped on and beaten with a baton by individuals dressed in what appears to be military uniforms. Summary Topic Freedom of expression, Governments, Politics Community Standard Hate speech, Violence and incitement, Violent and graphic content Location Ecuador Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company ’ s attention and include information about Meta ’ s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. 
Summary decisions directly bring about changes to Meta ’ s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to remove a Facebook video from Ecuador showing people being tied up, stepped on and beaten with a baton by individuals dressed in what appears to be military uniforms. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post, also applying a “Mark as Sensitive” warning screen. About the Case In January 2024, a Facebook user posted a video showing a group of people tied up and lying face down on the ground, while several other people in camouflage clothing step on their necks and backs repeatedly, holding them in position and beating them with a baton. No person’s face is visible in the video. The audio includes someone saying “maricón,” which the Board in its Colombia Protests decision noted had been designated as a slur by Meta. The post also contains text in Spanish condemning the beating of “defenceless” and “unarmed” prisoners. Around the time the content was posted, inmates rioted in jails in Ecuador, taking prison guards and administrative workers hostage. Ecuador’s government declared a state of emergency and imposed a curfew. The police and military then regained control of some of the prisons, with the army sharing images of hundreds of inmates, shirtless and barefoot, lying on the ground. Meta initially removed the user’s post from Facebook under its Violence and Incitement Community Standard , which prohibits threats of violence, defined as “statements or visuals representing an intention, aspiration or call for violence against a target.” When the Board brought this case to Meta’s attention, the company did not give reasons for why it had removed the content under the Violence and Incitement Community Standard. The company also assessed the content under the Hate Speech Community Standard. Meta explained that although the content contained a slur, this was allowed under the Hate Speech Community Standard as the content included “slurs or someone else’s hate speech in order to condemn the speech or report on it.” Meta also explained that under the Violent and Graphic Content Community Standard , the company applies a “Mark as Sensitive” warning screen to “imagery depicting one or more persons subjected to violence and/or humiliating acts by one or more uniformed personnel doing a police function.” The company restored the content to Facebook and applied a “Mark as Sensitive” label to it. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case illustrates the challenges faced by Meta in enforcing its policies on Violence and Incitement, Violent and Graphic Content, and Hate Speech. These challenges are particularly difficult when dealing with documentation of violence or abuses during crisis situations for the purposes of raising awareness. 
For the Violence and Incitement Community Standard, the Board recommended that Meta should “add to the public-facing language ... that the company interprets the policy to allow content containing statements with ‘neutral reference to a potential outcome of an action or an advisory warning,’ and content that ‘condemns or raises awareness of violent threats,’” ( Russian Poem , recommendation no. 1). This recommendation has been implemented. As of February 2024 , Meta has updated this policy including a clarification that it does not prohibit threats when shared in awareness-raising or condemning contexts. In terms of the Violent and Graphic Content Community Standard, the Board recommended that Meta should “notify Instagram users when a warning screen is applied to their content and provide the specific policy rationale for doing so,” ( Video After Nigeria Church Attack , recommendation no. 2). Meta reported progress on implementing this recommendation. In its Q4 2023 update on the Board , Meta stated: “Individuals using our platforms can anticipate receiving more comprehensive details about enforcement determinations and safety measures taken regarding their content, including the implementation of warning screens. Given that this is an integral component of our broader compliance initiative, we anticipate delivering a more comprehensive update later in 2024.” Regarding Hate Speech, the Board recommended that Meta “revise the Hate Speech Community Standard to explicitly protect journalistic reporting on slurs, when such reporting, in particular in electoral contexts, does not create an atmosphere of exclusion and/or intimidation. This exception should be made public, and be separate from the ‘raising awareness’ and ‘condemning’ exceptions,” ( Political Dispute Ahead of Turkish Elections , recommendation no. 1). Progress has been reported in implementing this recommendation. It was also recommended that Meta should “develop and publicize clear criteria for content reviewers for escalating for additional review public interest content that potentially violates the Community Standards,” ( Colombia Protests , recommendation no. 3). Meta described this as work it already does but did not publish information to demonstrate implementation. The Board believes that full implementation of these recommendations could contribute to decreasing the number of enforcement errors across the policies on Violence and Incitement, Violent and Graphic Content, and Hate Speech. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-uk2rus24,Post in Polish Targeting Trans People,https://www.oversightboard.com/decision/fb-uk2rus24/,"January 16, 2024",2024,January,"TopicLGBT, Sex and gender equality","Policies and TopicsTopicLGBT, Sex and gender equality",Overturned,Poland,The Oversight Board has overturned Meta’s original decision to leave up a Facebook post in which a user targeted transgender people with violent speech advocating for members of this group to commit suicide.,52058,7971,"Overturned January 16, 2024 The Oversight Board has overturned Meta’s original decision to leave up a Facebook post in which a user targeted transgender people with violent speech advocating for members of this group to commit suicide. 
Standard Topic LGBT, Sex and gender equality Location Poland Platform Facebook Post in Polish Targeting Trans People Decision PDF Post in Polish Targeting Trans People Public Comments Appendix Polish Translation To read this decision in Polish, click here . Kliknij tutaj , aby przeczytać postanowienie w języku polskim. The Oversight Board has overturned Meta’s original decision to leave up a Facebook post in which a user targeted transgender people with violent speech advocating for members of this group to commit suicide. The Board finds the post violated both the Hate Speech and Suicide and Self-Injury Community Standards. However, the fundamental issue in this case is not with the policies, but their enforcement. Meta’s repeated failure to take the correct enforcement action, despite multiple signals about the post’s harmful content, leads the Board to conclude the company is not living up to the ideals it has articulated on LGBTQIA+ safety. The Board urges Meta to close enforcement gaps, including by improving internal guidance to reviewers. About the Case In April 2023, a Facebook user in Poland posted an image of a striped curtain in the blue, pink and white colors of the transgender flag, with text in Polish stating, “New technology … Curtains that hang themselves,” and above that, “spring cleaning <3.” The user’s biography includes the description, “I am a transphobe.” The post received less than 50 reactions. Between April and May 2023, 11 different users reported the post a total of 12 times. Only two of the 12 reports were prioritized for human review by Meta’s automated systems, with the remainder closed. The two reports sent for human review, for potentially violating Facebook’s Suicide and Self-Injury Standard, were assessed as non-violating. None of the reports based on Hate Speech were sent for human review. Three users then appealed Meta’s decision to leave up the Facebook post, with one appeal resulting in a human reviewer upholding the original decision based on the Suicide and Self-Injury Community Standard. Again, the other appeals, made under the Hate Speech Community Standard, were not sent for human review. Finally, one of the users who originally reported the content appealed to the Board. As a result of the Board selecting this case, Meta determined the post did violate both its Hate Speech and Suicide and Self-Injury policies and removed it from Facebook. Additionally, the company disabled the account of the user who posted the content for several previous violations. Key Findings The Board finds the content violated Meta’s Hate Speech policy because it includes “violent speech” in the form of a call for a protected-characteristic group’s death by suicide. The post, which advocates for suicide among transgender people, created an atmosphere of intimidation and exclusion, and could have contributed to physical harm. Considering the nature of the text and image, the post also exacerbated the mental-health crisis being experienced by the transgender community. A recent report by the Gay and Lesbian Alliance Against Defamation (GLAAD) notes “the sheer traumatic psychological impact of being relentlessly exposed to slurs and hateful conduct” online. The Board finds additional support for its conclusion in the broader context of online and offline harms the LGBTQIA+ community is facing in Poland, including attacks and political rhetoric by influential government and public figures. The Board is concerned that Meta’s human reviewers did not pick up on contextual clues. 
The post’s reference to the elevated risk of suicide (“curtains that hang themselves”) and support for the group’s death (“spring cleaning”) were clear violations of the Hate Speech Community Standard, while the content creator’s self-identification as a transphobe, alone, would amount to another violation. The Board urges Meta to improve the accuracy of hate speech enforcement towards LGBTQIA+ people, especially when posts include images and text that require context to interpret. In this case, the somewhat-coded references to suicide in conjunction with the visual depiction of a protected group (the transgender flag) took the form of “malign creativity.” This refers to bad actors developing novel means of targeting the LGBTQIA+ community through posts and memes they defend as “humorous or satirical,” but are actually hate or harassment. Additionally, the Board is troubled by Meta’s statement that the human reviewers’ failures to remove the content aligns with a strict application of its internal guidelines. This would indicate that Meta’s internal guidance inadequately captures how text and image can interact to represent a group defined by the gender identity of its members. While the post also clearly violated Facebook’s Suicide and Self-Injury Community Standard, the Board finds this policy should more clearly prohibit content promoting suicide aimed at an identifiable group of people, as opposed to only a person in that group. In this case, Meta’s automated review prioritization systems significantly affected enforcement, including how the company deals with multiple reports on the same piece of content. Meta monitors and deduplicates (removes) these reports to “ensure consistency in reviewer decisions and enforcement actions.” Other reasons given for the automatic closing of reports included the content’s low severity and low virality (amount of views the content has accumulated) score, which meant it was not prioritized for human review. In this case, the Board believes the user’s biography could have been considered as one relevant signal when determining severity scores. The Board believes that Meta should invest more in the development of classifiers that identify potentially violating content impacting the LGBTQIA+ community and enhance training for human reviewers on gender identity-related harms. The Oversight Board’s Decision The Oversight Board overturns Meta’s original decision to leave up the content. The Board recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision Summary The Oversight Board overturns Meta’s original decision to leave up a piece of content in Polish on Facebook in which a user targeted transgender people with violent speech that advocated for members of this group to commit suicide. After the Board identified the case for review, Meta concluded that its original decision to allow the post to remain on the platform was mistaken, removed the content and applied sanctions. The Board finds that the post violated both the Hate Speech and the Suicide and Self-Injury Community Standards. The Board takes this opportunity to urge Meta to improve its policies and guidance to its reviewers to better protect transgender people on its platforms. Specifically, in assessing whether a post is hate speech, Meta should ensure that flag-based visual depictions of gender identity that do not contain a human figure are understood as representations of a group defined by the gender identity of its members. 
Meta should also clarify that encouraging a whole group to commit suicide is just as violating as encouraging an individual to commit suicide. The Board finds, however, that the fundamental issue in this case is not with the policies, but with their enforcement. Even as written, the policies clearly prohibited this post, which had multiple indications of hate speech targeting a group of people based on gender identity. Meta’s repeated failure to take the correct enforcement action in this case, despite multiple user reports, leads the Board to conclude that Meta is failing to live up to the ideals it has articulated on LGBTQIA+ safety . The Board urges Meta to close these enforcement gaps. 2. Case Description and Background In April 2023, a Facebook user in Poland posted an image of a striped curtain in the blue, pink and white colors of the transgender flag. On the image, text in Polish states: “New technology. Curtains that hang themselves.” Above that line, more text in Polish states: “spring cleaning <3.” A description in Polish in the user’s bio reads, “I am a transphobe.” The post received under 50 reactions from other users, the majority of which were supportive. The most frequently used reaction emoji was “Haha.” Between April and May 2023, 11 different users reported the content a total of 12 times. Of these, 10 reports were not prioritized for human review by Meta’s automated systems for a variety of reasons, including “low severity and virality scores.” Meta generally prioritizes content for human review based on its severity, virality and likelihood of violating content policies. Only two of the reports, falling under the Facebook Community Standard on Suicide and Self-Injury, resulted in the content being sent for human review. None of the reports based on the Hate Speech policy were sent for human review. According to Meta, reviewers have both the training and tools to “assess and act” on content beyond their designated policy queue (i.e., Hate Speech or Suicide and Self-Injury). Nevertheless, both reviewers assessed the content to be non-violating and did not escalate it further. Three users appealed Meta’s decision to keep the content on Facebook. One appeal resulted in a human reviewer upholding Meta’s original decision that the content did not violate its Suicide and Self-Injury policy. The other two appeals, made under Facebook’s Hate Speech policy, were not sent for human review. This is because Meta will “monitor and deduplicate” multiple reports on the same piece of content to ensure consistency in reviewer decisions and enforcement actions. One of the users who originally reported the content then appealed to the Board. As a result of the Board selecting this case, Meta determined that the content did violate both its Hate Speech and Suicide and Self-Injury policies and removed the post. Moreover, as part of Meta’s review of the case, the company determined that the content creator’s account already had several violations of the Community Standards and met the threshold to be disabled. Meta disabled the account in August 2023. The Board noted the following context in reaching its decision in this case: Poland is often reported to have high levels of hostility toward the LGBTQIA+ community. [Note: The Board uses “LGBTQIA+” (Lesbian, Gay, Bisexual, Transgender, Queer, Intersex and Asexual) when referring to groups based on sexual orientation, gender identity and/or gender expression. 
However, the Board will preserve the acronyms or usages employed by others when citing or quoting them.] The Council of Europe Commissioner for Human Rights has previously called attention to the “stigmatisation of LGBTI people” as a “long-standing problem in Poland.” The International Lesbian, Gay, Bisexual, Trans and Intersex Association’s (ILGA) Rainbow Europe report ranks countries on the basis of laws and policies that directly impact LGBTI people’s human rights. The report ranks Poland as the lowest-performing European Union (EU) member state and 42nd out of 49 European countries assessed. National and local governments, as well as prominent public figures, have increasingly targeted the LGBTQIA+ community through both discriminatory speeches and legislative action. Beginning in 2018, ILGA-Europe tracked what the organization called “high profile political hate-speech against LGBTI people from Polish political leaders,” including statements that the “entire LGBT movement” is a “threat” to Poland. In the same year, the mayor of Lublin, Poland, attempted to ban the city’s Equality March, although the Court of Appeal lifted the ban shortly before the scheduled march. In 2019, the mayor of Warsaw introduced the Warsaw LGBT+ Charter to “improve the situation of LGBT people” in the city. Poland’s ruling Law and Justice party (PiS) and religious leaders criticized the charter. Poland’s president and central government have also singled out the transgender community as targets. For example, the Chairman of the ruling PiS party has referred to transgender individuals as “abnormal.” Poland’s Minister of Justice has also asked Poland’s Supreme Court to consider that “in addition to their parents, trans people should also sue their children and spouse [for permission to transition] when they want to access LGR [Legal Gender Recognition].” Poland has also enacted anti-LGBTQIA+ legislation. In the words of Human Rights Watch, cities began calling for “the exclusion of LGBT people from Polish society” by implementing, among other measures, “LGBT-free zones” in 2019. Human Rights Watch has reported that these zones are places “where local authorities have adopted discriminatory ‘family charters’ pledging to ‘protect children from moral corruption’ or declared themselves free from ‘LGBT ideology.’” More than 100 cities have created such zones. ILGA-Europe reports that due to local, EU and international pressure, some of these municipalities have withdrawn “anti-LGBT resolutions or Family Rights Charters.” On June 28, 2022, Poland’s Supreme Administrative Court ordered four municipalities to withdraw their anti-LGBTQI+ resolutions. Nevertheless, as Rainbow Europe report’s ranking of Poland suggests, the climate in the country is notably hostile to the LGBTQIA+ community. A 2019 survey of LGBTI people in the EU, conducted by the European Union Agency for Fundamental Rights, compared LGBTI peoples’ experiences of assault and harassment in Poland and other parts of the European Union. According to the survey , 51% of LGBTI people in Poland often or always avoid certain locations for fear of being assaulted. This compares to 33% for the rest of the European Union. The survey also found that one in five transgender people were physically or sexually attacked in the five years before the survey, more than double that of other LGBTI groups. The Board commissioned external experts to analyze social-media responses to derogatory statements by Polish government officials. 
Those experts noted “a concerning uptick in online hate speech targeting minority communities in Poland, including LGBTQIA+ communities since 2015.” In its analysis of anti-LGBTQIA+ content in Polish on Facebook, these experts noted that spikes occurred during “court rulings relating to anti-LGBTQIA+ legislation.” These include the Supreme Administrative Court decision discussed above and determinations relating to legal challenges to the adoption of several anti-LGBT declarations brought before local administrative courts by the Polish ombudsmen for the Commissioner for Human Rights, which have been ongoing since 2019. The Board also asked linguistic experts about the meaning of the two Polish phrases in the post. With regard to the phrase “curtains that hang themselves,” the experts observed that in the context of a “trans flag hanging in the window,” the phrase was “a play on words” that juxtaposed “to hang curtains” with “to commit suicide by hanging.” The experts concluded that the phrase was “a veiled transphobic slur.” On the “spring cleaning” phrase, experts said that the phrase “normally refers to thorough cleaning when spring comes” but, in certain contexts, “it also means ‘throwing out all trash’ and ‘getting rid of all unwanted items (and/or people).’” Several public comments, including the submission from the Human Rights Campaign Foundation (PC-16029) argued that the post’s reference to “spring cleaning” was a form of “praising the exclusion and isolation of trans people out of Polish society (through their deaths).” The issues of online and offline harms at play in this case extend beyond the LGBTQIA+ community in Poland to affect that community around the globe. According to the World Health Organization, suicide is the fourth-leading cause of death among 15–29-year-olds worldwide . WHO notes that “suicide rates are also high amongst vulnerable groups who experience discrimination, such as refugees and migrants, indigenous peoples; and lesbian, gay, bisexual, transgender, intersex (LGBTI) persons.” Other research studies have found a “positive association” between cyber-victimization and self-injurious thoughts and behaviors. Suicide risk is a particular concern for the transgender and nonbinary community. The Trevor Project’s 2023 National Survey on LGBTQ Mental Health found that half of transgender and nonbinary youth in the United States considered attempting suicide in 2022. The same study estimates that 14% of LGBTQ young people have attempted suicide in the past year, including nearly one in five transgender and nonbinary young people. According to the CDC’s Youth Risk Behavior Survey , 10% of high school students in the United States attempted suicide in 2021. Numerous studies from around the world have found that transgender or nonbinary people are at a higher risk of both suicidal thoughts and attempts compared to cisgender people. In a public comment to the Board, the Gay and Lesbian Alliance Against Defamation (GLAAD) (PC-16027) underscored findings from their annual survey, the Social Media Safety Index , on LGBTQ user safety on five major social-media platforms. The 2023 report assigned Facebook a score of 61% based on 12 LGBTQ-specific indicators. This score represented a 15-point increase from 2022, with Facebook ranked second to Instagram and above the three other major platforms. 
However, GLAAD wrote, “safety and the quality of safeguarding of LGBTQ users remain unsatisfactory.” The report found that there are “very real resulting harms to LGBTQ people online, including a chilling effect on LGBTQ freedom of expression for fear of being targeted, and the sheer traumatic psychological impact of being relentlessly exposed to slurs and hateful conduct.” 3. Oversight Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person who previously reported content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. When the Board selects cases like this one, in which Meta subsequently acknowledges that it made an error, the Board reviews the original decision to increase understanding of the content moderation process and to make recommendations to reduce errors and increase fairness for people who use Facebook and Instagram. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions The most relevant previous decisions of the Oversight Board include: II. Meta’s Content Policies The Hate Speech policy rationale defines hate speech “as a direct attack against people – rather than concepts or institutions – on the basis of . . . protected characteristics,” including sex and gender identity. Meta defines “attacks” as “violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.” In the policy rationale, Meta further states: “We believe that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are. That is why we don’t allow hate speech on Facebook. It creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” Meta’s Hate Speech Community Standard separates attacks into “tiers.” Tier 1 attacks include content targeting a person or group of people on the basis of their protected characteristic(s) with “violent speech or support in written or visual form.” Meta ultimately found that the post in this case violated this policy line. On December 6, 2023, Meta updated the Community Standards to reflect that the prohibition on violent speech against protected-characteristic groups was moved to the Violence and Incitement policy. 
Tier 2 attacks include content targeting a person or group of people on the basis of their protected characteristic(s) with “expressions of contempt (in written or visual form).” In the Hate Speech Community Standard, Meta defines expressions of contempt to include “[s]elf-admission to intolerance on the basis of a protected characteristic” and “[e]xpressions that a protected characteristic shouldn’t exist.” The Suicide and Self-Injury Community Standard prohibits “any content that encourages suicide or self-injury, including fictional content such as memes or illustrations.” Under this policy, Meta removes “content that promotes, encourages, coordinates, or provides instructions for suicide and self-injury.” The Board’s analysis was informed by Meta’s commitment to voice , which the company describes as “paramount,” and its values of safety and dignity. III. Meta’s Human-Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human-rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human-rights responsibilities in this case was informed by the following international standards: 5. User Submissions In their appeal to the Board, the user who reported the content noted that the person who posted the image had previously harassed transgender people online and had created a new account after being suspended from Facebook. They also said that applauding the high rate of suicide in the transgender community “shouldn’t be allowed.” 6. Meta’s Submissions Meta eventually removed the post under Tier 1 of its Hate Speech Community Standard because the content violated the policy line prohibiting content targeting a person or group of people on the basis of their protected characteristics with “violent speech or support in written or visual form.” In its internal guidelines about how to apply this policy, Meta says that content should be removed if it is “violent speech in the form of calls for action or statements of intent to inflict, aspirational or conditional statements about, or statements advocating or supporting death, disease or harm (in written or visual form).” In those internal guidelines, the company also describes what it considers to be visual representation of protected-characteristic groups in an image or video. Meta did not allow the Board to publish more detailed information related to this guidance. 
The company instead said that, “under the Hate Speech policy, Meta may take visual elements in the content into consideration when establishing whether the content targets a person or group of people based on their protected characteristics.” Meta said that the multiple assessments of the content as a non-violation of the Hate Speech Community Standard by its reviewers align with “a strict application of our internal guidelines.” It elaborated: “Although the curtains resemble the Trans Pride flag, we would interpret an attack on a flag, standing alone, as an attack on a concept or institution, which does not violate our policies, and not on a person or groups of people.” However, Meta subsequently determined that the “reference to hanging indicates this post is attacking a group of people.” This assessment was based on the determination that the phrase “‘curtains which hang themselves’ implicitly refers to the suicide rate in the transgender community because the curtains resemble the Trans Pride flag, and the curtains hanging in the photo (as well as the text overlay) is a metaphor for suicide by hanging oneself.” Meta also noted that “concepts or institutions cannot ‘hang themselves,’ at least not literally.” For this reason, Meta found that the user was referring to “Transgender people, not just the concept.” Therefore, according to Meta, “this content violates the Hate Speech policy because it is intended to be interpreted as a statement in favor of a P[rotected] C[haracteristic] group’s death by suicide.” Following an update to the Hate Speech policy, in which the policy line prohibiting violent speech against protected-characteristic groups was moved to the Violence and Incitement policy, Meta told the Board that the content remains violating. Meta also reported that the statement in the biography of the user’s account reading, “I am a transphobe,” violated Tier 2 of the Hate Speech policy as a “self-admission to intolerance on the basis of protected characteristics.” This statement was, according to Meta, assessed as violating as part of Meta’s review of both the case and the user’s account following the Board’s selection of the case. Meta said that this statement helped inform its understanding of the user’s intent in the case content. In response to the Board asking whether the content violates the Suicide and Self-Injury policy, Meta confirmed that the “content violates the Suicide and Self-Injury policy by encouraging suicide, consistent with our determination that the content constitutes a statement in favor of a protected characteristic group’s death by suicide.” Meta also reported that the Suicide and Self-Injury policy “does not differentiate between content that promotes or encourages suicide aimed at a specific person versus a group of people.” The Board asked Meta 13 questions in writing. The questions related to Meta’s content-moderation approach to transgender and LGBTQIA+ issues; the relationship between the Hate Speech and Suicide and Self-Injury Community Standards; how “humor” and “satire” are assessed by moderators when reviewing content for hate speech violations; the role of “virality” and “severity” scores in prioritizing content for human review; and how Meta’s content-moderation practices handle prioritization for human review of content that has received multiple user reports. Meta answered all 13 questions. 7. 
Public Comments The Oversight Board received 35 public comments relevant to this case, including 25 from the United States and Canada, seven from Europe and three from Asia Pacific and Oceania. This total includes public comments that were either duplicates or were submitted with consent to publish but did not meet the Board’s conditions for publication. Such exclusion can be based on the comment’s abusive nature, concerns about user privacy and/or other legal reasons. Public comments can be submitted to the Board with or without consent to publish, and with or without consent to attribute. The submissions covered the following themes: the human-rights situation in Poland, particularly as it is experienced by transgender people; LGBTQIA+ safety on social-media platforms; the relationship between online and offline harms in Poland; the relationship between humor, satire, memes and hate/harassment against transgender people on social-media platforms; and the challenges of moderating content that requires context to interpret. To read public comments submitted for this case, please click here. 8. Oversight Board Analysis The Board examined whether this content should be removed by analyzing Meta’s content policies, human-rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance. The Board selected this case to assess the accuracy of Meta’s enforcement of its Hate Speech policy, as well as to better understand how Meta approaches content that involves both hate speech and the promotion of suicide or self-injury. 8.1 Compliance With Meta’s Content Policies I. Content Rules Hate Speech The Board finds that the content in this case violates the Hate Speech Community Standard. The post included “violent speech or support” (Tier 1) in the form of a call for a protected-characteristic group’s death by suicide, which clearly violates the Hate Speech policy. The Board agrees with Meta’s eventual conclusion that the reference to hanging in the post is an attack on a group of people rather than a concept because “concepts or institutions cannot ‘hang themselves.’” The Board also finds support for its conclusion in the broader context of online and offline harms that members of the LGBTQIA+ community, and specifically transgender people, face in Poland. A post that uses violent speech to advocate for and support the death of transgender people by suicide creates an atmosphere of intimidation and exclusion, and could contribute to physical harm. The context in which the post’s language was used makes clear that it was meant to dehumanize its target. In light of the nature of the text and image, the post also exacerbates the ongoing mental-health crisis experienced by the transgender community. According to multiple studies, transgender or nonbinary people are at a higher risk of both suicidal thoughts and attempts than cisgender individuals. Moreover, the experience of online attacks and victimization has been positively correlated with suicidal thoughts. In this context, the Board finds that the presence of the transgender flag, coupled with the reference to the elevated risk of suicide within the transgender community (“curtains that hang themselves”), is a clear indication that transgender people are the post’s target. The Board finds that the phrase “spring cleaning,” followed by a “<3” (heart) emoticon, also constitutes support for the group’s death. 
As such, it also violates the Hate Speech Community Standard’s prohibition (Tier 2) on “expressions that a protected characteristic shouldn’t exist.” With respect to this Community Standard, the Board believes that the policy and internal guidelines for its enforcement could be more responsive to “ malign creativity ” in content trends that target historically marginalized groups. The Wilson Center coined this phrase with research on gendered and sexualized abuse, and GLAAD also underscored its relevance in their public comment (PC-16027). “Malign creativity” refers to “the use of coded language; iterative, context-based visual and textual memes; and other tactics to avoid detection on social-media platforms.” In applying the concept to the post in this case, GLAAD said “malign creativity” involves “bad actors develop[ing] novel means of targeting the LGBTQ community” and vulnerable groups more generally through posts and memes that they defend as “humorous or satirical,” but are “actually anti-LGBTQ hate or harassment.” Specifically, “malign creativity” took the form of a post that uses two somewhat-coded references to suicide (“curtains that hang themselves” and “spring cleaning”) in conjunction with a visual depiction of a protected group (the transgender flag) to encourage death by suicide. In the Armenians in Azerbaijan decision, the Board noted the importance of context in determining that the term under consideration in that case was meant to target a group based on a protected characteristic. While the context of war prevailed in that case, the threats faced by transgender people in Poland show that situations can be dire for a community, short of war. As noted above, one in five transgender people in Poland reported having been physically or sexually attacked in the five years before 2019, more than double the number of individuals from other LGBTI groups reporting such attacks. The Board is concerned that Meta’s initial human reviewers did not pick up on these contextual clues within the content and, as a result, concluded the content was non-violating. While the Board is recommending some revisions to the guidance on enforcing the Hate Speech Community Standard, it underscores that the post violated the policies even as they were written at the time. Both statements made in the post support the death of transgender people by suicide. An additional signal in the user’s bio supports this conclusion. The user’s self-identification as a transphobe would – in and of itself – constitute a Tier 2 violation of the Hate Speech Standard’s prohibition on “self-admission to intolerance on the basis of protected characteristics.” Meta must improve the accuracy of its enforcement on hate speech towards the LGBTQIA+ community, either through automation or human review, especially when posts include images and text that require context to interpret. As GLAAD observed (PC-16027), Meta “consistently fails to enforce its policies when reviewing reports on content that employs ‘malign creativity.’” The Board is also troubled by Meta’s statement that the reviewers’ failure to remove the content “aligns with a strict application of our internal guidelines.” Meta’s statement indicates that the internal guidance to reviewers inadequately captures how text and image can interact in a social-media post to represent a group defined by the gender identity of its members. 
The Board finds that the guidance may not suffice for at-scale content reviewers to be able to reach the correct enforcement outcome on content that targets protected-characteristic groups that are represented visually, but are not named or depicted in human figures. Meta did not allow the Board to publish additional details that would have enabled a more robust discussion of how enforcement of this type of content could be improved. However, the Board believes Meta should modify its guidance to ensure that visual depictions of gender identity are adequately understood when assessing content for attacks. The Board underscores that in suggesting this course, it does not seek to diminish Meta’s protection of challenges to concepts, institutions, ideas, practices or beliefs. Rather, the Board wants Meta to clarify that posts need not depict human figures to constitute an attack on people. Suicide and Self-Injury The Board finds that the content in this case also violates the Suicide and Self-Injury Community Standard. This policy prohibits “content that promotes, encourages, coordinates, or provides instructions for suicide and self-injury.” According to internal guidelines that Meta provides to reviewers, “promotion” is defined as “speaking positively of.” The Board agrees with Meta’s eventual conclusion that the content constitutes a statement in favor of a protected-characteristic group’s death by suicide, and therefore encourages suicide. The Board also finds that the Suicide and Self-Injury Community Standard should more expressly prohibit content that promotes or encourages suicide aimed at an identifiable group of people, as opposed to a person in that group. Meta disclosed to the Board that the policy does not differentiate between these two forms of content. Given the challenges that reviewers faced in identifying a statement encouraging a group’s suicide in this case, however, the Board urges Meta to clarify that the policy forbids content that promotes or encourages suicide aimed at an identifiable group of people. Meta should clarify this point on its Suicide and Self-Injury policy page as well as in its associated internal guidelines to reviewers. II. Enforcement Action The Board finds that Meta’s automated review prioritization systems significantly affected the enforcement actions in this case. Of the 12 user reports of the post, 10 were automatically closed by Meta’s automated systems. Of the three user appeals against Meta’s decisions, two were automatically closed by Meta’s automated systems. The Board is concerned that the case history shared with the Board contains numerous indications of a violation and thus suggests that Meta’s policies are not being adequately enforced. The Board notes that many user reports were closed as a result of Meta’s content moderation practices to deal with multiple reports on the same piece of content. The first user report for hate speech was not prioritized for human review because of a “low severity and a low virality score.” Subsequent reports for hate speech were not prioritized for human review because when multiple reports are given on the same piece of content, Meta will “deduplicate those reports to ensure consistency in reviewer decisions and enforcement actions.” The Board acknowledges that deduplication is a reasonable practice for content moderation at scale. 
However, the Board notes that the practice puts more pressure on the initial determination made on a report, as that will also determine the fate of all reports that are grouped with it. The Board believes it would be important for Meta to prioritize improving the accuracy of automated systems that both enforce content policies and prioritize content for review, particularly when dealing with content that potentially impacts LGBTQIA+ people. Such improvements to the ability of automated systems to recognize the kind of coded language and context-based images considered in this case would undoubtedly improve enforcement on content that targets other protected-characteristic groups as well. The Board believes that the user’s biography, for example, which included a self-admission of transphobia, could have been considered as one relevant signal when determining severity scores for the purpose of deciding whether to prioritize content for review and/or to take an enforcement action. This signal could supplement existing behavioral and social-network analyses that Meta might use to surface potentially violating content. Additionally, the Board emphasizes that it would be important for Meta to ensure automated systems are well calibrated and content reviewers are trained to effectively assess LGBTQIA+ community-related posts at scale. The Board is concerned about Meta’s current approach, under which reviewers tasked with assessing appeals often seem to have the same level of expertise as those performing the first content assessment. The Board believes that Meta should invest more in the development and training of classifiers that surface potentially violating content impacting the LGBTQIA+ community and prioritize that content for human review. Hate speech, especially the highest severity content that falls under Tier 1 of Meta’s policy, should always be prioritized for review. The Board also suggests bolstering these process improvements with: i) enhanced training on harms relating to gender identity for reviewers; ii) a task force on transgender and non-binary people’s experiences on Meta’s platforms; and iii) the creation of a specialized group of subject-matter experts to review content related to issues impacting the LGBTQIA+ community. While the facts of this case pertain specifically to the harms faced by transgender people on Facebook, the Board also encourages Meta to explore how to improve enforcement against hateful content impacting other protected-characteristic groups. While the Board is only issuing two formal recommendations below, the Board underscores that this is because the challenges highlighted in this case have less to do with the policies as written than with their enforcement. The Board counts at least five indicia of harmful content in this case: (1) the post’s references to “self-hanging curtains”; (2) the post’s reference to “spring cleaning <3”; (3) the user’s self-description as a “transphobe” in a country context where high levels of hostility toward the LGBTQIA+ community are reported; (4) the number of user reports and appeals on the content; and (5) the number of reports and appeals relative to the virality of the content. The Board is concerned Meta missed these signals and believes this suggests that its policies are underenforced. The Board is adamant that Meta should think rigorously and creatively about how to close the gap between its ideals of safeguarding LGBTQIA+ individuals on its platforms and its enforcement of those ideals. 
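To make the enforcement dynamic described above concrete, the following minimal Python sketch models a hypothetical report-routing queue in which reports are auto-closed when their severity and virality scores fall below a threshold, and later reports on the same content are deduplicated onto the first determination. Everything in it is an assumption for illustration: the class names, scores, threshold and routing logic are invented and simplified (it ignores, for instance, that reports made under different policies may be routed to separate review queues), and it does not describe Meta’s actual systems.

from dataclasses import dataclass, field

@dataclass
class Report:
    content_id: str
    policy: str       # hypothetical policy label, e.g. "hate_speech"
    severity: float   # hypothetical 0-1 score
    virality: float   # hypothetical 0-1 score

@dataclass
class ReviewQueue:
    threshold: float = 0.5
    decisions: dict = field(default_factory=dict)  # content_id -> outcome

    def handle(self, report: Report) -> str:
        # Deduplication: once any report on this content has an outcome,
        # later reports inherit it instead of triggering a fresh review.
        if report.content_id in self.decisions:
            return "deduplicated -> " + self.decisions[report.content_id]
        # Prioritization: reports scoring below the threshold on both
        # severity and virality are closed without human review.
        if max(report.severity, report.virality) < self.threshold:
            outcome = "auto-closed without human review"
        else:
            outcome = "sent to human review"
        self.decisions[report.content_id] = outcome
        return outcome

queue = ReviewQueue()
# The first, low-scoring report sets the outcome for the content...
print(queue.handle(Report("post-1", "hate_speech", severity=0.2, virality=0.1)))
# ...and the remaining eleven reports on the same post inherit that outcome.
for _ in range(11):
    print(queue.handle(Report("post-1", "hate_speech", severity=0.9, virality=0.9)))

In this toy model, the low-scoring first report fixes the outcome for all 12 reports, which illustrates the pressure the Board identifies on the accuracy of the initial determination.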
8.2 Compliance With Meta’s Human-Rights Responsibilities Freedom of Expression (Article 19 ICCPR) Article 19, para. 2 of the International Covenant on Civil and Political Rights (ICCPR) provides that “everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media.” General Comment No. 34 (2011) further specifies that protected expression includes expression that may be considered “deeply offensive” (para. 11). Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human-rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality under international human-rights law requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, para. 25). Rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (Ibid.). Applied to rules that govern online speech, the UN Special Rapporteur on freedom of expression has said they should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance on their enforcement. The Board finds that Meta’s prohibitions of “violent speech or support” in written or visual form targeting groups with protected characteristics, of expressions that a protected characteristic shouldn’t exist and of speech that promotes or encourages suicide and self-injury are sufficiently clear. The Board notes, however, that Meta could improve enforcement accuracy in relation to the policies engaged in this case by providing clearer guidance for human reviewers, as addressed under section 8.1 above. Meta should clarify that visual depictions of gender identity, such as through a flag, need not depict human figures to constitute an attack under the Hate Speech policy. Meta should also clarify that a call for a group (as opposed to an individual) to commit suicide violates the Suicide and Self-Injury policy. II. Legitimate Aim Any restriction on expression should pursue one of the legitimate aims of the ICCPR, which include the “rights of others.” In several decisions, the Board has found that Meta’s Hate Speech policy, which aims to protect people from the harm caused by hate speech, has a legitimate aim that is recognized by international human-rights law standards (see, for example, the Knin Cartoon decision). 
Additionally, the Board finds that, in this case, the Suicide and Self-Injury policy lines on content that encourages suicide or self-injury serve the legitimate aims of protecting people’s right to the enjoyment of the highest attainable standard of physical and mental health (Article 12, ICESCR) and the right to life (Article 6, ICCPR). In cases such as this one, where a protected-characteristic group is encouraged to commit suicide, the Suicide and Self-Injury policy also protects people’s rights to equality and non-discrimination (Article 2, para. 1, ICCPR). III. Necessity and Proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). When analyzing the risks posed by violent content, the Board is typically guided by the six-factor test described in the Rabat Plan of Action, which addresses advocacy of national, racial or religious hatred that constitutes incitement to hostility, discrimination or violence. Based on an assessment of the relevant factors, especially the content and form of expression, the intent of the speaker and the context further described below, the Board finds that removing the content complies with Meta’s human-rights responsibilities because the post poses imminent and likely harm. Removing the content is a necessary and proportionate limitation on expression in order to protect the right to life, as well as the right to enjoyment of the highest attainable standard of physical and mental health of the broader LGBTQIA+ community and, in particular, transgender people in Poland. While the Board has previously noted the importance of reclaiming derogatory terms for LGBTQIA+ people in countering disinformation (see the Reclaiming Arabic words decision), that is not the case here. Nor does the post contain political or newsworthy speech (see the Colombia Protests decision). The post in this case features the image of the transgender flag hanging as curtains, with the description that curtains hang themselves. According to experts consulted by the Board, the use of curtains – in both visual and textual form – does not appear to be recurring coded language aimed at the transgender community. Nonetheless, as discussed above, the phenomenon of “malign creativity,” or the use of novel language and strategies of representation to express hate and harassment, has come to characterize content trends that target transgender people. The Board finds that the content in this case fits squarely within that trend. Although the post used imagery that some found “humorous” (as evidenced by the “Haha” emoji reactions), the post can still be interpreted as a violent and provocative statement targeting the transgender community. Humor and satire can, of course, be used to push the boundaries of legitimate criticism, but they cannot be a cover for hate speech. The post engages with the topic of high suicide rates among the transgender community only to celebrate that fact. When considering the intent of the content creator, the Board notes that their biography openly stated they are a “transphobe.” While Meta only later considered the implications of this statement for the case content itself, the Board finds it to be highly relevant to determining the intent of the user. 
It would also be an independent ground for removing the content as a Tier 2 violation of the Hate Speech policy. The post also described the act of transgender individuals dying by suicide as “spring cleaning,” including a heart emoticon alongside the description. In light of this statement of support for a group’s death by suicide, the Board finds intent to encourage discrimination and violence based on the content of the post, the image used and the accompanying text and caption. The content in this case not only encourages transgender people to take violent action against themselves but also incites others to discriminate and act with hostility towards transgender people. This understanding is confirmed by the fact that the reaction emoji most frequently employed by other users engaging with the content was “Haha.” Finally, the Board notes the significant offline risks that the Polish LGBTQIA+ community faces in the form of increasing attacks through legislative and administrative action, as well as political rhetoric by central government figures and influential public voices. Since 2020, Poland has consistently ranked as the lowest-performing EU member country for LGBTQIA+ rights according to ILGA-Europe. It is also important to note that Poland does not have LGBTQIA+ protections in its hate speech and hate crime laws, an issue that ILGA-Europe and Amnesty International, among others, have called upon Poland to address. Furthermore, the rise in anti-LGBTQIA+ rhetoric in Polish on Facebook, flagged by external experts and numerous public comments, is not happening in isolation. Many organizations and institutions have expressed alarm at the prevalence of anti-LGBTQIA+ speech on social media. The UN Independent Expert on Sexual Orientation and Gender Identity (IE SOGI), Victor Madrigal-Borloz, has said that levels of violence and discrimination against gender-diverse and transgender people “offend the human conscience.” GLAAD’s research and reporting have found that there are “very real resulting harms to LGBTQ people online, including … the sheer psychological trauma of being relentlessly exposed to slurs and hateful conduct.” Content like the post in this case, especially when considered at scale, may contribute to an environment in which the already pervasive harm of dying by suicide within the transgender community is exacerbated. Moreover, content that normalizes violent anti-transgender speech, as is the case with this post, risks contributing both to the ongoing mental-health crisis that impacts the transgender community and to an increase in violence targeting the community offline. 9. Oversight Board Decision The Oversight Board overturns Meta's original decision to leave up the content. 10. Recommendations Content Policy 1. Meta’s Suicide and Self-Injury policy page should clarify that the policy forbids content that promotes or encourages suicide aimed at an identifiable group of people. The Board will consider this implemented when the public-facing language of the Suicide and Self-Injury Community Standard reflects the proposed change. Enforcement 2. Meta’s internal guidance for at-scale reviewers should be modified to ensure that flag-based visual depictions of gender identity that do not contain a human figure are understood as representations of a group defined by the gender identity of its members. This modification would clarify instructions for enforcing this form of content at scale whenever it contains a violating attack. 
The Board will consider this implemented when Meta provides the Board with the changes to its internal guidance. *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of more than 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. Memetica, an organization that engages in open-source research on social-media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. Return to Case Decisions and Policy Advisory Opinions" fb-vj6fo5uy,Dehumanizing speech against a woman,https://www.oversightboard.com/decision/fb-vj6fo5uy/,"June 27, 2023",2023,,TopicSex and gender equalityCommunity StandardHate speech,Hate speech,Overturned,United States,"A user appealed Meta’s decision to leave up a Facebook post that attacked an identifiable woman and compared her to a motor vehicle (“truck”). After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post.",5332,836,"Overturned June 27, 2023 A user appealed Meta’s decision to leave up a Facebook post that attacked an identifiable woman and compared her to a motor vehicle (“truck”). After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post. Summary Topic Sex and gender equality Community Standard Hate speech Location United States Platform Facebook This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas of potential improvement in its policy enforcement. Case summary A user appealed Meta’s decision to leave up a Facebook post that attacked an identifiable woman and compared her to a motor vehicle (“truck”). After the Board brought the appeal to Meta's attention, the company reversed its original decision and removed the post. Case description and background In December 2022, a Facebook user posted a photo of a clearly identifiable woman. The caption above the photo, in English, referred to her as a pre-owned truck for sale. It continued to describe the woman using the metaphor of a “truck,” requiring paint to hide damage, emitting unusual smells, and being rarely washed. The user added that the woman was “advertised all over town.” Another user reported the content to the Board, saying that it was misogynistic and offensive to the woman. The post received over two million views, and it was reported to Meta more than 500 times by Facebook users. Before Meta reassessed its original decision, the user who posted the content edited the original post to superimpose a “vomiting” emoji over the woman’s face. 
They updated the caption, saying they had concealed her identity out of their embarrassment “to say that I owned this pile of junk.” They also added information naming various dating websites on which the woman supposedly had a profile. Under Meta’s Bullying and Harassment policy, the company removes content that targets private figures with “[a]ttacks through negative physical descriptions” or that makes “[c]laims about sexual activity.” Meta initially left the content on Facebook. When the Board brought this case to Meta’s attention, it reviewed both the original post and the updated post. The company noted that both versions of the content include a negative physical description of a private individual by comparing her to a truck, and both make inferences about her sexual activity by claiming she is “advertised all over town,” though the edited post is more explicit with the references to dating websites. Therefore, Meta determined that both versions violated its Bullying and Harassment policy, and its original decision to leave up the content was incorrect. The company then removed the content from Facebook. Board authority and scope The Board has authority to review Meta's decision following an appeal from the user who reported content that was then left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and to increase fairness for people who use Facebook and Instagram. Case significance This case highlights a concern about Meta’s failure to enforce its policy when a post contains bullying and harassment, underenforcement that can be a significant deterrent to open online expression for women and other marginalized groups. Meta failed to remove the content, which violates two elements of the Bullying and Harassment Community Standard (“negative physical description” and “claims about sexual activity”), despite the post receiving millions of views and hundreds of reports by Facebook users. Previously, the Board issued a series of recommendations for Meta to clarify several points of ambiguity in its Bullying & Harassment policy (“Pro-Navalny protest in Russia,” recommendations no. 1-4), half of which Meta implemented, and half of which the company declined after a feasibility assessment. The Board is concerned that this case may indicate a more widespread problem of underenforcement of the anti-bullying standard, which likely has disproportionate impacts on women and members of other vulnerable groups. The Board underlines the need for Meta to holistically address concerns the Board includes in its case decisions and to implement relevant recommendations to reduce the error rate in moderating bullying content impacting all users, while balancing the company’s values of “Safety,” “Dignity,” and “Voice.” Decision The Board overturns Meta’s original decision to leave up the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. 
Return to Case Decisions and Policy Advisory Opinions" fb-vxeeg6dn,Link to Wikipedia Article on Hayat Tahrir al-Sham,https://www.oversightboard.com/decision/fb-vxeeg6dn/,"May 13, 2025",2025,,"TopicFreedom of expression, Governments, War and conflictCommunity StandardDangerous individuals and organizations",Dangerous individuals and organizations,Overturned,Syria,"A user appealed Meta’s decision to remove a reply to a Facebook comment that included a link to a Wikipedia article about Hayat Tahrir al-Sham (HTS). After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the content.",7624,1137,"Overturned May 13, 2025 A user appealed Meta’s decision to remove a reply to a Facebook comment that included a link to a Wikipedia article about Hayat Tahrir al-Sham (HTS). After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the content. Summary Topic Freedom of expression, Governments, War and conflict Community Standard Dangerous individuals and organizations Location Syria Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to remove a reply to a Facebook comment that included a link to a Wikipedia article about Hayat Tahrir al-Sham (HTS). After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the content. About the Case In December 2024, amid a rebel offensive that led to the fall of Bashar al-Assad's regime in Syria, a Facebook user in Macedonia posted about the former Syrian president fleeing to Moscow. Another user commented with a quote in Bulgarian, referring to Hayat Tahrir al-Sham (HTS) as “Islamists from Al-Qaeda.” A third user then replied to the comment in Bulgarian, stating that this is one of the groups that had driven Assad away and included a link to a Wikipedia article about HTS. Meta originally removed the reply from the third user from Facebook under its Dangerous Organizations and Individuals (DOI) policy. This policy prohibits the “glorification,” “support” and “representation” of designated entities, their leaders, founders, prominent members and any unclear references to them. However, the policy allows neutral discussion, including “factual statements, commentary, questions, and other information that do not express positive judgement around the designated dangerous organisation or individual and their behaviour.” The third user’s appeal to the Board stated that they shared the link for informational purposes and they “do not support the organization, on the contrary, [they] condemn it.” After the Board brought this case to Meta’s attention, the company determined that the content did not violate its DOI policy. It found that its original decision to remove the content was incorrect because the comment does not express “positive judgment” about Hayat Tahrir al-Sham. The company restored the content to the platform. 
Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case highlights the over-enforcement of Meta’s Dangerous Organizations and Individuals policy. The Board previously noted in the Karachi Mayoral Election Comment decision that such mistakes can negatively impact users' ability to “share political commentary and news reporting” about organizations labeled as “dangerous,” therefore infringing on freedom of expression. The Board has issued several recommendations aiming to increase transparency around and the accuracy of the enforcement of Meta’s Dangerous Organizations and Individuals policy and its exceptions. This includes a recommendation to “assess the accuracy of reviewers enforcing the reporting allowance under the Dangerous Organizations and Individuals policy in order to identify systemic issues causing enforcement errors.” ( Mention of the Taliban in News Reporting , recommendation no. 5). While Meta reported it had implemented this recommendation, the company did not publish information to demonstrate this. In the same decision, the Board recommended that Meta “enhance the capacity allocated to HIPO [High Impact False Positive Override system] review across languages to ensure that more content decisions that may be enforcement errors receive additional human review” ( Mention of the Taliban in News Reporting , recommendation no. 7). HIPO is a system Meta uses to identify cases in which it has acted incorrectly, for example, by wrongly removing content. Meta reported exploring improvements to increase HIPO’s review capacity, which resulted in a “multifold increase in HIPO overturns” (Meta Q4 2022 Quarterly Update on the Oversight Board). The Board considered that this recommendation has been reframed by Meta, given that it is unclear from the company’s response whether the changes involved resource increases or only reallocation for better efficiency. In the Punjabi Concern Over the RSS in India decision, the Board recommended that Meta “improve its transparency reporting to increase public information on error rates by making this information viewable by country and language for each Community Standard.” The Board underscored that “more detailed transparency reports will help the public spot areas where errors are more common, including potential specific impacts on minority groups” (recommendation no. 3). The implementation of this recommendation is currently in progress. In its last update on this recommendation, Meta explained that the company is “in the process of compiling an overview of enforcement data to confidentially share with the Board.” The document will outline data points that provide indicators of enforcement accuracy across various policies – including the Dangerous Organizations and Individuals policy. 
Meta stated that the company “remain[s] committed to compiling an overview that addresses the Board’s overarching call for increased transparency on enforcement accuracy across policies” (Meta’s H2 2024 Bi-Annual Report on the Oversight Board – Appendix). Furthermore, in a policy advisory opinion, the Board asked Meta to “explain the methods it uses to assess the accuracy of human review and the performance of automated systems in the enforcement of its Dangerous Organizations and Individuals policy” (Referring to Designated Dangerous Individuals as “Shaheed,” recommendation no. 6). The Board considered that this recommendation has been reframed by Meta. The company stated it conducts audits to assess the accuracy of its content moderation decisions and that this informs areas for improvement. Meta did not, however, explain the methods it deploys to perform these assessments. The Board urges Meta to continue to improve its ability to accurately enforce content that falls within the exceptions to the Dangerous Organizations and Individuals policy. A full commitment to the recommendations mentioned above would further strengthen the company’s ability to improve enforcement accuracy. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-xwjqbu9a,Claimed COVID cure,https://www.oversightboard.com/decision/fb-xwjqbu9a/,"January 28, 2021",2021,January,"TopicHealth, Misinformation, SafetyCommunity StandardViolence and incitement","Policies and TopicsTopicHealth, Misinformation, SafetyCommunity StandardViolence and incitement",Overturned,France,"The Oversight Board has overturned Facebook's decision to remove a post which it claimed, 'contributes to the risk of imminent… physical harm'.",28131,4274,"Overturned January 28, 2021 The Oversight Board has overturned Facebook's decision to remove a post which it claimed, 'contributes to the risk of imminent… physical harm'. Standard Topic Health, Misinformation, Safety Community Standard Violence and incitement Location France Platform Facebook To read this decision in French, click here. The Oversight Board has overturned Facebook’s decision to remove a post which it claimed, “contributes to the risk of imminent… physical harm.” The Board found Facebook’s misinformation and imminent harm rule (part of its Violence and Incitement Community Standard) to be inappropriately vague and recommended, among other things, that the company create a new Community Standard on health misinformation. About the case In October 2020, a user posted a video and accompanying text in French in a public Facebook group related to COVID-19. The post alleged a scandal at the Agence Nationale de Sécurité du Médicament (the French agency responsible for regulating health products), which refused to authorize hydroxychloroquine combined with azithromycin for use against COVID-19, but authorized and promoted remdesivir. The user criticized the lack of a health strategy in France and stated that “[Didier] Raoult’s cure” is being used elsewhere to save lives. The user’s post also questioned what society had to lose by allowing doctors to prescribe in an emergency a “harmless drug” when the first symptoms of COVID-19 appear. 
In its referral to the Board, Facebook cited this case as an example of the challenges of addressing the risk of offline harm that can be caused by misinformation about the COVID-19 pandemic. Key findings Facebook removed the content for violating its misinformation and imminent harm rule, which is part of its Violence and Incitement Community Standard, finding the post contributed to the risk of imminent physical harm during a global pandemic. Facebook explained that it removed the post as it contained claims that a cure for COVID-19 exists. The company concluded that this could lead people to ignore health guidance or attempt to self-medicate. The Board observed that, in this post, the user was opposing a governmental policy and aimed to change that policy. The combination of medicines that the post claims constitute a cure are not available without a prescription in France and the content does not encourage people to buy or take drugs without a prescription. Considering these and other contextual factors, the Board noted that Facebook had not demonstrated the post would rise to the level of imminent harm, as required by its own rule in the Community Standards. The Board also found that Facebook’s decision did not comply with international human rights standards on limiting freedom of expression. Given that Facebook has a range of tools to deal with misinformation, such as providing users with additional context, the company failed to demonstrate why it did not choose a less intrusive option than removing the content. The Board also found Facebook’s misinformation and imminent harm rule, which this post is said to have violated, to be inappropriately vague and inconsistent with international human rights standards. A patchwork of policies found on different parts of Facebook’s website make it difficult for users to understand what content is prohibited. Changes to Facebook’s COVID-19 policies announced in the company’s Newsroom have not always been reflected in its Community Standards, while some of these changes even appear to contradict them. The Oversight Board’s decision The Oversight Board overturns Facebook’s decision to remove the content and requires that the post be restored In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision Summary The Oversight Board has overturned Facebook’s decision to remove content that it designated as health misinformation that “contributes to the risk of imminent . . . physical harm.” The Oversight Board found that Facebook’s decision did not comply with its Community Standards, its values, or international human rights standards. 2. Case Description In October 2020, a user posted a video and accompanying text in French in a Facebook public group related to COVID-19. The video and text alleged a scandal at the Agence Nationale de Sécurité du Médicament (the French agency responsible for regulating health products) which refused to authorize hydroxychloroquine combined with azithromycin for use against COVID-19, but authorized and promoted remdesivir. The user criticized the lack of a health strategy in France and stated that “[Didier] Raoult’s cure” is being used elsewhere to save lives. Didier Raoult (who is mentioned in the post) is a professor of microbiology at the Faculty of Medicine of Marseille, and directs the “Institut Hospitalo-Universitaire Méditerranée Infection” (IHU) in Marseille. 
The user’s post also questioned what society had to lose by allowing doctors to prescribe in an emergency a “harmless drug” when the first symptoms of COVID-19 appear. The video claimed that the combination of hydroxychloroquine and azithromycin was administered to patients at early stages of the disease and implied this was not the case for remdesivir. The post was shared in a public group related to COVID-19 with more than 500,000 members and received about 50,000 views, about 800-900 reactions (the majority of which were “angry,” followed by “like”), 200-300 comments on the post made by 100-200 different people, and was shared by 500-600 people. Facebook removed the content for violating its Community Standard on Violence and Incitement. In referring its decision to the Oversight Board, Facebook cited this case as an example of the challenges of addressing the risk of offline harm that can be caused by misinformation about the COVID-19 pandemic. 3. Authority and Scope The Board has authority to review Facebook’s decision under Article 2 (Authority to Review) of the Board’s Charter and may uphold or reverse that decision under Article 3, Section 5 (Procedures for Review: Resolution) of the Charter. Facebook has not presented reasons for the content to be excluded in accordance with Article 2, Section 1.2.1 (Content Not Available for Board Review) of the Board’s Bylaws, nor has Facebook indicated that it considers the case to be ineligible under Article 2, Section 1.2.2 (Legal Obligations) of the Bylaws. Under Article 3, Section 4 (Procedures for Review: Decisions) of the Board’s Charter, the final decision may include a policy advisory statement, which will be taken into consideration by Facebook to guide its future policy development. 4. Relevant Standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards: The introduction to Facebook’s Community Standards includes a link titled “COVID-19: Community Standards Updates and Protections” that states: As people around the world confront this unprecedented public health emergency, we want to make sure that our Community Standards protect people from harmful content and new types of abuse related to COVID-19. We're working to remove content that has the potential to contribute to real-world harm, including through our policies prohibiting coordination of harm, sale of medical masks and related goods, hate speech, bullying and harassment and misinformation that contributes to the risk of imminent violence or physical harm. Facebook stated that it relied specifically on the prohibition on “misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm,” which is contained within the Community Standard on Violence and Incitement (referred to as the “misinformation and imminent harm rule” from this point on). The rule appears under the qualification that it “require[s] additional information and/or context to enforce.” Facebook’s policy rationale for Violence and Incitement states that it aims “to prevent potential offline harm that may be related to content on Facebook.” Facebook further states that it removes content “that incites or facilitates serious violence” and “when it believes there is a genuine risk of physical harm or direct threats to public safety.” Although Facebook did not rely on its Community Standard on False News in this case, the Board notes the range of enforcement options besides removal under this policy. II. 
Facebook’s Values: The introduction to the Community Standards notes that “Voice” is Facebook’s paramount value. The Community Standards describe this value as: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. However, the platform may limit “Voice” in service of several other values, including “Safety”. Facebook defines its “Safety” value as: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. III. Relevant Human Rights Standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for businesses’ human rights responsibilities. The Board’s analysis in this case was informed by UN treaty provisions and the authoritative guidance of the UN’s human rights mechanisms, including the following: 5. User Statement Facebook referred this case to the Oversight Board. Facebook confirmed to the Oversight Board that the platform sent the user notification of the opportunity to file a statement with respect to this case, but the user did not submit a statement. 6. Explanation of Facebook’s Decision Facebook removed the content for violating its misinformation and imminent harm rule under its Violence and Incitement Community Standard. According to Facebook, the post contributed to the risk of imminent physical harm during a global pandemic. Facebook explained that it removed this content because (1) the post claimed a cure for COVID-19 exists, which is refuted by the World Health Organization (WHO) and other credible health authorities, and (2) leading experts have told Facebook that content claiming that there is a guaranteed cure or treatment for COVID-19 could lead people to ignore preventive health guidance or attempt to self-medicate. Facebook explained that this is why it does not allow false claims about cures for COVID-19. Facebook elaborated that in cases involving health misinformation, the company consults with the WHO and other leading public health authorities. Through that consultation, Facebook has identified different categories of health misinformation about COVID-19, such as false claims about immunity (e.g., “People under age thirty cannot contract the virus”), false claims about prevention (e.g., “Drinking a gallon of cold water gives you about an hour of immunity”), and false claims about treatments or cures (e.g., “Drinking a tablespoon of bleach cures the virus”). Facebook considered this case significant because it concerns a post that was shared within a large public Facebook group related to COVID-19, and therefore had the potential to reach a large population at risk of COVID-19 infection. Also, Facebook considered this case to be difficult because it creates tension between Facebook’s values of “Voice” and “Safety.” Facebook observed that the ability to discuss and share information about the COVID-19 pandemic and to debate the efficacy of potential treatments and mitigation strategies must be preserved, while the spread of false information that could lead to harm must be limited. 7. Third party submissions The Board received eight public comments: one from Asia Pacific and Oceania, three from Europe and four from the United States and Canada. 
Seven of these public comments have been published with this case, while one comment was submitted without consent to publish. The submissions covered a number of themes, including the importance of meaningful transparency and less intrusive measures as alternatives to removal; general critique on censorship, bias and Facebook’s handling of misinformation related to the pandemic, as well as feedback for improving the public comment process. 8. Oversight Board Analysis 8.1 Compliance with Community Standards Facebook removed the content on the basis that it violated its misinformation and imminent harm rule. Facebook stated the post constituted misinformation because it asserted there was a cure for COVID-19, whereas the WHO and leading health experts had found there is no cure. Facebook noted that leading experts had advised the platform that COVID-19 misinformation can be harmful because, if those reading misinformation believe it, then they may disregard precautionary health guidance and/or self-medicate. Facebook relied on this general expert advice to assert that the post in question could contribute to imminent physical harm. In addition, Facebook noted that someone had died, as a result of COVID-19-related misinformation, after ingesting a chemical that is commonly used to treat aquariums. The Board finds that Facebook has not demonstrated how this user’s post contributed to imminent harm in this case. Instead, the company appeared to treat any misinformation about COVID-19 treatments or cures as necessarily rising to the level of imminent harm. Facebook’s Community Standards state that additional information and context is needed before Facebook removes content under its misinformation and imminent harm rule. However, the Community Standards do not explain what contextual factors are considered, and Facebook did not discuss specific contextual factors in its rationale for this case. Deciding whether misinformation contributes to Facebook’s own standard of “imminent” harm requires an analysis of a variety of contextual factors, including the status and credibility of the speaker, the reach of his/her speech, the precise language used, and whether the alleged treatment or cure is readily available to an audience vulnerable to the message (such as the misinformation noted by Facebook about resorting to water or bleach as a prevention or cure for COVID-19). In this case, a user is questioning a government policy and promoting a widely known though minority opinion of a medical doctor. The post is geared towards pressuring a governmental agency to change its policy; the post does not appear to encourage people to buy or take certain drugs without a medical prescription. Serious questions remain about how the post would result in imminent harm. While some studies indicate the combination of anti-malarial and antibiotic medicines that are alleged to constitute a cure may be harmful, experts the Board consulted noted that they are not available without a prescription in France. Moreover, the alleged cure has not been approved by the French authorities and thus it is unclear why those reading the post would be inclined to disregard health precautions for a cure they cannot access. The Board also notes that this public group on Facebook could have French-speaking users based outside of France. Facebook did not address particularized contextual factors indicating potential imminent harm with respect to such users. 
The Board remains concerned about health misinformation in France and elsewhere (see Policy Recommendation II. b.). In sum, while the Board acknowledges that misinformation in a global pandemic can cause harm, Facebook failed to provide any contextual factors to support a finding that this particular post would meet its own imminent harm standard. Facebook therefore did not act in compliance with its Community Standard. The Board also notes that this case raises important issues of distinguishing between opinion and fact; along with the question of when “misinformation” (which is undefined in the Community Standards) is an appropriate characterization. It also raises the question of whether an allegedly factually incorrect claim in a broader post criticizing governmental policy should trigger the removal of the entire post. While we need not consider these issues in deciding whether Facebook acted consistently with its misinformation and imminent harm rule in this case, the Board notes such issues could be critical in future applications of the rule. 8.2 Compliance with Facebook Values The Oversight Board finds that the decision to remove the content was not consistent with Facebook’s values. Facebook’s rationale did not demonstrate the danger of this post to the value of “Safety” in a manner sufficient to displace “Voice” to the extent of justifying removal of the post. 8.3 Compliance with Human Rights Standards on Freedom of Expression This section examines whether Facebook’s decision to remove the post from its platform is consistent with international human rights standards. Article 2 of our Charter specifies that we must “pay particular attention to the impact of removing content in light of human rights norms protecting free expression.” Under the UNGPs companies are expected “to respect international human rights standards in their operations and address negative human rights impacts with which they are involved” (UNGPs, Principle 11.). International human rights standards are defined by reference to UN instruments, including the ICCPR (UNGPs, Principle 12.). In addition, the UNGPs specify that non-judicial grievance mechanisms (such as the Oversight Board) should deliver outcomes that accord with internationally recognized human rights (UNGPs, Principle 31.). In explaining its rationale for removing the content, Facebook acknowledged the applicability of the UNGPs and ICCPR to its content moderation decision. Article 19 para. 2 of the ICCPR provides broad protection for expression of “all kinds.” The UN Human Rights Committee has highlighted that the value of expression is particularly high when discussing matters of public concern (General Comment No. 34, paras. 13, 20, 38). The post in question is a direct critique of governmental policy and appears aimed at getting the attention of the Agence Nationale de Sécurité du Médicament. The user raises a matter of public concern, albeit by including the invocation and promotion of a minority opinion within the medical community. The fact that an opinion reflects minority views does not make it less worthy of protection. The user questions why doctors should not be allowed to prescribe a particular drug in emergency situations and does not call on the general public to independently act on Raoult’s minority opinion. That said, ICCPR Article 19, para. 3 permits restrictions on freedom of expression when a speech regulator can prove three conditions are met. 
In this case Facebook should show that its decision to remove content met the conditions of legality, legitimacy and necessity. The Board examines Facebook’s removal of the user’s post in light of this three-part test. I. Legality Any restriction on expression should give appropriate notice to individuals, including those charged with implementing the restrictions, of what is prohibited. (See General Comment No. 34, para. 25). In this case, the legality test requires assessing whether the misinformation and imminent harm rule is inappropriately vague. To begin with, this rule contains no definition of “misinformation.” As noted by the UN Special Rapporteur on Freedom of Opinion and Expression, “vague and highly subjective terms-such as ‘unfounded,’ ‘biased,’ ‘false,’ and ‘fake’- do not adequately describe the content that is prohibited” (Research Paper 1/2019, p. 9). They also provide authorities with “broad remit to censor the expression of unpopular, controversial or minority opinions” (Research Paper 1/2019, p. 9). Further, such vague prohibitions empower authorities with “the ability to determine truthfulness or falsity of content in the public and political domain” and “incentivize self-censorship” (Research Paper 1/2019, p. 9). The Board also notes that this policy falls under a heading that states additional information and/or context is necessary to determine violations, but no indication is given of what type of additional information/context is relevant to this assessment. Moreover, Facebook has announced multiple COVID-19 policy changes through its Newsroom without reflecting those changes in the current Community Standards. Unfortunately, the Newsroom announcements sometimes appear to contradict the text of the Community Standards. For example, in the Newsroom post “ Combating COVID-19 Misinformation Across Our Apps ” (March 25, 2020) Facebook specified it will “remove COVID-19 related misinformation that could contribute to imminent physical harm,” implying a different threshold than the misinformation and imminent harm rule, which addresses misinformation that “contributes” to imminent harm. In its mid-December 2020 Help Desk article, “ COVID-19 Policy Updates and Protections ,” Facebook states that it would: remove misinformation that contributes to the risk of imminent violence or physical harm. In the context of a pandemic such as COVID-19, this applies to (…) claims that there is a ‘cure’ for COVID-19, until and unless the World Health Organization or other leading health organization confirms such cure. This does not prevent people from discussing medical trials, studies or anecdotal experiences about cures or treatments for the known symptoms of COVID-19 (e.g. fever, cough, breathing difficulties). This announcement (which was made after the post in question was removed) reflects the constantly evolving nature of both scientific and governmental stances on health issues. However, it was not integrated into the Community Standards. Given this patchwork of rules and policies that appear on different parts of Facebook’s website, the lack of definition of key terms such as “misinformation,” and the differing standards relating to whether the post “could contribute” or actually contributes to imminent harm, it is difficult for users to understand what content is prohibited. The Board finds the rule applied in this case was inappropriately vague. The legality test is therefore not met. II. 
Legitimate aim The legitimacy test provides Facebook’s removal of the post should serve a legitimate and specified public interest objective in Article 19, para. 3 of the ICCPR (General Comment No. 34, paras. 28-32). The goal of protecting public health is specifically listed in this Article. We find that Facebook’s purpose of protecting public health during a global pandemic satisfied this test. III. Necessity and proportionality With regard to the necessity test, Facebook should demonstrate that it has selected the least intrusive means to address the legitimate public interest objective (General Comment No. 34, para. 34). Facebook should show three things: (1) the public interest objective could not be addressed through measures that do not infringe on speech, (2) among the measures that infringe on speech, Facebook has selected the least intrusive measure, and (3) the selected measure actually helps achieve the goal and is not ineffective or counterproductive (A/74/486, para. 52). Facebook has a range of options available to deal with false and potentially harmful health-related content. The Board asked Facebook whether less intrusive means could have been deployed in this case. Facebook responded that for cases of imminent harm, its sole enforcement measure is removal, but for content assessed by external partners as false (but not linked to imminent harm), it deploys a range of enforcement options short of content removals. This response essentially re-stated how its Community Standards work but did not explain why removal was the least intrusive means of protecting public health. As noted in its Community Standard on False News, Facebook’s tools to address such content include the disruption of economic incentives for people and pages that promote misinformation; the reduction of the distribution of content rated false by independent fact checkers; and the ability to counter misinformation by providing users with additional context and information about a particular post, including through Facebook’s COVID-19 Information Center . The Board takes note of Facebook’s False News policy - not to imply that it should be used to judge opinions, but to note that Facebook has a range of enforcement options beyond content removals to deal with misinformation. Facebook did not explain how removal of content in this case constituted the least intrusive means of protecting public health because, among other things, it did not explain how the post related to imminent harm; it merely asserted imminent harm to justify removal. The removal of the post therefore failed the necessity test. 9. Oversight Board Decision 9.1 Content Decision The Oversight Board decides to overturn Facebook’s decision to remove the post in question. 9.2 Policy Advisory Statements I. Facebook should clarify its Community Standards with respect to health misinformation, particularly with regard to COVID-19. The Board recommends that Facebook set out a clear and accessible Community Standard on health misinformation, consolidating and clarifying existing rules in one place (including defining key terms such as misinformation). This rule-making should be accompanied with “detailed hypotheticals that illustrate the nuances of interpretation and application of [these] rules” to provide further clarity for users (See report A/HRC/38/35 , para. 46 (2018)). Facebook should conduct a human rights impact assessment with relevant stakeholders as part of its process of rule modification (UNGPs, Principles 18-19). II. 
Facebook should adopt less intrusive enforcement measures for policies on health misinformation. a.) To ensure enforcement measures on health misinformation represent the least intrusive means of protecting public health, the Board recommends that Facebook: b.) In cases where users post information about COVID-19 treatments that contradicts the specific advice of health authorities and where a potential for physical harm is identified but is not imminent, the Board strongly recommends that Facebook adopt a range of less intrusive measures. This could include labelling that alerts users to the disputed nature of the post’s content and provides links to the views of the World Health Organization and national health authorities. In certain situations it may be necessary to introduce additional friction to a post, for example by preventing interactions or sharing, to reduce organic and algorithmically driven amplification. Downranking content, to prevent visibility in other users’ newsfeeds, might also be considered. All enforcement measures, including labelling or other methods of introducing friction, should be clearly communicated to users, and subject to appeal. III. Facebook should increase transparency of its content moderation of health misinformation. The Board recommends that Facebook improve its transparency reporting on health misinformation content moderation, drawing upon public comments received: *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and must be agreed by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. Return to Case Decisions and Policy Advisory Opinions" fb-ylrv35wd,Armenian prisoners of war video,https://www.oversightboard.com/decision/fb-ylrv35wd/,"June 13, 2023",2023,,"Freedom of expression,Safety,War and conflict",Coordinating harm and publicizing crime,Upheld,"Armenia, Azerbaijan",The Oversight Board has upheld Meta’s decision to leave up a Facebook post that included a video depicting identifiable prisoners of war and add a “mark as disturbing” warning screen to the video.,53579,8238,"Upheld June 13, 2023 The Oversight Board has upheld Meta’s decision to leave up a Facebook post that included a video depicting identifiable prisoners of war and add a “mark as disturbing” warning screen to the video. Standard Topic Freedom of expression, Safety, War and conflict Community Standard Coordinating harm and publicizing crime Location Armenia, Azerbaijan Platform Facebook Armenian translation Public comments appendix Azerbaijani translation This decision is also available in Armenian and Azerbaijani. The Oversight Board has upheld Meta’s decision to leave up a Facebook post that included a video depicting identifiable prisoners of war and add a “mark as disturbing” warning screen to the video. 
The Board found that Meta correctly applied a newsworthiness allowance to the post, which would have otherwise been removed for violating its Coordinating Harm and Promoting Crime Community Standard. However, the Board recommends that Meta strengthen internal guidance around reviewing this type of content and develop a protocol for preserving and sharing evidence of human rights violations with the appropriate authorities. About the case In October 2022, a Facebook user posted a video on a page that identifies itself as documenting alleged war crimes committed by Azerbaijan against Armenians in the context of the Nagorno-Karabakh conflict. This conflict reignited in September 2020 and escalated into fighting in Armenia in September 2022, leaving thousands dead , and hundreds of people missing. The video begins with a user-inserted age warning that it is only suitable for people over the age of 18, and an English text, which reads “Stop Azerbaijani terror. The world must stop the aggressors.” The video appears to depict a scene where prisoners of war are being captured. It shows several people who appear to be Azerbaijani soldiers searching through rubble, with their faces digitally obscured with black squares. They find people in the rubble who are described in the caption as Armenian soldiers, whose faces are left unobscured and identifiable. Some appear to be injured, others appear to be dead. The video ends with an unseen person, potentially the person filming, continuously shouting curse words and using abusive language in Russian and Turkish at an injured soldier sitting on the ground. In the caption, which is in English and Turkish, the user states that the video depicts Azerbaijani soldiers torturing Armenian prisoners of war. The caption also highlights the July 2022 gas deal between the European Union and Azerbaijan to double gas imports from Azerbaijan by 2027. Key findings The Board finds that although the content in this case violates the Coordinating Harm and Promoting Crime Community Standard, Meta correctly applied the newsworthiness allowance to allow the content to remain on Facebook, and the contents of the video required a “mark as disturbing” warning screen under the Violent and Graphic Content Community Standard. These decisions were consistent with Meta’s values and human rights responsibilities. The case raises important questions about Meta’s approach to content moderation in conflict situations, where revealing identities and locations of prisoners of war could undermine their dignity or expose them to immediate harm. Concerns regarding human dignity are acute in situations where prisoners are shown in degrading or inhumane circumstances. At the same time, such exposure can inform public debate and raise awareness of potential mistreatment, including violations of international human rights and international humanitarian law. It can also build momentum for action that protects rights and ensures accountability. Meta is in a unique position to assist in the preservation of evidence that may be of relevance in prosecuting international crimes and supporting human rights litigation. The scale and speed at which imagery of prisoners of war can be shared via social media complicates the task of resolving these competing interests. 
Given the acute harms and risks facing prisoners of war, the Board finds that Meta’s default rule prohibiting the posting of information that could reveal the identities or locations of prisoners of war is consistent with the company’s human rights responsibilities under the UN Guiding Principles on Business and Human Rights (UNGPs, commentary to Principle 12). These responsibilities are heightened during armed conflict and must be informed by the rules of international humanitarian law. The Board agrees with Meta that the public interest value in keeping the content on the platform with a warning screen outweighed the risk to the safety and dignity of the prisoners of war. The Oversight Board’s decision The Oversight Board upholds Meta’s decision to leave the post on Facebook with a “mark as disturbing” warning screen. The Board also recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board upholds Meta’s decision to leave up a Facebook post that included a video depicting identifiable prisoners of war, and to add a “mark as disturbing” warning screen to the video. The Board found that Meta correctly applied the newsworthiness allowance to the content, which it would have otherwise removed for violating the Community Standard on Coordinating Harm and Promoting Crime for revealing the identity of prisoners of war in the context of an armed conflict. The “mark as disturbing” warning screen was required under the Community Standard on Violent and Graphic Content. These decisions were consistent with Meta’s values and human rights responsibilities. The case, which Meta referred to the Board, raises important questions about the company’s approach to content moderation in conflict situations, where the disclosure of the identities or locations of prisoners of war could expose them to immediate harm or affect their dignity. Concerns regarding human dignity may be acute in situations where prisoners are shown as defenseless or in humiliating circumstances, engaging their rights to life, security and privacy, and their right to be free from torture, inhuman and degrading treatment, as well as their families’ rights to privacy and security. At the same time, such exposure can also inform public debate and raise awareness of potential mistreatment, including violations of international human rights and humanitarian law. It can also build momentum for action that protects rights and ensures accountability. Meta is also in a unique position to assist in the preservation of evidence that may be of use in the prosecution of international crimes and in support of human rights litigation, whether the content is removed or left up. The scale and speed at which imagery of prisoners of war can be shared via social media complicates the task of resolving these competing interests. Given the acute harms and risks facing prisoners of war, the Board finds that Meta’s default rule prohibiting the posting of information that could reveal the identities or locations of prisoners of war is consistent with the company’s human rights responsibilities under the UN Guiding Principles on Business and Human Rights (UNGPs, commentary to Principle 12), which are heightened during armed conflict and must be informed by the rules of international humanitarian law. 
The Board agrees with Meta’s assessment that the public interest value in keeping the content on the platform with a warning screen outweighed the risk to the safety and dignity of the prisoners of war. Keeping the content on the platform was necessary to ensure the public’s right to know about severe wrongdoing, and in this specific context, to potentially prevent, mitigate and remedy severe human rights harms through public disclosure of wrongdoing. The Board recommends that Meta provides further guidance to reviewers and escalation teams to ensure that content revealing the identity or locations of prisoners of war can be reviewed on a case-by-case basis by those with the necessary expertise. Meta should develop more granular criteria to guide assessments of newsworthiness in these cases, which should be shared transparently. The Board calls on Meta to preserve and, where appropriate, share with competent authorities, information to assist in investigations and legal processes to remedy or prosecute grave violations of international criminal, human rights and humanitarian law. 2. Case description and background In October 2022, a Facebook user posted a video on a page that identifies itself as documenting alleged war crimes committed by Azerbaijan against Armenians in the context of the Nagorno-Karabakh conflict. This conflict reignited in September 2020 during the 44-day Second Nagorno-Karabakh war, and escalated into fighting in Armenia in September 2022, leaving thousands dead , and hundreds of people missing. In the caption, which is in English and Turkish, the user states that the video depicts Azerbaijani soldiers torturing Armenian prisoners of war. The caption also calls attention to the July 2022 gas deal between the European Union and Azerbaijan to double gas imports from Azerbaijan by 2027 to reduce European reliance on Russian gas. The video begins with a user-inserted age warning that it is only suitable for people over the age of 18, and an English text, which reads “Stop Azerbaijani terror. The world must stop the aggressors.” The video shows soldiers in the process of being detained as prisoners of war. It shows several people who appear to be Azerbaijani soldiers searching through rubble; their faces have been digitally obscured with black squares. They find people in the rubble who are described in the caption as Armenian soldiers; their faces have been left unobscured and are identifiable. Some appear to be injured, others appear to be dead. They pull one soldier from the rubble, who cries out in pain. His face is visible, and he appears injured. The video ends with an unseen person, potentially the person filming, continuously shouting curse words and using abusive language in Russian and Turkish at an injured soldier sitting on the ground, telling him to stand up. The individual attempts to do so. The page the content was posted to has fewer than 1,000 followers. This content has been viewed fewer than 100 times, and received fewer than 10 reactions. It has not been shared, or been reported as violating, by any user. Meta informs the Board it was monitoring the situation as the conflict was ongoing. Meta’s Global Operations team coordinated with Meta’s security team to conduct risk monitoring that involved monitoring of external signals (such as news and social media trends) related to the issue. 
During the monitoring, the security team found Twitter posts that showed a video of Azerbaijani soldiers torturing Armenian prisoners of war circulating online, and then identified the same video on Facebook in this case. The security team sent the post on Facebook for additional review, a process Meta describes as an “escalation.” When content is escalated, it is sent to additional teams within Meta for policy and safety review. In this case, Meta’s Global Operations team decided to escalate the content further to Meta’s policy teams for newsworthiness review. Upon review, within two days of the content being posted, Meta issued a newsworthiness allowance, which permits content on Meta’s platforms that might otherwise violate its policies if the public interest in the content outweighs the risk of harm. The newsworthiness allowance can only be applied by specialist teams within Meta after content has been escalated for additional layers of review. As part of escalated review by Meta’s policy teams, a “marked as disturbing” warning screen was applied under the Violent and Graphic Content policy, and the content was added to a Graphic Violence Media Matching Service (MMS) Bank that automatically places a warning screen over the video and identical videos identified on the platform. However, due to a combination of technical and human errors, this failed, and had to be completed manually about one month later. Meta referred this case to the Board, stating that it demonstrates the challenge required “to balance the value of raising awareness of these issues against the potential harm caused by revealing the identity of prisoners of war.” Meta asked the Board to consider whether Meta’s decision to allow the content represents an appropriate balancing of its values of “Safety,” “Dignity,” and “Voice,” and is consistent with international human rights principles. 3. Oversight Board authority and scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions: The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies: The Coordinating Harm and Promoting Crime Community Standard states under the heading “policy rationale” that “[i]n an effort to prevent and disrupt offline harm and copycat behaviour, we prohibit people from facilitating, organizing, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals.” It further states “we allow people to [...] 
draw attention to harmful or criminal activity that they may witness or experience as long as they do not advocate for or coordinate harm.” Under a rule added on May 4, 2022, beneath the heading “additional information and/or context to enforce,” Meta specifically prohibits “content that reveals the identity or location of a prisoner of war in the context of an armed conflict by sharing their name, identification number and/or imagery” and does not enumerate any specific exceptions to this rule. The Violent and Graphic Content Community Standard states under the heading “policy rationale” that it exists to “protect users from disturbing imagery.” The policy further specifies that “imagery that shows the violent death of a person or people by accident or murder” and “imagery that shows acts of torture committed against a person or people” is placed behind a warning screen so that “people are aware that the content may be disturbing,” and only adults aged 18 and over are able to view the content. The “do not post” section of the rules explains that users cannot post sadistic remarks towards imagery that requires a warning screen under the policy. The Board’s analysis was informed by Meta’s commitment to “Voice,” which the company describes as “paramount,” and its values of “Safety,” “Privacy” and “Dignity.” The newsworthiness allowance is a general policy exception that can potentially be applied across all policy areas within the Community Standards, including to the rule on prisoners of war. The newsworthiness allowance is explained under Meta’s “commitment to voice.” It allows otherwise violating content to be kept on the platform if the public interest value in doing so outweighs the risk of harm. According to Meta’s approach to newsworthy content, which is linked from the introduction to the Community Standards, such assessments are made only in “rare cases,” following escalation to the Content Policy Team. This team assesses whether the content in question surfaces an imminent threat to public health or safety, or gives voice to perspectives currently being debated as part of a political process. This assessment considers country-specific circumstances, including if the country is at war. While the identity of the speaker is a relevant consideration, the allowance is not limited to content that is posted by news outlets. III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. Significantly, the UNGPs impose a heightened responsibility on businesses operating in a conflict setting (“Business, human rights and conflict-affected regions: towards heightened action,” A/75/212). The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following international standards, including in the field of international humanitarian law (also known as ‘the law of armed conflict’): 5. User submissions Following Meta’s referral and the Board’s decision to accept the case, the user was sent a message notifying them of the Board’s review and providing them with an opportunity to submit a statement to the Board. The user did not submit a statement. 6. 
Meta’s submissions Meta referred this case to the Board “because it highlights the challenges [Meta] face[s] when determining whether the value of content’s newsworthiness outweighs the risk of harm in the context of war and violence.” While the content violated Meta’s Coordinating Harm and Promoting Crime policy, the newsworthiness allowance was applied “in order to raise awareness of the violence against prisoners of war” in the conflict between Azerbaijan and Armenia. Meta said the case was “significant” due to its relationship to an ongoing military conflict, and “difficult” because it requires “balancing the value of raising awareness of these issues against the potential harm caused by revealing the identity of prisoners of war.” Meta explains that it prohibits “outing” prisoners of war, but its escalation teams require “additional context” to enforce that rule. Meta added this rule to reflect the fact that they take the safety and dignity of prisoners of war seriously, given the risks of their platforms being used to expose them to public curiosity, noting both freedom of expression principles from Article 19 of the ICCPR, and the guidance of the Geneva Conventions. The Board understands this to mean that for such content to be removed, in-house investigation teams would need to flag it for review to Meta’s internal teams, or at-scale reviewers would need to escalate the content to those teams. These teams are able to consider contextual factors outside of the content to decide an enforcement action. Content would only be removed through automation if it is identical or near identical to content Meta’s internal teams have already assessed as violating and added to media matching banks. In response to the Board’s questions, Meta confirmed that the additional context it considered to identify the content as violating in this case included (i) the uniforms confirmed the identifiable prisoners were Armenian soldiers, and (ii) knowledge of the ongoing conflict between Azerbaijan and Armenia. “Prisoner of war” is defined in internal guidance for reviewers called the Known Questions as “a member of the armed forces who has been captured or fallen into the hands of an opposing power during or immediately after an armed conflict.” In the Internal Implementation Standards accompanying this rule, Meta explains that content which exposes a prisoner of war’s identity or location by sharing either the first name, last name, their identification number, and/or imagery identifying their face, even if it is shared with “a condemning or raising awareness context,” should be removed. In response to the Board’s questions, Meta noted that the Crisis Policy Protocol, was not used during the second Nagorno-Karabakh war or the ongoing border clashes between Azerbaijan and Armenia. The Crisis Policy Protocol provides Meta review teams with additional opportunities to apply escalation-only policies, and was created in response to the Board’s recommendation in the “former President Trump’s suspension” case. Meta explained that as there is a permanent policy to remove content outing prisoners of war, the standard process of the internal escalations team removing this content would not have changed had the protocol been activated. 
On applying the newsworthiness allowance, Meta says it conducted a balancing test that weighs the public interest against the risk of harm, considering several factors: (i) whether the content surfaces imminent threats to public health or safety; (ii) whether the content gives voice to perspectives currently being debated as part of a political process; (iii) the nature of the speech, including whether it relates to governance or politics; (iv) the political structure of the country, including whether it has a free press; and (v) other country-specific circumstances (for example, whether there is an election underway, or the country is at war). Meta recognized the graphic nature of this video and the risks that prisoners of war may face when they are identified on social media. In response to the Board’s questions, Meta acknowledged that family members and friends may be the target of ostracism, suspicion, or even violence when prisoners of war are identified or exposed. Meta also noted that this kind of imagery can have a variety of effects on civilians and people in the military, including reinforcing antagonism towards the other side and intensifying prejudice. Meta noted that a prisoner of war who is recorded criticizing their own nation or their military’s conduct may be at higher risk of ostracism and reprisal upon their return home than prisoners who are shown being mistreated by the enemy. In the context of this conflict, Meta did not have evidence that videos of this kind were producing these negative impacts but did see evidence that international organizations were using such videos to increase pressure on Azerbaijan to end mistreatment of prisoners of war. Given the potential public interest value of this content in raising awareness and serving as evidence of possible war crimes, Meta concluded that, given its overall newsworthiness, removing this content would not be a proportionate action. Meta also added that content that identifies witnesses, informants, hostages, or other detained people may be removed under the Coordinating Harm and Promoting Crime Community Standard, if public knowledge of the detention may increase risks to their safety. Content identifying or outing individuals may also be removed under the Privacy Violations policy when personally identifiable information is shared on Meta platforms. Finally, the Violent and Graphic Content policy would have applied even if the victims were not prisoners of war, as under that policy, Meta considers both the dignity and safety of the victims of violence and the fact that people may not want to see this content. In response to the Board’s questions, Meta also provided examples of how it applies the newsworthiness allowance to content identifying prisoners of war more broadly. For example, Meta informed the Board that it generally removes content that reveals the identity of prisoners of war in Ethiopia but makes a case-by-case newsworthy determination for some content. Factors considered in applying the newsworthiness allowance in previous cases include whether the content (i) reports on the capture of senior combatants, such as high-ranking officers or leaders of armed groups; (ii) reveals the identity of a prisoner when it is potentially in their interest to do so (e.g. when they have been reported missing); or (iii) raises awareness about potential human rights abuses. 
Meta also stated it had granted newsworthiness allowances to “leave some content up that shows Russian [prisoners of war] in Ukraine.” Meta also noted that it assesses content at its face value, unless authenticity is in question, or where there are indicators of manipulated media or where they have context that the information is false. In this case, Meta saw no indications that its misinformation policies were engaged. The Board asked Meta 16 questions. Questions related to application of the newsworthiness allowance; factors in assessing context in Meta’s decision; and the application of the Crisis Policy Protocol. Meta answered the 16 questions. 7. Public comments The Oversight Board received 39 public comments relevant to this case. One comment was submitted from Asia Pacific and Oceania, three from Central and South Asia, 23 from Europe, four from the Middle East and North Africa and eight from the United States and Canada. The submissions covered the following themes: background on the Nagorno-Karabakh conflict and recent escalations; application of international humanitarian law to the moderation of content revealing the identity or location of prisoners of war; concern about content on social media showing the faces of the prisoners of war; potential adverse and positive impacts that can result from leaving up or removing content depicting prisoners of war; independent mechanisms preserving potential evidence of international crimes; cooperation between social media companies, civil society organizations and international justice mechanisms; concern over verification of video content; operational/technical suggestions on how to keep the content on social media platforms and protect the safety and dignity of prisoners of war; potential of such content to assist in preventing further atrocities, and the public’s right to know about mistreatment of prisoners of war. To read public comments submitted for this case, please click here . In April 2023, as part of ongoing stakeholder engagement, the Board consulted representatives of advocacy organizations, academics, inter-governmental organizations and other experts on issues relating to the moderation of content depicting prisoners of war. A roundtable was held under the Chatham House Rule. This focused on motivations, potential risk factors and advantages to posting content depicting identifiable prisoners of war and ways to balance the benefits of raising awareness of violence against prisoners of war against the potential harm caused by revealing their identity. The insights provided at this meeting were valuable, and the Board extends its appreciation to all participants. 8. Oversight Board analysis The Board analyzed Meta's content policies, human rights responsibilities and values to determine whether this content should be kept up with a warning screen. The Board also assessed the implications of this case for Meta’s broader approach to content governance, particularly in conflict and crisis situations. The Board selected this case as an opportunity to assess Meta’s policies and practices in moderating content that depicts identifiable prisoners of war. Additionally, the case allows the Board to examine Meta’s compliance with its human rights responsibilities in crisis and conflict situations generally. 
8.1 Compliance with Meta’s content policies The Board finds that while the content violates the Coordinating Harm and Promoting Crime Community Standard, Meta correctly applied the newsworthiness allowance to allow the content to remain on Facebook, and the contents of the video required a “mark as disturbing” warning screen under the Violent and Graphic Content Community Standard. I. Content rules Coordinating Harm and Promoting Crime The Board finds that the content in this case exposed the identity of prisoners of war, through imagery in the video that showed the faces of detained Armenian soldiers. It therefore clearly violated the rule prohibiting such content in the Coordinating Harm and Promoting Crime Community Standard. Acknowledging that this rule requires additional information and/or context to enforce, the Board agrees with Meta that the soldiers’ uniforms indicated that the individuals with their faces visible were members of the Armenian armed forces. The context of the war indicated that these soldiers were being detained by the opposing Azerbaijani armed forces, meeting the definition of “prisoner of war” contained in Meta’s internal guidance to reviewers. This information was sufficient to find that the content was contrary to the rule prohibiting content revealing the identity of prisoners of war through the sharing of imagery. Newsworthiness allowance The Board finds that the public interest in the video outweighed the potential risks of harm, and that it was appropriate for escalation teams with access to expertise and additional contextual information, including cross-platform trends, to apply the newsworthiness allowance to keep the content on the platform. While it is not presumed that anyone’s speech is inherently newsworthy, the assessment of the newsworthiness allowance accounts for various factors, including the country-specific circumstances and the nature of the speech and the speaker. In such cases, Meta should conduct a thorough review that weighs the public interest, including the public’s right to know about serious wrongdoing and the potential to prevent, mitigate and remedy severe human rights harms through public disclosure of wrongdoing, against the risks of harm to privacy, dignity, security and voice, pursuant to the international human rights standards, as reflected in Meta’s Corporate Human Rights Policy. The application of the newsworthiness allowance in a situation as complex and fast-moving as an armed conflict requires a case-by-case contextual assessment to mitigate risks and secure the public’s access to important information. The factors Meta identified in its assessment, detailed in Section 6, were all pertinent to assessing the potential for serious harm resulting from the display of the video against adverse impacts that could result from suppressing this kind of content. The absence of evidence of videos like this being used in this particular conflict to further mistreatment of detainees, taken together with clear trends of similar content being primarily available through social media and highly relevant to campaigns and legal proceedings for accountability of serious crimes, militated in favour of keeping the content on the platform. The Board emphasizes that it is important that Meta has systems in place to gain the kind of highly context specific insights required to enable a rapid case-by-case assessment of potential harms, taking into account Meta’s human rights responsibilities. 
Violent and Graphic Content Following its decision that the content should be left up under the newsworthiness allowance, the Board finds that the violent and graphic nature of the video justified the imposition of a “mark as disturbing” warning screen, which serves a dual function of warning users of the graphic nature of the content and limiting the ability to view the content to adults over the age of 18. Although Meta did not specify the policy line it relied upon to impose this screen, the Board finds two rules were engaged. First, the video shows what appear to be dead bodies of Armenian soldiers lying in the rubble. While the internal guidelines to moderators exclude violence committed by one or more uniformed personnel performing a police function, in which case a “mark as sensitive” warning screen would be applied, the internal guidelines further define “police function” as “maintaining public order by performing crowd control and/or detaining people” and clarify that “war does not qualify as a police function.” As the content concerns an armed conflict situation, the Board finds it was consistent with Meta’s policies to add a “mark as disturbing” warning screen. The Board notes the video further engaged a second policy line as it showed acts meeting Meta’s definition of torture against people. For the purpose of Meta’s policy enforcement, the internal guidelines to moderators define such “torture” imagery as (i) imagery of a person in a dominated or forcibly restrained position and any of the following: (a) there is an armament pointed at the person; (b) there is evidence of injury on the person; or (c) the person is being subjected to violence; or (ii) imagery of a person subjected to humiliating acts. Meta further defines “dominated position” as “any position including where the victim is kneeling, cornered, or unable to defend themselves” and “forcibly restrained” as “being physically tied, bound, buried alive or otherwise held against one’s will.” Noting that Meta’s definition of “torture” is much broader than the term as understood under international law, the Board finds that it was consistent with Meta’s rules to apply the “mark as disturbing” screen and accompanying age-gating restrictions. In line with the internal guidance, there are sufficient indicators in the content that individuals are being held against their will as detainees, and are unable to defend themselves. Moreover, several detainees seemed to be injured, while others appeared to be deceased. 8.2 Compliance with Meta’s human rights responsibilities The Board finds that Meta’s decision to leave up the content is consistent with Meta’s human rights responsibilities, which are heightened in a situation of armed conflict. Freedom of expression (Article 19 ICCPR) Article 19, para. 2 of the ICCPR provides for broad protection of expression, including the right to access information. These protections remain engaged during armed conflicts, and should continue to inform Meta’s human rights responsibilities, alongside the mutually reinforcing and complementary rules of international humanitarian law that apply during such conflicts, including to protect prisoners of war (General Comment 31, Human Rights Committee, 2004, para. 11; Commentary to UNGPs, Principle 12; see also UN Special Rapporteur’s report on Disinformation and freedom of opinion and expression during armed conflicts, Report A/77/288, paras. 
33-35 (2022); and OHCHR report on International legal protection of human rights in armed conflict (2011) at p. 59). International humanitarian law provides specific guarantees for the treatment of prisoners of war, particularly prohibiting acts of violence or intimidation against prisoners of war as well as exposing them to insults and public curiosity (Article 13, para. 2 of the Geneva Convention (III)). In a situation of armed conflict, the Board’s freedom of expression analysis is informed by the more precise rules in international humanitarian law. The ICRC commentary to Article 13 explains that “being exposed to ‘public curiosity’ as a prisoner of war, even when such exposure is not accompanied by insulting remarks or actions, is humiliating in itself and therefore specifically prohibited [...] irrespective of which public communication channel is used, including the internet” and provides narrow exceptions to this prohibition that are discussed below (ICRC Commentary, at 1624). The UN Special Rapporteur has stated that “[d]uring armed conflict, people are at their most vulnerable and in the greatest need of accurate, trustworthy information to ensure their own safety and well-being. Yet, it is precisely in those situations that their freedom of opinion and expression, which includes ‘the freedom to seek, receive and impart information and ideas of all kinds,’ is most constrained by the circumstances of war and the actions of the parties to the conflict and other actors to manipulate and restrict information for political, military and strategic objectives” (Report A/77/288, para. 1). The importance of access to information, including for victims of human rights violations, has also been emphasized by the mandate holder (Report A/68/362, para. 92 (2013)). Some of the most important journalism in conflict situations has included sharing information and imagery of prisoners of war. Eyewitness accounts of detainees following the liberation of Nazi concentration camps in 1945, and from the Omarska camp in Bosnia in 1992, were crucial in galvanizing global opinion regarding the horrors of these wars and the atrocities committed. Similarly, widely circulated images of detainee abuse at Abu Ghraib prison in Iraq in 2004 led to public condemnation and several prosecutions for these abuses. Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities, which include the responsibility to respect freedom of expression. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (Report A/74/486 , para. 41). I. Legality (clarity and accessibility of the rules) The principle of legality under international human rights law requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, para. 25).
Rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (Ibid). Applied to rules that govern online speech, the UN Special Rapporteur on freedom of expression has said they should be clear and specific ( A/HRC/38/35 , para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance on their enforcement. The Board finds that Meta’s rule prohibiting content that reveals the identity of prisoners of war is sufficiently clear to govern the content in this case, as is the potential for Meta’s Content Policy Team to exceptionally leave up content that would otherwise violate this rule where the public interest requires it. The policy lines for applying a “mark as disturbing” screen to graphic and violent content are sufficiently clear to govern the content in this case. At the same time, Meta’s public explanations of the newsworthiness allowance could provide further detail on how it may apply to content revealing the identity of prisoners of war. Of the three examples of content where a newsworthiness allowance was applied, illustrating Meta’s approach to newsworthy content, none concern the Coordinating Harm and Promoting Crime Community Standard. While one example relates to a conflict situation, the criteria or factors particular to conflict situations could be set out more comprehensively in the public explanations of the underlying policy rules. In response to the Board’s prior recommendations, Meta has already provided more clarity around the newsworthiness allowance, including by adding information to its public explanation of the newsworthiness allowance as to when it will apply a warning screen (“Sudan graphic video,” recommendation no. 2), and by linking the public explanation to the landing page of the Community Standards and adding examples to the newsworthiness page, including about protests (“Colombia protests,” recommendation no. 2). The Board stresses the importance of enhanced transparency and guidance to users, especially in crisis and conflict situations. II. Legitimate aim a. The Community Standard prohibiting depictions of identifiable prisoners of war Respecting the rights of others, including the right to life, privacy, and protection from torture or cruel, inhuman, or degrading treatment, is a legitimate aim for restrictions on the right to freedom of expression (Article 19, para. 3, ICCPR). In this case, the assessment of the legitimacy of the aim underlying the prohibition on depicting identifiable prisoners of war is informed by the situation of armed conflict and the more specific rules of international humanitarian law, which call for the protection of the life, privacy and dignity of prisoners of war when content exposes them to “insult” and “public curiosity” (Article 13, para. 2 of the Geneva Convention (III)). In the context of an armed conflict, Article 13 of the Geneva Convention III provides protection for the humane treatment of prisoners of war, and Meta’s general rule, coupled with the availability of the newsworthiness allowance, supports that function.
In addition to potential offline violence, the sharing of the images themselves can be humiliating and violate the detainees’ right to privacy, especially as detained individuals cannot meaningfully consent to such images being taken or shared. Having those images shared can revictimize the detainees and shows how social media can be abused to directly violate the laws of war. This applies not only to the depicted prisoners of war but also serves a protective function for prisoners of war more broadly, as well as for family members and others who could be targeted. The protection of these rights relates closely to Meta’s values of privacy, safety and dignity. The Board finds that Meta’s Community Standard prohibiting depictions of identifiable prisoners of war is legitimate. b. Meta’s rules on warning screens The Board affirmed that Meta’s rules on violent and graphic content pursue legitimate aims in the “Sudan graphic video” case, and in several cases since. In the context of this case and for other content like it, the rules providing for a “mark as disturbing” warning screen seek to empower users with more choices over what they see online. III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function” and “they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). The Board’s analysis of necessity and proportionality is informed by the more specific rules in international humanitarian law. According to the ICRC, Geneva Convention III, Article 13, para. 2 requires a “reasonable balance” to be struck between the benefits of public disclosure of materials depicting prisoners of war, given the high value of such materials when used as evidence to prosecute war crimes, promote accountability, and raise public awareness of abuse, and the potential humiliation and even physical harm that may be caused to the persons in the shared materials. Further, the ICRC notes in its guidance for media that such materials may be exceptionally disclosed if there is a “compelling public interest” in revealing the identity of the prisoner or if it is in the prisoner’s “vital interest” to do so ( ICRC Commentary on Article 13 , at 1627). Meta’s default rule is consistent with goals embodied in international humanitarian law. Determining whether a person depicted is an identifiable prisoner of war in the context of an armed conflict requires expert consideration. Therefore, the rule requiring “additional context to enforce,” and thus requiring escalation to internal teams before it can be enforced, is necessary. Where content reveals the identity or location of prisoners of war, removal will generally be proportionate considering the severity of harms that can result from such content. Many public comments shared examples of such harms. Concerns were raised about the use of content depicting prisoners of war for propaganda purposes (see e.g., PC-11137 from Digital Rights Foundation, and PC-11144 from Igor Mirzakhanyan), especially when those images are disseminated by the detaining power. Prisoners of war can face many potential harms when their identities are revealed (see e.g., PC-11096 from Article 19). These can include the humiliation of the prisoner and the ostracization of them or their family on release.
Severe harms may still result even where the user shares the content with well-meaning intent to raise awareness or condemn mistreatment. In this case, the faces of alleged prisoners of war are visible, and they are depicted as they are being captured. This process is accompanied with continuous shouting of abusive language and curse words directed at the prisoners, some of whom seem to be injured, while others appear to be deceased. However, in this case the potential for newsworthiness was correctly identified, leading to escalation to the Content Policy Team. It is important to ensure that escalations of this kind reach teams with the expertise necessary for assessing complex human rights implications, where the potential harm is imminent. The seriousness of these risks in the context of an armed conflict distinguishes this case from prior decisions where the Board has raised concerns about the scalability of the newsworthiness allowance (see e.g., “Sudan graphic video,” or “India sexual harassment video”). In this case, the video documents alleged violations of international humanitarian law. While the video may have been made by the detaining power, it appears that the user’s post was aimed at raising awareness of potential violations. This is important to the public’s right to information around the fact of the detainees’ capture, proof of them being alive and physical conditions of detention as well as shedding light on potential wrongdoing. It is correct that Meta’s newsworthiness allowance can apply to content that is not shared by professional media. Nevertheless, guidance available to journalists on responsible reporting in conflict situations indicates a presumption against disclosure of images identifying prisoners of war, and that even where there is a compelling public interest, efforts should still be taken to safeguard detainees’ dignity. Social media companies preserving content depicting grave human rights violations or atrocity crimes, such as those crimes specified under the Rome Statute of the International Criminal Court, including against prisoners of war, is important. Public comments highlighted the need for greater clarity from Meta on its practices in this area, especially for cooperation with international mechanisms (see: PC-11128 from Trial International, PC-11136 from Institute for International Law of Peace and Armed Conflict, Ruhr University, Bochum, and PC-11140 from Syria Justice and Accountability Centre). They underlined that keeping such content up is important to identify not only the perpetrators, but also the victims (see e.g., PC-11133 from Center for International and Comparative Law, PC-11139 from Digital Security Lab Ukraine, and PC-11145 from Protection of Rights Without Borders NGO). In the Board’s view, this content, properly assessed in its particular context, not only informed the public but contributed to the pressure on the detaining power in real time to protect the rights of the detainees. In accordance with the Geneva Convention III, the ICRC facilitates the exchange of correspondence between the prisoners of war and their family members to “prevent missing cases and maintain family links without compromising the dignity or safety of the prisoners of war.” The decision to apply a warning screen to the content was necessary and proportionate, showing respect for the rights of the prisoners and their families, who could experience mental anguish as a result of being involuntarily exposed to such content. 
As in the Board’s “video after Nigeria church attack” decision, the content in this case included a video that showed deceased bodies and injured people at close range, with their faces visible, and audio of prisoners of war experiencing severe discomfort while being verbally abused by their captors. The content contrasts with that in the “Russian poem” case, which also concerned a conflict situation, where the Board decided the content should not have been placed behind a “mark as disturbing” screen. The content in that case was a still image of a body lying on the ground at long range, where the face of the victim was not visible. The Board concluded that “the photographic image lacked clear visual indicators of violence, as described in Meta’s internal guidelines to content moderators, which would justify the use of the warning screen.” In this case, while the warning screen would likely have reduced the reach of the content and therefore its impact on public discourse, providing users with the choice of whether to see disturbing content is a proportionate measure. Many public comments, including from people in regions experiencing conflict, favoured the application of warning screens given the graphic nature of the video and the high public interest in keeping the content (see e.g., PC-11139 from Digital Security Lab Ukraine, PC-11144 from Igor Mirzakhanyan and PC-11145 from Protection of Rights without Borders NGO). 9. Oversight Board Decision The Oversight Board upholds Meta’s decision to leave up the content with a “mark as disturbing” warning screen. 10. Recommendations A. Content Policy 1. In line with recommendation no. 14 in the “former President Trump’s suspension” case, Meta should commit to preserving and, where appropriate, sharing with competent authorities evidence of atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court, by updating its internal policies to make clear the protocols it has in place in this regard. The protocol should be attentive to conflict situations. It should explain the criteria, process and safeguards for (1) initiating and terminating preservation, including data retention periods, (2) accepting requests for preservation, and (3) sharing data with competent authorities, including international accountability mechanisms and courts. There must be safeguards for users’ rights to due process and privacy in line with international standards and applicable data protection laws. Civil society, academia, and other experts in the field should be part of developing this protocol. The Board will consider this recommendation implemented when Meta shares its updated internal documents with the Board. B. Enforcement 2. To ensure consistent enforcement, Meta should update the Internal Implementation Standards to provide more specific guidance on applying the newsworthiness allowance to content that identifies or reveals the location of prisoners of war, consistent with the factors outlined in Section 8 of this decision, to guide both the escalation and assessment of this content for newsworthiness. The Board will consider this recommendation implemented when Meta incorporates this revision and shares the updated guidance with the Board. C. Transparency 3.
To provide greater clarity to users, Meta should add to its explanation of the newsworthiness allowance in the Transparency Center an example of content that revealed the identity or location of prisoners of war but was left up due to the public interest. The Board will consider this recommendation implemented when Meta updates its newsworthiness page with an example addressing prisoners of war. 4. Following the development of the protocol on evidence preservation related to atrocity crimes and grave human rights violations, Meta should publicly share this protocol in the Transparency Center. This should include the criteria for initiating and terminating preservation, data retention periods, as well as the process and safeguards for accepting requests for preservation and for sharing data with competent authorities, including international accountability mechanisms and courts. There must be safeguards for users’ rights to due process and privacy in line with international standards and applicable data protection laws. The Board will consider this recommendation implemented when Meta publicly shares this protocol. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Return to Case Decisions and Policy Advisory Opinions" fb-z1pgt5fk,Speech Against Femicide,https://www.oversightboard.com/decision/fb-z1pgt5fk/,"August 1, 2024",2024,,"TopicProtests, Sex and gender equality, ViolenceCommunity StandardViolence and incitement",Violence and incitement,Overturned,Mexico,"A user appealed Meta’s decision to remove a drawn image of Mexican women’s rights activist Yesenia Zamudio, which includes a quote from one of her speeches in which she sought justice for her murdered daughter and other female victims of violence.",5204,814,"Overturned August 1, 2024 A user appealed Meta’s decision to remove a drawn image of Mexican women’s rights activist Yesenia Zamudio, which includes a quote from one of her speeches in which she sought justice for her murdered daughter and other female victims of violence. Summary Topic Protests, Sex and gender equality, Violence Community Standard Violence and incitement Location Mexico Platform Facebook Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company's attention and include information about Meta's acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta's decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. 
Summary A user appealed Meta’s decision to remove a drawn image of Mexican women’s rights activist Yesenia Zamudio, which includes a quote from one of her speeches in which she sought justice for her murdered daughter and other female victims of violence. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. About the Case In March 2024, a Facebook user posted a drawn image of Yesenia Zamudio , a Mexican mother whose daughter was killed in 2016, in what authorities believe to be murder. The quote by Zamudio in the post’s image says, in Spanish: ""Whoever wants to break something, break it, whoever wants to burn something, burn it, and if you don't, then don't interfere."" The image is accompanied by a caption in which the user is praising Yesenia Zamudio's fight for justice. The quote by Zamudio and videos of her speeches have been widely shared on social media. Similar stories were behind demonstrations against femicide , murders committed against women, in Mexico. Meta initially removed the user’s post from Facebook under its Violence and Incitement Community Standard , which prohibits threats of violence, defined as “statements or visuals representing an intention, aspiration, or call for violence against a target.” In their appeal to the Board, the user stated that the phrase was used by a woman who fought for justice for her daughter, and that the image was used to show who the woman was. After the Board brought this case to Meta’s attention, the company determined that the statement by Yesenia Zamudio is “vague and non-specific and does not meet the standard for removal under our Violence and Incitement or any other policy.” The company then concluded that removal of the image was incorrect and it restored the content to Facebook. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case is an example of over-enforcement of Meta’s Violence and Incitement policy that suppresses users’ freedom of expression. The company should prioritize reducing enforcement errors like the one in this case given its impact on the users’ ability to protest against femicide, among other concerning social or political events. The Board has issued several recommendations regarding Meta’s Violence and Incitement policy. These include a recommendation to “err on the side of issuing scaled allowances where (i) this is not likely to lead to violence; (ii) when potentially violating content is used in protest contexts; and (iii) where public interest is high. Meta should ensure that their internal process to identify and review content trends around protests that may require context-specific guidance to mitigate harm to freedom of expression, such as allowances or exceptions, are effective,” ( Iran Protest Slogan , recommendation no. 2). This is a recommendation that Meta reported as implemented without publishing information to demonstrate this. 
The Board has also recommended Meta to add to the public-facing language of its Violence and Incitement Community Standard that the company interprets the policy to allow content containing statements with “neutral reference to a potential outcome of an action or an advisory warning,” and content that “condemns or raises awareness of violent threats,” ( Russian Poem , recommendation no. 1). This is a recommendation for which Meta demonstrated partial implementation through published information. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. Return to Case Decisions and Policy Advisory Opinions" fb-zt6ajs4x,Iran protest slogan,https://www.oversightboard.com/decision/fb-zt6ajs4x/,"January 9, 2023",2023,January,"TopicGovernments, Protests, Sex and gender equalityCommunity StandardViolence and incitement","Policies and TopicsTopicGovernments, Protests, Sex and gender equalityCommunity StandardViolence and incitement",Overturned,Iran,"The Oversight Board has overturned Meta's original decision to remove a Facebook post protesting the Iranian government, containing the slogan ""marg bar Khamenei.""",68181,10561,"Overturned January 9, 2023 The Oversight Board has overturned Meta's original decision to remove a Facebook post protesting the Iranian government, containing the slogan ""marg bar Khamenei."" Standard Topic Governments, Protests, Sex and gender equality Community Standard Violence and incitement Location Iran Platform Facebook Iran protest slogan public comments The Oversight Board has overturned Meta’s original decision to remove a Facebook post protesting the Iranian government, which contains the slogan “marg bar... Khamenei.” This literally translates as “death to Khamenei” but is often used as political rhetoric to mean “down with Khamenei.” The Board has made recommendations to better protect political speech in critical situations, such as that in Iran, where historic, widespread, protests are being violently suppressed. This includes permitting the general use of “marg bar Khamenei” during protests in Iran. About the case In July 2022, a Facebook user posted in a group that describes itself as supporting freedom for Iran. The post contains a cartoon of Iran’s Supreme Leader, Ayatollah Khamenei, in which his beard forms a fist grasping a chained, blindfolded woman wearing a hijab. A caption below in Farsi states “marg bar” the ""anti-women Islamic government"" and “marg bar” its ""filthy leader Khamenei."" The literal translation of “marg bar,” is “death to.” However, it is also used rhetorically to mean “down with.” The slogan “marg bar Khamenei” has been used frequently during protests in Iran over the past five years, including the 2022 protests. The content in this case was posted days before Iran’s “National Day of Hijab and Chastity,” around which critics frequently organize protests against the government, including against Iran’s compulsory hijab laws. In September 2022, Jina Mahsa Amini died in police custody in Iran, following her arrest for “improper hijab.” Her death sparked widespread protests which have been violently suppressed by the state. This situation was ongoing as the Board deliberated this case. After the post was reported by a user, a moderator found that it violated Meta’s Violence and Incitement Community Standard, removed it, and applied a “strike” and two “feature-limits” to its author’s account. 
The feature-limits imposed restrictions on creating content and engaging with groups for seven and 30 days respectively. The post’s author appealed to Meta, but the company’s automated systems closed the case without review. They then appealed to the Board. After the Board selected the case, Meta reviewed its decision. It maintained that the content violated the Violence and Incitement Community Standard but applied a newsworthiness allowance and restored the post. A newsworthiness allowance permits otherwise violating content if the public interest outweighs the risk of harm. Key findings The Board finds that removing the post does not align with Meta’s Community Standards, its values, or its human rights responsibilities. The Board finds that this post did not violate the Violence and Incitement Community Standard, which prohibits threats that could lead to death or high-severity violence. Applying a newsworthiness allowance was therefore unnecessary. In the context of the post, and the broader social, political and linguistic situation in Iran, “marg bar Khamenei” should be understood as “down with.” It is a rhetorical, political slogan, not a credible threat. The Board emphasizes the importance of context in assessing slogans calling for “death to,” and finds that it is impossible to adopt a universal rule on their use. For example, “marg bar Salman Rushdie,” cannot be equated with “marg bar Khamenei,” given the fatwa against Rushdie, and recent attempts on his life. Nor would “death to” statements used during events such as the January 6 riots in Washington D.C be comparable, as politicians were clearly at risk and “death to” statements are not generally used as political rhetoric in English, as they are in other languages. The centrality of language and context should be reflected in Meta’s policies and guidance for moderators. This is particularly important when assessing threats to heads of state, who are legitimately subject to criticism and opposition. In the Iranian context, the Board finds that Meta must do more to respect freedom of expression, and permit the use of rhetorical threats. The Iranian government systematically represses freedom of expression and digital spaces have become a key forum for dissent. In such situations, it is vital that Meta supports users’ voice. Given the “National Day of Hijab and Chastity” was approaching, Meta should have anticipated issues around the over-removal of Iranian protest content, and prepared an adequate response. For example, by instructing “at-scale” reviewers not to remove content containing the “marg bar Khamenei” slogan. As this case shows, its failure to do so led to the silencing of political speech aimed at protecting women’s rights, including through feature-limits, which can shut people out of social movements and political debate. Public comments submitted to the Board indicate that “marg bar Khamenei” has been used widely during the recent protests in Iran. This is supported by independent research commissioned by the Board. Many of these posts would have been removed without benefitting from the newsworthiness allowance, which Meta rarely applies (in the year to June 2022 it was used just 68 times globally). The Board is concerned that Meta is automatically closing appeals, and that the system it uses to do so fails to identify important cases. It recommends the company takes action to improve its respect for freedom of expression during protests, and in other critical political contexts. 
The Oversight Board's decision The Oversight Board overturns Meta's original decision to remove the post. The Board also recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decision to remove a Facebook post protesting the Iranian government’s human rights record and its laws on mandatory hijab (head covering). The post contains a caricature of the country’s Supreme Leader, Ayatollah Ali Khamenei and the phrase, “marg bar [...] Khamenei,” a protest chant that literally means “death to ... Khamenei,” but has frequently been used as a form of political expression in Iran which can also be understood as “down with [...] Khamenei.” The Board found that the content did not violate the Violence and Incitement policy. Nationwide protests in Iran, triggered by the killing of Jina Mahsa Amini were being violently suppressed by the Iranian government at the time of the Board’s deliberation. Meta reversed its decision after it was informed that the Board had selected this case. The company maintained the content violated the Violence and Incitement Community Standard, but restored the content using the “newsworthiness allowance.” The case raises important concerns about Meta’s Violence and Incitement policy and its ""newsworthiness allowance.” It also raises concerns about how Meta’s policies may impact freedom of expression and women's rights in Iran and elsewhere. The Board finds Meta did not meet its human rights responsibilities in this case, in particular to prevent errors adversely impacting freedom of expression in protest contexts. The Board recommends that Meta review its Violence and Incitement Community Standard, its internal implementation guidelines for moderators, and its approach to newsworthy content, in order to respect freedom of expression in the context of protests. 2. Case description and background In mid-July 2022, a person posted in a public Facebook group that describes itself as supporting freedom for Iran, criticizing the Iranian government and Iran’s Supreme Leader, Ayatollah Khamenei, particularly their treatment of women, including Iran’s strict compulsory hijab laws. The post was made days before the “National Day of Hijab and Chastity” in Iran. The government intends this day to be a celebration of mandatory hijab, but critics have used it to protest against mandatory hijab and broader government abuses in Iran, including online. The post contains a cartoon of Ayatollah Khamenei, in which his beard forms a fist grasping a woman wearing a hijab. The woman is blindfolded with a chain around her ankles. A text bubble next to the caricature says that being a woman is forbidden. A caption below in Farsi reads, “marg bar hukumat-e zed-e zan-e eslami va rahbar-e kasifesh Khamenei.” The term “marg bar” translates literally as “death to.” The caption literally calls for “death to” the “anti-women Islamic government” and its “filthy leader Khamenei.” However, in some contexts, “marg bar” is understood to have a more rhetorical meaning equivalent to “down with.” The post also calls the Islamic Republic “the worst dictatorship in history,” in part due to restrictions on what women can wear. It also calls on women in Iran not to collaborate in the oppression of women. On the day the content was posted, another person on Facebook reported it as hate speech. 
One of Meta’s at-scale reviewers assessed the post as violating the Violence and Incitement Community Standard, which prohibits threats that could lead to death or high-severity violence against others. Meta removed the content, resulting in the author of the post receiving a strike, which led to the automatic imposition of 30-day and seven-day account restrictions known as “feature-limits.” While feature-limits vary in nature and duration, they can generally be understood as punitive and preventative measures denying individuals the regular use of the platform to express themselves. The 30-day feature-limit prevented the content’s author from posting or commenting in groups, inviting new members to groups, or creating new groups. The seven-day feature-limit prevented them from creating any new content on any Facebook surface, excluding the Messenger app. When their content was removed, the author was informed of the seven-day feature-limit through notifications, but did not receive notifications about the 30-day group-related feature-limit. Hours after Meta removed the content, the author of the post appealed the decision. Meta’s automated systems did not prioritize the appeal and it was later closed without being reviewed. The user received a notification that their appeal was not reviewed because of a temporary reduction in review capacity as a result of COVID-19. At this point, they appealed Meta’s removal decision to the Oversight Board. After it was informed that the Board had selected this case, Meta determined that its previous decision to remove the content was incorrect. It found that, although the post violated the Violence and Incitement Community Standard, it would restore the content under the newsworthiness allowance. This permits content that would otherwise violate Meta’s policies if the public interest in the content outweighs the risk of harm. The content was restored in August, more than a month after it was first posted, but after the “National Day of Hijab and Chastity” had already passed. Meta reversed the strike against the person’s account, but the account restrictions that had been imposed could not be reversed, as they had already run their full duration. In September, the Iranian government’s morality police arrested 22-year old Jina Mahsa Amini for wearing an “improper” hijab. Amini fell into a coma shortly after collapsing at the detention center and died three days later, while in custody. Her death at the hands of the state sparked widespread peaceful protests, which were met with extreme violence from the Iranian government. This situation was ongoing at the time the Board deliberated this case. The United Nations has raised concerns about Iranian security forces using illegitimate force against peaceful protesters, killing and injuring many, including children, as well as arbitrarily detaining protesters and imposing internet shutdowns. The United Nations has reiterated calls for the release of detained protesters, and the UN Human Rights Council convened a Special Session on November 24 to address the situation. 
The resolution adopted at that session ( A/HRC/Res/S-35/1 ) expressed “deep concern” about “reports of restrictions on communications […] including Internet shutdowns and blocking of social media platforms, which undermine the exercise of human rights.” It called on the Iranian Government to end all forms of discrimination and violence against women and girls in public and private life, to uphold freedom of expression and to fully restore internet access. The UN Human Rights Council also established an independent international fact-finding mission to investigate alleged human rights violations in Iran related to the protests that began on 16 September. Public comments and experts the Board consulted confirmed the “marg bar Khamenei” slogan was being widely used in these protests and online, and that it had been commonly used in protests in Iran in 2017, 2018, 2019, and 2021. Public comments often included perceptions that Meta over-enforces its policies against Farsi-language content during protests, including during the most recent protests, which have mostly been led by women and girls. These perceptions are also reflected in Memetica’s research on platform data that the Board commissioned, which found that from July 1 to October 31, 400 public Facebook posts and 1,046 public Instagram posts used the hashtag #MetaBlocksIranProtests. People in Iran have been protesting for gender equality and against compulsory hijab since at least the 1979 revolution. The Islamic Penal Code of Iran penalizes women who appear in public without a “proper hijab” with imprisonment or a fine. Women in Iran are banned from certain fields of study, from many public places, and from singing and dancing, among other things. Men are considered the head of the household and women need the permission of their father or husband to work, marry, or travel. A woman’s court testimony is considered to have half the weight of a man’s, limiting access to justice for women. The Iranian government systematically represses freedom of expression. While online spaces have become a key forum for dissent, the government has taken extreme measures to silence debate there too. Human rights advocacy is a common target, in particular women’s rights advocacy, political dissent, artistic expression, and calls for the government to be held to account for its human rights violations. Facebook, Twitter, and Telegram have all been banned in Iran since 2009. The Iranian government also blocked access to Instagram and WhatsApp in September 2022 amid protests over Amini’s death. The Open Observatory of Network Interference documented new forms of censorship and internet shutdowns in various parts of Iran during the protests. Usage of Virtual Private Networks (VPNs), tools that encrypt communications and can be used to circumvent censorship, reportedly increased by more than 2,000% during September 2022. Public comments the Board received emphasized that social media platforms are one of the only tools for people to freely express themselves, given the Iranian state’s tight control of traditional media. The state’s advanced capabilities to restrict online expression make the lifeline of social media particularly precarious. Social media plays a crucial role in ensuring people in Iran can exercise their rights, particularly in times of protest. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1).
The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. When the Board selects cases like this one, where Meta subsequently revises its initial decision, the Board focuses its review on the decision that is appealed to it. In this case, while Meta recognized the outcome of its initial decision was incorrect and reversed it, the Board notes that this reversal relied on the newsworthiness allowance, which is among the enforcement options that are only available to Meta’s internal policy teams, and not to content moderators working at-scale. The case was not an “enforcement error,"" as the scaled content reviewer removed the content in accordance with the internal guidance they were given, though as noted below, these differ from the public facing Community Standards in ways that are material to this case. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies The policy rationale for the Violence and Incitement Community Standard explains it intends to “prevent potential offline harm that may be related to content on Facebook” while acknowledging that “people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways.” Under this policy, Meta does not allow “threats that could lead to death (and other forms of high-severity violence),” where “threats” are defined as including “calls for high-severity violence” and “statements advocating for high-severity violence.” The Board’s analysis of the content policies was informed by Meta’s commitment to voice , which the company describes as “paramount,” and its values of safety and dignity. In explaining its commitment to voice, Meta explains that “in some cases, we allow content – which would otherwise go against our standards – if it’s newsworthy and in the public interest.” This is known as the newsworthiness allowance , which is linked from Meta’s commitment to voice. The newsworthiness allowance is a general policy exception applicable to all Community Standards. To issue the allowance, Meta conducts a balancing test, assessing the public interest in the content against the risk of harm. Meta says it assesses whether content “surfaces an imminent threat to public health or safety, or gives voice to perspectives currently being debated as part of a political process.” Both the assessment of public interest and harm take into account country circumstances such as whether an election or conflict is under way, whether there is a free press, and whether Meta’s products are banned. Meta states there is no presumption that content is inherently in the public interest solely on the basis of the speaker’s identity, for example their identity as a politician. 
Meta says it removes content, “even if it has some degree of newsworthiness, when leaving it up presents a risk of harm, such as physical, emotional and financial harm, or a direct threat to public safety.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions In their appeal to the Board, the person who authored the post shared that they intended to raise awareness of how people in Iran are being “abused” by the Iranian “dictatorship” and that people “need to know about this abuse.” For them, the “Facebook decision is unfair and against human rights.” 6. Meta’s submissions Meta explained that assessing whether a “death to” statement directed at a head of state constitutes rhetorical speech as opposed to a credible threat is challenging, particularly at scale. Meta said there has been much internal and external debate on this point and welcomed the Board’s input on where to draw the line. Meta also said that it would welcome guidance on drafting a policy it can apply at scale. Meta explained to the Board that the phrase “death to Khamenei” violated the Violence and Incitement Community Standard, and this was the reason the content was initially removed. The Community Standards have been available in Farsi since February 2022. According to Meta, the policy prohibits “calls for death targeting a head of state.” It currently distinguishes calls for lethal violence where the speaker expresses intent to act (e.g., “I am going to kill X”), which are violating, from content expressing a wish or hope that someone dies without expressing intent to act (e.g., “I hope X dies” or “death to X”). The latter is generally non-violating, because Meta considers that the word “death” “is not itself a method of violence.” Meta generally considers this to be “hyperbolic language,” where the speaker does not intend to incite violence. However, internal guidance instructs moderators to remove “death to” statements where the target is a “high-risk person.” The guidance is called “Known Questions,” and includes a confidential list of categories of people (rather than named individuals) that Meta considers high-risk. Essentially, Meta’s removals at scale are formulaic: the combination of [“death to”] plus [the target is a high-risk person] will result in removal, even if other context indicates the expression is hyperbolic or rhetorical, and therefore similar to speech permitted against other targets. “Heads of state” are listed as high-risk persons, and Meta explained this is because of “the potential safety risk” against them. Meta further said it has developed an evolving list of high-risk persons based on feedback from its policy teams, as well as external experts. Meta provided the full list to the Board. In addition to heads of state, other examples of “high-risk” persons include: former heads of state; candidates and former candidates for head of state; candidates in national and supranational elections for up to 30 days after the election if not elected; people with a history of assassination attempts; and activists and journalists.
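The formulaic character of this at-scale rule can be illustrated with a minimal sketch. It is illustrative only, not Meta’s actual implementation: the function name, the category list and the absence of any contextual input are assumptions drawn from the description above.

```python
# Minimal sketch of the at-scale rule described above:
# ["death to" statement] + [target is a high-risk person] => remove.
# Names and categories here are hypothetical, not Meta's internal code.

HIGH_RISK_CATEGORIES = {
    "head of state",
    "former head of state",
    "candidate for head of state",
    # ... other confidential categories referenced in the decision
}

def at_scale_outcome(is_death_to_statement: bool, target_category: str) -> str:
    """Return the enforcement outcome under the formulaic rule.

    No contextual signal (protest setting, rhetorical use, surrounding
    caption) is available at this step, which is why hyperbolic or
    rhetorical uses are removed all the same.
    """
    if is_death_to_statement and target_category in HIGH_RISK_CATEGORIES:
        return "remove"
    return "leave up"

# The slogan in this case would be removed at scale, regardless of its
# rhetorical meaning ("down with"), while the same statement aimed at a
# non-listed target would be left up.
print(at_scale_outcome(True, "head of state"))   # -> remove
print(at_scale_outcome(True, "private person"))  # -> leave up
```

Under this reading, any contextual weighing happens only after escalation to Meta’s internal teams, which is consistent with the company’s explanation that reviewers working at scale cannot issue newsworthiness allowances.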
After it was informed that the Board selected the case, Meta revisited its decision and decided to restore the post under the “newsworthiness allowance.” While Meta maintained the content violated its policies, restoring it was the right thing to do because “the public interest value outweighed any risk of contributing to offline harm.” Meta has previously informed the Board that the kind of contextual analysis its policy teams can conduct to reach decisions on-escalation is not available to moderators at-scale, who must follow internal guidance. In this case, Meta determined that the public interest was high, as the post related to public discourse on compulsory hijab laws and criticized the government’s treatment of women. Meta found the cartoon to be political in nature, and given the religious significance of beards to some who practice Islam, that its imagery could be criticism of the use of religion to control and oppress women. The political context and timing of the post were important, in the run-up to the mid-July “National Day of Hijab and Chastity,” when Meta understood many people were using social media hashtags to organize protests. Meta cited the Board’s “Colombia protests” case in support of its public interest assessment, and pointed to the Iranian government’s history of suppressing freedom of expression and internet shutdowns. Meta determined the public interest outweighed the risk of offline harm, which was low. It was clear to Meta that the author of the content did not intend to call for violent action against Ayatollah Khamenei, but rather to criticize the government’s “anti-women” policies. In this situation, Meta gave more weight to the rhetorical meaning of “marg bar” as “down with,” noting its frequent use as a form of political expression in Iran. Restoring the content was, for Meta, consistent with its values of voice and safety. Meta explained to the Board that it has two categories of newsworthy allowances: “narrow” and “scaled.” In this case, Meta applied a narrow allowance, which only restores the individual piece of content, and has no effect on other content, even if it is identical. A “scaled” allowance, by contrast, applies to all uses of a phrase that would otherwise violate policy, regardless of the identity of the speaker. Scaled allowances are normally limited in duration. Both types of allowances can only be issued by Meta’s internal policy teams; a content moderator reviewing posts at-scale cannot issue such allowances, but they do have options for escalating content to Meta’s internal teams. Meta explained that it has three times granted scaled newsworthiness allowances for the “death to Khamenei” phrase, first in connection with the 2019 fuel price protests in Iran, second in the context of the 2021 Iranian election, and third, related to the 2021 water shortage protests. However, no scaled allowance has been issued to allow these statements since the beginning of the protests against compulsory hijabs in 2022. Meta disclosed to the Board that it has become more hesitant to grant “scaled” allowances and favors considering “narrow” allowances on a case-by-case basis. This is due to “public criticism” of Meta for temporarily allowing “death to” statements in a prior crisis situation. In response to the Board’s questions, Meta clarified that it does not publish the newsworthiness allowances it issues. 
Meta also clarified the number of times it issued scaled newsworthiness allowances globally for content that would otherwise violate the Violence and Incitement policy in the 12 months up to October 5, 2022, but requested this data be kept confidential as it could not be validated for release in the time available. The Board asked Meta how much content would have been impacted if the company had issued a scaled newsworthiness allowance to permit “death to Khamenei” statements. Meta said this cannot accurately be determined without assessing each post for other violations. While Meta provided the Board data on the usage of “death to Khamenei” hashtags between mid-July and early October 2022, it requested that data be kept confidential as it could not be validated for release in the time available. While Meta did not issue a scaled newsworthiness allowance for “marg bar Khamenei” statements, the company disclosed that on 23 September 2022, ten days after the killing of Jina Mahsa Amini, it issued a “spirit of the policy” allowance for the phrase “I will kill whoever kills my sister/brother.” This scaled allowance was still in effect when the Board was deliberating the present case. In this case, the author of the post received a “strike” as a result of their content being assessed as violating. Meta disclosed that in May 2022, it issued guidance that “marg bar Khamenei” slogans should be removed for violating the Violence and Incitement Community Standard, but should not result in a strike. Meta explained this was intended to mitigate the impact of removing content with some public interest value, though not enough to outweigh the risk of harm and warrant a newsworthiness allowance. It was still in effect as the Board finalized its decision. Meta explained this guidance is distinct from a newsworthiness allowance, as it does not affect the decision to remove the content, and only applies to the penalty imposed. In response to the Board’s questions, Meta explained that the author of the content in this case did not benefit from this penalty exemption because it is only available for content decisions made by internal teams “at-escalation.” As the post in this case was assessed as violating by a content moderator at-scale, a strike was automatically issued, and, taking into account the accrual of prior strikes, corresponding “feature-limits"" were imposed. In response to the Board’s questions, Meta disclosed that the user was notified about the seven-day feature-limit but not the 30-day group-related feature-limit. As such, the user would only find out about the 30-day group-related feature-limit if they were to access the status section of their account or if they attempted to perform a restricted action related to a group. In response to prior Oversight Board recommendations, Meta has provided more information publicly on the operation of its strikes system and resulting account penalties. During the finalization of this decision, Meta also informed the Board that, in response to recommendations in the ""Mention of the Taliban in news reporting"" case, it would increase the strike-threshold for the imposition of “read-only” penalties and update its transparency center with this information. Meta explained that when the author of the content appealed the initial removal decision, that appeal did not meet prioritization criteria and was automatically closed without review. In response to the Board’s questions, Meta provided further explanation of its review capacities for Farsi content. 
Meta explained that the Community Standards have been available on its website in Farsi since February 2022. Meta shared that for “higher risk” markets, such as the Persian market, which are characterized for example by recurring volume spikes due to real world events, or markets with “long lead times required to increase capacity,” it over-allocates content moderation resources so it can deal with any crisis situations that arise. Meta cited the human rights to freedom of expression (Article 19, ICCPR), freedom of assembly (Article 21, ICCPR), and the right to participate in public affairs (Article 25, ICCPR) in support of its revised decision. At the same time, Meta acknowledged that it needs “bright-line” rules to accomplish, at scale, the legitimate aim of its Violence and Incitement policy of protecting the rights of others from threats of violence. Meta told the Board that the “application of these bright-line rules sometimes results in removal of speech that, on escalation, we may conclude (as we did in this case) does not contain a credible threat.” Meta explained that it continuously monitors trends on its platforms to protect political speech that might otherwise violate policies, and in the past has made more use of scaled allowances. The company invited the Board’s input as to when it should grant these types of allowances and the criteria it should consider when doing so. The Board asked Meta 29 questions in writing. Questions related to: the criteria and process for issuing at-scale policy exceptions; automatic closure of appeals; measures taken by Meta to protect user's rights during protests; the company's content review capacity in countries where its products are banned; and alternative processes or criteria that Meta has considered to effectively permit rhetorical non-threatening political expression at scale. 26 questions were answered fully and three were answered partially. The partial responses were to questions on: data comparing auto closure of appeals for content in Farsi and English languages; the prevalence of several variations of the ""death to Khamenei"" slogan on Meta’s platforms, and the accuracy rates on the enforcement of the Violence and Incitement policy in Farsi. 7. Public comments The Oversight Board received 162 public comments related to this case. 13 comments were submitted from Asia Pacific and Oceania, six from Central and South Asia, 42 from Europe, and 36 from the Middle East and North Africa. 65 comments were submitted from the United States and Canada. The submissions covered the following themes: the distinction between political rhetoric and incitement; the importance of context, in particular for language and imagery, when moderating content; the limitations of the newsworthiness allowance in dealing with human rights violations and overreliance on automated decision-making; and freedom of expression, human rights, women's rights, government repression, and social media bans in Iran. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board examined whether this content should be restored by analysing Meta's content policies, human rights responsibilities and values. The Board selected this case because it offered the potential to explore how Meta assesses criticisms of government authorities, and whether heads of state in certain countries receive special protection or treatment, as well as important matters around advocacy for women’s rights and the participation of women in public life. 
Additionally, this content raises issues around criticism of political figures through rhetorical speech that may also be interpreted as threatening, and the use of the newsworthiness allowance. The case provides the Board with the opportunity to discuss Meta’s internal procedures, which determine when and why policy exceptions should be granted, as well as how policies and their exceptions should be applied. The case primarily falls into the Board’s elections and civic space priority, but also touches on the Board’s priorities of gender, Meta and governments, crisis and conflict situations, and treating users fairly. 8.1 Compliance with Meta’s content policies I. Content rules a. Violence and Incitement The Board finds that the content in this case does not violate the Violence and Incitement Community Standard. Therefore, it was not necessary for Meta to apply the newsworthiness allowance to the post. This conclusion is supported by the analysis Meta conducted when it revisited its decision after the Board selected the case. The policy rationale explains that Meta intends to “prevent potential offline harm” and that it “removes language that incites or facilitates serious violence.” Under the heading “do not post,” the Community Standard prohibits threats “that could lead to death or high-severity violence.” However, internal guidance indicates that, generally, “death to” statements against any targets, including named individuals, are permitted on Meta’s platforms and do not constitute a violation of this rule, except when the target of the “death to” statement is a “high-risk person.” The internal guidance instructs reviewers to treat “death to” statements against “high-risk persons” as violating regardless of other contextual cues. Further, the public-facing “do not post” rules in the Violence and Incitement Community Standard do not reflect this internal guidance and accordingly do not expressly prohibit “death to” statements targeting high-risk individuals, including heads of state. Meta’s analysis of the content found that it presented a low risk of offline harm; that it did not intend to call for Ayatollah Khamenei's death; and that the “death to Khamenei” slogan has frequently been used as a form of political expression in Iran which is better understood as “down with Khamenei.” This should have been sufficient for Meta to find the content non-violating and allow the post and other similar content to remain on its platform. The Board is concerned that Meta has not taken action to allow use of “marg bar Khamenei” at scale during the current protests in Iran, despite its assessment in this case that the slogan did not pose a risk of harm. Linguistic experts consulted by the Board confirmed that the “marg bar Khamenei” slogan is commonly used in Iran, in particular during protests, as a criticism of the political regime and Iran’s Supreme Leader, rather than as a threat to Ayatollah Khamenei’s safety. The post preceded by several days the “National Day of Hijab and Chastity,” during which Meta noted an increase in use of social media in Iran to organize protests. In this context, the slogan should have been interpreted as a rhetorical expression, meaning “down with” Khamenei and the Iranian government. It did not therefore fall within the rule on “threats that could lead to death,” and it did not advocate or intend to cause high-severity violence against the target. 
The Board notes that “down with” statements against a target are permissible under Meta’s policies, regardless of the target’s identity. This is consistent with Meta’s commitment to voice, and the importance of protecting political discontent. There is no “genuine risk of physical harm” or “direct threats to public safety,” which the policy aims to avoid. Rather, the content falls squarely in the category of statements through which people “commonly express disdain or disagreement by threatening or calling for violence in non-serious ways.” Meta’s internal guidance for moderators contains a presumption in favor of removing “death to” statements directed at “high-risk persons.” This would apply to Ayatollah Khamenei, as a head of state. This rule, while not public, is consistent with the policy rationale for content that places these persons at heightened risk. However, its enforcement in this case is not. The policy rationale of the Violence and Incitement Community Standard states that the language and context of a particular statement ought to be considered in determining whether a threat is credible. This did not occur in the present case, as the stated presumption was applied regardless of language and context, though the Board notes the reviewer acted in a manner consistent with the internal guidance. As Meta later acknowledged, various elements of the post, and the broader context in which it was posted, make clear it was not making a credible threat but employing political rhetoric. The Board’s decision in the “Wampum belt” case is relevant here. In that decision, the Board held that the seemingly violating phrase “kill the Indian” should not be read in isolation but in the context of the full post, which made clear it was not threatening violence but opposing it. Similarly, in the “Russian poem” case, the Board found that various excerpts of a poem (e.g., “kill a fascist”) were not violating, as the post was using rhetorical speech to call attention to a cycle of violence, not urging violence. With the internal guidance drafted as it is, the Board understands why the content moderator made the decision they did in this instance. However, Meta should update this guidance so that it is more consistent with the stated policy rationale. The Board agrees that “death to” or similar threatening statements directed at high-risk persons should be removed due to the potential risk to their safety. Though the Board also agrees that heads of state may be considered high-risk persons, this presumption should be nuanced in Meta’s internal guidance. For these reasons, the Board also finds that removing the content was not consistent with Meta’s commitment to voice and was not necessary to advance safety. Meta should have issued scaled guidance that instructed moderators not to remove this protest slogan by default, and, accordingly, not removed this post during at-scale review. b. Newsworthiness allowance Since the Board has found that the Violence and Incitement Community Standard was not violated, it follows that the newsworthiness allowance was not required. Notwithstanding this conclusion, the Board finds that, when Meta chose to apply a newsworthiness allowance to this post, it should have been scaled to apply to all “marg bar Khamenei” slogans, regardless of the speaker. Meta had previously scaled such allowances in response to several similar widespread protests in Iran. 
In the Board’s view, those actions were more consistent with Meta’s commitment to voice than the action Meta took when revisiting its decision in this case. Many other people are in the same situation as the user in this case, so the allowance should not have been limited to an individual post. Scaling the decision was necessary given the importance of social media to protest in Iran, the human rights situation in the country, and the fact that Meta should reasonably have anticipated that the same issue would recur many times. The failure to apply a scaled allowance has had the effect of silencing political speech aimed at protecting women’s rights, by removing what the Board has concluded was non-violating speech. Criticisms of Meta’s use of allowances to permit otherwise violating “death to” statements in relation to Russia’s invasion of Ukraine should not, in the Board’s view, have led to Meta reducing the use of scaled allowances in protest contexts. The situation in Iran concerns a government violating the human rights of its own citizens, while repressing protests and severely limiting the free flow of information. There would have been many thousands of usages of the “marg bar Khamenei” slogan in recent months on Meta’s platforms. Very few posts, if any besides the content in this case, would have benefited from a newsworthiness allowance. While the newsworthiness allowance is often framed as a mechanism for safeguarding public interest speech, its use is relatively rare considering the global scope of Meta’s operations. According to Meta, only 68 newsworthiness allowances were issued across all policies globally between June 1, 2021 and June 1, 2022. In the Board’s view, in contexts of widespread protests, Meta should be less reluctant to scale allowances. This would help to protect voice where there are minimal risks to safety. This is particularly important where systematic violations of human rights have been documented, and avenues for exercising the right to freedom of expression are limited, as in Iran. II. Enforcement action The impact of “feature-limits” on individuals during times of protest is especially grave. Such limits almost entirely curtail people’s ability to use the platform to express voice. They can shut people out of social movements and political discourse in critical moments, potentially undermining calls for action that are gaining momentum through Meta’s products. Meta appeared to partly recognize this in May 2022, issuing directions to its escalation teams not to impose strikes for “marg bar Khamenei” statements. However, this measure was not intended to apply to decisions made by moderators at-scale. When the same content is assessed as violating at-scale (i.e., by Meta’s outsourced moderators), strikes result against the post author’s account automatically. A reviewer at-scale ordinarily has no discretion to withhold a strike or resulting penalties; this did not change when Meta issued the limited exception in May. Only where content decisions reached its escalation teams was there an option to withhold a strike for violating content. This may have applied to content or accounts that went through programs like cross-check, which enable users’ content to be reviewed on escalation prior to removal. Whereas high-profile accounts may have benefited from that exception, the author of the post in this case did not. 
The Board is also concerned that the author of the post was not sufficiently notified that feature-limits were imposed, particularly on group-related features. The first time they would become aware of some of these limits would be when they attempted to use the relevant features. This compounds users’ frustrations at being placed in “Facebook jail,” when the nature of the punishment and reasons for it are often unknown. In the “Mention of the Taliban in news reporting” case, the Board expressed similar concerns when a user was blocked from full use of their account at a crucial political moment. According to Meta, it is already aware of this issue and, consistent with the Board’s prior recommendations, relevant internal teams are working to improve its user-communication infrastructure. The Board welcomes that Meta has progressed with implementing other recommendations in the “Mention of the Taliban in news reporting” case, and that the strike threshold for imposing some feature-limits will increase. Reflecting those changes in the Transparency Center explanation of Meta’s enforcement practices is good practice. The Board notes that the author of the post in this case did not have their appeal reviewed, and received misleading notifications on the reasons for this. This is troubling. Meta has publicly announced that the company is shifting towards using automation to prioritize content for review. Appeals are automatically closed without review if they do not meet a set threshold based on various signals, including the type and virality of the content, whether the alleged violation is of extremely high severity (such as suicide-related content), and the time elapsed since the content was posted. The Board is concerned about Meta’s automatic closure of appeals, and that the prioritization signals the company is applying may not sufficiently account for public interest expression, particularly when it relates to protests. The signals should include features such as topic sensitivity and false-positive probability to help identify content of this nature, to avoid appeals against erroneous decisions being automatically closed. This is especially important where, as a result of incorrect enforcement of its policies, users are locked out of using key features on Meta’s products at crucial political moments. III. Transparency The Board welcomes the increase in data and examples Meta is providing of narrow newsworthiness exceptions in the Transparency Center. These disclosures would be further enhanced by distinguishing “scaled” and “narrow” allowances the company grants annually in its transparency reports. Providing examples of “scaled” newsworthiness allowances would also advance understanding of the steps Meta is taking to protect user voice. In key moments, such as the Iran protests, Meta should publicize, at the time, when scaled newsworthiness allowances are issued, so that people understand that their speech will be protected. 8.2 Compliance with Meta’s human rights responsibilities The Board finds that Meta’s initial decision to remove the content is inconsistent with Meta’s human rights responsibilities as a business. Freedom of expression (Article 19, ICCPR) Article 19 of the ICCPR provides “particularly high” protection for “public debate concerning public figures in the political domain and public institutions” (General Comment No. 34, para. 38). 
Extreme restrictions on freedom of expression and assembly in Iran make it especially crucial that Meta respects these rights, in particular at times of protest (“Colombia protests” case decision; General Comment No. 37, at para. 31). The expression in this case was artistic and a form of political protest. It related to discourse on the rights of women and their participation in political and public life, and freedom of religion or belief. The Board has recognized the importance of protest speech against a head of state, even where it is offensive, as they are “legitimately subject to criticism and political opposition” (“Colombia protests” case; General Comment No. 34, at paras. 11 and 38). Freedom of expression in the form of art protects “cartoons that clarify political positions” and “memes that mock public figures” (A/HRC/44/49/Add.2, at para. 5). Where a State restricts expression, it must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). The Board uses this three-part test to interpret Meta’s voluntary human rights commitments, both for the individual content decision and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, at para. 41). I. Legality (clarity and accessibility of the rules) The principle of legality under international human rights law requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, at para. 25). It further requires that rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (ibid.). Applied to the Community Standards, the UN Special Rapporteur on freedom of expression has said they should be clear and specific (A/HRC/38/35, at para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance on their enforcement. It is welcome that, consistent with prior Board decisions, Meta has ensured the translation of its content policies into more languages, including Farsi. Meta’s internal guidelines (the “Known Questions”) on its Violence and Incitement policy contain presumptions of risk that are not currently in the public-facing policy. The Community Standard does not reflect the explanation in the internal guidance that a “death to X” statement is generally permitted except when the “target” is a “high-risk person.” It is a serious concern that this hidden presumption also has a non-public exception, in particular as it relates to expression that may be legitimate political criticism of state actors. The Board further notes that there are no examples of “high-risk persons” in the public-facing Community Standard, so users cannot know that heads of state receive this particular protection. Indeed, the rationale for including some high-ranking public officials on the internal list and not others, such as members of the legislature and judiciary, is unclear. 
At the same time, the Board acknowledges that there may be good reasons for not disclosing the full list of high-risk targets publicly, in particular for individuals who are not afforded the protection of the State’s security apparatus. In the policy rationale for the Violence and Incitement Community Standard, which is public, Meta states it considers “language and context” to differentiate content that contains a “credible threat to public or personal safety” from “casual statements.” However, the “do not post” section of the policy does not explain how language and context figure in the assessment of threats and calls for death or high-severity violence. Whereas the policy rationale appears to accommodate rhetorical speech of the kind that might be expected in protest contexts, the written rules and corresponding guidance to reviewers do not. Indeed, enforcement in practice, in particular at-scale, is more formulaic than the rules imply, and this may create misperceptions among users about how rules are likely to be enforced. The guidance to reviewers, as currently drafted, excludes the possibility of contextual analysis, even when there are clear cues within the content itself that threatening language is rhetorical. The Violence and Incitement policy requires revision, as do Meta’s internal guidelines. The policy should include an explanation of how Meta moderates rhetorical threats, including “death to” statements against “high-risk persons,” and how language and context are factored into assessments of whether a threat against a head of state is credible under the Violence and Incitement policy. Internal guidance should be especially sensitive to protest contexts where the protection of political speech is crucial. The related presumptions in its internal guidance should be nuanced and brought into alignment with this. While Meta has provided further public explanations of the newsworthiness allowance, the lack of public explanation of “scaled” allowances is a source of confusion. Meta should make a public announcement when it issues a scaled allowance in relation to events like the protests in Iran, and either specify its duration or announce when the exception is lifted. This would help people using its platforms to understand what expression is permissible. Such announcements are opportune moments to remind people who use Meta’s platforms of the existence of the rules, to raise awareness and understanding of them. This is especially important when those changes have material impacts on users’ ability to express themselves on the platform. II. Legitimate aim Restrictions on freedom of expression must pursue a legitimate aim. The Violence and Incitement Community Standard aims to “prevent potential offline harm” by removing content that poses “a genuine risk of physical harm or direct threats to public safety.” This policy therefore serves the legitimate aim of protecting the right to life and the right to security of person (Article 6, ICCPR; Article 9, ICCPR). III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). 
The Board finds that the removal of this content was not necessary to protect Ayatollah Khamenei from violence or threats thereof. Meta is right to be concerned about threats of violence, including those targeting high-ranking public officials in many contexts. However, its decision not to interpret the Violence and Incitement policy to permit this rhetorical content and other content like it is a serious concern. Not issuing a scaled allowance for “marg bar Khamenei” statements compounded the problem of Meta’s policy not protecting this speech and failed to mitigate the harm to freedom of expression. Meta knew in mid-July, when this post was made, that the “National Day of Hijab and Chastity” was approaching. In May 2022, Meta had already issued guidance that, for decisions made at escalation, the “marg bar Khamenei” statement should be removed without imposing a strike. Meta also knew that its platforms have been crucial in similar moments in the recent past for organizing protests in Iran, having previously issued scaled allowances for “marg bar Khamenei” statements around protests. The company should therefore have anticipated issues around the over-removal of protest content in Iran, and it should have developed a response beyond the very limited strike exemption for escalated decisions. As this case shows, its failure to do more for users’ voice led to protestors’ freedom of expression being unnecessarily restricted. When feature-limits were imposed, the impacts of its wrongful decisions were made more severe, as they prevented users from organizing on Meta’s platforms. The fact that Meta scaled a “spirit of the policy” allowance on 23 September 2022 for the phrase “I will kill whoever kills my sister/brother” in Iran indicates that Meta should have also permitted known protest slogans at this critical time. “Death to” statements are not as directly threatening as “I will kill” statements. Although the “death to” phrase in this case targets a specific individual, that target is Ayatollah Khamenei, a head of state, who routinely uses the full coercive force of the state, both through judicial and extrajudicial means, to repress dissent. It is crucial that Meta prioritize its value of voice in support of individuals’ freedom of expression rights in situations such as this. The factors identified above weigh heavily in favor of presuming that “marg bar Khamenei” statements made in the context of protests are political slogans, not credible threats. The six-factor test described in the Rabat Plan of Action supports this conclusion. The speaker was not a public figure, and their rhetoric did not appear to be intended, and would not have been interpreted by others, as a call for violent action. As Meta itself determined, the protest context in Iran specifically made clear that rhetorical statements of this kind were expected, and the likelihood of violence resulting from them was low. The Board finds the content to be unambiguously political and non-violent in its intent, directly accusing a government and its leaders of serious human rights violations and drawing attention to the abuse of religion to justify discrimination. In the Board’s view, this content posed very little risk of inciting violence. Therefore, both the removal and the additional penalties that resulted from this decision were not necessary or proportionate. In other contexts, “death to” statements against public figures and government officials should be taken seriously, as the internal guidance currently in place indicates. 
For example, content with the slogan “marg bar Salman Rushdie” would pose a much more significant risk. The fatwa against Rushdie, the recent attempt on his life and ongoing concerns for his safety all put him in a different position from Ayatollah Khamenei. In other linguistic and cultural contexts, “death to” statements may also not carry the same rhetorical meaning as the term “marg bar” can carry, and should not be treated the same as the content in this case. For example, during events similar to the January 6 riots in Washington, D.C., “death to” statements against politicians would need to be swiftly removed under the Violence and Incitement policy. In that situation, politicians were clearly at risk, and “death to” statements are less likely to be understood as rhetorical or non-threatening in English. Moreover, the Board is concerned that the rationale for the list of “high-risk” persons appears in some respects overly broad in terms of the presumption for removal it creates, but then inexplicably narrow in other respects. In the case of heads of state, though the Board agrees that they may be considered high-risk persons, internal guidance should reflect that protest-related rhetorical political speech that does not incite violence and is aimed at criticizing them, their governments, the political regime or their policies, must be protected. This is the case even if it contains threatening statements that would be considered violating towards other high-risk individuals. When rhetorical threats against heads of state are used in the context of ongoing protests, reviewers should be required to consider language and context, bringing the guidance for moderators in line with the policy rationale. This would have the effect of permitting rhetorical threats targeted at heads of state, including “death to” a head of state, where, for example: historical and present usage of the phrase across platforms evidences rhetorical political speech that is not intended to, and is not likely to, incite violence; the content as a whole is engaged in criticizing governments, political regimes, their policies and/or their leaders; the statement is used in protest contexts or other crisis situations where the role of government is a topic of political debate; or it is used in contexts where systematic restrictions on freedom of expression and assembly are imposed, or where dissent is being repressed. The Board acknowledges that this issue is not as straightforward as it may first appear, and it is not possible to adopt a global rule on the use of certain terms that excludes the need for consideration of contextual factors, including signals in the content itself that are possible to consider at-scale (see the Board’s decision in the “Wampum belt” case). Meta’s current position is leading to over-removal of political expression in Iran at a historic moment and potentially creates more risks to human rights than it mitigates. In the Board’s view, the frequency with which Meta has needed to apply allowances in this situation indicates a more permanent solution to this problem is required. The reliance on allowances is too ad hoc and does not provide certainty that people’s expression rights will be respected. Meta needs to protect voice at scale in relation to Iran and other critical political contexts and situations. The proportionality concerns with this content removal increase where “feature-limits” are imposed as a result of an incorrect decision. 
The nature and duration of the penalties were disproportionate. Meta’s approach to penalties should give greater consideration to their potential to deter people from future engagement on political issues on the platform. It is positive that Meta has introduced further transparency and coherence in this area as a result of implementing prior Oversight Board recommendations, moving towards what should be a more proportionate and transparent approach with higher strike-to-penalty thresholds. Meta’s plans to issue more comprehensive penalty notifications should ensure that users are better placed to understand the consequences of strikes and the reasons for feature-limits in the future. Access to remedy Access to effective remedy is a core component of the UN Guiding Principles on Business and Human Rights (UNGPs). In August 2020, Meta publicly announced that it would rely more on automated content review and “teams will be less likely to review lower severity reports that aren’t being widely seen or shared on our platforms.” The Board is concerned that Meta’s automatic closure of appeals means users are not provided with appropriate access to remedy. Additionally, the fact that the current automated system does not take into account signals such as topic sensitivity and the likelihood of enforcement error makes it very likely that the most important complaints will not be reviewed. The Board finds this may particularly affect online protesters’ right to remedy because content wrongfully removed is restored belatedly or not at all, shutting them out of social movements and political discourse in critical political moments. 8.3 Identical content with parallel context The Board expresses concern about the likely number of wrongful removals of Iran protest content including the phrase “marg bar Khamenei.” It is important that Meta take action to restore identical content with parallel context it has incorrectly removed where possible, and reverse any strikes or account-level penalties it has imposed as a result. 9. Oversight Board decision The Oversight Board overturns Meta's original decision to remove the content for violating the Violence and Incitement Community Standard. 10. Policy advisory statement Content policy 1. Meta’s Community Standards should accurately reflect its policies. To better inform users of the types of statements that are prohibited, Meta should amend the Violence and Incitement Community Standard to (i) explain that rhetorical threats like “death to X” statements are generally permitted, except when the target of the threat is a high-risk person; (ii) include an illustrative list of high-risk persons, explaining they may include heads of state; (iii) provide criteria for when threatening statements directed at heads of state are permitted to protect clearly rhetorical political speech in protest contexts that does not incite violence, taking language and context into account, in accordance with the principles outlined in this decision. The Board will consider this recommendation implemented when the public-facing language of the Violence and Incitement Community Standard reflects the proposed change, and when Meta shares internal guidelines with the Board that are consistent with the public-facing policy. Enforcement 2. Meta should err on the side of issuing scaled allowances where (i) doing so is not likely to lead to violence; (ii) potentially violating content is being used in protest contexts; and (iii) the public interest is high. 
Meta should ensure that its internal process to identify and review content trends around protests that may require context-specific guidance, such as allowances or exceptions, to mitigate harm to freedom of expression is effective. The Board will consider this recommendation implemented when Meta shares the internal process with the Board and demonstrates, through shared data, that it has minimized incorrect removals of protest slogans. 3. Pending changes to the Violence and Incitement policy, Meta should issue guidance to its reviewers that “marg bar Khamenei” statements in the context of protests in Iran do not violate the Violence and Incitement Community Standard. Meta should reverse any strikes and feature-limits for wrongfully removed content that used the “marg bar Khamenei” slogan. The Board will consider this recommendation implemented when Meta discloses data on the volume of content restored and number of accounts impacted. 4. Meta should revise the indicators it uses to rank appeals in its review queues and to automatically close appeals without review. The appeals prioritization formula should include, as it does for the cross-check ranker, the factors of topic sensitivity and false-positive probability. The Board will consider this implemented when Meta shares with the Board its appeals prioritization formula and data showing that it is ensuring review of appeals against the incorrect removal of political expression in protest contexts. Transparency 5. Meta should announce all scaled allowances that it issues, including their duration and notice of their expiration, in order to give people who use its platforms notice of policy changes allowing certain expression, alongside comprehensive data on the number of “scaled” and “narrow” allowances granted. The Board will consider this recommendation implemented when Meta demonstrates regular and comprehensive disclosures to the Board. 6. The public explanation of the newsworthiness allowance in the Transparency Center should (i) explain that newsworthiness allowances can either be scaled or narrow; and (ii) provide the criteria Meta uses to determine when to scale newsworthiness allowances. The Board will consider this recommendation implemented when Meta updates the publicly available explanation of newsworthiness and issues Transparency Reports that include sufficiently detailed information about all applied allowances. 7. Meta should provide a public explanation of the automatic prioritization and closure of appeals, including the criteria for both prioritization and closure. The Board will consider this recommendation implemented when Meta publishes this information in the Transparency Center. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, a digital investigations group providing risk advisory and threat intelligence services to mitigate online harms, also provided research. 
The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" fb-zwqupzlz,Myanmar bot,https://www.oversightboard.com/decision/fb-zwqupzlz/,"August 11, 2021",2021,,"TopicFreedom of expression, PoliticsCommunity StandardHate speech","Type of DecisionStandardPolicies and TopicsTopicFreedom of expression, PoliticsCommunity StandardHate speechRegion/CountriesLocationMyanmarPlatformPlatformFacebookAttachments2021-007-FB-UA Public Comments",Overturned,Myanmar,The Oversight Board has overturned Facebook's decision to remove a post in Burmese under its Hate Speech Community Standard.,26498,4070,"Overturned August 11, 2021 The Oversight Board has overturned Facebook's decision to remove a post in Burmese under its Hate Speech Community Standard. Standard Topic Freedom of expression, Politics Community Standard Hate speech Location Myanmar Platform Facebook 2021-007-FB-UA Public Comments The Oversight Board has overturned Facebook’s decision to remove a post in Burmese under its Hate Speech Community Standard. The Board found that the post did not target Chinese people, but the Chinese state. Specifically, it used profanity to reference Chinese governmental policy in Hong Kong as part of a political discussion on the Chinese government’s role in Myanmar. About the case In April 2021, a Facebook user who appeared to be in Myanmar posted in Burmese on their timeline. The post discussed ways to limit financing to the Myanmar military following the coup in Myanmar on February 1, 2021. It proposed that tax revenue be given to the Committee Representing Pyidaungsu Hlutaw (CRPH), a group of legislators opposed to the coup. The post received about half a million views and no Facebook users reported it. Facebook translated the supposedly violating part of the user’s post as “Hong Kong people, because the fucking Chinese tortured them, changed their banking to UK, and now (the Chinese) they cannot touch them.” Facebook removed the post under its Hate Speech Community Standard. This prohibits content targeting a person or group of people based on their race, ethnicity or national origin with “profane terms or phrases with the intent to insult.” The four content reviewers who examined the post all agreed that it violated Facebook’s rules. In their appeal to the Board, the user stated that they posted the content to “stop the brutal military regime.” Key findings This case highlights the importance of considering context when enforcing hate speech policies, as well as the importance of protecting political speech. This is particularly relevant in Myanmar given the February 2021 coup and Facebook’s key role as a communications medium in the country. The post used the Burmese phrase “$တရုတ်,” which Facebook translated as “fucking Chinese” (or “sout ta-yote”). According to Facebook, the word “ta-yote” “is perceived culturally and linguistically as an overlap of identities/meanings between China the country and the Chinese people.” Facebook stated that given the nature of this word and the fact that the user did not “clearly indicate that the term refers to the country/government of China,” it determined that “the user is, at a minimum, referring to Chinese people.” As such, Facebook removed the post under its Hate Speech Community Standard. 
As the same word is used in Burmese to refer to a state and people from that state, context is key to understanding the intended meaning. A number of factors convinced the Board that the user was not targeting Chinese people, but the Chinese state. The part of the post which supposedly violated Facebook’s rules refers to China’s financial policies in Hong Kong as “torture” or “persecution,” and not the actions of individuals or Chinese people in Myanmar. Both of the Board’s translators indicated that, in this case, the word “ta-yote” referred to a state. When questioned on whether there could be any possible ambiguity in this reference, the translators did not indicate any doubt. The Board’s translators also stated that the post contains terms commonly used by Myanmar’s government and the Chinese embassy to address each other. In addition, while half a million people viewed the post and over 6,000 people shared it, no users reported it. Public comments also described the overall tone of the post as a political discussion. Given that the post did not target people based on race, ethnicity, or national origin, but was aimed at a state, the Board found it did not violate Facebook’s Hate Speech Community Standard. The Oversight Board’s decision The Oversight Board overturns Facebook’s decision to remove the content, requiring the post to be restored. In a policy advisory statement, the Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board has overturned Facebook’s decision to remove content under its Hate Speech Community Standard. The Board found that the post was not hate speech. 2. Case description In April 2021, a Facebook user who appeared to be in Myanmar posted in Burmese on their timeline. The post discussed ways to limit financing to the Myanmar military following the coup in Myanmar on February 1, 2021. It proposed that tax revenue be given to the Committee Representing Pyidaungsu Hlutaw (CRPH), a group of legislators opposed to the coup. The post received about 500,000 views, about 6,000 reactions and was shared about 6,000 times. No Facebook users reported the post. Facebook translated the supposedly violating part of the user’s post as “Hong Kong people, because the fucking Chinese tortured them, changed their banking to UK, and now (the Chinese) they cannot touch them.” Facebook removed the post as “Tier 2” Hate Speech under its Hate Speech Community Standard the day after it was posted. This prohibits content targeting a person or group of people based on their race, ethnicity or national origin with “profane terms or phrases with the intent to insult.” A reshare of the post was, according to Facebook, “automatically selected as a part of a sample and sent to a human reviewer to be used for classifier training.” This involves Facebook creating data sets of examples of violating and non-violating content to train its automated detection and enforcement processes to predict whether content violates Facebook policies. The reviewer determined that the shared post violated the Hate Speech Community Standard. While the purpose of the process was to create sets of content to train the classifier, once the shared post was found to be violating it was deleted. Because the shared post was found to have violated Facebook’s rules, an “Administrative Action Bot” automatically identified the original post for review. 
Facebook explained that the Administrative Action Bot is an internal Facebook account that does not make any assessment of content but carries out “a variety of actions throughout the enforcement system based on decisions made by humans or automation.” Two human reviewers then analyzed the original post, and both determined it was “Tier 2” Hate Speech. The content was removed. The user appealed the removal to Facebook, where a fourth human reviewer upheld the removal. According to Facebook, “[t]he content reviewers in this case were all members of a Burmese content review team at Facebook.” The user then submitted their appeal to the Oversight Board. 3. Authority and scope The Board has authority to review Facebook’s decision following an appeal from the user whose post was removed (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). The Board may uphold or reverse that decision (Charter Article 3, Section 5). The Board’s decisions are binding and may include policy advisory statements with recommendations. These recommendations are non-binding, but Facebook must respond to them (Charter Article 3, Section 4). The Board is an independent grievance mechanism to address disputes in a transparent and principled manner. 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s Community Standards Facebook's Community Standards define hate speech as “a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” Under “Tier 2,” prohibited content includes cursing, defined as “[p]rofane terms or phrases with the intent to insult, including, but not limited to: fuck, bitch, motherfucker.” II. Facebook’s values Facebook’s values are outlined in the introduction to the Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits “Voice” in service of four values, and two are relevant here: “Safety”: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. “Dignity”: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. III. Human rights standards The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Facebook announced its Corporate Human Rights Policy, where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis in this case was informed by the following human rights standards: 5. User statement The user stated in their appeal to the Board that they posted this content to “stop the brutal military regime” and provide advice to democratic leaders in Myanmar. The user also reiterated the need to limit the Myanmar military regime’s funding. The user self-identified as an “activist” and speculated that the Myanmar military regime’s informants reported their post. 
The user also stated that “someone who understands Myanmar Language” should review their post. 6. Explanation of Facebook’s decision Facebook removed the content as a “Tier 2” attack under the Hate Speech Community Standard, specifically for violating its policy prohibiting profane curse words targeted at people based on their race, ethnicity and/or national origin. According to Facebook, the allegedly violating content was considered to be an attack on Chinese people. The content included the phrase in Burmese “$တရုတ်,” which Facebook’s regional team translated as “fucking Chinese” (or “sout ta-yote”). Facebook’s regional team further specified that “$” can be used as an abbreviation for “စောက်” or “sout,” which translates to “fucking.” According to Facebook’s team, the word “ta-yote” “is perceived culturally and linguistically as an overlap of identities/meanings between China the country and the Chinese people.” Facebook provided the Board with the relevant confidential internal guidance it provides to its moderators, or Internal Implementation Standards, on distinguishing language that targets people based on protected characteristics and concepts related to protected characteristics. Facebook also noted in its decision rationale that following the February 2021 coup “there were reports of increasing anti-Chinese sentiment” in Myanmar and that “several Chinese people were injured, trapped, or killed in an alleged arson attack on a Chinese-financed garment factory in Yangon, Myanmar.” In response to a question from the Board, Facebook stated that it did not have any contact with the Myanmar military regime about this post. Facebook stated that, given the nature of the word “ta-yote” and the fact that the user did not “clearly indicate that the term refers to the country/government of China,” it determined that “the user is, at a minimum, referring to Chinese people.” As such, Facebook stated that the removal of the post was consistent with its Hate Speech Community Standard. Facebook also stated that its removal was consistent with its values of “Dignity” and “Safety,” when balanced against the value of “Voice.” According to Facebook, profane cursing directed at Chinese people “may result in harm to those people” and is “demeaning, dehumanizing, and belittling of their individual dignity.” Facebook argued that its decision was consistent with international human rights standards. Facebook stated that its decision complied with the international human rights law requirements of legality, legitimate aim, and necessity and proportionality. According to Facebook, its policy was “easily accessible” in the Community Standards and “the user’s choice of words fell squarely within the prohibition on profane terms.” Additionally, the decision to remove the content was legitimate to protect “the rights of others from harm and discrimination.” Finally, its decision to remove the content was “necessary and proportionate” as “the accumulation of content containing profanity directed against Chinese people ‘creates an environment where acts of violence are more likely to be tolerated and reproduce discrimination in a society,’” citing the Board’s decision 2021-002-FB-UA related to Zwarte Piet. Facebook stated it was similar because “both cases involve hate speech directed at people on the basis of their race or ethnicity.” 7. Third-party submissions The Oversight Board received 10 public comments related to this case. 
Five of the comments were from Asia Pacific and Oceania, specifically Myanmar, and five were from the United States and Canada. The Board received comments from stakeholders including human rights defenders and civil society organizations focusing on freedom of expression and hate speech in Myanmar. The submissions covered themes including translation and analysis of the word “sout ta-yote;” whether the content was an attack on China or Chinese people; whether the post was political speech that should be protected in context of the conflict in Myanmar; whether there was an increase in anti-Chinese sentiment in Myanmar following the February 2021 coup; the relations between China and Myanmar’s military regime; and Facebook’s content moderation practices, particularly the use, training and audit of Facebook’s automation tools for Burmese language content. To read public comments submitted for this case, please click here . 8. Oversight Board analysis This case highlights the importance of context when enforcing content policies designed to protect users from hate speech, while also respecting political speech. This is particularly relevant in Myanmar due to the February 2021 coup and Facebook’s importance as a medium for communication. The Board looked at the question of whether this content should be restored through three lenses: Facebook’s Community Standards; the company’s values; and its human rights responsibilities. 8.1 Compliance with Community Standards The Board found that restoring this content is consistent with Facebook’s Community Standard on Hate Speech. Facebook’s policy prohibits “profane terms with the intent to insult” that targets a person or people based on race, ethnicity, or national origin. The Board concludes that the post did not target people, but rather was aimed at Chinese governmental policy in Hong Kong, made in the context of discussing the Chinese government’s role in Myanmar. In addition to public comments, the Board also sought two translations of the text. These included translations from a Burmese speaker located within Myanmar and another Burmese speaker located outside of Myanmar. Public comments and the Board’s translators noted that in Burmese, the same word is used to refer to states and people from that state. Therefore, context is key to understanding the intended meaning. This is particularly relevant for applying Facebook’s Hate Speech policy. At the time the content was removed, the Hate Speech Community Standard stated it prohibits attacks against people based on national origin but does not prohibit attacks against countries. The Board considered various factors in deciding this post did not target Chinese people based on their ethnicity, race, or national origin. First, the broader post suggests ways to limit financial engagement with the military regime and provide financial support for the CRPH. Second, the supposedly violating part of the post refers to China’s financial policies in Hong Kong as “torture” or “persecution,” and not the actions of individuals or Chinese people in Myanmar. Third, while the absence of reporting of a widely shared post does not always indicate it is not violating, more than 500,000 people viewed, and more than 6,000 people shared the post and no users reported it. Fourth, both translators consulted by the Board indicated that, while the same term is used to refer to both a state and its people, here it referred to the state. 
When questioned on any possible ambiguity in this reference, the translators did not indicate any doubt. Fifth, both translators stated that the post contains terms commonly used by the Myanmar government and the Chinese embassy to address each other. Lastly, public comments generally noted the overall tenor of the post as largely a political discussion. Therefore, given that the profanity did not target people based on race, ethnicity, or national origin, but targeted a state, the Board concludes it does not violate Facebook’s Hate Speech Community Standard. It is crucial to ensure that prohibitions on targeting people based on protected characteristics not be construed in a manner that shields governments or institutions from criticism. The Board recognizes that anti-Chinese hate speech is a serious concern, but this post references the Chinese state. The Board disagrees with Facebook’s argument that its decision to remove this content followed the Board’s rationale in case decision 2021-002-FB-UA (where the Board upheld the removal of depictions of people in blackface). In that case, Facebook had a rule against depictions of people in blackface, and the Board permitted Facebook to apply that rule to content that included blackface depictions of Zwarte Piet. Here, by contrast, the context of the post indicates that the language used did not violate Facebook’s rules at all. During the Board’s deliberation regarding this case, Facebook updated its Hate Speech Community Standard to provide more information on how it prohibits “concepts” related to protected characteristics in certain circumstances. This new rule states Facebook “require[s] additional information and/or context” for enforcement and that users should not post “Content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic.” As it was not part of the Community Standard when Facebook removed this content, and Facebook did not argue it removed the content under this updated Standard to the Board, the Board did not analyze the application of this policy to this case. However, the Board notes that “concepts, institutions, ideas, practices, or beliefs” could cover a very wide range of expression, including political speech. 8.2 Compliance with Facebook’s values The Board concludes that restoring this content is consistent with Facebook’s values. Although Facebook’s values of “Dignity” and “Safety” are important, particularly in the context of the February 2021 coup in Myanmar, this content did not pose a risk to these values such that it would justify displacing “Voice.” The Board also found that the post contains political speech that is central to the value of “Voice.” 8.3 Compliance with Facebook’s human rights responsibilities The Board concludes that restoring the content is consistent with Facebook’s human rights responsibilities as a business. Facebook has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights (UNGPs). Its Corporate Human Rights Policy states this includes the International Covenant on Civil and Political Rights (ICCPR). Article 19 of the ICCPR provides for broad protection of expression. This protection is “particularly high” for political expression and debate, including about public institutions ( General Comment 34 , para. 38). 
Article 19 requires state restrictions on expression to satisfy the three-part test of legality, legitimacy, and necessity and proportionality. The Board concludes that Facebook’s actions did not satisfy its responsibilities as a business under this test. I. Legality (clarity and accessibility of the rules) The principle of legality under international human rights law requires rules used by states to limit expression to be clear and accessible (General Comment 34, para. 25). Rules restricting expression must also “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (General Comment 34, para. 25). The Hate Speech Community Standard prohibits profanity that targets people based on race, ethnicity, or national origin. Facebook told the Board that because of the difficulties in “determining intent at scale, Facebook considers the phrase ‘fucking Chinese’ as referring to both Chinese people and the Chinese country or government, unless the user provides additional context that it refers solely to the country or government.” The policy of defaulting towards removal is not stated in the Community Standard. The Board concludes that the user provided additional context that the post referred to a state or country, as noted in the Board’s analysis of the Hate Speech Community Standard (Section 8.1 above). Multiple Facebook reviewers reached a different conclusion than the Board’s translators, people who submitted public comments, and presumably many of the more than 500,000 users who viewed the post and did not report it. Given this divergence, the Board questions the adequacy of Facebook’s internal guidance, resources and training provided to content moderators. Given the Board’s finding that the user did not violate Facebook’s Hate Speech policy, the Board does not decide whether the non-public policy of defaulting to removal violates the principle of legality. However, the Board is concerned that the policy of defaulting to removal when profanity may be interpreted as directed either to a people or to a state is not clear from the Community Standards. In general, Facebook should make public internal guidance that alters the interpretation of its public-facing Community Standards. II. Legitimate aim Any state restriction on expression should pursue one of the legitimate aims listed in the ICCPR. These include the “rights of others.” According to Facebook, its Hate Speech policy aims to protect users from discrimination. The Board agrees that this is a legitimate aim. III. Necessity and proportionality The principle of necessity and proportionality under international human rights law requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). In this case, based on its interpretation of the content, the Board determined that restricting this post would not achieve a protective function. The UNGPs state that businesses should perform ongoing human rights due diligence to assess the impacts of their activities (UNGP 17) and acknowledge that the risk of human rights harms is heightened in conflict-affected contexts (UNGP 7). 
The UN Working Group on the issue of human rights and transnational corporations and other business enterprises noted that businesses’ diligence responsibilities should reflect the greater complexity and risk for harm in some scenarios ( A/75/212 , paras. 41-49). Similarly, in case decision 2021-001-FB-FBR the Board recommended that Facebook “ensure adequate resourcing and expertise to assess risks of harm from influential accounts globally,” recognizing that Facebook should devote attention to regions with greater risks. In this case, the Board found that these heightened responsibilities should not lead to default removal, as the stakes are high in both leaving up harmful content and removing content that poses little or no risk of harm. While Facebook’s concern about hate speech in Myanmar is well founded, it also must take particular care to not remove political criticism and expression, in this case supporting democratic governance. The Board noted that Facebook’s policy of presuming that profanity mentioning national origin (in this case “တရုတ်”) refers to states and people may lead to disproportionate enforcement in some linguistic contexts, such as this one, where the same word is used for both. The Board also noted that the impact of this removal extended beyond the case, as Facebook indicated it was used in classifier training as an example of content that violated the Hate Speech Community Standard. Given the above, international human rights standards support restoring the content to Facebook. 9. Oversight Board decision The Oversight Board overturns Facebook’s decision to remove the content and requires the content to be restored. Facebook is obligated under the Board’s Charter to apply this decision to parallel contexts, and should mark this content as non-violating if used in classifier training. 10. Policy recommendation Facebook should ensure that its Internal Implementation Standards are available in the language in which content moderators review content. If necessary to prioritize, Facebook should focus first on contexts where the risks to human rights are more severe. *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" ig-0u6fla5b,Ayahuasca brew,https://www.oversightboard.com/decision/ig-0u6fla5b/,"December 9, 2021",2021,December,"TopicCultural events, Health, ReligionCommunity StandardRegulated goods","Policies and TopicsTopicCultural events, Health, ReligionCommunity StandardRegulated goods",Overturned,Brazil,The Oversight Board has overturned Meta's decision to remove a post discussing the plant-based brew ayahuasca.,38451,5883,"Overturned December 9, 2021 The Oversight Board has overturned Meta's decision to remove a post discussing the plant-based brew ayahuasca. 
Standard Topic Cultural events, Health, Religion Community Standard Regulated goods Location Brazil Platform Instagram Public Comments 2021-013-IG-UA Note: On October 28, 2021, Facebook announced that it was changing its company name to Meta. In this text, Meta refers to the company, and Facebook continues to refer to the product and policies attached to the specific app. The Oversight Board has overturned Meta’s decision to remove a post discussing the plant-based brew ayahuasca. The Board found that the post did not violate Instagram’s Community Guidelines as they were articulated at the time. Meta’s human rights responsibilities also supported restoring the content. The Board recommended that Meta change its rules to allow users to discuss the traditional or religious uses of non-medical drugs in a positive way. About the case In July 2021, an Instagram account for a spiritual school based in Brazil posted a picture of a dark brown liquid in a jar and two bottles, described as ayahuasca in the accompanying text in Portuguese. Ayahuasca is a plant-based brew with psychoactive properties that has religious and ceremonial uses including among Indigenous groups in South America. The text states that “AYAHUASCA IS FOR THOSE WHO HAVE THE COURAGE TO FACE THEMSELVES” and includes statements that ayahuasca is for those who want to “correct themselves,” “enlighten,” “overcome fear” and “break free.” The post was flagged for review by Meta’s automated systems because it had received around 4,000 views and was “trending.” It was then reviewed by a human moderator and removed. Key findings Meta told the Board it removed the post because it encouraged the use of ayahuasca, a non-medical drug. The company stated that “the user described ayahuasca with a heart emoji, referred to it as ‘medicine,’ and stated that it ‘can help you.’” The Board finds that while the content violated Facebook’s Regulated Goods Community Standard which prohibits content which speaks positively about the use of non-medical drugs, it did not violate Instagram’s Community Guidelines which, at the time, only covered the sale and purchase of illegal or prescription drugs. Meta’s international human rights responsibilities support the Board’s decision to restore the content. The Board is concerned that the company continues to apply Facebook’s Community Standards on Instagram without transparently telling users it is doing so. The Board does not understand why Meta cannot immediately update the language in Instagram’s Community Guidelines to tell users this. Meta also did not tell the user in this case what part of its rules they violated. The Board also disagrees with Meta’s claim that prohibiting positive comments about ayahuasca was necessary in this case to protect public health. The post, which mainly discussed the use of ayahuasca in a religious context, was not closely linked to the possibility of harm. The user did not post instructions for using ayahuasca or information about its availability. To respect diverse traditional and religious practices, the Board recommends that Meta change its rules on regulated goods to allow positive discussion of traditional or religious uses of non-medical drugs which have a recognized traditional or religious use. The Oversight Board’s decision The Board overturns Meta’s decision to remove the content, and requires that the post be restored. As a policy advisory statement, the Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 
1. Decision summary The Oversight Board overturns Meta’s decision to remove an Instagram post discussing ayahuasca in the context of religious or traditional use. The Board concludes that while the content violates Facebook’s Community Standards and the updated Instagram Community Guidelines, the platforms’ stated values and international human rights principles support restoring the content. The Board also recommends that Instagram and Facebook adjust the relevant policies to permit positive discussion of religious or traditional uses of non-medical drugs where there is historic evidence of such use. 2. Case description In July 2021, an Instagram account for a spiritual school based in Brazil posted a picture of a dark brown liquid in a jar and two bottles, described as ayahuasca in the accompanying text in Portuguese. Ayahuasca is a plant-based brew with psychoactive properties that has deeply-rooted religious and ceremonial uses among Indigenous and other groups in some South American countries, and related communities elsewhere. Ayahuasca contains plants which are sources of dimethyltryptamine (DMT), a substance that is prohibited under Schedule I of the 1971 UN Convention on Psychotropic Substances and the law of many countries, though there are relevant exceptions under international law for substances containing DMT such as ayahuasca, and exceptions under some national laws, including in Brazil, for other uses such as religious and Indigenous use. The text states that “AYAHUASCA IS FOR THOSE WHO HAVE THE COURAGE TO FACE THEMSELVES” and includes statements that ayahuasca is for those who want to “correct themselves,” “enlighten,” “overcome fear” and “break free.” It further states ayahuasca is a “remedy” and “can help you” if one has humility and respect. It ends with “Ayahuasca, Ayahuasca!/Gratitude, Queen of the Forest!” The content was viewed over 15,500 times and no users reported it. The post was flagged for review by Meta’s automated systems because it had received around 4,000 views and was “trending.” Meta specified neither the image nor the text triggered automatic review. The post was subsequently reviewed by a human moderator and removed. Meta told the Board that it was removed for violating Facebook’s Community Standard on Regulated Goods, but later stated it removed the content for “violating the Instagram Community Guidelines, which include a link to the Facebook Community Standard on Regulated Goods.” Meta notified the user that the post went against Instagram’s Community Guidelines, stating “post removed for sale of illegal or regulated goods.” The messaging also noted that Meta removes “posts promoting the use of hard drugs.” After another human review of the content, Meta upheld its initial decision to remove the content. Meta notified the user of its decision and they then appealed to the Oversight Board. 3. Authority and scope According to its Charter, the Oversight Board is an independent body designed to protect free expression by making principled, independent decisions about important pieces of content. It operates transparently, exercising neutral, independent judgement and rendering decisions impartially. The Board has the power to review Meta’s decision following an appeal from the user whose post was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or reverse that decision, and its decision is binding on Meta (Charter Article 4). The Board’s decisions may include policy advisory statements with recommendations. 
These recommendations are non-binding, but Meta must respond to them (Charter Article 3, Section 4). 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Meta’s content policies This case involves Instagram’s Community Guidelines and Facebook’s Community Standards. Meta’s Transparency Center states that “Facebook and Instagram share content policies. This means if content is considered violating on Facebook, it is also considered violating on Instagram.” At the time this content was posted and removed, Instagram’s Community Guidelines prohibited “buying or selling illegal or prescription drugs (even if legal in your region)” under the subheading “Follow the Law.” Users are instructed to “Remember to always follow the law when offering to sell or buy other regulated goods.” The phrase “regulated goods” links to Facebook’s Community Standard on Regulated Goods. On October 26, 2021, Meta updated this section of Instagram’s Community Guidelines in response to previous Board recommendations (see Section 8.3 below) and prompted by the Board’s decision to review this case. Under the subheading “Follow the Law,” Meta removed the reference to “illegal or prescription drugs,” replacing it with “non-medical or pharmaceutical drugs,” and added language to prohibit “buying or selling non-medical or pharmaceutical drugs [and] remove content that attempts to trade, co-ordinate the trade of, donate, gift, or ask for non-medical drugs, as well as content that either admits to personal use (unless in the recovery context) or coordinates or promotes the use of non-medical drugs.” Facebook’s Community Standard on Regulated Goods has a section on non-medical drugs which prohibits content that “[c]oordinates or promotes (by which we mean speaks positively about, encourages the use of, or provides instructions to use or make) non-medical drugs.” II. Meta’s values Meta’s values are outlined in the introduction to Facebook’s Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits “Voice” in service of four values, the relevant ones in this case being “Safety” and “Dignity”: “Safety”: We’re committed to making Facebook a safe place. Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. “Dignity”: We believe that all people are equal in dignity and rights. III. Human rights standards Other sources of international law that informed this decision include: 5. User statement The user stated in their appeal that they are certain the post does not violate Instagram’s Community Guidelines, as their page is informative and never encouraged or recommended the purchase or sale of any product prohibited by the Community Guidelines. They said that they took the photo at one of their ceremonies, which are regulated and legal. According to the user, the account aims to demystify the sacred ayahuasca drink. They said that there is a great lack of knowledge about ayahuasca. The user stated that it brings spiritual comfort to people and their ceremonies can improve societal wellbeing. They further state that they have posted the same content previously on their account and that post remains online. 6. 
Explanation of Meta’s decision In its explanation for the decision, Meta stated it removed this content because it encouraged the use of ayahuasca, a non-medical drug. According to Meta, its decision aligns with Facebook’s Community Standards, Meta’s values, and international human rights principles. Meta stated that the content violated Facebook’s Community Standards because “the user described ayahuasca with a heart emoji, referred to it as ‘medicine,’ and stated that it ‘can help you.’” Following a question from the Board about whether the content was removed for violating Instagram’s Community Guidelines or Facebook’s Community Standards, Meta responded the content was removed “for violating the Instagram Community Guidelines, which include a link to the Facebook Community Standards on Regulated Goods.” Specifically, the user violated Instagram’s prohibition on content “buying or selling illegal or prescription drugs (even if legal in your region).” Meta also cited another line of the Community Guidelines which links to the Community Standard on Regulated Goods, which “clarifies that Facebook prohibits content that ‘[c]oordinates or promotes (by which we mean speaks positively about [...] non-medical drugs.’” Referring to Meta’s values, the company stated that “Safety” displaced “Voice.” Meta noted that users are permitted to advocate for the legalization of non-medical drugs and to discuss the medical and scientific benefits of non-medical drugs, but that there is no religious or traditional use allowance. Meta argued that this rule strikes the correct balance between “Voice” and “Safety.” Meta also stated that prohibiting this content follows human rights principles. It stated that it considered the right to freedom of expression under Article 19 ICCPR and the right to freedom of religion or belief under Article 18 ICCPR, and argued that its decision satisfied the conditions required to restrict these rights. According to Meta, Facebook’s Community Standard is easily accessible and its non-public definition of non-medical drug as a “substance that causes ‘a marked change in consciousness’” is apparent. It further argued that the decision sought to protect public health. It stated that dimethyltryptamine (DMT), one of the active hallucinogenic substances in ayahuasca, poses significant safety risks, citing the 1971 UN Convention on Psychotropic Substances. Meta cited court decisions from the Supreme Court of the Netherlands (ECLI:NL:HR:2019:1456 (Case No. 18/01356, Supreme Court of the Netherlands, Oct. 1, 2019)) and the European Court of Human Rights (Franklin-Beentjes and Ceflu-Luz da Floresta v. The Netherlands (Case No. 28167/07, European Court of Human Rights, May 6, 2014 (dec.))) which found that a ban on the use of ayahuasca was a necessary and proportionate restriction on the right to freedom of religion. It argued that “since a complete ban on ayahuasca use is necessary and proportional under human rights principles,” therefore, “the lesser restriction of a non-state actor proscribing its promotion is likewise permissible.” Meta also provided information regarding the process followed in this case. In its response, Meta made a distinction between the ability to “appeal” a decision, and the ability to “disagree with a decision.” Meta explained that where people are offered the option to “disagree with a decision,” there is no guaranteed review of the decision, but Meta might review it if capacity allows. 
Meta said that while it could “not always offer people the option to appeal” due to capacity restrictions related to the pandemic, it reviewed decisions people disagreed with “when it has human review capacity to do so.” Meta told the Board that the user appealed the removal. However, the Board reviewed Meta’s transparency reports on Instagram policy enforcement and noted that no content was reported as appealed for violating the Regulated Goods policy since mid-2020. After the Board asked about this, Meta stated that “the user received outdated appeal messaging” in error. Meta explained the user should have received messaging stating they could “disagree with decision” and that it is currently investigating why they received the wrong messaging. 7. Third-party submissions The Oversight Board received seven public comments in this case. One comment was from Latin America and the Caribbean and six comments came from the United States and Canada. The submissions covered the following themes: the importance of recognizing the traditional practice and religious uses of ayahuasca; the need to take the local social and legal context into account during content moderation; the importance of the local context when the Community Standard is justified by reference to off-platform harm; academic studies as well as anecdotal evidence of harms and treatment benefits of hallucinogens; and the need for consistency in applying Facebook’s Community Standards. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board looks at the question of whether content should be restored through three lenses: Facebook’s Community Standards and Instagram’s Community Guidelines; Meta’s publicly stated values; and its human rights responsibilities. The Board concludes that while the content violates the updated Community Guidelines and Community Standards, Meta’s values and international human rights principles support restoring the content. The Board recommends changes in Meta’s content policies to allow users to make positive statements regarding traditional and religious uses of non-medical drugs where there is historic evidence of such use. 8.1 Compliance with Meta’s content policies The Board agrees with Meta that the content violates the Instagram Community Guidelines as updated, and the Facebook Community Standards. However, as discussed below, the Board nonetheless concludes that the content should be restored and makes a policy recommendation to change Meta’s relevant standards. I. Instagram’s Community Guidelines At the time the content was posted, the Instagram Community Guidelines prohibited the “buying or selling” of “illegal or prescription drugs (even if legal in your region)” and instructed users to “always follow the law when offering to buy or sell other regulated goods.” The Board observes that the reference to “illegal drugs,” “even if legal in your region” is confusing and contradictory. The user’s post made no reference to the sale or purchase of ayahuasca. The Instagram Guidelines also refer to illegality and following the law. The user is based in Brazil, where the use of ayahuasca is permissible for religious rituals and by Indigenous communities (see the 2010 resolution of Brazil’s National Anti-Drug Council (CONAD)). Ayahuasca has been held to be permitted for some religious purposes under federal law in the United States, where Meta is incorporated (see the US Supreme Court case of Gonzales v. 
O Centro Espirita Beneficente Uniao Do Vegetal , 546 U.S. 418 (2006)). In this respect, there is no indication the user was not following the law. Therefore, content positively discussing the use of ayahuasca as part of a religious practice that the user understands to be legal did not violate the Instagram Community Guidelines as communicated to the public. In response to the Board’s questions as to how this content violated Instagram’s rules, Meta stated that the enforcement of the Guideline “does not turn on either the legality of the substance or the nature of the intended use,” despite the text of the rule which referred to “illegal drugs” and “follow[ing] the law.” Meta has stated that in addition to the Instagram Community Guidelines, the Facebook Community Standards also apply to content on Instagram. The Board emphasizes that this relationship is still not made sufficiently clear to users, in particular where the two sets of rules seem to differ, as they did at the time. At the time this content was posted, it did not violate the Instagram Community Guidelines as then articulated, which were confined to content involving sale or purchase, although it did violate the linked Facebook Community Standard. As mentioned above, Meta updated the Instagram Community Guidelines on October 26, 2021 to replace the reference to “illegal or prescription drugs” with “non-medical or pharmaceutical drugs,” and explicitly add a prohibition on “content that either admits to personal use (unless in the recovery context) or coordinates or promotes the use of non-medical drugs,” which reflects the language of the Facebook Regulated Goods Community Standard. II. Facebook’s Community Standards There is a link to Facebook’s Regulated Goods Community Standard from the part of the Instagram Community Guideline stating “always follow the law when offering to buy or sell other regulated goods” – the Board notes that this does not make it clear to users that the full Regulated Goods Community Standard applies to all content on Instagram. The Regulated Goods Community Standard prohibits “speak[ing] positively about” or “encourag[ing]” the use of non-medical drugs. Unlike the pre-October 26 Instagram Guideline, this Standard is not limited to illegal drugs. Meta does not provide a public definition of non-medical drugs, but has stated to the Board that it includes substances which can be used to achieve a “high or altered mental state.” Meta states that it treats content speaking positively about non-medical drugs that are used to achieve an “altered mental state” as part of a “spiritual or religious practice” the same as other content speaking positively about non-medical drugs used to achieve an altered mental state. Meta’s Internal Implementation Standards allow discussion of the “medical or scientific merits of non-medical drugs.” The post contains some language about general healing properties of ayahuasca, and other language rooted in traditional and religious practices. On balance, the Board finds that the latter predominates, and the discussion here should be understood as an affirmation of those practices. Experts consulted by the Board stated that the text in this post is part of known prayers and rituals and the reference to “Queen of the Forest” is a reference to the Virgin Mary within these traditions. The Board agrees with the company that the content violates the Facebook Regulated Goods Community Standard, as incorporated by reference in the Instagram Guidelines. 
Ayahuasca may be used to achieve an altered mental state, the content spoke positively about it, and no allowance applied. The Board concludes that although the content violates the Regulated Goods Community Standard, Meta’s values and international human rights standards support the Board's decision to restore the content, as analyzed in Sections 8.2 and 8.3 below. The Board also makes policy recommendations to bring the Community Standard in line with Meta’s values and international human rights standards. 8.2 Compliance with Meta’s values The Board concludes that Meta’s decision to remove the content was not consistent with the company’s values. In this case, as in many, Meta's values of “Voice,” “Safety,” and “Dignity” point in different directions. Meta's decision to take down the post weighted “Safety” over “Voice.” The Board would balance the values differently, believing that the genuine but not particularly strong interests in “Safety” are outweighed in this context by the value of “Voice” and the importance of recognizing the “Dignity” of those engaging in traditional or religious uses where there is historic evidence of such use, including by Indigenous and religious communities. Scientific research indicates that the use of ayahuasca in a controlled context in traditional and religious ceremonies is not linked to a serious risk of harm. Meta cited the European Court of Human Rights case of Franklin-Beentjes and Ceflu-Luz da Floresta v. The Netherlands (Case No. 28167/07, European Court of Human Rights, May 6, 2014) to demonstrate the risks of ayahuasca use. Other courts, however, have reached different conclusions – for example, the United States Supreme Court, in considering the risk of harm from “the circumscribed, sacramental use of hoasca” by members of an ayahuasca based religion, found that the government had not put forward sufficient evidence of harm from religious use, which was its burden, to justify the prohibition in these circumstances. Meta’s rationale does not appear to have taken into account controlled uses of ayahuasca which aim to mitigate health risks. In light of scientific research, Meta’s rationale did not demonstrate the danger of this post to the value of “Safety” in a manner sufficient to displace “Voice” and “Dignity” to the extent to justify removal of the post. These interests are discussed further in Section 8.3.III below. 8.3 Compliance with Meta’s human rights responsibilities The Board finds that human rights norms point in the direction of restoring the post to Instagram. Meta has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights ( UNGPs ). Its Corporate Human Rights Policy states this commitment includes the International Covenant on Civil and Political Rights (ICCPR). Freedom of expression Article 19 of the ICCPR provides for broad protection of expression. The Human Rights Committee has stated that “freedom of expression is also indispensable to the enjoyment of all other rights [including freedom of religion and belief]” (A/HRC/40/58, para. 5). Expression serves as “‘enabler’ of other rights, including [...] the right to take part in cultural life” ( A/HRC/17/27 para. 22, see also General Comment 21, paras. 13-19, 37, 43). Freedom of expression facilitates the promotion of the diversity of cultural expressions ( UNESCO 2005 Convention ). 
In this case, as noted above, the post discussed the usage of ayahuasca in the context of a traditional or religious practice in the region where the post originated. Although ayahuasca’s use has recently spread to a wider population, it is a central part of the ceremonial practices of certain Indigenous and religious groups in Latin America and in the Latin American diaspora. Article 19 requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). As stated above, Meta has voluntarily committed itself to respecting human rights standards. I. Legality (clarity and accessibility of the rules) The first part of the test requires rules restricting expression to be clear and accessible so that those affected know the rules and may follow them (General Comment No. 34, paras. 24-25). Applied to Meta, users of its platforms should be able to understand what is allowed and what is prohibited. In this case, the Board concludes that Meta falls short of meeting that responsibility. The Board has repeatedly drawn attention to the lack of clarity for Instagram users about what policies apply to their content. See case decision 2020-004-IG-UA about breast cancer awareness in Brazil and case decision 2021-006-IG-UA about commentary on Ocalan’s confinement. The Board reiterates that concern here, notwithstanding changes made to these rules. Instagram’s Community Guidelines do not clearly inform users that Facebook’s Community Standards also apply. Although some sections of the Guidelines link to the Community Standards, the section of the Guidelines that Meta argued the user violated (“buying or selling illegal or prescription drugs”) contains no hyperlinks to Facebook’s Community Standards. Users would need to consult Transparency Center reports to find language stating “Facebook and Instagram share content policies. This means if content is considered violating on Facebook, it is also considered violating on Instagram.” The Board notes that there are exceptions to the shared policies – in response to the Board’s previous recommendation to clarify the relationship between the Guidelines and the Standards in case 2021-006-IG-UA, Meta has stated that, for example, people on Instagram may have multiple accounts for different purposes, while people on Facebook can only have one account using their “authentic identity.” While Meta has committed to provide additional information about this relationship to users and provide an update on its progress by the end of 2021 in response to earlier Board recommendations, the Board is concerned that the company continues to apply Facebook’s Community Standards on Instagram without transparently telling users it is doing so. The Board does not understand why Meta is unable to immediately provide users with a greater degree of transparency by updating language in the Guidelines. While there may be reasons that specific policies should apply on one platform and not the other, users need to know when this is the case. This case generates particular confusion because, at the time the content was posted, it did not violate Instagram’s Community Guidelines as communicated to the user, but did violate Facebook’s Community Standards. At that time, the Community Guidelines prohibited the content related to purchase and sale of illegal drugs, and emphasized following the law. 
As noted above, in both Brazil, where the user appears to be based, and the United States, where Meta is based, there are certain exceptions in national law that permit ayahuasca's use in the context of religious (and in Brazil, Indigenous) use. It is not clear to the Board how an Instagram user should have known this content was prohibited, given that the user was not buying or selling illegal drugs and believed they were following the law. As noted above, Meta updated the Instagram Community Guidelines on October 26, 2021 to replace the reference to “illegal or prescription drugs” with “non-medical or pharmaceutical drugs” and explicitly add a prohibition on “content that either admits to personal use (unless in the recovery context) or coordinates or promotes the use of non-medical drugs,” which reflects the language of the Regulated Goods Community Standard. This update provides a clearer and more accurate representation of the rules Meta applies. The Board further finds that the definitions of substances under the Facebook Community Standard on Regulated Goods are not sufficiently comprehensible and transparent to users. The Standard prohibits content related to certain goods, including guns, marijuana, pharmaceutical drugs, non-medical drugs, and alcohol and tobacco. Meta does not define non-medical drugs for users, but told the Board it maintains an internal definition for moderators, as well as a confidential list of non-medical drugs. Lastly, the user was not told what part of Meta’s content policies they violated. According to the company, the user received messaging stating the post was removed for “promoting the use of hard drugs.” As this term appears nowhere in Instagram’s Community Guidelines or Facebook’s Community Standards, the Board finds Meta did not clearly communicate the policy violation to the user. The Board has made recommendations in this regard in previous cases (see case decision 2021-005-FB-UA about the ‘Two Buttons’ meme and case decision 2020-005-FB-UA about a Nazi quote). Given these problems, the Board finds that Meta did not meet its responsibility to make its rules clear and accessible to users. The Board reiterates below previous recommendations on the relationship between Instagram’s Community Guidelines and Facebook’s Community Standards and the importance of informing users of how their content violated company policy. II. Legitimate aim Any state restriction on expression should pursue one of the legitimate aims listed in Article 19, para. 3 of the ICCPR. The Board has found that these aims may also motivate Meta’s content policies. Here, Meta cited public health as the aim of its policy, and the Board agrees that this qualifies as a legitimate aim. III. Necessity and proportionality The Board concludes that international standards on necessity and proportionality point in the direction of restoring this content to Instagram. It disagrees with Meta’s argument that prohibiting positive comments about ayahuasca in this content was necessary to protect public health. In this case, the Board found that there was no direct and immediate connection between the content, which primarily discussed the use of ayahuasca in a religious context, and the possibility of harm. The user did not post instructions for using ayahuasca or information about its availability. 
Both the 1971 United Nations Convention on Psychotropic Substances and the 1988 United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances recognize exceptions for substances which are “traditionally used” and for “traditional licit uses, where there is historic evidence of such use” respectively. The scientific literature indicates that the use of ayahuasca in a controlled context in traditional and religious ceremonies is not linked to a serious risk of harm. Given these points, the current rule prohibiting all positive discussions of non-medical drugs is overbroad. The Board recognizes that the exceptions in these international instruments, as well as regional and national decisions, pertain to possession and use, not to speech. In light of the primacy of freedom of expression, the Board concludes that in most contexts it is not necessary and proportional to forbid speech that pertains to conduct that is itself permitted under a relevant exception. Meta has argued that discussion of the medical or scientific benefits of ayahuasca or advocating for its legality would not pose a risk, but that positive commentary about ayahuasca more generally in a traditional or religious context poses a severe enough risk to merit removal. In the Board’s view, Meta has not adequately explained this difference, nor is it consistent with its approach to other substances, such as marijuana, tobacco, and alcohol. Positive discussion of these substances is permitted despite the fact that they present serious health risks. The Board has considered other measures by which Meta can promote respect for public health when moderating non-medical drug-related content. Meta currently advises users who search for certain drug-related terms that they may be seeking content that violates content policies, and recommends resources to address drug abuse. However, this response does not seem to be generated in the same way for searches for all non-medical drugs, and does not appear when searching for ayahuasca on either Facebook or Instagram. Applying such messaging more consistently for users seeking drug-related content may help Meta better respect public health. Given that removal did not align with Meta’s values and that human rights principles and international law point in the direction of permitting this expression, the Board has decided that the content should be restored. Some Board Members, however, emphasized that content such as this may be restricted in accordance with human rights principles. For these Members, if Meta had a clearly articulated and non-arbitrary policy that restricted positive discussion about non-medical drugs, human rights norms would not bar Meta as a private company from enforcing that policy. Other Members believe that Facebook’s Community Standards are not inconsistent with international human rights law given considerations of enforcement at scale and the need to ensure the administrability of the rule. For these Members, a broad allowance for “traditional and religious” drugs would not be administrable and would likely be subject to users attempting to “game” the system. Enforcing such an allowance would require a case-by-case examination that would give rise to a risk of significant uncertainty, which weighs in favor of a general rule that can more easily be enforced (see, for a comparative perspective: European Court of Human Rights, Case of Animal Defenders International v the United Kingdom, para. 108). 
The Board recommends below that Meta modify its rules on Regulated Goods to permit positive discussion of traditional and religious uses of non-medical drugs where there is historic evidence of such use, and make public all allowances to these policies. While in agreement that Meta’s policies should be altered, a minority of the Board believe that positive statements in general about non-medical drugs with a recognized traditional or religious use should not be prohibited, regardless of whether they discuss those traditional or religious uses. The minority believes that Meta should not be in the position of attempting to distinguish posts that positively discuss traditional and religious practice, finding this to be too porous a line for effective enforcement. They observe that this modification can be more easily administered by removing non-medical drugs with traditional and religious uses from the internal drug list and instructing content moderators to consult the drug list when in doubt. 9. Oversight Board decision The Board overturns Meta’s decision to take down the content, requiring the post to be restored. 10. Policy advisory statement Enforcement 1. The Board reiterates its recommendation from case decision 2020-004-IG-UA and case decision 2021-006-IG-UA that Meta should explain to users that it enforces the Facebook Community Standards on Instagram, with several specific exceptions. The Board notes Meta’s response to these recommendations. While Meta may be taking other actions to comply with the recommendations, the Board recommends Meta update the introduction to the Instagram Community Guidelines (“The Short” Community Guidelines) within 90 days to inform users that if content is considered violating on Facebook, it is also considered violating on Instagram, as stated in the company’s Transparency Center, with some exceptions. 2. The Board reiterates its recommendation from case decision 2021-005-FB-UA and case decision 2020-005-FB-UA that Meta should explain to users precisely what rule in a content policy they have violated. Content Policy 3. To respect diverse traditional and religious expressions and practices, the Board recommends that Meta modify the Instagram Community Guidelines and Facebook Regulated Goods Community Standard to allow positive discussion of traditional and religious uses of non-medical drugs where there is historic evidence of such use. The Board also recommends that Meta make public all allowances, including existing allowances. *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. 
Return to Case Decisions and Policy Advisory Opinions" ig-1bmh3dq6,Azov Removal,https://www.oversightboard.com/decision/ig-1bmh3dq6/,"December 8, 2023",2023,December,TopicWar and conflictCommunity StandardDangerous individuals and organizations,Dangerous individuals and organizations,Overturned,Ukraine,"A user appealed Meta’s decision to remove an Instagram post asking, “where is Azov?” in Ukrainian. The post's caption calls for soldiers of the Azov Regiment in Russian captivity to be returned.",5833,863,"Overturned December 8, 2023 A user appealed Meta’s decision to remove an Instagram post asking, “where is Azov?” in Ukrainian. The post's caption calls for soldiers of the Azov Regiment in Russian captivity to be returned. Summary Topic War and conflict Community Standard Dangerous individuals and organizations Location Ukraine Platform Instagram This is a summary decision . Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not involve a public comment process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove an Instagram post asking, “where is Azov?” in Ukrainian. The post's caption calls for soldiers of the Azov Regiment in Russian captivity to be returned. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In December 2022, an Instagram user created a post with an image of the Azov Regiment symbol. Overlaying the symbol was text in Ukrainian asking, “where is Azov?” The caption stated that more than 700 Azov soldiers remain in Russian captivity, with their conditions unknown. The user calls for their return, stating: “we must scream until all the Azovs are back from captivity!” The user appealed the removal of the post, emphasizing the importance of sharing information during times of war. The user also highlighted that the content did not violate Meta’s policies, since Meta allows content commenting on the Azov Regiment. The post received nearly 800 views and was detected by Meta’s automated systems. Meta originally removed the post from Instagram under its Dangerous Organizations and Individuals (DOI) policy , which prohibits content that “praises,” “substantively supports” or “represents” individuals and organizations that Meta designates as dangerous. However, Meta allows “discussions about the human rights of designated individuals or members of designated dangerous entities, unless the content includes other praise, substantive support or representation of designated entities or other policy violations, such as incitement to violence.” Meta told the Board that it removed the Azov Regiment from its Dangerous Organizations and Individuals list in January 2023. A Washington Post article states that Meta now draws a distinction between the Azov Regiment, which it views as under formal control of the Ukrainian government, and other elements of the broader Azov movement, some of which the company considers far-right nationalists and still designates as dangerous. 
After the Board brought this case to Meta’s attention, the company determined that its removal was incorrect and restored the content to Instagram. The company acknowledged that the Azov Regiment is no longer designated as a dangerous organization. Additionally, Meta recognized that regardless of the Azov Regiment’s designation, this post falls under the exception that allows references to dangerous individuals and organizations when discussing the human rights of individuals and members of designated entities. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights shortcomings in the updating of Meta’s Dangerous Organizations and Individuals list and its enforcement, which raises greater concerns during times of war. The case also illustrates the systemic challenges in enforcing exceptions to Meta’s policy on Dangerous Organizations and Individuals. Previously, the Board issued a recommendation stating that Meta’s Dangerous Organizations and Individuals policy should allow users to discuss alleged human rights abuses of members of dangerous organizations ( Öcalan’s Isolation decision, recommendation no. 5), which Meta committed to implement. Furthermore, the Azov Regiment was removed from Meta’s Dangerous Organizations and Individuals list in January 2023. The Board has issued a recommendation stating that, when any new policy is adopted, internal guidance and training should be provided to content moderators ( Öcalan’s Isolation decision, recommendation no. 8). The Board has also issued recommendations on the enforcement accuracy of Meta’s policies by calling for further transparency regarding enforcement error rates on the “praise” and “support” of dangerous individuals and organizations ( Öcalan’s Isolation decision, recommendation no. 12), and the implementation of an internal audit procedure to learn from past automated enforcement mistakes ( Breast Cancer Symptoms and Nudity decision, recommendation no. 5). Fully implementing these recommendations could help Meta decrease the number of similar content moderation errors. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" ig-24cw5dhi,Lebanese activist,https://www.oversightboard.com/decision/ig-24cw5dhi/,"September 13, 2023",2023,,TopicFreedom of expressionCommunity StandardDangerous individuals and organizations,Dangerous individuals and organizations,Overturned,"Lebanon, United States","A user appealed Meta’s decision to remove an Instagram post of an interview where an activist discusses Hassan Nasrallah, the Secretary General of Hezbollah. This case highlights the over-enforcement of Meta’s Dangerous Organizations and Individuals policy. 
This can have a negative impact on users’ ability to share political commentary and news reporting, resulting in an infringement of users’ freedom of expression. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.",6040,890,"Overturned September 13, 2023 A user appealed Meta’s decision to remove an Instagram post of an interview where an activist discusses Hassan Nasrallah, the Secretary General of Hezbollah. This case highlights the over-enforcement of Meta’s Dangerous Organizations and Individuals policy. This can have a negative impact on users’ ability to share political commentary and news reporting, resulting in an infringement of users’ freedom of expression. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Summary Topic Freedom of expression Community Standard Dangerous individuals and organizations Location Lebanon, United States Platform Instagram This is a summary decision . Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas of potential improvement in its policy enforcement. Case summary A user appealed Meta’s decision to remove an Instagram post of an interview where an activist discusses Hassan Nasrallah, the Secretary General of Hezbollah. This case highlights the over-enforcement of Meta’s Dangerous Organizations and Individuals policy. This can have a negative impact on users’ ability to share political commentary and news reporting, resulting in an infringement of users’ freedom of expression. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case description and background In January 2023, the verified account of a Lebanese activist posted a video of himself being interviewed by a news anchor in Arabic. The news anchor begins by jokingly asking the activist whether a professional soccer player, or Hassan Nasrallah, the Secretary General of Hezbollah, is more useful. The activist responds by praising the soccer player and criticizing Nasrallah. The activist highlights the plane hijackings and kidnappings conducted by Hezbollah, along with Nasrallah’s support of former Lebanese politicians Nabih Berri and Michel Aoun—both of whom the activist claims were unwanted by the Lebanese people. Throughout the interview, video clips of Nasrallah play on mute. The caption the activist added continues this comparison, joking: “Let’s see how many goals Nasrallah can score first.” The post received 137,414 views and was reported to the Board 11 times. Meta initially removed the post from Instagram under its Dangerous Organizations and Individuals policy. In his appeal to the Board, the user claimed that Hezbollah uses coordinated reporting to remove content that criticizes the organization. The Board has not independently verified the claim that coordinated reporting was responsible for the removal of this content, or for any of the reports relating to the content. 
The user claimed that “Instagram’s community guidelines are being used to extend Hezbollah’s oppression against peaceful citizens like me.” After the Board brought this case to Meta’s attention, the company determined that its removal was incorrect and restored the content to Instagram. The company acknowledged that while Hassan Nasrallah is designated as a dangerous individual, Meta lets users criticize or neutrally report on the actions of a dangerous organization or individual. Specifically, Meta allows “[an] expression of a negative perspective about a designated entity or individual,” including, “disapproval, disgust, rejection, criticism, mockery etc.” Meta acknowledged that the video was posted with a satirical and condemning caption, making the content non-violating. Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors, and to increase the fair treatment of Facebook and Instagram users. Case significance This case highlights over-enforcement of Meta’s Dangerous Organizations and Individuals prohibition on “praise,” which can have a negative impact on users’ capacity to share political commentary and news reporting on Meta’s platforms. The Board has issued recommendations relating to the Dangerous Organizations and Individuals policy’s prohibition on praise of designated entities. This includes a recommendation to create a “reporting” allowance which would allow for positive statements about dangerous organizations and individuals in news reporting, which Meta committed to implement ( Mention of the Taliban in news reporting , recommendation no. 4); and to assess the accuracy of the “reporting” allowance to identify systemic issues causing enforcement errors, a recommendation which Meta is still assessing ( Mention of the Taliban in news reporting , recommendation no. 5). Furthermore, the Board has issued recommendations on clarifying the policy for users. This includes a recommendation to add criteria and illustrative examples to Meta’s DOI policy to increase understanding of exceptions, specifically around neutral discussion and news reporting—a recommendation Meta is still assessing ( Shared Al Jazeera post , recommendation no. 1). Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to the company’s attention. 
Return to Case Decisions and Policy Advisory Opinions" ig-2pj00l4t,Reclaiming Arabic words,https://www.oversightboard.com/decision/ig-2pj00l4t/,"June 13, 2022",2022,,"TopicLGBT, Marginalized communities, Sex and gender equalityCommunity StandardHate speech","Policies and TopicsTopicLGBT, Marginalized communities, Sex and gender equalityCommunity StandardHate speech",Overturned,"Egypt, Lebanon, Morocco","The Oversight Board has overturned Meta's original decision to remove an Instagram post which, according to the user, showed pictures of Arabic words that can be used in a derogatory way towards men with effeminate mannerisms.",41453,6488,"Overturned June 13, 2022 The Oversight Board has overturned Meta's original decision to remove an Instagram post which, according to the user, showed pictures of Arabic words that can be used in a derogatory way towards men with effeminate mannerisms. Standard Topic LGBT, Marginalized communities, Sex and gender equality Community Standard Hate speech Location Egypt, Lebanon, Morocco Platform Instagram Reclaiming Arabic words public comments The Oversight Board has overturned Meta’s original decision to remove an Instagram post which, according to the user, showed pictures of Arabic words which can be used in a derogatory way towards men with “effeminate mannerisms.” The content was covered by an exception to Meta’s Hate Speech policy and should not have been removed. About the case In November 2021, a public Instagram account which describes itself as a space for discussing queer narratives in Arabic culture posted a series of pictures in a carousel (a single Instagram post that can contain up to 10 images with a single caption). The caption, written in both Arabic and English, explained that each picture shows a different word that can be used in a derogatory way towards men with “effeminate mannerisms” in the Arabic-speaking world, including the terms “zamel,” “foufou,” and “tante/tanta.” The user stated that the post intended “to reclaim [the] power of such hurtful terms.” Meta initially removed the content for violating its Hate Speech policy but restored it after the user appealed. After being reported by another user, Meta then removed the content again for violating its Hate Speech policy. According to Meta, before the Board selected this case, the content was escalated for additional internal review which determined that it did not, in fact, violate the company’s Hate Speech policy. Meta then restored the content to Instagram. Meta explained that its initial decisions to remove the content were based on reviews of the pictures containing the terms “z***l” and “t***e/t***a.” Key findings The Board finds removing this content to be a clear error which was not in line with Meta’s Hate Speech policy. While the post does contain slur terms, the content is covered by an exception for speech “used self-referentially or in an empowering way,” as well as an exception which allows the quoting of hate speech to “condemn it or raise awareness.” The user’s statements that they did not “condone or encourage the use” of the slur terms in question, and that their aim was “to reclaim [the] power of such hurtful terms,” should have alerted the moderator to the possibility that an exception may apply. For LGBTQIA+ people in countries which penalize their expression, social media is often one of the only means to express themselves freely. The over-moderation of speech by users from persecuted minority groups is a serious threat to their freedom of expression. 
As such, the Board is concerned that Meta is not consistently applying exemptions in the Hate Speech policy to expression from marginalized groups. The errors in this case, which included three separate moderators determining that the content violated the Hate Speech policy, indicate that Meta’s guidance to moderators assessing references to derogatory terms may be insufficient. The Board is concerned that reviewers may not have sufficient resources in terms of capacity or training to prevent the kind of mistake seen in this case. Providing guidance to moderators in English on how to review content in non-English languages, as Meta currently does, is innately challenging. To help moderators better assess when to apply exceptions for content containing slurs, the Board recommends that Meta translate its internal guidance into dialects of Arabic used by its moderators. The Board also believes that to formulate nuanced lists of slur terms and give moderators proper guidance on applying exceptions to its slurs policy, Meta must regularly seek input from minorities targeted with slurs on a country and culture-specific level. Meta should also be more transparent around how it creates, enforces, and audits its market-specific lists of slur terms. The Oversight Board’s decision The Oversight Board overturns Meta’s original decision to remove the content. As a policy advisory statement, the Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decision to remove an Instagram post by an account that explores “queer narratives in Arabic history and popular culture.” The content falls into an exception in Meta’s Hate Speech policy as it reports, condemns, and discusses the negative use of homophobic slurs by others and uses them in an expressly positive context. 2. Case description and background In November 2021, a public Instagram account which identifies itself as a space for discussing queer narratives in Arabic culture posted a series of pictures in a carousel (a single Instagram post that can contain up to 10 images with a single caption). The caption, which the user wrote in both Arabic and English, explains that each picture shows a different word that can be used in a derogatory way towards men with ""effeminate mannerisms"" in the Arabic speaking world, including the terms ""zamel,"" ""foufou,"" and ""tante""/""tanta."" In the caption the user stated that they did not ""condone or encourage the use of these words,"" but explained that they had previously been abused with one of these slurs and that the post was intended ""to reclaim [the] power of such hurtful terms."" The Board’s external experts confirmed that the terms quoted in the content are often used as slurs. The content was viewed approximately 9,000 times, receiving around 30 comments and approximately 2,000 reactions. Within three hours of the content being posted, a user reported it for ""adult nudity or sexual activity"" and another user reported it as ""sexual solicitation."" Each report was dealt with separately by different human moderators. No action was taken by the moderator who reviewed the first report, but the moderator who reviewed the second report removed the content for violating Meta’s Hate Speech policy . The user appealed this removal and a third moderator restored the content to the platform. 
After the content was restored, another user reported it as “hate speech” and another moderator carried out a fourth review, again removing the content. The user appealed a second time and, after a fifth review, another moderator upheld the decision to remove the content. After Meta notified the user of that decision, the user submitted an appeal to the Oversight Board. Meta later confirmed that all of the moderators who reviewed the content were fluent Arabic speakers. Meta explained that the initial decisions to take down the content were based on reviews of the pictures containing the terms “z***l” and “t***e/t***a”. In response to a question from the Board Meta also noted that the company considers another term used in the content, “moukhanath” to be a slur. According to Meta, after the user appealed to the Board but before the Board selected the case, the content was independently escalated for an additional internal review, which determined that it did not violate the Hate Speech Policy. The content was subsequently restored to the platform. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When the Board selects cases like this one, where Meta has agreed that it made an error, the Board reviews the original decision to help increase understanding of why errors occur, and to make observations or recommendations that may contribute to reducing errors and to enhancing due process. 4. Sources of authority The Oversight Board considered the following as sources of authority: I. Oversight Board decisions: The Board’s most relevant decisions to this case include: The Board also refers to recommendations made in: The “Ocalan's isolation decision” (2021-006-IG-UA), the ""Two buttons meme decision” (2021-005-FB-UA), and the “Breast cancer symptoms and nudity decision” (2020-004-IG-UA). II. Meta’s content policies: This case involves Instagram's Community Guidelines and Facebook's Community Standards . Meta's Transparency Center states that ""Facebook and Instagram share Content Policies. This means that if content is considered violating on Facebook, it is also considered violating on Instagram."" Instagram's Community Guidelines state: We want to foster a positive, diverse community. We remove content that contains credible threats or hate speech… It's never OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases. When hate speech is being shared to challenge it or to raise awareness, we may allow it. In those instances, we ask that you express your intent clearly. 
Facebook's Community Standards define hate speech as ""a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity and serious disease or disability."" Meta divides attacks into three tiers. The slurs section of the hate speech policy prohibits “[c]ontent that describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels for the above characteristics.” The rest of tier three prohibits content targeting people with segregation or exclusion. As part of the policy rationale Meta explains that: We recognize that people sometimes share content that includes someone else's hate speech to condemn it or raise awareness. In other cases, speech that might otherwise violate our standards can be used self-referentially or in an empowering way. Our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent. If the intention is unclear, we may remove content. III. Meta’s values: Meta's values are outlined in the introduction to the Facebook Community Standards where the value of ""Voice"" is described as ""paramount"": The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits ""Voice"" in service of four values, two of which are relevant here: ""Safety"": We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. ""Dignity"": We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade them. IV. International human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards which are applied in Section 8 of this decision: 5. User submissions In their statement to the Board, the user described their account as a place “to celebrate queer Arab culture.” They explain that while it is a “safe space,” as its following has grown it has increasingly been targeted by homophobic trolls who write abusive comments and mass-report content. The user explained that their intent in posting the content was to celebrate “effeminate men and boys” in Arab society who are often belittled with the derogatory language highlighted in the post. They further explained that they were attempting to reclaim these derogatory words used against them as a form of resistance and empowerment, and argued that they made clear in the post’s content that they do not condone or encourage the use of the words in the pictures as slurs. The user also stated that they believed their content complied with Meta’s content policies which specifically permit the use of otherwise banned terms when used self-referentially or in an empowering way. 6. 
Meta’s submissions Meta explained in its rationale that the content was originally removed under its Hate Speech Policy as the content contains a prohibited word on Meta’s slur list which is “a derogatory term for gay people.” Meta ultimately reversed its original decision and restored the content as the use of the word concerned fell within Meta’s exceptions for “content that condemns a slur or hate speech, discusses the use of slurs including reports of instances when they have been used, or debates about whether they are acceptable to use.” Meta accepted that the context indicated that the user was drawing attention to the hurtful nature of the word and was therefore non-violating. In response to questions from the Board about how context is relevant in Meta’s application of Hate Speech policy exceptions, Meta stated that “hate speech and slurs are allowed” when they are mocked, condemned, discussed, reported, or used self-referentially and that the responsibility is on the user to make their intent clear when mentioning a slur. In response to another question from the Board, Meta stated that they “did not speculate” as to why the content was erroneously removed because its content reviewers do not document the reasons for their decisions. The Board asked Meta a total of 17 questions, 16 of which were answered fully and 1 of which was answered partially. 7. Public comments The Board received three public comments related to this case. One of the comments was submitted from the United States and Canada, one from the Middle East and North Africa, and one from Latin America and the Caribbean. The submissions covered the following themes: LGBT safety on major social media platforms, the consideration of local context in the enforcement of the hate speech policy, and the changing meanings of Arabic words. To read public comments submitted for this case, please click here . Additionally, as part of ongoing stakeholder engagement efforts, members of the Board held informative and enriching discussions with organizations that work on freedom of expression and the rights of LGBTQIA+ people, including Arabic speakers. This discussion highlighted concerns including: the difficulty in proclaiming a slur to be categorically reclaimed and universally inoffensive when the term in question may continue to be heard as a slur by some audiences, regardless of the intent of the speaker, the problems caused by a lack of input on content policy from LGBTQIA+ advocacy groups and non-English speaking communities, and the risks of content moderation which is not sufficiently sensitive to context. 8. Oversight Board analysis The Board looked at the question of whether this content should be restored through three lenses: Meta's content policies, the company's values, and its human rights responsibilities. This case was selected by the Board as the over-moderation of speech by users from persecuted minority groups is a serious and widespread threat to their freedom of expression. Online spaces for expression are particularly important to groups that face persecution and their rights require heightened attention for protection from social media companies. This case also demonstrates the tension for Meta in seeking to protect minorities from hate speech, while also seeking to create a space where minorities can fully express themselves, including by reclaiming hateful slurs. 8.1 Compliance with Meta’s content policies I. 
Content rules The Board finds that, while slur terms are used, the content is not hate speech because it falls into an exception in the Hate Speech policy for slur words that are “used self-referentially or in an empowering way,” as well as the exception for quoting hate speech to “condemn it or raise awareness.” In the ""Wampum Belt"" and ""Two buttons meme"" decisions the Board noted that it is not necessary for a user to explicitly state their intention in a post in order for it to meet the requirements of an exception to the Hate Speech policy. It is enough for a user to be clear in the context of the post that they are using hate speech terminology in a way for which the policy allows. However, the content in this case included the user’s statements that they did not “condone or encourage” the offensive use of the slur terms in question but that the post was instead “an attempt to resist and challenge the dominant narrative” and “to reclaim the power of such hurtful terms.” While clear statements of intent will not always be necessary or sufficient to legitimize the use or quotation of hate speech, they should alert a moderator to the possibility that an exception may apply. In this case, the Board finds that the statement of intent, coupled with the context, make clear that the content unambiguously falls within the exception. Despite this, Meta initially removed the content, with three separate moderators determining that the content violated the Hate Speech policy. While there are a range of possible reasons as to how multiple moderators failed to properly classify the content, Meta was unable to provide specific explanations for the error since the company does not require moderators to record the reasoning for their decisions. As noted in the ""Wampum Belt"" decision, the types of mistakes and the people or communities who bear the burden of them reflect design choices for enforcement systems on the platform that risk impairing the free speech rights of members of persecuted groups. When Meta observes a pattern of persistent over-enforcement of content in relation to a persecuted or marginalized group, such as in this case, it would be appropriate to investigate the reasoning behind the enforcement decisions and consider what modifications to moderation rules, or increased training or supervision with respect to existing rules, are necessary to avoid overzealous enforcement that burdens members of groups whose expressive rights are at particular risk. II. Enforcement action In response to questions from the Board, Meta explained that the content was only restored to the platform because it happened to be flagged by a Meta employee for an escalated level of review. “Escalated for review” means that, instead of the decision being revisited by at-scale review, which is often outsourced, it goes to an internal team at Meta. This appears to have required a Meta employee to notice the removal of the content, then fill out and submit an internal webform highlighting the issue. In addition to the element of chance, systems such as these can only identify errors in content to which Meta staff are personally exposed. Accordingly, content that is not in English, content not posted by accounts with many followers in the US, or content created for and by groups not well represented within Meta is far less likely to be noticed, flagged and given the additional attention. 
As part of its outreach, the Board was made aware of concerns from stakeholders that accurate enforcement of the exceptions to the Hate Speech policy requires a degree of subject-matter expertise and local knowledge that Meta may either lack or not always be able to apply. The Board shares concerns that, unless Meta regularly seeks input from minority groups targeted with slurs on a country-specific level, it will be unable to formulate nuanced lists of designated slur terms and give its moderators proper guidance on how exceptions to the slurs policy should be applied. 8.2 Compliance with Meta’s values The Board finds that the original decision to remove this content was inconsistent with Meta's values of ""Voice"" and ""Dignity"" and did not serve the value of ""Safety."" While it is consistent with Meta's values to prevent the use of slurs to abuse people on its platforms, the Board is concerned that Meta is not consistently applying exceptions in the policy to expression from marginalized groups. In the context of this case, “Voice” that seeks to promote free expression from members of a marginalized group is of the utmost importance. Meta is right to attempt to limit the use of slurs to denigrate and intimidate their targets, and also to allow good faith attempts to deprive those words of their negative impact through reclamation. The Board recognizes that the circulation of slurs impacts “Dignity.” Particularly when used with the intent to offend or absent contextual clues signifying that they are not being used to offend, encounters with slur words can intimidate, upset or offend users in ways that inhibit online expression. Where there are clear contextual clues that the slur is mentioned to condemn it, raise awareness for it, or mentioned self-referentially or in an empowering way, the value of ""Dignity"" does not dictate that the word must be removed from the platform. On the contrary, over-enforcement that ignores the exceptions particularly affects minority and marginalized groups. As recommended by the Board in the ""Two buttons meme"" decision, Meta must ensure that its moderators are sufficiently resourced and supported such that relevant context could be assessed properly. It is important that moderators are able to distinguish between permitted references to slurs and impermissible uses of slurs to protect the ""Voice"" and ""Dignity"" of its users, especially those from marginalized communities. As the “Dignity” and “Safety” of marginalized communities are at a heightened level of risk on social media platforms, those platforms have heightened responsibilities to protect them. The Board has already recommended in the ""Wampum Belt"" decision that Meta should conduct accuracy assessments on the application of Hate Speech policy allowances. Accuracy can be improved through the training of moderators so that they are able to identify content involving discriminated communities and receive instructions to carefully assess whether exceptions to the Hate Speech policy apply. An assessment of the content, along with supporting contextual cues, should be the triggering factor for the application of these exceptions. With regards to “Safety,” the Board also notes the particular importance of both safe online spaces and careful moderation to marginalized and threatened communities. LGBTQIA+ Arabic speakers, especially in the MENA region, face a degree of danger when openly expressing themselves online. 
Meta must balance the need to provide supportive arenas for this expression with ensuring that it does not over-moderate and silence people who already face censorship and oppression. While the Board acknowledges the complexity of moderation in this area, especially at scale, it is vital that platforms invest the resources required to do it properly. 8.3 Compliance with Meta’s human rights responsibilities The Board concludes that Meta’s initial decision to remove the content was inconsistent with its human rights responsibilities as a business. Meta has committed itself to respect human rights under the UN Guiding Principles on Business and Human Rights ( UNGPs ). Facebook’s Corporate Human Rights Policy states that this includes the International Covenant on Civil and Political Rights (ICCPR). 1. Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including discussion of human rights and expression which may be regarded as “deeply offensive” ( General Comment 34 , para. 11). The right to freedom of expression is guaranteed to all people without discrimination as to “sex” or “other status” (Article 2, para. 1, ICCPR). This includes sexual orientation and gender identity ( Toonen v. Australia (1992) ; A/HRC/19/41 , para. 7). This post relates to important social issues of discrimination against LGBTQIA+ people. The UN High Commissioner for Human Rights has noted concerns regarding restrictions on the freedom of expression arising from discriminatory limitations on advocacy for LGBTQIA+ rights ( A/HRC/19/41 , para. 65). Article 19 requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). Relying on the UNGPs framework, the UN Special Rapporteur on freedom of opinion and expression has called on social media companies to ensure that their content rules are guided by the requirements of Article 19, para. 3, ICCPR ( A/HRC/38/35 , paras. 45 and 70). I. Legality (clarity and accessibility of the rules) The requirement of legality provides that any restriction on freedom of expression is accessible and clear enough to provide guidance as to what is permitted and what is not. The Board recommended in the ""Breast cancer symptoms and nudity"" case (2020-004-IG-UA, Recommendation no. 9), the ""Ocalan's isolation"" case (2021-006-IG-UA, Recommendation no. 10) and the Policy Advisory Opinion on sharing private residential information (Recommendation no. 9) that Meta should clarify to Instagram users that Facebook’s Community Standards apply to Instagram in the same way they apply to Facebook, with some exceptions. In the Policy Advisory Opinion, the Board recommended that Meta complete this within 90 days. The Board notes Meta’s response to the Policy Advisory Opinion that, while this recommendation will be implemented fully, Meta is still working on building more comprehensive Instagram Community Guidelines clarifying their relationship with the Facebook Community Standards and cannot commit to the 90-day deadline. The Board, having reiterated this recommendation on multiple occasions, believes Meta has had sufficient time to prepare for these changes. The unclear relationship between the Instagram Community Guidelines and Facebook Community Standards is a source of continual confusion for users of Meta’s platforms. 
Currently, while the Instagram Community Guidelines contain a link to the Facebook Community Standard on Hate Speech, it is not clear to the user that the entire Facebook Community Standard on Hate Speech, including the slurs prohibition and exceptions, applies to Instagram. Timely and comprehensive updates to the Instagram Community Guidelines remain a top priority for the Board. With regards to the development of the slurs list, the Board reiterates the point made in the ""South Africa Slurs"" case ( 2021-011-FB-UA ) that Meta should be more transparent on the procedures and criteria for developing the list. In this case, Meta explained that it defines slur lists for each established market based on “analysis and vetting from relevant internal partners such as process, markets, and content policy teams.” Meta also stated that its market experts audit the slur list annually, with each term being assessed qualitatively and quantitatively, differentiating “words which are inherently offensive, even if written on their own, and words which are not inherently offensive.” It is unclear to the Board when that annual review takes place, but after the Board selected this case, Meta audited the use of the word “z***l.” Following this audit, the word was removed from the “Arabic” slur list while remaining on the slur list for the “Maghreb market.” The Board does not know whether this audit was part of regular procedures or an ad hoc review in response to the Board’s selection of this case. More generally, it is not apparent to the Board what the qualitative and quantitative assessments in annual reviews entail. Information on the processes and criteria for development of the slur list and market designation, especially regarding how linguistic and geographic markets are distinguished, is not available to users. Without this information, the users may have difficulty assessing what words might be considered slurs, based solely on the definition of slurs in the Hate Speech policy that relies on subjective concepts such as inherent offensiveness and insulting nature ( A/74/486 , para. 46; see also A/HRC/38/35 , para. 26). With regards to how the slur list is enforced, Meta stated in the ""South Africa Slurs"" case ( 2021-011-FB-UA ) that its “prohibition against slurs is global, but the designation of slurs is market-specific.” It explained that “[i]f a term appears on a market slur list, the hate speech policy prohibits its use in that market.” Meta’s explanation is confusing as to whether its enforcement practices, which may be global in scope, mean that market-designated slurs are also prohibited globally. Meta explained that it defined a market as “a combination of country(ies) and language(s)/dialect(s)” and that “the division between…market[s] is primarily based on a combination of language /dialect and country of the content.” Meta’s content reviewers are “designated to their market based on their linguistic aptitude and cultural and market knowledge.” According to Meta, this content involved the Arabic and Maghreb markets on the slur list. 
It was routed to these markets “based on a combination of multiple signals such as location, language, and dialect detected in the content, the type of the content and the report type.” It is not sufficiently clear to the Board how the multiple signals work together to determine which markets a piece of content would engage, and whether content containing a word which is a slur in a given market would only be removed if the content relates to that market, or whether it would be removed globally. The Community Standard itself does not explain this process. Meta should issue a comprehensive explanation of how slurs are enforced on the platform. There are multiple areas of opacity in the current policy, including whether slurs designated for particular geographies are removed from the platform only when posted in those geographies or when viewed in those geographies, or regardless of where they are posted or viewed. Meta should also explain how it handles words that are considered a slur in some settings but have an entirely different meaning, one that does not violate any of Meta's policies, elsewhere. The structure of the Community Standard on Hate Speech may also cause confusion. Although the prohibition on slurs appears below the heading for tier three hate speech, the Board finds it unclear whether the prohibition does belong to tier three as slurs do not necessarily target people with segregation or exclusion, which are the focus in the rest of that tier. II. Legitimate aim Any restriction on expression should pursue one of the legitimate aims listed in the ICCPR, which include the “rights of others.” The policy at issue in this case pursued the legitimate aim of protecting the rights of others ( General Comment No. 34 , para. 28) to equality and protection against violence and discrimination based on sexual orientation and gender identity (Article 2, para. 1, Article 26 ICCPR; UN Human Rights Committee, Toonen v. Australia (1992) ; UN Human Rights Council Resolution 32/2 on the protection against violence and discrimination based on sexual orientation and gender identity). III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” ( General Comment 34 , para. 34). It was not necessary to remove the content in this case as the removal was a clear error which was not in line with the exception in Meta’s Hate Speech policies. Removal was also not the least intrusive instrument to achieve the legitimate aim because, in each review resulting in removal, the entire carousel containing 10 photos was taken down for alleged policy violations in only one of the photos. Even if the carousel had included one image with impermissible slurs not covered by an exception, removal of the entire carousel would not be a proportionate response. Meta explained to the Board that “the post is considered violating when any photo contains a violation of the Community Standards” and “[u]nlike Facebook, it is not possible for Meta to remove a single image from an Instagram multi-photo post.” Meta stated that an update to the content review tool had been proposed with the aim that reviewers could remove just the violating photo in a carousel, but the update had not been prioritized. 
The Board does not find this explanation clear, and believes that deprioritizing the update could lead to systemic overenforcement where entire carousels are taken down even though only parts of them are deemed violating. The Board also notes that, where a user posts the same series of photos on Facebook and Instagram, the different treatments of this kind of content on the two platforms would lead to inconsistent results which are not justified by any meaningful policy difference: if one of the photos is violating, this will cause removal of the whole carousel on Instagram, but not on Facebook. 2. Non-discrimination Given the importance of reclaiming derogatory terms for LGBTQIA+ people in countering discrimination, the Board expects Meta to be particularly sensitive to the possibility of wrongful removal of the content in this case and similar content on Facebook and Instagram. As the Board noted in the ""Wampum Belt"" decision ( 2021-012-FB-UA ) regarding artistic expression from Indigenous persons, it is not sufficient to evaluate the performance of Meta’s enforcement of Facebook’s Hate Speech policy as a whole – effects on particular marginalized groups must be taken into account. Under the UNGPs, ""business enterprises should pay special attention to any particular human rights impacts on individuals from groups or populations that may be at heightened risk of vulnerability or marginalization"" (UNGPs, Principles 18 and 20). For LGBTQIA+ people in countries which penalize their expression, social media is often one of the only means through which they can still express themselves freely. This is especially the case for Instagram, where the Community Guidelines permit users to not use their real name. The Board notes the same freedoms are not provided to Facebook users in the Community Standards. It would be important for Meta to demonstrate that it has undertaken human rights due diligence to ensure its systems are operating fairly and are not contributing to discrimination (UNGPs, Principle 17). The Board notes that Meta routinely evaluates the accuracy of its enforcement systems in dealing with hate speech (""Wampum Belt"" decision). However these assessments are not broken down into evaluations of accuracy that specifically measure Meta’s ability to distinguish impermissible hate speech from permitted content that attempts to reclaim derogatory terms. The errors in this case indicate that Meta’s guidance to moderators assessing references to derogatory terms may be insufficient. The Board is concerned that reviewers may not have sufficient resources in terms of capacity or training to prevent the kind of mistake seen in this case, especially in respect of content permitted under policy exceptions. In this case, Meta informed the Board that the Known Questions and Internal Implementation Standards are available in English only to “ensure standardized global enforcement” of its policies, and that “all of its content moderators are fluent in English.” In the ""Myanmar bot"" decision ( 2021-007-FB-UA ), the Board recommended that Meta should ensure its Internal Implementation Standards are available in the language in which content moderators review content. Meta took no further action on this recommendation, giving a similar response that its content moderators were fluent in English. The Board observes that providing reviewers with guidance in English on how to moderate content in non-English languages is innately challenging. 
The Internal Implementation Standards and Known Questions are often based in US-English language structures that may not apply in other languages, such as Arabic. In the ""Wampum Belt"" decision ( 2021-012-FB-UA , Recommendation no. 3), the Board recommended that Meta conduct accuracy assessments focused on Hate Speech policy exceptions that cover expression about human rights violations (e.g. condemnation, awareness-raising, self-referential use, empowering use), and that Meta should share results of the assessment, including how these results will inform improvements to enforcement operations and policy development. The Board issued this recommendation based on its understanding that the costs of over-removal of expression about human rights violations are particularly great. The Board notes Meta’s concerns with the recommendation in assessing feasibility, including (a) lack of specific categories in its policies on exceptions for areas such as human rights violations, and (b) lack of an easily identifiable sample of content that falls under Hate Speech exceptions. The Board believes these challenges can be overcome, as Meta could focus analysis on existing Hate Speech exceptions and prioritize identifying samples of content. The Board encourages Meta to commit to implement the recommendation in the ""Wampum Belt"" case ( 2021-012-FB-UA ) and welcomes updates from Meta in its next quarterly report. 9. Oversight Board decision The Oversight Board overturns Meta's original decision to take down the content. 10. Policy advisory statement Enforcement 1. Meta should translate the Internal Implementation Standards and Known Questions to Modern Standard Arabic. Doing so could reduce over-enforcement in Arabic-speaking regions by helping moderators better assess when exceptions for content containing slurs are warranted. The Board notes that Meta has taken no further action in response to the recommendation in the ""Myanmar Bot"" case (2021-007-FB-UA) that Meta should ensure that its Internal Implementation Standards are available in the language in which content moderators review content. The Board will consider this recommendation implemented when Meta informs the Board that translation to Modern Standard Arabic is complete. Transparency 2. Meta should publish a clear explanation on how it creates its market-specific slur lists. This explanation should include the processes and criteria for designating which slurs and countries are assigned to each market-specific list. The Board will consider this implemented when the information is published in the Transparency Center. 3. Meta should publish a clear explanation of how it enforces its market-specific slur lists. This explanation should include the processes and criteria for determining precisely when and where the slurs prohibition will be enforced, whether in respect to posts originating geographically from the region in question, originating outside but relating to the region in question, and/or in relation to all users in the region in question, regardless of the geographic origin of the post. The Board will consider this recommendation implemented when the information is published in Meta’s Transparency Center. 4. Meta should publish a clear explanation on how it audits its market-specific slur lists. This explanation should include the processes and criteria for removing slurs from or keeping slurs on Meta's market-specific lists. 
The Board will consider this recommendation implemented when the information is published in Meta’s Transparency Center. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board from an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. The company Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world, provided linguistic expertise. Return to Case Decisions and Policy Advisory Opinions" ig-2r3ueqrr,Praise be to God,https://www.oversightboard.com/decision/ig-2r3ueqrr/,"November 16, 2023",2023,,"Culture, Marginalized communities, Race and ethnicity",Dangerous individuals and organizations,Overturned,"India, Pakistan, United Kingdom","A user appealed Meta’s decision to remove their Instagram post, which contains a photo of them in bridal wear, accompanied by a caption that states “alhamdulillah.””,6089,899,"Overturned November 16, 2023 A user appealed Meta’s decision to remove their Instagram post, which contains a photo of them in bridal wear, accompanied by a caption that states “alhamdulillah.” Summary Topic Culture, Marginalized communities, Race and ethnicity Community Standard Dangerous individuals and organizations Location India, Pakistan, United Kingdom Platform Instagram This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas of potential improvement in its policy enforcement. Case summary A user appealed Meta’s decision to remove their Instagram post, which contains a photo of them in bridal wear, accompanied by a caption that states “alhamdulillah,” a common expression meaning “praise be to God.” After the Oversight Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case description and background In June 2023, an Instagram user in Pakistan posted a photo of themselves in bridal wear at a traditional pre-wedding event. The caption accompanying the post stated “alhamdulillah,” which is an expression used by many people in Muslim and Arab societies meaning “praise be to God.” The post received less than 1,000 views. The post was removed for violating the company’s Dangerous Organizations and Individuals policy . This policy prohibits content that contains praise, substantive support, or representation of organizations or individuals that Meta deems as dangerous. 
In their statement to the Board, the user emphasized that the phrase “alhamdulillah” is a common cultural expression used to express gratitude and has no “remote or direct links to a hate group, a hateful nature or any association to a dangerous organization.” The Board would view the phrase as protected speech under Meta’s Community Standards, consistent with freedom of expression and the company's value of protecting “Voice.” The user stressed the popularity of the phrase by stating, “this is one of the most popular phrases amongst the population of 2+ billion Muslims on the planet... if this is the reason the post has been removed, I consider this to be highly damaging for the Muslim population on Instagram and inherently somewhat ignorant.” After the Board brought this case to Meta’s attention, the company determined that the content “did not contain any references to a designated organization or individuals,” and therefore did not violate its Dangerous Organizations and Individuals policy. Subsequently, Meta restored the content to Instagram. Board authority and scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and increase fairness for people who use Facebook and Instagram. Case significance This case highlights Meta’s inconsistent application of its Dangerous Organizations and Individuals policy, which can lead to wrongful removals of content. Enforcement errors such as these undermine Meta’s responsibility to treat users fairly. Previously, the Board has issued recommendations relating to the enforcement of Meta’s Dangerous Organizations and Individuals policy. The Board has recommended that Meta “evaluate automated moderation processes for enforcement of the Dangerous Organizations and Individuals policy” ( Öcalan’s isolation decision, recommendation no. 2). Meta declined to implement this recommendation. Additionally, the Board has recommended that Meta “enhance the capacity allocated to the High-Impact False Positive Override system across languages to ensure that more content decisions that may be enforcement errors receive additional human review” ( Mention of the Taliban in news reporting decision, recommendation no. 7). Meta stated this was work it already does but did not publish information to demonstrate this. Lastly, the Board has recommended that Meta publish “more comprehensive information on error rates for enforcing rules on “praise” and “support” of dangerous individuals and organizations, broken down by region and language” in Meta’s transparency reporting ( Öcalan’s isolation decision, recommendation no. 12). Meta declined to implement this recommendation. In this case, there was no mention of an organization or individual which might be considered dangerous. The Board has noted in multiple cases that problems of cultural misunderstanding and errors in translation can lead to improper enforcement of Meta’s policies. The Board has also issued recommendations relating to the moderation of Arabic content. 
The Board has recommended that Meta “translate the Internal Implementation Standards and Known Questions,” which provide guidance for content moderators, “to Modern Standard Arabic” ( Reclaiming Arabic words decision, recommendation no. 1). Meta declined to implement this recommendation. The Board reiterates that full implementation of the recommendations above will help to decrease enforcement errors under the Dangerous Organizations and Individuals policy, reducing the number of users who are impacted by wrongful removals. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error after the Board had brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" ig-3r8rqiaq,Iranian Make-Up Video for a Child Marriage,https://www.oversightboard.com/decision/ig-3r8rqiaq/,"October 10, 2024",2024,,"Children / Children's rights, Discrimination","Child nudity and sexual exploitation of children",Upheld,Iran,"In the case of a video in which a beautician from Iran prepares a 14-year-old girl for her wedding, the Board agrees with Meta that the content should have been taken down under the Human Exploitation policy.",51288,7999,"Upheld October 10, 2024 In the case of a video in which a beautician from Iran prepares a 14-year-old girl for her wedding, the Board agrees with Meta that the content should have been taken down under the Human Exploitation policy. Standard Topic Children / Children's rights, Discrimination Community Standard Child nudity and sexual exploitation of children Location Iran Platform Instagram A Farsi translation of the full decision is available. In the case of a video in which a beautician from Iran prepares a 14-year-old girl for her wedding, the Board agrees with Meta that the content should have been taken down under the Human Exploitation policy. However, the Board does not agree with Meta’s reason for removal, which was to use the spirit of the policy allowance. Rather, the Board finds the content clearly violated the Human Exploitation Community Standard rule for facilitation of child marriage by materially aiding this harmful practice. Child marriage, which disproportionately affects girls, is a form of forced marriage and gender-based violence and discrimination. The Board’s recommendations seek to clarify Meta’s public language and internal guidance to ensure such content is removed, and to specify that forced marriages include child marriage and involve children aged under 18 years. About the Case In January 2024, an Instagram user posted a short video on their account, which gives details of beauty salon services in Iran. In the video, a beautician gives a child a make-up session in preparation for the child’s marriage. Speaking in Farsi, the child confirms her age is 14 years and when asked by the beautician, she reveals the groom’s family made persistent requests before her father “gave her to them.” The beautician and child talk about prioritizing marriage over education and admire the results of the make-up transformation. Text overlay states the child is the youngest bride of the year, while the post’s caption includes details of the beautician’s services for brides. 
The content was viewed about 10.9 million times. Background research commissioned by the Board suggests the girl in the video may be acting in the role of a child about to get married, although this is not clear. A total of 203 users reported the content over a month. Following rounds of human review, Meta concluded it did not violate any policies so the video should stay up. The content was also initially flagged by Meta’s High Risk Early Review Operations system based on the high likelihood of it going viral, and it was escalated to Meta through the Trusted Partners program, which involves expert stakeholders reporting potentially violating content. Following a new round of escalated review by Meta’s policy and subject matter experts, Meta overturned its initial decision and removed the post for violating its Human Exploitation policy. Meta then referred the case to the Board. Child marriage, which the UN High Commissioner for Human Rights defines as “any formal marriage or informal union between a child under the age of 18 and an adult or another child,” is considered a form of forced marriage, and as a human rights violation by international and regional bodies. Iranian law allows for child marriage, with legal ages set for 13 for girls and 15 for boys, although marriage is permitted in Iran before these ages in certain circumstances. Key Findings The Board finds the content explicitly broke the rules of the Human Exploitation Community Standard for facilitating forced marriages because the video clearly showed a beautician providing material aid to a 14-year-old girl, therefore facilitating child marriage. While Meta removed the video, it did so for another reason: a spirit of the policy allowance under the Human Exploitation policy. This policy does not specifically prohibit support for child marriage, but its rationale states the policy’s goal is to remove all forms of “exploitation of humans,” which Meta believed should include “support” for child marriage. In this case, Meta used the spirit of the policy allowance, which it can apply when a strict application of a Community Standard produces inconsistent results with the policy’s rationale and objectives. The Board disagrees with Meta on the reason for removal because the beautician’s actions were a form of facilitation, with the post advertising beauty services for girls getting married, aiding the practice. There is no public definition of “facilitation” given by Meta although its internal guidance to reviewers has the following: “content that coordinates the transportation, transfer, harboring of victims before or during the exploitation.” The Board finds this definition is too narrow. Given the policy’s purpose, the Board’s own interpretation of “facilitation” – to include the provision of any type of material aid (which includes “services”) to enable exploitation – should be applied to this case as well as to Meta’s internal guidance. This would mean Meta could remove similar content without relying on the spirit of the policy allowance. The Human Exploitation policy does not explicitly state that forced marriages include child marriage. Additionally, while Meta’s internal definition for reviewers states that minors cannot consent and there is additional guidance around consent signs and human trafficking, neither the internal nor the public language are clear enough. 
Meta should therefore specify in the public language of the policy that child marriage is a form of forced marriage and update its internal guidance to explain that children are people under 18 who cannot fully consent to marriage or informal unions. The Board believes the spirit of the policy should be applied rarely because there are legality concerns over the allowance. Reiterating a previous recommendation, the Board urges Meta to complete its implementation of a public explanation of this allowance. The Oversight Board’s Decision The Oversight Board upholds Meta’s decision to take down the content. The Board recommends that Meta: * Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background In January 2024, an Instagram user posted a one-minute video in Farsi on their account. The account shares information about beauty salon services and a beauty school in Iran. In the video, a beautician prepares a 14-year-old girl for her wedding, with clips showing the child before and after her make-up session. The child, whose face is clearly shown, also confirms her age in the video. The beautician and the child talk about education, age, marriage arrangements and the results of the make-up session. The beautician asks the child about prioritizing marriage over education, to which she replies that she would like to pursue both. When asked about the groom, the child explains that after persistent requests from his family, her father “gave her to them.” They both then admire the results of the make-up transformation and the beautician extends best wishes for the child’s future. Additional background research commissioned by the Board has suggested the girl in the video may be acting in the role of a child about to get married. However, the content does not make this clear. Text overlaying the video, also in Farsi, states the child is the youngest bride of the year. The post’s caption sends best wishes to all girls in Iran and provides information on the beautician’s services for brides. The content was viewed about 10.9 million times, received about 200,000 reactions – the majority “likes” – and 19,000 comments, and was shared less than 1,000 times. Between January and February 2024, 203 users reported the content 206 times, most frequently for “child exploitation images.” Out of those, 79 users reported the content for violating Child Exploitation Images, 40 users reported the content for Hate Speech and 30 users reported the content for Terrorism. Following multiple human reviews during that period, Meta concluded the content did not violate any of its policies and kept it up. During the same month, the content was also detected by Meta’s High Risk Early Review Operations (HERO) system, designed to identify potentially violating content that is predicted to have a high likelihood of going viral. Once detected and prioritized, content is sent for human review by specialists with language, market and policy expertise. The content in this case was detected due to high virality signals, but the report was later closed because the virality was not high enough for it to proceed to review stage. In February 2024, the content was escalated by one of Meta’s Trusted Partners for additional human review. Through the Trusted Partners Program , Meta partners with different stakeholders that provide expertise on the diverse communities in which Meta operates, report content and provide feedback on Meta’s content policies and enforcement. 
Following review by policy and subject matter experts, Meta overturned its original decision to keep up the content and removed the post for violating its Human Exploitation policy. However, Meta did not apply a strike against the user who posted the video because the company decided to remove the post based on the spirit of the policy allowance rather than the letter of the policy. In this instance, Meta stated that the decision was made that the removal was sufficient and did not warrant additional penalization in the form of a strike. Meta referred the case to the Board because it represents tension in its values of voice and safety relating to child marriages. Meta considers this case significant and difficult because “it highlights the issue of promotion or glorification of human exploitation (including child marriage), which is not explicitly covered under [Meta’s] policies … and because child marriages are legal in certain jurisdictions but criticized as a violation of human rights law by others.” The Board notes the following context in reaching its decision in this case. Child marriage is considered a human rights violation by international and regional bodies (e.g. United Nations , Organization of American States , African Union ) and civil society organizations , and affects millions of children worldwide. According to the Office of the UN High Commissioner for Human Rights, “child marriage refers to any formal marriage or informal union between a child under the age of 18 and an adult or another child. Forced marriage is a marriage in which one and/or both parties have not personally expressed their full and free consent to the union. A child marriage is considered to be a form of forced marriage, given that one and/or both parties have not expressed full, free and informed consent.” The Convention on the Elimination of All Forms of Discrimination against Women ( Article 16, para. 2 ), provides that “the betrothal and the marriage of a child shall have no legal effect.” Child marriage includes both formal marriages and informal unions. According to UNICEF, an informal union is one “in which a girl or boy lives with a partner as if married before the age of 18 [… and] in which a couple live together for some time, intending to have a lasting relationship, but do not have a formal civil or religious ceremony.” Informal unions raise the same human rights concerns as marriage (e.g. health risks, disruption to education), and in some regions , they are more prevalent than formal marriages. Girls are disproportionately affected and face additional risks due to biological and social differences. Globally, the prevalence of child marriage among boys is only one sixth of the prevalence among girls. The Report of the UN Secretary-General on the Issue of Child, Early and Forced Marriage ( A/77/282 , para. 4) has recognized that child marriage is rooted in gender inequalities and discriminatory social and cultural norms that consider women and girls to be inferior to men and boys. It is considered a form of gender-based violence and discrimination against women and girls. Long-standing customs are frequently used to justify child marriage, disregarding the discrimination and gender-based violence associated with it, as well as the threats to a child’s wellbeing and other human rights violations. 
UNICEF , the Committee on the Elimination of Discrimination Against Women (CEDAW), Committee on the Rights of the Child (CRC) and other UN human right experts have stated that girls who marry before 18 are more likely to experience domestic violence and abuse, and less likely to remain in school. They have worse economic and health outcomes than unmarried children, which are eventually passed down to their own children. Child marriage is often accompanied by early and frequent pregnancy and childbirth, affecting girls’ mental and physical health, and resulting in above average maternal mortality rates. Child forced marriage may also lead to girls attempting to flee their communities or commit suicide . As children cannot express full, free and informed consent to marry or enter informal unions, decisions are often made by parents or guardians, which takes away the child’s agency, autonomy and ability to make critical decisions (Article 12, Convention on the Rights of the Child , CRC). UNICEF has also stated that boys who marry or engage in an informal union in childhood are forced to take on adult responsibilities for which they may not be prepared. Marriage may bring early fatherhood and additional economic pressure to provide for the household, which, in turn, could limit the boy’s access to education and opportunities for career advancement. The United Nations High Commissioner for Human Rights noted that child marriage is rooted in factors such as socioeconomic issues (poverty and education), customs, tradition, cultural values, politics, economic interests, honor and religious beliefs ( A/HRC/26/22 , paras. 17-20). There is also a higher incidence during conflicts and humanitarian crises ( A/HRC/41/19 , para. 51). According to UNICEF, every three seconds a girl gets married somewhere in the world. UNICEF and Girls Not Brides have data identifying the regions with the highest occurrences of child marriage. Sub-Saharan Africa is the region with the highest prevalence of child marriage, with 31% of women married before the age of 18, followed by Central and Southern Asia at 25%, Latin America and the Caribbean at 21%, and the Middle East and North Africa at 17%. International human rights standards provide that the minimum legal age of marriage for girls and boys, with or without parental consent, is 18 years (2019 CEDAW and CRC Joint General Recommendation No. 31/18 , paras. 20 and 55.f; 2018 UN General Assembly Resolution, A/RES/73/153 ; 2023 UN Human Rights Council Resolution on Child, Early and Forced Marriage, A/HRC/RES/53/23 ; Report of the Office of the United Nations High Commissioner for Human Rights, Preventing and Eliminating Child, Early and Forced Marriage, A/HRC/26/22 ). The CRC and CEDAW revised their Joint General Recommendation No. 18/31 in 2019 to state that the minimum legal age for marriage should be 18 years, with no exceptions (paras. 20 and 55(f)). Raising the legal age of marriage to 18 years has been supported by many civil society organizations, for example, with the slogan “ 18, no exceptions ,” as mentioned in the public comment from Project Soar (see PC-29623). This has led to some States modifying their domestic legislation in recent years (2022 Report of the OHCHR, A/HRC/50/44 , para. 22). Countries adopt different legal approaches to child marriage. While many countries set the minimum age at 18 and significant progress has been made in reducing the prevalence of child marriage, others establish lower ages or allow exceptions (e.g. 
some states in the United States, Brazil). These exceptions, such as parental consent, court authorization, or customary and religious laws, undermine legal protections for girls and have been criticized for hindering the goal of ending child marriage by 2030 as outlined in the Sustainable Development Goals. Many countries also have varied customary and religious laws, and tribal practices, which are often open to interpretation by chiefs and community or traditional tribunals. For example, according to experts consulted by the Board, as part of tribal practices like Khoon bas (“cease blood”) in Iran, young girls are legally married into rival families to avoid bloodshed. Child Marriage in Iran Iranian law currently allows for child marriages. According to experts consulted by the Board, the legal age for marriage is 13 for girls and 15 for boys. However, marriage before these ages is permitted under Article 1041 of the 2007 Civil Code, which establishes that “marriage of girls before the age of 13 and boys before the age of 15 is contingent upon the permission of the guardian and upon the condition of the child’s interest as determined by a competent court.” In 2020, Iran adopted the Law on the Protection of Children and Adolescents, which imposes new penalties for acts that harm a child’s safety and wellbeing, but fails to address child marriage (see also PC 29268 from Equality Now). According to Girls Not Brides, child marriage in Iran is driven by poverty, religion, harmful traditional practices, family honor and displacement. Research commissioned by the Board identified notable spikes over the past year in social media interactions discussing the deaths and suicides of women and girls forced into marriage as children. An expert also noted that data from Iran’s National Statistics Center (NSC) indicated that 33,240 girls and 19 boys were married before the age of 15 between 2021 and 2022. A public comment from Equality Now (see PC 29268) explained that figures could be higher given that the official numbers only reflect registered marriages and the NSC does not release disaggregated data for marriage registrations of girls aged 15 to 17 (only for ages 15-18 inclusive). The CRC has urged the state to increase the minimum age of marriage for both girls and boys to 18 years (A/HRC/WG.6/34/IRN/2, para. 70). Other human rights bodies and experts have raised similar concerns, including in the 2024 report of the UN Special Rapporteur on Iran (A/HRC/55/62, para. 75). While Iran initially agreed to review recommendations to raise the minimum age of marriage to 18 years without exception, little progress has been reported. According to experts consulted by the Board, the political discourse on marriage in Iran has changed drastically in recent years, even encouraging women to marry early to increase birth rates, which for girls often translates into marriage by force and has contributed to an increase in child marriage in certain regions of the country. In 2021, Iran submitted a periodic state report to the Human Rights Committee indicating that it will not consider increasing the minimum age of marriage from 13 and 15 “due to the importance of the family in Iranian society” and “the general indecency of illegitimate sexual acts outside the marriage” (CCPR/C/IRN/4, para. 148). 2. User Submissions Following Meta’s referral and the Board’s decision to accept the case, the user was notified and provided with an opportunity to submit a statement.
No response was received. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Instagram’s Community Guidelines Instagram’s Community Guidelines do not specify any prohibition of content under the Human Exploitation policy and do not directly link to the Human Exploitation Community Standard. Meta’s Community Standards Enforcement Report for Q1 2024 states that “Facebook and Instagram share content policies. Content that is considered violating on Facebook is also considered violating on Instagram.” Human Exploitation Policy According to the Human Exploitation policy’s rationale, Meta “remove[s] content that facilitates or coordinates the exploitation of humans, including human trafficking.” The Community Standard prohibits: “Content that recruits people for, facilitates or exploits people through any of the following forms of human trafficking: … Forced marriages.” Meta’s internal guidelines define forced marriage as “an institution or practice where individuals don’t have the option to refuse or are promised and married to another by their parents, guardians, relatives or other people and groups. This does not include arranged marriages, where the individuals getting married have the option to refuse.” The Board notes that Meta is considering updates to this definition, which may therefore change in the near future. The company informed the Board that it considers child marriage to be forced marriage based on the recognition that minors (people under the age of 18) cannot fully consent, in line with international human rights standards. The policy includes exceptions to these rules and states that Meta “allow[s] content that is otherwise covered by this policy when posted in condemnation, educational, awareness raising, or news reporting contexts.” Spirit of the Policy Allowance According to Meta, it may apply a “spirit of the policy” allowance to content when the policy rationale (the text introducing each Community Standard) and Meta’s values demand a different outcome than a strict reading of the rules on prohibited content. Meta uses the spirit of the policy allowance when a strict application of the relevant Community Standard produces results that are inconsistent with its rationale and objectives. The spirit of the policy is a general allowance, applicable to all Community Standards, and can only be issued by Meta’s internal teams on escalation, not by human moderators at-scale. In previous decisions, the Board has recommended that Meta provide a public explanation of the spirit of the policy allowance (Sri Lanka Pharmaceuticals decision, recommendation no. 1, reiterated in Communal Violence in the State of Odisha). This recommendation was accepted by Meta and, according to the Board’s latest assessment, is currently in the process of being implemented. II. Meta’s Submissions According to Meta, the content removal in this case was the result of a spirit of the policy decision under the Human Exploitation policy. While the Human Exploitation policy does not specifically prohibit support for child marriage, its policy rationale states the goal of the policy is to remove all forms of “exploitation of humans.” Meta believes this encompasses support for child marriage, particularly when the post may create a financial benefit for the user, as in this case.
Based on this and the policy rationale, Meta argued that it does not want to allow content, like the post in this case, in which a person is seeking financial benefit from and encouraging child marriage. Meta considered that the value of safety outweighed the potential expressive value of this speech (voice). The company considered the harm associated with child marriage and balanced the risks of allowing the post to remain on the platform, which could encourage further support for child marriage, and the expressive value of the content as well as the potential monetary gain for the user. Meta explained that even though monetary gain was not a decisive factor in its assessment, the company did consider it as a factor in its holistic evaluation of the post on escalation, in alignment with the role that monetary gain plays in Meta’s definition of “exploitation of humans.” When asked by the Board, the company stated that content would be assessed differently if it supported child marriage but did not seek to financially benefit from it, although the company would consider the overall context of content before making a decision. Meta said it does not define “support” in the context of child marriage and that its approach to content that supports (but does not facilitate) child marriage is addressed on a case-by-case basis on escalation. The company noted that while “support” for child marriage is addressed on escalation, the other actions (facilitates, recruits, exploits) are enforced at-scale, and human reviewers are trained to remove all content that seeks to facilitate forced marriage. Meta said that its instructions that minors cannot consent, and the definition of forced marriage, clarify that reviewers should remove content seeking to facilitate child marriage. Meta explained the company did not apply a strike against the user who posted the video because the company decided to remove the post based on the spirit of the policy allowance rather than the letter of the policy. In this instance the decision was made that the removal was sufficient and did not warrant additional penalization in the form of a strike. Meta did not notify the user about its decision to withhold a strike in this case. The company said that it does not notify users regarding application or withholding of strikes due to the risk that this exposes enforcement thresholds that can then be exploited by adversarial actors to circumvent the company’s systems by creating new accounts or staying just under the strike limit. However, Meta notifies users regarding feature limits applied to their accounts, including why the restrictions were applied. The Board asked Meta questions about the application of the spirit of the policy allowance, the reasons for content removal, Meta’s internal instructions for content moderators regarding prohibitions in the Human Exploitation policy and the enforcement of content that “supports” child marriage, and information about the company’s notifications to users and reporters. Meta responded to all the questions. 4. Public Comments The Oversight Board received seven public comments that met the terms for submission . Four of the comments were submitted from the Middle East and North Africa, two from the United States and Canada, and one from Asia Pacific and Oceania. To read public comments submitted with consent to publish, click here . 
The submissions covered the following themes: child marriage as a violation of human rights; the impact of this harmful practice; how it disproportionately affects girls; the international human rights standards applicable to child marriage; and child marriage in Iran and other parts of the world. 5. Oversight Board Analysis The Board selected this case to assess, for the first time, the impact of Meta’s Human Exploitation Community Standard on the rights of children, particularly girls involved in child marriages. This case highlights the tension between Meta’s values of protecting voice and ensuring the safety of children. The Board analyzed Meta’s decision in this case against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of this case for Meta’s broader approach to content governance. 5.1 Compliance With Meta’s Content Policies I. Content Rules The Board agrees with Meta that the content in this case should be removed, but for a different reason. The Board finds the content violated the explicit rules of the Human Exploitation Community Standard for facilitating forced marriage, rather than under the spirit of the policy for “support.” The video clearly depicted the beautician providing beauty services (material services or material aid) to a girl to facilitate child marriage and seek financial benefit. The Board, unlike Meta, considers that the beautician’s actions were not simply support for child marriage but a form of facilitation involving a concrete action. In the post, beauty services were advertised, with girls encouraged to come and receive those services in the facilitation of child marriage, thereby aiding the practice and potentially receiving economic benefits from it. The Board notes that Meta does not provide a public-facing definition of “facilitation.” Given the purpose of the policy, the Board interprets “facilitation” as to include the provision of any type of material aid (which include “services”) to enable exploitation. The Board notes that Meta defines “facilitation” in its internal guidance to reviewers as “content that coordinates the transportation, transfer, harboring of victims before or during the exploitation.” The Board finds that this internal guidance to reviewers is overly narrow, and that the public-facing language provides for the term to be reasonably understood by users as to encompass the Board’s broader interpretation as to what content is not allowed on the platform. Nonetheless, to provide greater clarity, Meta should modify its internal guidelines to expand the definition of facilitation to also include the provision of any kind of material aid (which includes “services”) to enable exploitation. This will allow Meta to remove similar content in the future without relying on the spirit of the policy allowance. II. Enforcement Action Despite over 10 million views of this content, it was not prioritized for review by Meta’s HERO system, which seeks to identify high virality content for human review. Meta stated that in this case, virality was not high enough for this content to proceed to the review stage. The Board is concerned that Meta's systems fail to address content such as the post in this case, which received over 10 million views. 
However, without further information and investigation of the prioritization system and what content was prioritized above this, the Board is not in a position to assess whether this content should have been given a higher priority in comparison to the other content in the queue. 6. Compliance With Meta’s Human Rights Responsibilities The Board finds that removing the content from the platform was consistent with Meta’s human rights responsibilities, though Meta must address concerns about the clarity of its rules and spirit of the policy allowance. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other [means].” When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights, which Meta itself has committed to in its Corporate Human Rights Policy. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users' right to freedom of expression,” ( A/74/486 , para. 41). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly ( General Comment No. 34 , para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (Ibid). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific ( A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules and content reviewers should have clear guidance regarding their enforcement. The Board finds that the content violated the prohibition in the Human Exploitation policy on content that facilitates forced marriages, rather than the spirit of the policy. While the Board finds that the prohibition on facilitation included in the Community Standard was sufficiently clear as applied to this post, the policy’s public-facing language is not sufficiently clear on the general interpretation of the term “facilitation.” As discussed above, the Board interprets the term to encompass a broader definition than is provided for in Meta’s internal guidelines. Therefore, the Board recommends amending the guidance to encompass this broader definition. 
Meta removed the post based on the spirit of the policy allowance because the Human Exploitation policy does not specifically prohibit content that “supports” child marriage, which in the company’s opinion was the action that should be prohibited in this case. As mentioned above, the Board disagrees with Meta’s reasoning, and considers that the beautician’s actions were not simply “support” for child marriage but were in fact a form of “facilitation” involving a concrete action, which is prohibited. In previous decisions, the Board has noted that the spirit of the policy allowance may “fall short of the standard of legality” under the three-part test. While in previous cases, the Board has allowed use of “spirit of the policy” to both allow content ( Sri Lanka Pharmaceuticals decision) and remove it ( Communal Violence in Indian State of Odisha decision), the use of this allowance to remove content should be exceptional as it raises serious concerns under the legality test. Without providing clear guidance, users cannot be expected to regulate their conduct accordingly. The Board considers that the application of spirit of the policy, particularly to remove content, should be exceptional. In the Sri Lanka Pharmaceuticals decision, the spirit of the policy allowance was used to permit content that violated the explicit terms of the Community Standard but did not violate the underlying purposes of those Standards. In this decision, the Board acknowledged that when moderating vast amounts of content on a global scale, it is necessary to have a “catch-all” allowance that can be applied to prevent clear injustices. At the same time, the Board noted that this type of discretionary exemption to Meta’s policies is in serious tension with the legality standard. To avoid arbitrary restrictions on speech, the Board reiterates its prior recommendation that Meta provide a public explanation of the spirit of the policy allowance and disclose the criteria used to assess when such an allowance is applied. Without a publicly available explanation, users have no way of knowing about the spirit of the policy allowance or its application across all Community Standards. Meta has already committed to fully implementing this recommendation. Further, if such an allowance is repeatedly used in the same way, the company should carefully assess whether or not this should be specifically provided for in the relevant policy. Discretionary departures from the letter of the rules are more concerning in the context of removing content than when permitting it. Where application of the strict rules may lead to disproportionate restrictions on speech that should be permitted on Meta’s platforms, the goal of using the spirit of the policy allowance is to increase protection for the right to expression. Conversely, using the allowance to restrict speech that is not clearly prohibited by Meta’s rules significantly impacts users’ ability to effectively regulate their conduct on the platform by reference to the rules. The public-facing language of the Human Exploitation policy does not explicitly state that forced marriages include child marriage. Meta informed the Board that it considers child marriage to be a form of forced marriage based on the recognition that minors (people under the age of 18) cannot fully consent, in alignment with international human right standards. 
Meta provides an internal definition of forced marriages and, according to the company, human reviewers are provided with instructions that minors cannot consent. In evaluating content under Meta’s Human Exploitation policy, the company instructs reviewers not to consider purported evidence of a minor's consent because minors lack capacity to provide lawful consent. According to Meta, when interpreted together, both instructions provide clarity for reviewers that content seeking to “facilitate” child marriage should be removed. No internal guidelines are provided in relation to content that supports child marriage. The company noted that “support” for child marriage is addressed upon escalation. To provide clarity and sufficient precision about the rules to users, the Board urges Meta to specify in the public-facing Human Exploitation policy that child marriage is to be understood as a form of forced marriage, based on the recognition that minors (people under the age of 18) cannot fully consent. The company should also update its internal guidelines accordingly. The Board finds that while the internal guidelines to reviewers provide some guidance around children’s signs of consent and human trafficking, Meta should clearly explain that children are people under 18 and cannot fully consent to marriage or informal unions. II. Legitimate Aim Any restriction on freedom of expression should also pursue one or more of the legitimate aims listed in the ICCPR, which includes protecting the rights of others. As applied to the facts of this case, Meta’s Human Exploitation policy seeks to pursue the legitimate aims of protecting the rights of children. In seeking to “disrupt and prevent harm” by removing content “that facilitates or coordinates the exploitation of humans” through child marriage, the Human Exploitation Community Standard serves the legitimate aims of protecting a wide range of children’s rights, particularly girls’ human rights, in line with the best interests of the child (Article 3, CRC). The policy seeks to shield them from the negative impacts associated with child marriage. The Board has previously found that protecting children’s rights is a legitimate aim (see Swedish Journalist Reporting Sexual Violence Against Minors and News Documentary on Child Abuse in Pakistan decisions). Meta’s policy seeks to protect children’s rights to: physical and mental health (Article 12 ICESCR, Article 19, CRC); privacy (Article 17, ICCPR, Article 16, CRC); education (Article 13, ICESCR, Article 28, CRC); development (Article 12, ICESCR, Article 6, CRC); family and to consent to marriage (Article 10, ICESCR, Article 23, ICCPR); and freedom from sexual exploitation and abuse (Article 34, CRC). III. Necessity and Proportionality Necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (ICCPR Article 19(3), General Comment No. 34 , para. 34). The Board finds that removing the content was necessary to protect children’s rights to physical and mental health, privacy, education and freedom from all forms of discrimination. The content facilitated the practice of child marriage, which, as discussed above, is associated with significant negative impacts, particularly for girls. 
Given that the content sought to provide material aid to enable this harmful practice, removal was the least restrictive way to protect children’s rights. No less restrictive measures such as labeling would have been sufficient to prevent users from accessing the services being promoted. Meta’s decision to remove speech in order to protect children’s rights was proportionate. The post in this case facilitates child marriage by advertising beauty services that encourage girls to come and receive those services in preparation for their weddings, thereby materially aiding child marriage. The expressive value of this post was primarily focused on advertising beauty services that facilitate the practice of child marriage. While the post violated the prohibition on “facilitation” of child marriage, the Board also considered whether Meta should expand this policy to explicitly prohibit content that more generally supports child marriage. This presents a tension between two issues: on one hand, the problematic consequences of allowing content on platforms that more generally supports child marriage; on the other hand, the potential negative consequences of expanding the Human Exploitation policy to prohibit such content. For a majority of Board Members, allowing content on platforms that more generally supports child marriage can contribute to the normalization of this extremely harmful practice. Speaking positively about the practice, implying that child marriage should be permitted or celebrated, or legitimizing or defending the practice by claiming it has a moral, political, logical or other justification, could all contribute to this normalization, to the detriment of the child’s best interest. According to Article 3 of the Convention on the Rights of the Child , “in all actions concerning children, ... the best interests of the child shall be a primary consideration.” A public comment from Equality Now (PC 29268) noted that “the normalization of child marriage perpetuates a cycle of human rights violations that deeply affects young girls and denies them their basic human rights. This normalization is entrenched in cultural and religious beliefs.” In the Image of Gender-Based Violence decision, the Board expressed concern that Meta’s existing policies do not adequately address content that normalizes gender-based violence by praising it or implying it is deserved. Child marriage, which primarily impacts girls, is a form of gender-based violence. In response to the Board’s recommendation in that case, Meta modified its Violence and Incitement policy to prohibit “glorification of gender-based violence that is either intimate partner violence or honor-based violence.” The majority of Board Members emphasize that the digital environment can exacerbate the risks of normalization of child marriage and the spread of harmful content. The CRC has also called on states for measures to prevent the online spread of materials and services that may damage children’s mental or physical health, while ensuring respect for freedom of expression ( General Comment No. 25 , paras. 14, 54, 96). While the internet and social media can also be valuable tools for providing information and opportunities for debate among children, the CRC and the CEDAW have noted that harmful practices such as child marriage may be increasing “as a result of technological developments such as the widespread use of social media,” ( CEDAW/C/GC/31/Rev.1 , para. 18). 
The UN Human Rights Council has also urged states to take “comprehensive, multisectoral and human rights-based measures to prevent and eliminate forced marriage, and to address its structural and underlying root causes and risk factors” (A/HRC/RES/53/23, para. 3). Meta is in a unique position to contribute to the eradication of child marriage on its platforms, following its commitment to respecting human rights standards in accordance with the UN Guiding Principles on Business and Human Rights. The majority acknowledge that while a prohibition on support of child marriage could assist in strengthening protection for children’s rights, these terms may be too vague. In the context of prohibitions on content related to terrorism, the UN Special Rapporteur on freedom of expression has described social media platforms’ prohibitions on “support” as “excessively vague” (A/HRC/38/35, para. 26; see also General Comment No. 34, para. 46). If Meta were to prohibit speech in support of child marriage, it should clearly define this term for its application in the specific context of child marriage. Additionally, to avoid the overenforcement of expressions and opinions that constitute protected speech, and to prevent silencing critical discussions and counter-speech that could contribute to protecting children’s rights, the company should provide its content reviewers with adequate internal guidance and sufficient opportunities and resources to accurately enforce the exceptions established in the Human Exploitation policy (e.g., content posted in condemnation, educational, awareness raising, or news reporting contexts). For a minority of Board Members, a prohibition on speech in support of child marriage would be inherently too vague, even if specified in the ways that the majority suggests. In addition, while child marriage itself clearly causes significant harm and violates a number of rights, there is insufficient evidence that speech in support of it causes actual harm or that removing such posts would help to solve the problem more expeditiously than allowing reactions and a public debate on the matter. Experts consulted by the Board noted that there are limited studies or evidence on how depictions of child marriage on social media affect social perceptions of the issue. These Board Members also consider that the term “normalization” is too vague and amorphous, and that the causal connection between speech “supporting” child marriage and the harm of “normalization” is too remote to establish real-world harm. Moreover, for these Board Members, an assessment of less intrusive means (e.g., labeling and directing users to authoritative information about the harms of child marriage, preventing sharing of a post, demoting the post, etc.) would also be required before determining that removal of “support” for child marriage is the least intrusive measure. While there may be situations where speech in support of child marriage causes actual harm, blanket bans on content deemed to support the practice could lead to the removal of expression and opinions that do not cause harm and therefore constitute protected speech. Risks of “normalization” of the practice should be addressed through education (e.g., labeling that directs users to the harms of child marriage) and counter-speech rather than censorship. The UN Special Rapporteur on Freedom of Expression has noted that “counterspeech has been a successful response strategy [when] exposing hate speech” (A/78/288, para.
109) and has highlighted the importance of “expanding access to information and ideas that counter hateful messages” (A/74/486, para. 18). These Members consider that this conclusion is equally applicable in the context of child marriage. For the minority, expanding the Human Exploitation policy to prohibit content that supports child marriage could have unintended and counterproductive consequences for efforts to combat it, by suppressing debate and counter-speech that may in fact help challenge prevalent social norms and attitudes towards child marriage and contribute to its eradication. These Board Members consider that a Community Standard that suppresses all speech that “supports” child marriage, especially when enforced at scale, will inevitably result in the removal of a disproportionate amount of speech beyond what is permissible in line with international human rights standards. Overall, the Board was divided on the advantages and disadvantages of a prohibition on “support” and did not reach a definitive conclusion on that question. As this particular case was focused on “facilitation,” the Board had no occasion to consider in sufficient detail the many potential implications of how a ban on “support” would be implemented by Meta in practice. For instance, the Board lacks sufficient information on the feasibility of Meta clearly identifying and distinguishing “support” from neutral statements or on the potential error rates. Consequently, the Board believes that this issue should be revisited in a future case. Finally, on the proportionality of Meta’s response, the Board welcomes the fact that the company did not apply a strike against the user who posted the content because it removed the post based on the spirit of the policy allowance rather than the letter of the policy, and determined that removal was sufficient with no need for additional penalization in the form of a strike. The Board emphasizes the value of separating Meta’s enforcement actions on content from the penalties given to users. 6. The Oversight Board’s Decision The Oversight Board upholds Meta’s decision to take down the content. 7. Recommendations A. Content Policy 1. To ensure clarity for users, Meta should modify the Human Exploitation policy to explicitly state that forced marriages include child marriage. The Board will consider this recommendation implemented when Meta updates its public-facing Human Exploitation Community Standard to reflect the change. 2. To ensure clarity for users, Meta should modify the Human Exploitation policy to define child marriage in line with international human rights standards to include marriage and informal unions of children under 18 years of age. The Board will consider this recommendation implemented when Meta updates its public-facing Human Exploitation Community Standard to reflect the change. B. Enforcement 3. Meta should provide explicit guidance to human reviewers that child marriage is included in the definition of forced marriages. The Board will consider this recommendation implemented when Meta provides updated internal documents demonstrating that the change was implemented. 4. To protect children’s rights and to avoid Meta’s reliance on the spirit of the policy allowance, the company should expand the definition of facilitation in its internal guidelines to include the provision of any type of material aid (which includes “services”) to enable exploitation.
The Board will consider this recommendation implemented when Meta provides updated internal documents demonstrating that the change was implemented. The Oversight Board also reiterates the importance of its previous recommendations calling for a public explanation of the spirit of the policy allowance to be provided (Sri Lanka Pharmaceuticals decision, recommendation no. 1, reiterated in the Communal Violence in the State of Odisha decision). In its Sri Lanka Pharmaceuticals decision, the Board made a recommendation urging Meta to explain on the landing page of the Community Standards that allowances may be made when their rationale, and Meta’s values, demand a different outcome than a strict reading of the rules. Additionally, the Board asked Meta to include a link to a Transparency Center page providing information about the “spirit of the policy” allowance. The Board will be monitoring implementation of this recommendation, which Meta has already committed to." ig-5mc5ojil,Responding to antisemitism,https://www.oversightboard.com/decision/ig-5mc5ojil/,"September 13, 2023",2023,,TopicFreedom of expressionCommunity StandardDangerous individuals and organizations,Dangerous individuals and organizations,Overturned,"Turkey, United States","A user appealed Meta’s decision to remove an Instagram post of a video that condemned remarks by music artist Ye (the American rapper formerly known as Kanye West) praising Hitler and denying the Holocaust. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.",5736,878,"Overturned September 13, 2023 A user appealed Meta’s decision to remove an Instagram post of a video that condemned remarks by music artist Ye (the American rapper formerly known as Kanye West) praising Hitler and denying the Holocaust. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Summary Topic Freedom of expression Community Standard Dangerous individuals and organizations Location Turkey, United States Platform Instagram This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas where the company could improve its policy enforcement. Case summary A user appealed Meta’s decision to remove an Instagram post of a video that condemned remarks by music artist Ye (the American rapper formerly known as Kanye West) praising Hitler and denying the Holocaust. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case description and background In January 2023, an Instagram user from Turkey posted a video containing an excerpt of an interview in English where Ye states that he “likes” Adolf Hitler and that Hitler “didn't kill 6 million Jews.” The video then cuts to a person who appears to be a TV reporter expressing outrage over Ye's statements and recounting how his family members were killed in the Holocaust.
The video is subtitled in Turkish and has a caption that can be translated as “TV reporter responds to Kanye West.” Meta originally removed the post from Instagram citing its Dangerous Organizations and Individuals (DOI) and Hate Speech policies. Under Meta’s DOI policy, the company removes praise of designated individuals, including Adolf Hitler. However, the policy recognizes that “users may share content that includes references to designated dangerous organizations and individuals to report on, condemn or neutrally discuss them or their activities.” Under its Hate Speech policy, the company removes Holocaust denial as a form of harmful stereotype that is “historically linked to intimidation, exclusion, or violence on the basis of a protected characteristic.” The Hate Speech policy also recognizes that “people sometimes share content that includes slurs or someone else's hate speech to condemn it or raise awareness.” In their appeal to the Board, the user argued that the video does not support Adolf Hitler and that they were misunderstood. After the Board brought this case to Meta’s attention, the company determined that the content did not violate its policies. Although the video contained praise for Adolf Hitler and Holocaust denial, the second part of the video clearly condemned these statements, placing it within an allowable context. Therefore, the company concluded that its initial removal was incorrect and restored the content to the platform. Board authority and scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and to increase fairness for people who use Facebook and Instagram. Case significance This case shows an example of an error in applying the exceptions in Meta's DOI and Hate Speech policies. Such mistakes can suppress speech meant to respond to hate speech, including Holocaust denial, or to condemn statements of praise for dangerous individuals such as Hitler. Protecting counter-speech is essential for advancing freedom of expression and is a tool for combating harmful content such as misinformation and hate speech. The Board has previously recommended that: Meta should assess the accuracy of reviewers enforcing the reporting allowance under the DOI policy in order to identify systemic issues causing enforcement errors (Mention of the Taliban in news reporting, recommendation no. 5); Meta should evaluate automated moderation processes for enforcement of the DOI policy (Öcalan's isolation, recommendation no. 2); and Meta should conduct accuracy assessments focused on its Hate Speech policy allowances that cover forms of expression such as condemnation, awareness raising, self-referential, and empowering uses (Wampum belt, recommendation no. 3). Meta has reported progress on Mention of the Taliban in news reporting (recommendation no. 5), declined to implement Öcalan's isolation (recommendation no. 2), and demonstrated implementation of Wampum belt (recommendation no. 3).
The Board reiterates that the full implementation of these recommendations may reduce error rates in the enforcement of allowances under the Hate Speech and the Dangerous Organizations and Individuals policies. This will, in turn, better protect counter-speech and enhance freedom of expression overall. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention." ig-6bz783wq,Iranian Woman Confronted on Street,https://www.oversightboard.com/decision/ig-6bz783wq/,"March 7, 2024",2024,,"TopicFreedom of expression, Protests, SafetyCommunity StandardViolence and incitement","Policies and TopicsTopicFreedom of expression, Protests, SafetyCommunity StandardViolence and incitement",Overturned,Iran,The Oversight Board has overturned Meta’s original decision to take down a video showing a man confronting a woman on the streets of Iran for not wearing the hijab.,48882,7720,"Overturned March 7, 2024 The Oversight Board has overturned Meta’s original decision to take down a video showing a man confronting a woman on the streets of Iran for not wearing the hijab. Topic Freedom of expression, Protests, Safety Community Standard Violence and incitement Location Iran Platform Instagram The Oversight Board has overturned Meta’s original decision to take down a video showing a man confronting a woman on the streets of Iran for not wearing the hijab. The post did not violate the Violence and Incitement rules because it contains a figurative statement, rather than a literal one, and is not a credible threat of violence. The post was shared during a period of turmoil, escalating repression and violence against protesters, when access to social media in Iran is crucial, with the internet representing the new battleground in the struggle for women’s rights. As Instagram is one of the few remaining platforms not to be banned in the country, its role in the anti-regime “Woman, Life, Freedom” movement has been immeasurable, despite the regime’s efforts to instill fear and silence women online. The Board concludes that Meta’s efforts to ensure respect for freedom of expression and assembly in the context of systematic state repression have been insufficient and it recommends a change to the company’s Crisis Policy Protocol. About the Case In July 2023, a user posted a video on Instagram in which a man confronts a woman in public for not wearing the hijab. In the video, which is in Persian with English subtitles, the woman responds by saying she is standing up for her rights. An accompanying caption expresses support for the woman and Iranian women standing up to the regime. Part of the caption, which also criticizes the regime, includes a phrase that translates as, “it is not far to make you into pieces,” according to Meta. Iran’s criminal code penalizes women who appear in public without a “proper hijab” with imprisonment, a fine or lashes. In September 2023, Iran’s regime approved a new Hijab and Chastity Bill under which women could face up to 10 years in prison if they continue to defy the mandatory hijab rules. The caption in this post makes it clear the woman in the video has already been arrested.
First flagged by Meta’s automated systems for potentially violating Instagram’s Community Guidelines, the post was sent for human review. Although multiple reviewers assessed the content under Meta’s Violence and Incitement policy , they did not come to the same conclusion, which, in combination with a technical error, meant the post stayed up. A user then reported the post, which led to an additional round of review, this time by Meta’s regional team with language expertise. At this stage, it was determined the post violated the Violence and Incitement policy, and it was removed from Instagram. The user who posted the content then appealed to the Board. Meta maintained its decision to remove the content was correct until the Board selected this case, at which stage the company reversed its decision, restoring the post. Key Findings The Board finds the post did not violate the Violence and Incitement Community Standard because it contains figurative speech, rather than literal, and is not a credible threat of violence that is capable of inciting offline harm. While Meta originally removed the post partly because it assessed the phrase, “it is not far to make you into pieces,” as a statement of intent to commit high-severity violence – targeting the man in the video – it should not be interpreted literally. Given the context of widespread protests in Iran, and the caption and video as a whole, the phrase is figurative and expresses anger and dismay at the regime. Linguistic experts consulted by the Board noted a slightly different translation of the phrase (“we will tear you to pieces sometime soon”), explaining that it conveys anger, disappointment and resentment towards the regime. Rather than triggering harm against the regime, the most likely harm that would result from this post would be retaliatory violence by the regime. While Meta’s policy rationale suggests “language” and “context” may be considered when evaluating a “credible threat,” Meta’s internal guidance to moderators does not enable this in practice. Moderators are instructed to identify specific criteria (a threat and a target) and when those are met, to remove content. The Board previously noted its concern about this misalignment in the Iran Protest Slogan case, in which it recommended that Meta provide nuanced guidance on how to consider context, directing moderators to stop default removals of “rhetorical language” expressing dissent. It remains concerning there is still room for inconsistent enforcement of figurative speech, in contexts such as Iran. Furthermore, as automation accuracy is impacted by the quality of training data provided by humans, it is likely the mistake of removing figurative speech is amplified. This post was also considered under the Coordinating Harm and Promoting Crime Community Standard because there is a rule prohibiting “content that puts unveiled women at risk by revealing their images without [a] veil against their will or without permission.” The policy line has since been edited to prohibit: “Outing [unveiled women]: exposing the identity of a person and putting them at risk of harm.” On this, the Board agrees with Meta that the content does not “out” the woman in the video and the risk of harm had abated because her identity was widely known and she had already been arrested. In fact, the post was shared to call attention to her arrest and could help pressurize the authorities to release her. 
As Iran is designated an at-risk country under Meta’s crisis policies, including the Crisis Policy Protocol, the company is able to apply temporary policy changes (“levers”) to address a particular situation. While the Board recognizes Meta’s efforts on Iran, these have been insufficient to ensure respect for people’s freedom of expression and assembly in environments of systematic repression. The Oversight Board's Decision The Oversight Board has overturned Meta’s original decision to take down the post. The Board also makes recommendations to Meta, set out later in this decision. *Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board overturns Meta’s original decision to take down a video that shows a man confronting a woman on the streets of Iran for not wearing the hijab. Meta removed the post from Instagram because of the following line in the caption – “it is not far to make you into pieces” – which the company read as a threat, targeting the man in the video who approached the woman for not wearing a hijab. The Board finds the post did not violate the Violence and Incitement policy because the relevant statement is figurative, rather than literal, and, given the context, does not constitute a credible threat of violence. After the Board selected the case, Meta initially upheld its decision to remove the post – however, before submitting its rationale to the Board, Meta determined that its original decision to take down the content was in error and it restored the post to the platform. The Board concludes that Meta’s efforts to ensure respect for freedom of expression and assembly in the context of systematic state repression have been insufficient and it recommends a change to the company’s Crisis Policy Protocol. 2. Case Description and Background In July 2023, an Instagram user posted a video in Persian with English subtitles showing a man confronting a woman in public for not wearing the hijab, with the woman responding that she is standing up for her rights. The man is not identifiable in the video while the woman is fully visible. The video appears to be a repost of a recording initially shared by someone affiliated with or supporting the Iranian regime. The video was accompanied by a caption, also in Persian, expressing support for the woman and for Iranian women standing up to the regime, and criticizing the regime and its supporters. The caption included a phrase translated by Meta as, “it is not far to make you into pieces” and stated that the woman was arrested following the incident. The post had about 47,000 views, 2,000 likes, 100 comments and 50 shares. This content was first flagged by an automated classifier, an algorithm Meta uses to identify potential violations of its policies, as potentially violating Instagram’s Community Guidelines and sent for human review. Multiple reviewers assessed the content, but because of a technical error and because the reviewers did not reach the same conclusion on whether the post violated the Violence and Incitement policy, it was initially not removed. A user then reported the content, in response to which an automated classifier again determined that the content potentially violated Meta’s policies and sent it for additional review. The content was reported by a single user, and only once. Following this additional level of review by Meta’s team with regional and language expertise, Meta removed the post from Instagram under its Violence and Incitement policy.
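The enforcement history just described is, in effect, a multi-stage pipeline: an automated classifier flags the post, at-scale reviewers assess it (and here disagreed, which, together with a technical error, left the post up), a user report re-triggers automated detection, and a regional team with language expertise makes the final call. Meta has not published how these stages are connected, so the Python sketch below is purely illustrative; the function names, labels and routing order are assumptions rather than a description of Meta's actual systems.

# Hypothetical sketch of the multi-stage review flow described above.
# Stage names and routing logic are assumptions for illustration only.
from typing import Callable, List

def review_flow(post: dict,
                classifier_flags: Callable[[dict], bool],
                at_scale_reviews: List[Callable[[dict], bool]],
                regional_review: Callable[[dict], bool]) -> str:
    if not classifier_flags(post):
        return "keep up (not flagged)"
    # At-scale human review: in this case reviewers disagreed and, combined
    # with a technical error, the post initially stayed on the platform.
    verdicts = [review(post) for review in at_scale_reviews]
    if verdicts and all(verdicts):
        return "remove (at-scale consensus)"
    # A user report can re-enter the content into review; here it ultimately
    # reached a team with regional and language expertise.
    if post.get("user_reported") and classifier_flags(post):
        return "remove (regional review)" if regional_review(post) else "keep up"
    return "keep up (no consensus)"

The sketch is only meant to make the sequence of hand-offs easier to follow; the Board's analysis below turns on how each of these stages applied Meta's written policies, not on the mechanics of the pipeline itself.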
Meta’s decision to remove the post was based on the following line in the caption: “it is not far to make you into pieces.” The company read this line as a threat, targeting the man in the video who approached the woman for not wearing a hijab. The user who created the post appealed the decision to take down the post to Meta. A reviewer upheld the decision to remove. The user who posted the content then appealed the removal to the Board. When the Board identified the case for legal review, Meta upheld its decision to remove the content. At this stage of review, Meta also considered the Coordinating Harm and Promoting Crime Community Standard, which prohibits “outing” unveiled women when this puts the woman at risk of harm. At the time the content was posted, the woman had already been arrested by the regime. After the Board selected the case, the company subsequently changed its decision and restored the content based on additional input from its regional team and on the Board’s decision in the Call for Women’s Protest in Cuba case. As the Board noted in its Iran Protest Slogan decision, people in Iran have been protesting against the government and for civil and political rights and gender equality, since at least the 1979 revolution. In 2023, the Nobel Peace Prize was awarded to Narges Mohammadi , an imprisoned human rights defender, for “more than 20 years of fighting for women’s rights [which has] made her a symbol of freedom and standard-bearer in the struggle against the Iranian theocracy.” Iran’s criminal code penalizes women who appear in public without a “proper hijab” with imprisonment, a fine or lashes. Women in Iran are also banned from certain fields of study and many public places , and people are prohibited from dancing with members of the opposite sex, among other things. Men are considered the head of the household and women need the permission of their father or husband to work, marry or travel . A woman’s court testimony is considered half the weight of a man’s, which limits access to justice for women. After Iranian authorities intensified and expanded mandatory hijab enforcement measures in 2022, women have faced increased scrutiny, often leading to verbal and physical harassment and arrests . In September 2022, 22-year-old Jina Mahsa Amini died in police custody three days after her arrest for allegedly failing to comply with the country’s rules on wearing a “proper hijab.” Her death sparked nationwide outrage and waves of protests across the country, and an anti-regime movement that has become known as: “Zan, Zendegi, Azadi” (“Woman, Life, Freedom”). This led to a violent crackdown by authorities, with over 500 confirmed deaths by the end of 2022, and an estimated 14,000 people being arrested, including protesters as well as journalists, lawyers, activists, artists and athletes who voiced support for the movement. In September 2023, Iran’s parliament approved a new “Hijab and Chastity” bill under which women could face up to 10 years in prison if they continue to defy the country’s mandatory hijab rules. Businesses that serve women without a hijab would also face sanctions and risk being shut down. Social media has been central to the women’s protest movement in Iran, playing a critical role in the mobilization of protests and broadcasting of vital information (see public comments PC 21007, PC-21011), and in documenting and publicly preserving evidence about abuses and human rights violations (PC-21008, attachment). 
However, online campaigns also expose women to risks of further repression by the regime, including threats, defamation campaigns, arrests and imprisonment. Experts consulted by the Board noted an extensive network of entities associated with the Islamic Revolutionary Guard Corps and the Iranian government that operate on Instagram and Telegram, with the latter being frequently used to directly target and accuse protesters and dissenters. Several public comments submitted to the Board also highlighted the regime’s tactic of mass reporting protest content on Instagram using the user reporting system in order to “pressure social media companies into removing content related to dissidents or placing them into shadow bans,” (see public comments PC-21011, PC-21009). There have also been reports of Iranian intelligence officials offering content moderators money to remove content shared by critics of the regime. In February 2023, the UN Special Rapporteur on Iran reported concerns on the continuing repression and targeting of civil society activists, human rights defenders, women’s rights activists, lawyers and journalists, as the authorities clamp down on avenues for expressing dissent, including heavy disruption of the internet and censorship of social media platforms. 3. Oversight Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. When the Board selects cases like this one, in which Meta subsequently acknowledges that it made an error, the Board reviews the original decision to increase understanding of the content moderation process and to make recommendations to reduce errors and increase fairness for people who use Facebook and Instagram. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board Decisions The most relevant previous decisions of the Oversight Board include: II. Meta’s Content Policies The Board’s analysis was informed by Meta’s commitment to voice , which the company describes as “paramount,” and its values of safety, privacy and dignity. Instagram Community Guidelines The Instagram Community Guidelines state that the company will “remove content that contains credible threats” and links to the Violence and Incitement Community Standard. The Community Guidelines do not directly link to the Coordinating Harm and Promoting Crime Community Standard. Meta’s Community Standards Enforcement Report for Q1 2023 states that “Facebook and Instagram share Content Policies. This means that if content is considered violating on Facebook, it is also considered violating on Instagram.” The content was removed under the Violence and Incitement policy. After the Board selected the case, Meta also analyzed the content under its Coordinating Harm and Promoting Crime policy. 
Violence and Incitement Community Standard According to the policy rationale , the Violence and Incitement Community Standard aims to “prevent potential offline violence that may be related to content” appearing on Meta’s platforms. At the same time, Meta recognizes that “people commonly express disdain or disagreement by threatening or calling for violence in non-serious and casual ways.” Meta therefore removes content when the company believes it contains “[t]hreats of violence that could lead to death,” including content that targets anyone with “statements of intent” to commit “high-severity violence.” It states that it considers the context of the statement when assessing whether a threat is credible, which can be any additional information such as the person’s “public visibility and vulnerability of the target.” The “do not post” section of the policy specifically prohibits “threats of violence that could lead to death (and other forms of high-severity violence).” The word “threat” includes “statements of intent” to commit high-severity violence. Coordinating Harm and Promoting Crime Community Standard According to the policy rationale , the Coordinating Harm and Promoting Crime Community Standard aims to “disrupt offline harm and copycat behavior” by prohibiting people from “facilitating, organizing, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals.” This includes “outing,” which Meta defines as content exposing the identity or locations affiliated with anyone who is alleged to, among other things, “be a member of an outing-risk group.” This policy line is enforced by moderators at scale in relation to certain specific groups. On escalation, with additional context, Meta’s policy at the time the content was posted stated that it may also remove: “content that puts unveiled women at risk by revealing their images without [a] veil against their will or without permission.” This language has now been edited to prohibit: “outing [unveiled women]: exposing the identity of a person and putting them at risk of harm.” III. Meta’s Human Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. Meta’s Corporate Human Rights Policy , announced on 16 March 2021, reaffirmed the company’s commitment to respect rights as reflected in the UNGPs. The following international standards may be relevant to the Board’s analysis of Meta’s human rights responsibilities in this case: 5. User Submissions The user who posted the content appealed the removal to the Board. In their statement, the user explained the post showed a representative of the Iranian government confronting a woman for not wearing a hijab. The user stated that the video shows the bravery of the Iranian woman standing up for her rights. The user stated that others had shared similar videos on social media and that the content was not harmful or dangerous and did not violate any Instagram policy. 6. Meta’s Submissions Violence and Incitement Community Standard Meta told the Board that its initial decision to remove the content in this case was based on the Violence and Incitement policy. 
It explained that its decision was based on part of the post’s caption, which the company translated as: “it is not far to make you into pieces.” The company read that line as targeted towards the man in the video who approached the woman for not wearing a hijab. Its regional team determined that the phrase “make you into pieces” constituted a threat of physical harm in the Iranian context. Given that interpretation, Meta initially upheld the decision to remove the content under its Violence and Incitement policy. Although Meta upheld its decision to remove the content when the Board first identified the case for legal review, it subsequently changed its decision after the Board selected this case and restored the content based on additional input from the regional team. Based on that input, Meta concluded that the most likely interpretation of the language in the caption was a reference to taking down the Iranian regime or those supporting the mandatory hijab, rather than a literal threat targeting the man in the video. In this final review, Meta concluded that the content aimed to raise awareness and draw attention to abuses committed against women, such as the woman confronted by the man in the video for not wearing a hijab. The user refers to the strength of Iranian women and criticizes the “bastardness,” referring to either the man filming the video or the regime as a whole, and raises awareness of the arrest of the woman in the video. Meta explained that the potentially threatening language should be read and understood in light of this overall context. This type of awareness raising, Meta noted in its rationale, citing the Board’s decision in the Iran Protest Slogan case, is particularly important in Iran where there are limited outlets for free expression. Meta also informed the Board that after additional research, regional teams suggested that the most reasonable interpretation of the language “it is not far to make you into pieces” was not a true threat, but rather a political critique directed toward either the regime as a whole or people who support the mandatory hijab requirement more generally. While “make you into pieces” generally refers to killing a person by cutting their body into pieces, here it could be understood as dismantling the regime (similar to the metaphorical language used in the Metaphorical Statement Against the President of Peru summary decision). Meta told the Board that another factor in the company ultimately restoring the content was the Board’s recent decision in the Call for Women’s Protest in Cuba case, in which the Board emphasized that a contextual reading of the post should take into account the wave of state repression and the significant public interest in the historic protests that were the subject of the post. Additionally, the company considered the Iran Protest Slogan case, in which the Board analyzed the political movement around women’s rights in Iran. There, the Board emphasized the importance of protecting voice in the context of the protest movement, particularly in light of the Iranian government’s systematic repression of free expression and the importance of digital spaces as a forum to express dissent. 
The company also noted that it had considered the Board’s recent summary decision in Metaphorical Statement Against the President of Peru, which re-emphasized the “importance of designing context-sensitive moderation systems with awareness to irony, satire or rhetorical discourse, especially to protect political speech.” Coordinating Harm and Promoting Crime Community Standard Meta told the Board it also considered removing the content under the Coordinating Harm and Promoting Crime Community Standard for involuntarily outing the woman shown in the video without a veil. Meta enforces this policy line on escalation only and if additional context is provided. Enforcement requires input from relevant stakeholders and is focused on determining whether the content depicts an unveiled woman, exposing her identity without her permission, and is likely to put her at risk, rather than any specific terms used or the tone of the content or caption. Meta notes that a person cannot “out” themselves – outing must be involuntary to violate the policy. In this case, Meta determined the content should not be removed for involuntary outing as the woman’s identity was widely known and available online, and she had already been arrested at the time the case content was posted. This context significantly reduced the risk of harm associated with leaving the content on the platform. The Board asked Meta 11 questions and two follow-up questions in writing. Questions related to enforcement procedures and resources for Iran, Meta’s risk assessment for Iran in general and for the woman in the video in particular, automated and human review processes, the enforcement of content depicting unveiled women and the outing of an at-risk group, at-scale and on escalation. Meta answered all questions. 7. Public Comments The Oversight Board received 12 public comments relevant to this case. Seven of the comments were submitted from the United States and Canada, two from Central and South Asia, two from Europe and one from the Middle East and North Africa. The submissions covered the following themes: the role of social media in the Iran protests, including the “Woman, Life, Freedom” movement, and the role that images of unveiled women play in digital campaigns; the risks for circulating imagery showing unveiled women in Iran on social media; the use of social media by the Iranian authorities; Meta’s enforcement of its content moderation policies for Persian-language expression related to the political situation in Iran; freedom of expression, human rights, women’s rights, government repression and social media bans in Iran. To read public comments submitted for this case, please click here . 8. Oversight Board Analysis The Board examined Meta’s original decision to remove the content under the company’s content policies, human rights responsibilities and values. The Board selected this case because it offered the opportunity to explore Meta’s Violence and Incitement and Coordinating Harm and Promoting Crime policies, as well as related enforcement processes in the context of massive protests for women’s rights and women’s participation in public life in Iran since September 2022. In particular, it addresses the importance of social media platforms for people protesting against the mandatory hijab rules. 
Additionally, the case gives the Board the opportunity to examine Meta’s internal procedures for determining when figurative speech that, read literally, could be interpreted as threatening should nonetheless, given the context, not be treated as a credible threat. The case primarily falls into the Board’s Elections and Civic Space priority, but also touches on the Board’s priorities of Gender, Government Use of Meta's Platforms, and Crisis and Conflict Situations. 8.1 Compliance With Meta’s Content Policies Violence and Incitement Community Standard The Board finds that the content in this case does not violate the Violence and Incitement Community Standard, as it contains figurative speech expressing anger at government repression rather than a literal, and therefore credible, threat of violence. Meta explained that it originally removed the content in this case because it contained “a statement of intent to commit high-severity violence.” It defines high-severity violence as a threat that could lead to death or is likely to be lethal. Based on its regional team’s assessment, Meta construed the phrase “it is not far to make you into pieces” from the post’s caption as a threat of physical harm in the Iranian context, violating the Violence and Incitement policy. Linguistic experts consulted by the Board explained that the relevant part of the caption can be translated as “we will tear you to pieces sometime soon!” or “it is not far away, we will rip you into shreds.” The experts noted that, in the Iranian context, the phrase conveys anger, disappointment and resentment towards oppressors, and suggests that the situation might eventually change because the oppressors’ hold on power will not be eternal. This phrase should not be interpreted literally as an intention to cause physical harm; instead, it serves as a “rhetorical statement” aiming to attract attention, emphasized by emotionally charged language featuring forceful verbs such as “tear” or “rip into pieces/shreds.” The experts highlighted that such figurative speech mirrors the profound anger shared by both the user posting the content and their audience; it does not imply actual threats of physical violence. Although Meta told the Board that it also considers the context of a statement when assessing whether a threat is credible, its guidance to moderators does not indicate that they can consider context when assessing whether there is a “statement of intent [to commit] high-severity violence.” As long as the elements of the rule are satisfied, specifically when content includes both a threat and a target, the post is found violating, as happened here. In this case, Meta interpreted the statement to be targeting the man following the woman. The rule does not require the target to be visible or identifiable. The rules provide only one categorical example of threatening speech that is exempted as not constituting a credible threat: “threats directed against certain violent actors, like terrorist groups.” The phrase “make you into pieces” here does not constitute a credible threat.
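To make the contrast concrete, the following minimal sketch illustrates the difference between an element-based at-scale rule of the kind described above (threat phrase plus target results in removal, with no credibility assessment) and a context-aware check that routes rhetorical protest language for further review. It is an illustration only: the phrase lists, tags, function names and thresholds are assumptions made for the example, not Meta’s actual tooling or guidance.

```python
# Hypothetical sketch (not Meta's actual tooling): contrasting an element-based
# at-scale rule with a context-aware check for figurative "threats".

from dataclasses import dataclass, field

THREAT_PHRASES = {"kill", "tear you into pieces", "make you into pieces"}  # illustrative only
RHETORICAL_CUES = {"protest", "regime", "rights", "awareness"}             # illustrative only


@dataclass
class Post:
    caption: str
    has_target: bool                           # a person or group the statement is directed at
    context_tags: set[str] = field(default_factory=set)


def element_based_review(post: Post) -> str:
    """Element-based rule as described in the decision: threat phrase + target => remove,
    with no assessment of whether the threat is credible."""
    contains_threat = any(p in post.caption.lower() for p in THREAT_PHRASES)
    return "remove" if contains_threat and post.has_target else "keep"


def context_aware_review(post: Post) -> str:
    """Sketch of the contextual assessment the Board envisages: cues that the language
    is rhetorical (e.g. a protest context) route the post to escalation rather than
    automatic removal."""
    contains_threat = any(p in post.caption.lower() for p in THREAT_PHRASES)
    if not (contains_threat and post.has_target):
        return "keep"
    if post.context_tags & RHETORICAL_CUES:
        return "escalate_for_human_context_review"
    return "remove"


post = Post(
    caption="It is not far to make you into pieces",
    has_target=True,
    context_tags={"protest", "regime"},
)
print(element_based_review(post))   # -> "remove"
print(context_aware_review(post))   # -> "escalate_for_human_context_review"
```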
Given the context of a period of turmoil, escalating repression and violence against people protesting the regime in Iran and considering the caption and the video as a whole, the Board finds that the phrase is figurative, not literal, expressing anger and dismay at the regime, and did not constitute a “statement of intent [to commit] high-severity violence.” This interpretation is consistent with Meta’s commitment to voice, and the importance of protecting expression of political discontent. Coordinating Harm and Promoting Crime Community Standard The Board finds that the content in this case does not violate the Coordinating Harm and Promoting Crime Community Standard. Meta considered removing the post under the rule prohibiting “content that puts unveiled women at risk by revealing their images without [a] veil against their will or without permission.” The policy line has since been edited to prohibit “Outing [unveiled women]: exposing the identity of a person and putting them at risk of harm.” The company understands “outing” to include content that shares an image of an unveiled woman, exposes her identity without her permission and puts her at risk of harm. This policy line is applied on escalation only and requires input from various stakeholders for enforcement (see Section 4 above). In this case, the Board agrees with Meta that the content does not “out” the woman, as her identity was widely known and the risk of harm from the content had abated because she had already been arrested at the time the content was posted. Therefore, the video remaining on the platform would not meaningfully increase the level of risk to the woman and could in fact be protective in raising awareness of her case. Determining whether a post “outs” a woman and puts her at risk is especially context dependent; enforcing the policy on escalation-only can ensure the team enforcing it has the time and resources to effectively identify and consider the relevant context. 8.2 Compliance with Meta’s Human Rights Responsibilities The Board finds that removing the content from the platform was inconsistent with Meta’s human rights responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.” When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users' right to freedom of expression,” ( A/74/486 , para. 41). Access to social media is crucial in a closed society such as Iran. As “digital gatekeepers,” social media platforms have a “profound impact” on public access to information ( A/HRC/50/29 , para. 90). The post in this case is part of a broader protest movement that relies on digital civic spaces to survive. 
Laws determining how women must dress impact their freedom and dignity ( A/68/290 , para. 38), whether the law seeks to prohibit wearing a veil or to proscribe going in public without one (see e.g., Yaker v France, CCPR/C/123/D/2747/2016 ). In this regard, “the Internet has become the new battleground in the struggle for women’s rights, amplifying opportunities for women to express themselves,” ( A/76/258 , para. 4). Empowering women’s free expression enables their political participation and the realization of their human rights ( A/HRC/Res/23/2 , paras. 1-2; A/76/258 , para. 5). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires any restriction on freedom of expression to be pursuant to an established rule, which is accessible and clear to users. The rule must be “formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and it must be accessible to the public,” ( General Comment No. 34 , at para 25). Additionally, the rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and should “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” (General Comment No. 34, at para 25; A/HRC/38/35 (undocs.org), at para 46). Lack of clarity or precision can lead to inconsistent and arbitrary enforcement of the rules. Applied to Meta, users should be able to predict the consequences of posting content on Facebook and Instagram, and content reviewers should have clear guidance on their enforcement. Violence and Incitement Community Standard The Board finds that while the policy rationale – which expresses the aims of the Community Standard but is not part of the rule itself – of the Violence and Incitement Community Standard suggests that “context matters,” and may be considered when evaluating a “credible threat,” Meta’s internal guidelines and approach to moderation do not enable this in practice. As the Board noted in the Iran Protest Slogan case, an at-scale content moderator is instructed to look for specific criteria, or elements, in the post, and once those elements are met, the moderators are instructed to remove the post. In other words, if the post contains a threat (e.g., “kill” or “I will tear you into pieces”) and a target, the result is removal. Content moderators are not empowered to make an assessment on whether the threat is credible. As Meta also explained in that case, the rote or formulaic approach is because “assessing whether [a phrase] constitutes rhetorical speech as opposed to a credible threat is challenging, particularly at scale.” Consideration of credibility of threats is taken into account in creating the rule, not in enforcing it. As the Board noted in the Iran Protest Slogan case, while the “policy rationale appears to accommodate rhetorical speech of the kind that might be expected in protests contexts, the written rules and corresponding guidance to reviewers do not. Indeed, enforcement in practice, in particular at-scale, is more formulaic than the rules imply, and this may create misperceptions to users of how rules are likely to be enforced. 
The guidance to reviewers, as currently drafted, exclude[s] the possibility of contextual analysis, even when there are clear cues within the content itself that threatening language is rhetorical.” This misalignment between the company’s stated policy rationale and its actual enforcement practice continues and does not adequately satisfy the principle of legality. The Board reiterates its findings from the Iran Protest Slogan case that Meta should provide nuanced guidance on how to take context into account, directing moderators to refrain from default removal of “rhetorical” or non-literal language expressing dissent, particularly in sensitive political environments such as Iran. Coordinating Harm and Promoting Crime Community Standard The Board finds that Meta’s prohibition on “content that puts unveiled women at risk by revealing their images without [a] veil against their will or without permission” is sufficiently clear, as applied in this case. It makes clear that content that “outs” unveiled women and could lead to harm is prohibited on Meta’s platforms. However, the Board notes with concern that the Instagram Community Guidelines do not directly link to the Coordinating Harm and Promoting Crime Community Standard. This undermines the accessibility of the rules for Instagram users. In previous cases (Breast Cancer Symptoms and Nudity, Öcalan’s Isolation), the Board has recommended that Meta publicly clarify for users how the Facebook Community Standards apply to Instagram. In response, Meta has undertaken a process to unify the Community Standards with Instagram’s Community Guidelines and to specify where policies differ slightly between platforms. In ensuing quarterly transparency reports, Meta has assured the Board that this effort remains a priority, noting that legal and regulatory considerations have affected its timelines. The Board reiterates the importance of moving quickly to complete this process and ensure clarity about the applicable rules. II. Legitimate Aim Under Article 19, paragraph 3 of the ICCPR, expression may be restricted for a defined and limited list of reasons, including for the purpose of protecting the rights of others. In this case, the Board finds that the Violence and Incitement Community Standard aims to “prevent potential offline harm” by removing content that poses “a genuine risk of physical harm or direct threats to public safety.” This policy therefore serves the legitimate aim of protecting the right to life (Article 6, ICCPR) and the right to physical security of the person (Article 9, ICCPR; General Comment No. 35, para. 9). The Coordinating Harm and Promoting Crime policy serves the legitimate aim of protecting the rights of women in Iran to non-discrimination (Articles 2, 3 and 26, ICCPR; Articles 1 and 7, CEDAW), including in the enjoyment of their rights to freedom of expression and assembly (Articles 19 and 21, ICCPR), the right to take part in public life (Articles 1 and 7, CEDAW), the right to privacy (Article 17, ICCPR) and their rights to life (Article 6, ICCPR) and to liberty and security of person (Article 9, ICCPR). III. Necessity and Proportionality Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment 34, para. 34).
Social media companies should consider a range of possible responses to problematic content beyond deletion to ensure restrictions are narrowly tailored ( A/74/486 , para. 51). Violence and Incitement Community Standard The Board finds that Meta’s initial decision to remove the content in this case was not necessary. It was not required to protect the safety of the person taking the video or others, as the threat mentioned in the caption of the post is not literal. The Board is concerned that even after its guidance in the Iran Protest Slogan case, the company’s Community Standards and guidance provided to moderators still leave room for inconsistent enforcement of figurative (non-literal) threats, despite the situation in Iran, with protests ongoing for more than a year now. That case, like this one, involved a phrase that Meta failed to identify as a non-literal statement of a threat. Continuing lack of adequate guidance is further highlighted by the fact that the content in this case was reviewed by multiple at-scale moderators and teams within Meta, and repeatedly found to be violating. As part of its analysis, the Board drew upon the six factors from the Rabat Plan of Action to evaluate the capacity of the content in this case to create a serious risk of inciting discrimination, violence or other lawless action. The Board notes that while the Rabat factors were developed for advocacy of national, racial or religious hatred that constitutes incitement, and not for incitement generally, the six-factor test is useful for assessing incitement in general terms, and the Board has used it in this way previously (see, for example, Iran Protest Slogan and Call for Women’s Protest in Cuba): Based on the analysis of the factors above, the Board considers that the content did not constitute a credible threat and was not capable of inciting offline harm. When figurative speech is used in the context of widespread protests met with violent repression, Meta should enable its reviewers to assess language and local context, aligning the guidance for moderators with the underlying policy rationale. Ensuring accurate assessment of whether a post is “figurative speech” or likely to incite violence is vital for improving moderation in crises more broadly. Automation accuracy will be impacted by the quality of the training data provided by human moderators. Where human moderators remove “figurative” statements due to a rigid enforcement of a rule, that mistake is likely to be reproduced and amplified through automation. The Board notes that Meta has a number of mechanisms available to adjust its policies and their enforcement during crisis situations, including the “at risk” country tiering system, and its Crisis Policy Protocol. The “at risk” country tiering system is used to identify countries at risk of “offline harm and violence” in order to determine how the company should prioritize its product development or how to invest its resources. The assessment can also be taken into account for other processes (e.g., whether to stand up a special operations team or to trigger the use of its Crisis Policy Protocol). Meta informed the Board that for the second half of 2023, Iran has been designated an “at risk country.” Iran has also been designated under the Crisis Policy Protocol since September 21, 2022, and has remained designated since that time. The Crisis Policy Protocol enables Meta to make certain temporary policy changes, known as “policy levers,” to address a particular situation. 
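Purely as an illustration, the sketch below shows one way such a temporary lever might be represented as a configuration record; every field name, value and date is an assumption made for this example and does not reflect Meta’s actual Crisis Policy Protocol schema.

```python
# Hypothetical illustration only: representing a temporary "policy lever" as an
# explicit, time-bounded record. Field names, criteria and dates are assumptions.

from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class PolicyLever:
    crisis: str                  # designated crisis the lever applies to
    policy_line: str             # Community Standard line being adjusted
    adjustment: str              # temporary change, e.g. an allowance
    criteria: tuple[str, ...]    # guidance for at-scale reviewers
    activated_on: date
    review_by: date              # levers are temporary and must be re-assessed


figurative_threats_lever = PolicyLever(
    crisis="Iran (designated under the Crisis Policy Protocol since 2022-09-21)",
    policy_line="Violence and Incitement: threats of high-severity violence",
    adjustment="Allow figurative/rhetorical 'threats' directed at the regime in protest contexts",
    criteria=(
        "widespread protests against state repression",
        "speaker lacks the capacity or intent to incite imminent harm",
        "local linguistic convention of emotionally charged, non-literal language",
        "low likelihood of offline harm given local context",
    ),
    activated_on=date(2023, 1, 1),   # placeholder date
    review_by=date(2023, 7, 1),      # placeholder date
)
```

Recording each lever as an explicit, time-bounded entry of this kind is one possible design for keeping temporary crisis adjustments auditable; the examples Meta has shared are described in the next paragraph.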
Meta provided some examples of the policy levers already used in Iran, including “an allowance to permit content that includes the slogan 'I will kill whoever kills my sister/brother' or its derivatives, absent other violations of our policies.” (For more examples of policy levers, see Policy Forum Minutes, January 25, 2022, Crisis Policy Protocol.) While the Board recognizes the company’s commitment to safety and its efforts to mitigate potential content moderation risks by activating the Crisis Policy Protocol for Iran, these efforts have been insufficient to ensure respect for people’s freedom of expression and assembly in an environment of systematic repression of dissent and social tensions. Research commissioned by the Board indicates that, on Meta’s platforms, the overwhelming majority of content depicting unveiled women in discussions of the wearing of the hijab in Iran is shared by or in support of the protest movement. Here, Meta’s enforcement process repeatedly failed to distinguish figurative (non-literal) statements, read in their relevant context, from real threats and incitement to violence, which have the potential to further offline harm. The Board recommends that Meta add a policy lever to the Crisis Policy Protocol and, accordingly, provide at-scale moderators with internal criteria for identifying statements that use threatening language figuratively, rather than literally, in the relevant context, so that such statements are deemed non-violating under the Violence and Incitement policy line prohibiting threats of violence. In developing the crisis-specific criteria for determining whether a threat is figurative rather than literal, Meta may look to the Rabat Plan factors (e.g., the context of widespread protests against state repression, whether the speaker has the ability to incite or presents the risk of inciting people to engage in harm, relevant linguistic and social context indicating common use of strong or emotional language for rhetorical effect, and the likelihood of harm given local knowledge). The company may also rely on its trusted partners to help devise or assess the criteria for moderation. Meta itself has stressed the practical importance of the Rabat Plan of Action to content moderation, having supported the United Nations in translating the Plan of Action into 32 languages. This policy lever should allow “figurative speech” within the context of protests against the regime, provided it is not intended to, and is not likely to, incite violence. Coordinating Harm and Promoting Crime Community Standard In this case, the Board finds removal under the Coordinating Harm and Promoting Crime Community Standard was not necessary, as the depicted woman’s identity was widely known, and the content was clearly posted to call attention to her arrest, in the hope that the attention would lead to her release. Additionally, several public commentators highlighted that women who remove the hijab in public do so purposefully, as a form of protest, and are aware of the potential consequences, choosing “defiance as a strategic opposition to authority” (see public comment Tech Global Institute, PC-21009). The woman in the video had already been identified and arrested by the regime. This post was shared to call attention to that arrest. Protesters and dissidents detained by the regime have been tortured, subjected to gender-based violence or disappeared.
Experts consulted by the Board and several public commentators specifically noted that this practice, calling attention to arrests and calling for the release of a detained person, is regularly used by the movement and by human rights defenders in Iran, and can help protect individuals held by the regime. Balancing the need to safeguard the identities of vulnerable users while avoiding censorship for those who desire exposure is a delicate question and requires contextual analysis, timely review and quick action. 9. Oversight Board Decision The Oversight Board overturns Meta's original decision to take down the content. 10. Recommendations To ensure respect for users' freedom of expression and assembly in an environment of systematic state repression, Meta should add a policy lever to the Crisis Policy Protocol providing that figurative (or not literal) statements, not intended to, and not likely to, incite violence, do not violate the Violence and Incitement policy line prohibiting threats of violence in relevant contexts. This should include developing criteria for at-scale moderators on how to identify such statements in the relevant context. The Board will consider this recommendation implemented when Meta both shares with the Board the methods for implementing the policy lever and the resulting criteria for moderation in Iran. *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. Return to Case Decisions and Policy Advisory Opinions" ig-7hc7exg7,Reclaimed Term in Drag Performance,https://www.oversightboard.com/decision/ig-7hc7exg7/,"April 23, 2025",2025,,"TopicLGBT, Marginalized communities, Sex and gender equalityCommunity StandardHate speech",Hate speech,Overturned,United States,"A user appealed Meta’s decision to remove an Instagram post featuring a drag performance and a caption that included a word, designated by Meta as a slur, being used in a reclaimed, positive, self-referential context.",7078,1071,"Overturned April 23, 2025 A user appealed Meta’s decision to remove an Instagram post featuring a drag performance and a caption that included a word, designated by Meta as a slur, being used in a reclaimed, positive, self-referential context. Summary Topic LGBT, Marginalized communities, Sex and gender equality Community Standard Hate speech Location United States Platform Instagram Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. 
They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to remove an Instagram post featuring a drag performance and a caption that included a word, designated by Meta as a slur, being used in a reclaimed, positive, self-referential context. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. About the Case In May 2024, a user posted a video of themselves to Instagram wearing a red, glittery outfit and performing in a drag show. The caption underneath the video mentioned other Instagram users, acknowledging them for their support and participation. The post also included a thank-you note to another user for providing the sound production for the show. In the post, the user refers to themselves as a “faggy martyr.” The user who posted the video appealed Meta’s decision to remove this post to the Board explaining that they are a queer, trans, drag performer and that they are speaking about themselves in the caption of the video. They emphasized that they included the word “faggy” (a diminutive version of the “fag” slur, hereafter “f***y” and “f-slur”) in their post description because it is a “reclaimed colloquial term that the queer community ... uses all the time.” The user also emphasized that they consider this term a joyous self-descriptor of which they are proud. The user concluded their appeal to the Board by stating the importance of keeping the post up, as it helps them book more performances. Under Meta's Hateful Conduct Community Standard, Meta removes slurs, “defined as words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic,” in most contexts, “because these words are tied to historical discrimination, oppression, and violence.” Although on January 7, 2025, Meta announced changes to the language of the company’s Hate Speech policy, now Hateful Conduct policy, and its enforcement, the “f-slur” remains on Meta’s list. The company allows slurs when used self-referentially and in an expressly positive context. These exceptions remain in place following Meta’s January 7 policy update. In this case, the user posted a video in which they were performing, praising their performance and referring to themself as “f***y.” While “f***y” is a slur, in this context, it was being used “self-referentially or in an empowering way.” After the Board brought this case to Meta’s attention, the company determined that the content did not violate the Hateful Conduct policy and that its original decision to remove the content was incorrect because in the post the “f-slur"" was used both self-referentially and in an explicitly positive context. The company then restored the content to Instagram. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). 
The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case demonstrates ongoing issues with Meta’s ability to enforce exceptions to its Hateful Conduct (formerly Hate Speech) policy for the use of slurs in self-referential and/or empowering speech. This summary decision highlights the impact of wrongful removals on the visibility and the livelihoods of queer performers, as the user appealing Meta’s decision indicated. The potential for disproportionate errors in the moderation of reappropriated speech by queer communities and the subsequent impact of mistaken removals is a serious issue that has been noted by researchers for many years. In the Reclaiming Arabic Words case, the Board found Meta had also over-enforced its hate speech policies against the self-referential use of slurs, impacting Arabic-speaking LGBTQIA+ users. In that case, three moderators mistakenly determined the content violated the Hate Speech policy (as it then was), raising concerns that enforcement guidance to reviewers was insufficient. The Board also highlighted it expects Meta to be “particularly sensitive to the possibility of wrongful removal” of this type of content “given the importance of reclaiming derogatory terms for LGBTQIA+ people in countering discrimination.” The Board has issued recommendations aimed at reducing the number of enforcement errors Meta makes in enforcing exceptions to the Community Standards. For example, the Board has recommended that Meta should, “conduct accuracy assessments focused on Hate Speech policy allowances that cover artistic expression and expression about human rights violations (e.g., condemnation, awareness raising, self-referential use, empowering use),” ( Wampum Belt , recommendation no. 3). The company performed an accuracy assessment and provided the Board with enforcement precision metrics for the Hate Speech (now Hateful Conduct) policy. The Board categorizes the recommendation as implemented, as demonstrated through published information. Enforcement errors may occur in at-scale content moderation. However, the Board encourages Meta to continue to improve its ability to accurately detect content where over-enforcement and under-enforcement pose heightened risks for vulnerable groups. On January 7, Meta announced that it was committed to reducing mistakes in the enforcement of its policies, in particular to protect speech. Through its summary decisions, the Board highlights enforcement errors the company has made, often indicating areas where Meta can make further improvements based on prior Board decisions and recommendations. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" ig-7thr3si1,Breast cancer symptoms and nudity,https://www.oversightboard.com/decision/ig-7thr3si1/,"January 28, 2021",2021,January,"TopicHealth, SafetyCommunity StandardAdult nudity and sexual activity","Policies and TopicsTopicHealth, SafetyCommunity StandardAdult nudity and sexual activity",Overturned,Brazil,The Oversight Board has overturned Facebook's decision to remove a post on Instagram.,24915,3782,"Overturned January 28, 2021 The Oversight Board has overturned Facebook's decision to remove a post on Instagram. 
Standard Topic Health, Safety Community Standard Adult nudity and sexual activity Location Brazil Platform Instagram To read this decision in Brazilian Portuguese click here . Para ler a decisão completa em Português do Brasil, clique aqui . The Oversight Board has overturned Facebook’s decision to remove a post on Instagram. After the Board selected this case, Facebook restored the content. Facebook’s automated systems originally removed the post for violating the company’s Community Standard on Adult Nudity and Sexual Activity. The Board found that the post was allowed under a policy exception for “breast cancer awareness” and Facebook’s automated moderation in this case raises important human rights concerns. About the case In October 2020, a user in Brazil posted a picture to Instagram with a title in Portuguese indicating that it was to raise awareness of signs of breast cancer. The image was pink, in line with “Pink October,” an international campaign to raise awareness of this disease. Eight photographs within the picture showed breast cancer symptoms with corresponding descriptions. Five of them included visible and uncovered female nipples, while the remaining three photographs included female breasts, with the nipples either out of shot or covered by a hand. The post was removed by an automated system enforcing Facebook’s Community Standard on Adult Nudity and Sexual Activity. After the Board selected the case, Facebook determined this was an error and restored the post. Key findings In its response, Facebook claimed that the Board should decline to hear this case. The company argued that, having restored the post, there was no longer disagreement between the user and Facebook that the content should stay up, making this case moot. The Board rejects Facebook’s argument. The need for disagreement applies only at the moment the user exhausts Facebook’s internal appeal process. As the user and Facebook disagreed at that time, the Board can hear the case. Facebook’s decision to restore the content also does not make this case moot, as the company claims. On top of making binding decisions on whether to restore pieces of content, the Board also offers users a full explanation for why their post was removed. The incorrect removal of this post indicates the lack of proper human oversight which raises human rights concerns. The detection and removal of this post was entirely automated. Facebook’s automated systems failed to recognize the words “Breast Cancer,” which appeared on the image in Portuguese, and the post was removed in error. As Facebook’s rules treat male and female nipples differently, using inaccurate automation to enforce these rules disproportionately affects women’s freedom of expression. Enforcement which relies solely on automation without adequate human oversight also interferes with freedom of expression. In this case, the user was told that the post violated Instagram’s Community Guidelines, implying that sharing photos of uncovered female nipples to raise breast cancer awareness is not allowed. However, Facebook’s Community Standard on Adult Nudity and Sexual Activity, expressly allows nudity when the user seeks to “raise awareness about a cause or educational or medical reasons” and specifically permits uncovered female nipples to advance “breast cancer awareness.” As Facebook’s Community Standards apply to Instagram, the user’s post is covered by the exception above. Hence, Facebook’s removal of the content was inconsistent with its Community Standards. 
The Oversight Board’s decision The Oversight Board overturns Facebook’s original decision to remove the content and requires that the post be restored. The Board notes that Facebook has already taken action to this effect. The Board recommends that Facebook: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision Summary The Oversight Board has overturned Facebook’s original decision to take down the content, noting that Facebook restored the post after the Board decided to hear this case. Facebook’s decision to reinstate the content does not exclude the Board’s authority to hear the case. The Board found that the content was allowed under a policy exception for “breast cancer awareness” in Facebook’s Community Standard on Adult Nudity and Sexual Activity. The Board has issued a policy advisory statement on the relationship between content policies on Instagram and Facebook, as well as on the use of automation in content moderation and the transparency of these practices. 2. Case Description In October 2020, a user in Brazil posted a picture to Instagram with a title in Portuguese indicating that it was to raise awareness of signs of breast cancer. The image was pink, in line with “Pink October,” an international campaign popular in Brazil to raise breast cancer awareness. Eight photographs within a single picture post showed breast cancer symptoms with corresponding descriptions such as “ripples,” “clusters,” and “wounds,” underneath. Five of the photographs included visible and uncovered female nipples. The remaining three photographs included female breasts, with the nipples either out of shot or covered by a hand. The user shared no additional commentary with the post. The post was detected and removed by a machine learning classifier trained to identify nudity in photos, enforcing Facebook’s Community Standards on Adult Nudity and Sexual Activity, which also applies on Instagram. The user appealed this decision to Facebook. In public statements, Facebook has previously said that it could not always offer users the option to appeal due to a temporary reduction in its review capacity as a result of COVID-19. Moreover, Facebook has stated that not all appeals will receive human review. The user submitted a request for review to the Board and the Board decided to take the case. Following the Board’s selection and assignment of the case to a panel, Facebook reversed its original removal decision and restored the post in December 2020. Facebook claims the original decision to remove the post was automated and subsequently identified as an enforcement error. However, Facebook only became aware of the error after it was brought to the company’s attention through the Board’s processes. 3. Authority and Scope The Board has authority to review Facebook’s decision under Article 2 (Authority to Review) of the Board’s Charter and may uphold or reverse that decision under Article 3, Section 5 (Procedures for Review: Resolution) of the Charter. Facebook has not presented reasons for the content to be excluded in accordance with Article 2, Section 1.2.1 (Content Not Available for Board Review) of the Board’s Bylaws , nor has Facebook indicated that it considers the case to be ineligible under Article 2, Section 1.2.2 (Legal Obligations) of the Bylaws. While Facebook publicly welcomed the Board’s review of this case, Facebook proposed that the Board should decline to hear the case in its filings before the Board because the issue is now moot. 
Facebook argues that, having restored the content, there is no disagreement that it should stay on Instagram and that this is a requirement for a case to be heard, according to Article 2, Section 1 of the Board’s Charter: in instances where people disagree with the outcome of Facebook’s decision and have exhausted appeals, a request for review can be submitted to the Board. The Board disagrees, and interprets the Charter to only require disagreement between the user and Facebook at the moment the user exhausts Facebook’s internal process. This requirement has been met. The Board’s review process is separate from, and not an extension of Facebook’s internal appeals process. For Facebook to correct errors the Board brings to its attention and thereby exclude cases from review would integrate the Board inappropriately to Facebook’s internal process and undermine the Board’s independence. While Facebook reversed its decision and restored the content, irreversible harm still occurred in this case. Facebook's decision to restore the content in early December 2020 did not make up for the fact that the user's post was removed for the entire ""pink month"" campaign in October 2020. Restoring the content in this case is not the only purpose of the remedy the Board offers. Under Article 4 (Implementation) of the Board’s Charter, and Article 2, Section 2.3.1 (Implementation of Board Decisions)of the Bylaws, Facebook is committed to take action on “identical content with parallel context”. Thus, the impact of the Board taking decisions extends far beyond the content in this case. Moreover, a full decision, even where Facebook complies with its outcome in advance, is important. The Board’s process offers users an opportunity to be heard and to receive a full explanation for why their content was wrongly removed. Where content removal is performed entirely through automation, the content policies are essentially embedded into code and may be considered inseparable from it and self-enforcing. Hearing the case allows the Board to issue policy advisory statements on how Facebook’s content moderation practices are applied, including with the use of automation. For these reasons, the Board finds that its authority to review this case is not affected by Facebook’s decision to restore the content after the Board selected the case. The Board proceeds with its review of the original decision to remove the content. 4. Relevant Standards The Board considered the following standards in its decision: I. Facebook’s Content Policies: The Community Standard on Adult Nudity and Sexual Activity’s policy rationale states that Facebook aims to restrict the display of nudity or sexual activity because some people “may be sensitive to this type of content” and “to prevent the sharing of non-consensual or underage content.” Users should not “post images of real nude adults, where nudity is defined as […] uncovered female nipples except in the context of […] health-related situations (for example, post-mastectomy, breast cancer awareness […]).” Instagram’s Community Guidelines state a general ban on uncovered female nipples, specifying some health-related exceptions, but do not specifically include “breast cancer awareness.” The Community Guidelines link to Facebook’s Community Standards. II. Facebook’s Values: The Facebook values relevant to this case are outlined in the introduction to the Community Standards. 
The first is “Voice”, which is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Facebook limits “Voice” in service of four values. The Board considers that two of these values are relevant to this decision: Safety: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. Privacy: We are committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, and to choose how and when to share on Facebook and to connect more easily. III. International Human Rights Standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. The Board's analysis in this case was informed by UN treaty provisions and the authoritative guidance of UN human rights mechanisms, including the following: 5. User Statement The user states that the content was posted as part of the national “Pink October” campaign for breast cancer prevention. It shows some of the main signs of breast cancer, which the user says are essential for early detection of the disease and can save lives. 6. Explanation of Facebook’s Decision Facebook clarified that its original decision to remove the content was a mistake. The company explained to the Board that the Community Standards apply to Instagram. While the Community Standards generally prohibit uncovered and visible female nipples, they are allowed for “educational or medical purposes,” including for breast cancer awareness. Facebook restored the content because it fell within this exception. Facebook claims that allowing this content on the platform is important for its values of “Voice” and “Safety.” The company states that the detection and original enforcement against this content were entirely automated. That automated process failed to determine that the content had clear “educational or medical purposes.” Facebook also claims that it is not relevant to the Board’s consideration of the case whether the content was removed through an automated process or whether there was internal review by a human moderator. Facebook would like the Board to focus on the outcome of enforcement, not the method. 7. Third party submissions The Oversight Board considered 24 public comments for this case: eight from Europe; five from Latin America and the Caribbean; and 11 from the United States and Canada. Seven were submitted on behalf of an organization. One comment was submitted without consent to publish. The submissions covered the following themes: whether the post complied with Facebook’s Community Standards and values; the importance of breast cancer awareness in early diagnosis; critique of the over-sexualization and censorship of female nipples compared to male nipples; Facebook’s influence on society; and over-enforcement due to automated content moderation, as well as feedback for improving the public comment process. 8. Oversight Board Analysis 8.1 Compliance with Facebook content policies Facebook’s decision to remove the user’s Instagram post did not comply with the company’s content policies.
According to Facebook, the Community Standards operate across the company’s products, including Instagram. The user in this case was notified that the content violated Instagram’s Community Guidelines, which were quoted to the user. The differences between these rules warrants separate analysis. I. Instagram’s Community Guidelines The “short” Community Guidelines summarize Instagram’s rules as: “Respect everyone on Instagram, don’t spam people or post nudity.” Taken on their own, these imply that the user’s post violates Instagram’s rules. The “long” Community Guidelines go into more detail. Under the heading “post photos and videos that are appropriate for a diverse audience,” they state: [F]or a variety of reasons, we don’t allow nudity on Instagram […] It also includes some photos of female nipples, but photos of post-mastectomy scarring and women actively breastfeeding are allowed. This explanation does not expressly allow photos of uncovered female nipples to raise breast cancer awareness. While Instagram’s Community Guidelines include a hyperlink to Facebook’s Community Standard on Adult Nudity and Sexual Activity, the relationship between the two sets of rules, including which takes precedence, is not explained. II. Facebook’s Community Standards The Community Standard on Adult Nudity and Sexual Activity, under Objectionable Content, states that the display of adult nudity, defined to include “uncovered female nipples,” as well as sexual activity, is generally restricted on the platform. Two reasons are given for this position: “some people in our community may be sensitive to this type of content” and “to prevent the sharing of non-consensual or underage content.” The Community Standard specifies that consensual adult nudity is allowed when the user clearly indicates the content is “to raise awareness about a cause or for educational or medical reasons.” The “do not post” section of the Community Standard lists “breast cancer awareness” as an example of a health-related situation where showing uncovered female nipples is permitted. The Board finds that the user’s post, while depicting uncovered female nipples, falls squarely within the health-related exception for raising breast cancer awareness. Accepting Facebook’s explanation that the Community Standards operate on Instagram, the Board finds that the user’s post complies with them. Facebook’s decision to remove the content was therefore inconsistent with the Community Standards. The Board acknowledges Facebook has agreed with this conclusion. 8.2 Compliance with Facebook Values Facebook’s values are outlined in the introduction to the Community Standards but are not directly referenced in Instagram’s Community Guidelines. Facebook’s decision to remove the user’s content did not comply with Facebook’s values. The value of “Voice” clearly includes discussions on health-related matters and is especially valuable for raising awareness of the symptoms of breast cancer. Images of early breast cancer symptoms are especially valuable to make medical information more accessible. Sharing this information contributes to the “Safety” of all people vulnerable to this disease. There is no indication that the pictures included any non-consensual imagery. Therefore, “Voice” was not displaced by “Safety” and “Privacy” in this case. 8.3 Compliance with international human rights standards I. 
Freedom of expression (Article 19 ICCPR) Facebook’s decision to remove the post also did not comply with international human rights standards on freedom of expression (Article 19, ICCPR). Health-related information is particularly important (A/HRC/44/49, para. 6) and is additionally protected as part of the right to health (Article 12, ICESCR; E/C.12/2000/4, para. 11). In Brazil, where awareness-raising campaigns are crucial to promoting early diagnosis of breast cancer, the Board emphasizes the connection between these two rights. The right to freedom of expression is not absolute. When restricting freedom of expression, Facebook should meet the requirements of legality, legitimate aim, and necessity and proportionality. Facebook’s removal of the content failed the first and third parts of this test. a. Legality Any rules restricting expression must be clear, precise, and publicly accessible (General Comment 34, para. 25). Facebook’s Community Standards permit female nipples in the context of raising breast cancer awareness, while Instagram’s Community Guidelines only mention post-mastectomy scarring. That Facebook’s Community Standards take precedence over the Community Guidelines is also not communicated to Instagram users. This inconsistency and lack of clarity is compounded by removal notices to users that solely reference the Community Guidelines. Facebook’s rules in this area therefore fail the legality test. b. Legitimate aim Any restriction on freedom of expression must pursue a legitimate aim; the permissible aims are listed in Article 19, para. 3 of the ICCPR. Facebook claims its Adult Nudity and Sexual Activity Community Standard helps prevent the sharing of child abuse images and non-consensual intimate images on Facebook and Instagram. The Board notes that both content categories are prohibited under separate Community Standards and are not subject to the exceptions that apply to consensual adult nudity. These aims are consistent with restricting freedom of expression under international human rights law to protect “the rights of others” (Article 19, para. 3, ICCPR). These include the right to privacy of victims of non-consensual intimate image sharing (Article 17 ICCPR), and the rights of the child to life and development (Article 6, CRC), which are threatened in cases of sexual exploitation (CRC/C/GC/13, para. 62). c. Necessity and proportionality Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). The Board finds that removing, without cause, information that serves the public interest cannot be proportionate. The Board is concerned that the content was wrongfully removed by an automated enforcement system, potentially without human review or appeal. This reflects the limited ability of automated technologies to understand context and grasp the complexity of human communication in content moderation (UN Special Rapporteur on freedom of expression, A/73/348, para. 15). In this case, these technologies failed to recognize the words “Breast Cancer” that appear, in Portuguese, at the top left of the image. The Board accepts that automated technologies are essential to the detection of potentially violating content. 
However, enforcement that relies solely on automation, in particular when using technologies with a limited ability to understand context, leads to over-enforcement that disproportionately interferes with user expression. The Board recognizes that automated enforcement may be needed to swiftly remove non-consensual intimate images and child abuse images, in order to avoid immediate and irreparable harm. However, when content is removed to safeguard against these harms, the action should be premised on the applicable policies on sexual exploitation, and users should be notified that their content was removed for these purposes. Regardless, automated removals should be subject both to the internal audit procedure explained under Section 9.2 (I) and to appeal to human review (A/73/348, para. 70), allowing enforcement mistakes to be repaired. Automated content moderation without the necessary safeguards is not a proportionate way for Facebook to address violating forms of adult nudity. d. Equality and non-discrimination Any restrictions on expression must respect the principle of equality and non-discrimination (General Comment 34, paras. 26 and 32). Several public comments argued that Facebook’s policies on adult nudity discriminate against women. Given that Facebook’s rules treat male and female nipples differently, reliance on inaccurate automation to enforce those rules is likely to have a disproportionate impact on women, raising discrimination concerns (Article 1 CEDAW; Article 2 ICCPR). In Brazil, and in many other countries, raising awareness of breast cancer symptoms is a matter of critical importance. As such, Facebook's actions jeopardize not only women’s right to freedom of expression but also their right to health. II. Right to remedy (Article 2 ICCPR) The Board welcomes Facebook’s restoration of the content. However, the negative impacts of the error could not be fully reversed. The post, intended for breast cancer awareness month in October, was only restored in early December. Restoring the content did not make this case moot: as the Board had selected the case, the user had a right to be heard and to receive a fully reasoned decision. The UN Special Rapporteur on freedom of opinion and expression identified the responsibility to provide remedy as one of the most relevant aspects of the UNGPs as they relate to business enterprises that engage in content moderation (A/HRC/38/35, para. 11). If no appeal to human review was available, Facebook’s over-reliance on automated enforcement failed to respect the user’s right to an effective remedy (Article 2, ICCPR; CCPR/C/21/Rev.1/Add. 13, para. 15) and to meet its responsibilities under the UN Guiding Principles (Principles 29 and 31). The Board is especially concerned that Facebook does not inform users when their content is enforced against through automation, and that appeal to human review might not be available in all cases. This reflects a broader concern about Facebook’s lack of transparency on its use of automated enforcement and about the circumstances in which internal appeal might not be available. 9. Oversight Board Decision 9.1 Content Decision The Oversight Board overturns Facebook’s original decision to take down the content, requiring the post to be left up. The Board notes Facebook has already taken action to this effect. 9.2 Policy Advisory Statement I. 
Automation in enforcement, transparency and the right to effective remedy The Board recommends that Facebook: These recommendations should not be implemented in a way that would undermine content moderators’ right to health during the COVID-19 pandemic. II. The relationship between the Community Standards and the Community Guidelines: The Board recommends that Facebook: *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and must be agreed by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. Return to Case Decisions and Policy Advisory Opinions" ig-bt93iaco,Syria Protest,https://www.oversightboard.com/decision/ig-bt93iaco/,"February 27, 2024",2024,,"Community organizations, Freedom of expression, Protests",Dangerous individuals and organizations,Overturned,"Syria, United States","An Instagram user appealed Meta's decision to remove a video that encouraged Syrians to resist the regime of Bashar al-Assad. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.",4714,708,"Overturned February 27, 2024 An Instagram user appealed Meta's decision to remove a video that encouraged Syrians to resist the regime of Bashar al-Assad. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Summary Topic Community organizations, Freedom of expression, Protests Community Standard Dangerous individuals and organizations Location Syria, United States Platform Instagram This is a summary decision. Summary decisions examine cases in which Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not consider public comments and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary An Instagram user appealed Meta's decision to remove a video that encouraged Syrians to resist the regime of Bashar al-Assad. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In August 2023, an Instagram user posted a video showing Abdul Baset al-Sarout, a Syrian football player, activist and public symbol of opposition to the country's president, Bashar al-Assad. Sarout was killed in 2019. In the video, Sarout is heard saying in Arabic, “We have one liberated neighborhood in Syria, we are a thorn in this regime, we will return to this neighborhood” and that “the revolution continues,” encouraging Syrians to resist the regime of Bashar al-Assad. The video had about 30,000 views. The Instagram post was removed for violating Meta’s Dangerous Organizations and Individuals policy, which prohibits representation of, and certain speech about, the groups and people the company judges as linked to significant real-world harm. In their appeal to the Board, the user described their account as a non-profit page “dedicated to spreading information and raising awareness.” Furthermore, the user argued the content did not violate Instagram’s guidelines. 
After the Board brought this case to Meta’s attention, the company determined the content did not contain any references to a designated organization or individual and did not violate its policies. The company restored the content to the platform. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and to increase fairness for Facebook and Instagram users. Case Significance This case highlights the incorrect removal of content that did not contain any references to a designated organization or individual. In order to reduce enforcement errors in places experiencing conflict or other sensitive circumstances, the Board has recommended that Meta “enhance the capacity allocated to HIPO [high-impact false positive override system] review across languages to ensure that more content decisions that may be enforcement errors receive additional human review,” a recommendation on which Meta has reported progress toward implementation (Mention of the Taliban in News Reporting, recommendation no. 7). The Board has also recommended that Meta “evaluate automated moderation processes for enforcement of the Dangerous Organizations and Individuals policy” (Öcalan’s Isolation, recommendation no. 2), which Meta has declined. This case highlights over-enforcement of Meta’s Dangerous Organizations and Individuals policy. The Board’s cases suggest that errors of this sort are all too frequent. The company should make reducing such errors a high priority. Full adoption of these recommendations, along with published information to demonstrate successful implementation, could reduce the number of incorrect removals under Meta’s Dangerous Organizations and Individuals policy. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" ig-feywnwi2,Heritage of Pride,https://www.oversightboard.com/decision/ig-feywnwi2/,"December 18, 2023",2023,December,"LGBT, Marginalized communities, Protests",Hate speech,Overturned,United States,A user appealed Meta’s decision to remove an Instagram post that was celebrating Pride month by reclaiming a slur that has traditionally been used against gay people.,5964,913,"Overturned December 18, 2023 A user appealed Meta’s decision to remove an Instagram post that was celebrating Pride month by reclaiming a slur that has traditionally been used against gay people. Summary Topic LGBT, Marginalized communities, Protests Community Standard Hate speech Location United States Platform Instagram This is a summary decision. Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. 
They do not involve a public comments process and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove an Instagram post that was celebrating Pride month by reclaiming a slur that has traditionally been used against gay people. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In January 2022, an Instagram user posted an image with a caption that includes a quote by writer and civil-rights activist James Baldwin, which speaks of the power of love to unite humanity. The caption also states the user’s hope for a year of rest, community and revolution, and calls for the continuous affirmation of queer beauty. The image in the post shows a man holding a sign that says, “That’s Mr Faggot to you,” with the original photographer credited in the caption. The post was viewed approximately 37,000 times. Under Meta’s Hate Speech policy , the company prohibits the use of certain words it considers to be slurs. The company recognizes, however, that “speech, including slurs, that might otherwise violate our standards can be used self-referentially or in an empowering way.” Meta explains its “policies are designed to allow room for these types of speech,” but the company requires people to “clearly indicate their intent.” If the intention is unclear, Meta may remove content. Meta initially removed the content from Instagram. The user, a verified Instagram account based in the United States, appealed Meta’s decision to remove the post to the Board. After the Board brought this case to Meta’s attention, the company determined the content did not violate the Hate Speech Community Standard and that its original decision was incorrect. The company then restored the content to Instagram. Board Authority and Scope The Board has authority to review Meta's decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation processes involved, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights challenges in Meta’s ability to enforce exceptions to its Hate Speech policy, as well as shortcomings of the company’s cross-check program. The content in this case was posted by a verified Instagram account eligible for review under the cross-check system. Therefore, the account, which is dedicated to educating users about the LGBTQIA+ movement, should have had additional levels of review. As the caption mentions “infinite queer beauty” and makes references to community and solidarity with LGBTQIA+ people, Meta’s moderation systems should have recognized the slur was used here in an empowering way, rather than to condemn or disparage the LGBTQIA+ community. Previously, the Board has issued several recommendations relevant to this case. The Board has recommended that “Meta should help moderators better assess when exceptions for content containing slurs are warranted,” ( Reclaiming Arabic Words decision, recommendation no. 
1) and that Meta should “let users indicate in their appeal that their content falls into one of the exceptions to the Hate Speech policy. This includes where users share hateful content to condemn it or raise awareness,” (Two Buttons Meme decision, recommendation no. 4). Meta has taken no further action on the first recommendation and has implemented in part the second recommendation. Additionally, the Board has recommended that Meta “conduct accuracy assessments focused on Hate Speech policy allowances that cover expression about human-rights violations (e.g., condemnation, awareness raising),” (Wampum Belt decision, recommendation no. 3). Meta has implemented this recommendation in part. Finally, since the content was posted by an account that is part of Meta’s cross-check program, relevant recommendations include encouraging Meta to identify “‘historically over-enforced entities’ to inform how to improve its enforcement practices at scale,” (policy advisory opinion on Cross-Check Program, recommendation no. 26) and to establish “a process for users to apply for over-enforcement mistake-prevention protections,” (policy advisory opinion on Cross-Check Program, recommendation no. 5). Meta is currently fully implementing the first recommendation and declined to take further action on the second recommendation. The Board underlines the need for Meta to address these concerns to reduce the error rate in moderating hate speech content. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" ig-fzse6j9c,United States posts discussing abortion,https://www.oversightboard.com/decision/ig-fzse6j9c/,"September 6, 2023",2023,,"Freedom of expression, Health, Sex and gender equality",Violence and incitement,Overturned,United States,The Oversight Board has overturned Meta’s original decisions to remove three posts discussing abortion and containing rhetorical uses of violent language as a figure of speech.,36959,5766,"Overturned September 6, 2023 The Oversight Board has overturned Meta’s original decisions to remove three posts discussing abortion and containing rhetorical uses of violent language as a figure of speech. Standard Topic Freedom of expression, Health, Sex and gender equality Community Standard Violence and incitement Location United States Platform Instagram United States Abortion Cases Public Comments Appendix The Oversight Board has overturned Meta’s original decisions to remove three posts discussing abortion and containing rhetorical uses of violent language as a figure of speech. While Meta acknowledges its original decisions were wrong and none of the posts violated its Violence and Incitement policy, these cases raise concerns about whether Meta’s approach to assessing violent rhetoric is disproportionately impacting abortion debates and political expression. Meta should regularly provide the Board with the data that it uses to evaluate the accuracy of its enforcement of the Violence and Incitement policy, so that the Board can undertake its own analysis. About the cases The three abortion-related pieces of content considered in this decision were posted by users in the United States in March 2023. 
In the first case, a user posted an image of outstretched hands, overlaid with the text, “Pro-Abortion Logic” in a public Facebook group. The post continued, “We don’t want you to be poor, starved or unwanted. So we’ll just kill you instead.” The group describes itself as supporting the “sanctity of human life.” In the other two cases, both users’ posts related to news articles covering a proposed bill in South Carolina that would apply state homicide laws to abortion, meaning the death penalty would be allowed for people getting abortions. In one of these posts, on Instagram, the image of the article headline was accompanied by a caption referring to the South Carolina lawmakers as being “so pro-life we’ll kill you dead if you get an abortion.” The other post, on Facebook, contained a caption asking for clarity on whether the lawmakers’ position is that “it’s wrong to kill so we are going to kill you.” After Meta’s automated systems, specifically a hostile speech classifier, identified the content as potentially harmful, all three posts were sent for human review. Across the three cases, six out of seven human reviewers determined the posts violated Meta’s Violence and Incitement Community Standard because they contained death threats. The three users appealed the removals of their content. When the Board selected these cases, Meta determined its original decisions were wrong and restored the posts. Key findings The Board concludes that none of the three posts can be reasonably interpreted as threatening or inciting violence. While each uses some variation of “we will kill you,” expressed in a mock first-person voice to emphasize opposing viewpoints, none of the posts expresses a threat or intent to commit violence. In these three cases, six out of seven human moderators made mistakes in the application of Meta's policies. The Board has considered different explanations for the errors in these cases, which may represent, as Meta’s responses suggest, a small and potentially unavoidable subset of mistaken decisions on posts. It is also possible that the reviewers, who were not from the region where the content was posted, failed to understand the linguistic or political context, and to recognize non-violating content that used violent words. Meta’s guidance may also be lacking, as the company told the Board that it does not provide any specific guidance to its moderators on how to address abortion-related content as part of its Violence and Incitement policy. Discussion of abortion policy is often highly charged and can include threats that are prohibited by Meta. Therefore, it is important Meta ensure that its systems can reliably distinguish between threats and non-violating, rhetorical uses of violent language. Since none of these cases are ambiguous, the errors suggest there is scope for improvement in Meta’s enforcement processes. While such errors may limit expression in individual cases, they also create cyclical patterns of censorship through repeated mistakes and biases that arise from machine-learning models trained on present-day abusive content. Additionally, these cases show that mistakenly removing content that does not violate Meta’s rules can disrupt political debates over the most divisive issues in a country, thereby complicating a path out of division. Meta has not provided the Board with sufficient assurance that the errors in these cases are outliers, rather than being representative of a systemic pattern of inaccuracies. 
The Board believes that relatively simple errors like those in these cases are likely areas in which emerging machine learning techniques could lead to marked improvements. It is also supportive of Meta’s recent improvement to the sensitivity of its violent speech enforcement workflows. However, the Board expects more data to assess Meta’s performance in this area over time. The Oversight Board's decision The Oversight Board overturns Meta’s original decisions to remove three posts discussing abortion. The Board recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s original decisions to remove two Facebook posts and one Instagram post, all of which discussed abortion. The Board finds that the three posts did not violate Meta’s Violence and Incitement policy, as they did not incite or threaten violence but were rhetorical comments about abortion policy. Meta has acknowledged that its original decisions were wrong, and that the content did not violate its Violence and Incitement policy. The Board selected these cases to examine the difficult content moderation problem of dealing with violent rhetoric when used as a figure of speech as well as its potential impact on political expression. 2. Case description and background In March 2023, three users in the United States posted abortion-related content, two on Facebook and one on Instagram. The posts reflect different perspectives on abortion. In the first case (Facebook group case) , a Facebook user posted an image showing outstretched hands with a text overlay saying, “Pro-Abortion Logic.” It continues, “We don’t want you to be poor, starved or unwanted. So we’ll just kill you instead,” and has the caption, “Psychopaths...” The post was made in a public group with approximately 1,000 members. The group describes itself as supporting traditional values and the “sanctity of human life,” while opposing, among other things, the “liberal left.” The other two cases related to users posting news articles covering a proposed bill in South Carolina that would apply state homicide laws to abortion, making people who get abortions eligible for the death penalty. In the second case (Instagram news article case) , an Instagram user posted an image of a news article headline stating, “21 South Carolina GOP Lawmakers Propose Death Penalty for Women Who Have Abortions.” The caption describes the lawmakers as being “so pro-life we'll kill you dead if you get an abortion.” In the third case (Facebook news article case) , a Facebook user posted a link to an article entitled “South Carolina GOP lawmakers propose death penalty for women who have abortions.” The caption asks for clarity on whether the lawmakers’ position is that “it’s wrong to kill so we are going to kill you.” Each of the pieces of content in the three cases had fewer than 1,000 interactions. Meta uses automated systems to identify potentially violating content on its platforms. These include content classifiers that use machine learning to screen for what Meta considers “hostile” speech. In all three cases, one of these hostile speech classifiers identified the content as potentially harmful and sent it for human review. Meta informed the Board that, in each case, a human reviewer determined the post violated the Violence and Incitement Community Standard ’s prohibition on death threats. Each of the three users appealed the removals. 
In both the Facebook group and Instagram news article cases, an additional human review upheld the original removals for violating the Violence and Incitement policy. In the Facebook news article case, on appeal, a second human reviewer found the content was non-violating. This post was then reviewed for a third time, as Meta has told the Board it generally requires two reviews to overturn an initial enforcement decision. The third reviewer found the content violated the prohibition on death threats and Meta therefore upheld its initial decision to remove the content. In total, seven human moderators were involved in assessing the content across the three cases. Four of them were located in the Asia Pacific region and three were located in the Central and South Asia region. The three users appealed the cases to the Board. As a result of the Board selecting these cases, Meta determined that its previous decisions to remove the three pieces of content were in error and restored the posts. Meta stated that, while the policy prohibits threats that could lead to death, none of the pieces of content included a threat. As relevant context, the Board notes that in June 2022, the United States Supreme Court issued its decision in Dobbs v. Jackson Women's Health Organization . The decision determined that the United States Constitution does not protect the right to abortion, overruling the precedent set in Roe v. Wade and leaving the question of whether, and how, to regulate abortion to individual states. Since then, legislation has been proposed and passed in multiple states, and abortion regulation is a high-profile political issue. As mentioned above, two of the posts refer to one such proposed bill in South Carolina. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). The Board monitors the implementation of its recommendations, and may follow up on prior recommendations in its case decisions. When the Board selects cases like these three in which Meta subsequently acknowledges that it made an error, the Board reviews the original decision to increase understanding of the content moderation process and to make recommendations to reduce errors and increase fairness for people who use Facebook and Instagram. When the Board identifies cases that raise similar issues, they may be assigned to a panel simultaneously to deliberate together. Binding decisions will be made in respect of each piece of content. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies Meta seeks to prohibit threats of violence while permitting joking or rhetorical uses of threatening language. The policy rationale for Facebook's Violence and Incitement Community Standard explains: “We aim to prevent potential offline harm that may be related to content on Facebook. 
While we understand that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, we remove language that incites or facilitates serious violence.” It further states that Meta removes content “when [it] believe[s] there is a genuine risk of physical harm or direct threats to public safety.” Meta says that it tries “to consider the language and context in order to distinguish casual statements from content that constitutes a credible threat.” Meta’s rules specifically prohibit “threats that could lead to death” and “threats that lead to serious injury” of private individuals, unnamed specified persons, or minor public figures. It defines threats as including “statements of intent to commit violence,” “statements advocating for violence,” or “aspirational or conditional statements to commit violence.” The Board's analysis of the content policies was informed by Meta's commitment to “Voice ,” which the company describes as “paramount,"" and its values of “Safety” and “Dignity.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , when it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions All three users submitted a statement as part of their appeals to the Board. 6. Meta’s submissions Meta explained that while human reviewers initially assessed the content in these three cases to be violating, after the cases were appealed to the Oversight Board, the company determined they did not violate the Violence and Incitement policy, and should remain on the platforms. In the Facebook group case , Meta found that the user was not making a threat against any target, but rather characterizing how they believe groups supporting abortion rationalize their position. In the Instagram news article case , Meta explained that it was clear that the user did not threaten violence when the post was considered holistically. Finally, in the Facebook news article case , Meta similarly explained that the post did not contain a threat against any target when read in the context of the entire post. The user was instead using satire to express their political views on the proposed legislation. Meta said that it did not have further information about why six out of seven of the human reviewers involved in these cases incorrectly found the content violating. This is because Meta does not require its at-scale reviewers to document the reasons for their decisions. Meta conducted a root cause analysis, an internal exercise to determine why a mistake was made, into the removal of all three pieces of content. In each analysis, Meta determined that the mistakes were “the result of human review error, where a reviewer made a wrong decision despite correct protocols in place.” The Board asked Meta 10 questions in writing. The questions related to the challenge of interpreting the non-literal use of violent language at scale, Meta’s hostile speech classifier, the training for moderators regarding the Violence and Incitement policy, and the guidance to moderators to address content that relates to abortion and/or capital punishment. 
All questions were answered. 7. Public comments The Oversight Board received 64 public comments relevant to this case: 4 comments were submitted from Asia Pacific and Oceania, 5 from Central and South Asia, 6 from Europe, 4 from Latin America and the Caribbean, and 45 from the United States and Canada. The submissions covered the following themes: abortion discourse in the United States and recent legal developments; the central role of social media in facilitating public discourse about abortion; the impact of Meta’s Violence and Incitement policy and moderation practices on abortion discourse and freedom of expression; the use of violent rhetoric in political debate; and the potential effects of content labelling and fact-checking of medical misinformation, among others. In June 2023, as part of ongoing stakeholder engagement, the Board consulted representatives of advocacy organizations, academics, and other experts on issues relating to the moderation of abortion-related content and hostile speech. Roundtable meetings were held under the Chatham House rule. Participants raised a variety of issues, including the contextually relevant use of the word “kill” in abortion discourse, the importance of context when assessing possible death threats, fact checking medical misinformation, and moderating satire and humor in abortion discourse, among others. The insights provided at this meeting were valuable, and the Board extends its appreciation to all participants. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board selected these cases to examine the difficult content moderation problem of addressing violent rhetoric when used as a figure of speech. It selected content that reflects different positions to better assess the nature of this problem. These cases fall into the Board’s strategic priority of “Gender.” These cases are clear enforcement errors but represent a difficult set of problems. In one sense, they are easy to resolve as Meta and the Board agree that none of the posts threaten or advocate violence. The posts in question have already been restored, and the Board agrees that they do not violate Meta’s Community Standards. However, the Board is concerned that Meta’s approach to assessing violent rhetoric could have a disproportionate impact on debates around abortion. Through its analysis of these cases, the Board aims to determine whether they indicate the existence of a systemic problem in this area. 8.1 Compliance with Meta’s content policies The Board finds that the posts in these cases do not violate the Violence and Incitement Community Standard. That policy prohibits “threats that could lead to death (and other forms of high-severity violence) … targeting people or places,” including “statements of intent to commit high-severity violence.” None of the posts in these three cases can be reasonably interpreted as threatening or inciting violence. Each of the three posts uses some variation of “we will kill you” expressed in a mock first-person voice to characterize opposing viewpoints in the abortion debate. When read in full, none of these posts advocates violence or expresses a threat or intent to commit violence. In fact, all three posts appear to be intended to criticize the violence that the authors perceive in their opponents' positions. None of these cases are ambiguous. In each case, the violent language used is political commentary or a caricature of views that the user opposes. 
It is important Meta ensures that its systems can reliably distinguish between threats and non-violating, rhetorical uses of violent language. Discussion of abortion policy is often highly charged and divisive and can include threats that are prohibited by Meta. As threats are often directed at activists and vulnerable women, as well as medical practitioners and public figures like judges, they can have serious negative impacts on participation and political expression. At the same time, public debates about abortion often invoke violent speech in non-literal ways. Mistaken removals of non-violating content (false positives) negatively impact expression, while mistakenly leaving up violent threats and incitement (false negatives) presents major safety risks and can suppress the participation of those targeted because of their identity or opinion. These mistakes may limit expression in individual cases but also create cyclical patterns of censorship. To the extent that content moderation machine learning models are trained on present-day abusive content, mistakes and biases will be repeated in the future. These cases regarding abortion also show that taking down false positives can disrupt political debates over the most divisive issues before a nation, thereby complicating or even precluding a path out of division. The Board recognizes that understanding context and the non-literal use of violent language at scale is a difficult challenge. Classic examples in content moderation debates of false positives include phrases like “ I will kill you for sending me spoilers !” False negatives include threats expressed in coded language that are often misunderstood and dismissed by social media platforms, an issue frequently raised by gender-rights advocates. For example, advocates complain that threats and abuse that target them are not taken seriously enough. The Board previously addressed this in the hate speech context in the Knin cartoon case. The Board recognizes this as a crucial issue because real threats of violence and death must not remain on Facebook and Instagram. Over a series of cases, the Board has repeatedly emphasized that violent words, used rhetorically, do not necessarily convey a threat or incite violence. In the Iran protest slogan case, the Board found that the protest slogan, “Marg bar Khamenei” (literally “death to Khamenei”) should not be removed given its usage in Iran. The Board restored a post that suggested “[taking the sword] out of its sheath,” with a majority of the Board interpreting the post as a criticism of President Macron’s response to religiously motivated violence, and not a veiled threat ( Protest in India against France ). The Board overturned Meta’s decision to remove a poem comparing the Russian army in Ukraine to Nazis, which included the call to “kill the fascist... Kill him! Kill him! Kill!” ( Russian Poem ), finding that the quotes are an artistic and cultural reference employed as a rhetorical device. The Board also overturned Meta’s removal of a UK drill music clip that referred to gun violence (“Beat at the crowd, I ain’t picking and choosing (No, no). Leave man red, but you know…,” UK drill music ), finding that Meta should have given more weight to the content's artistic nature. 
The Board also criticized Meta’s original decision to remove a post featuring art by an Indigenous North American artist entitled “Kill the Indian / Save the Man,” a phrase used to justify the forced removal and assimilation of Indigenous children as part of historic crimes and acts of cultural genocide carried out in North America ( Wampum belt ). On the other hand, when evaluating content posted during escalating violence in Ethiopia, the Board upheld Meta’s decision to remove a post urging the national army to “turn its gun towards the fascist” ( Tigray Communication Affairs Bureau ). In each of these cases, the Board has held that the meaning of posts must be evaluated in context; overly literal interpretations of speech may often lead to errors in moderation. In each of the three cases in this decision, Meta has accepted that its initial findings were incorrect and that none of the three posts violated its policies. Meta has already restored the content to Facebook and Instagram, and the Board finds that the decision to reinstate the posts was correct. 8.2 Compliance with Meta’s human rights responsibilities Article 19, para. 2 of the International Covenant on Civil and Political Rights (ICCPR) protects “the expression and receipt of communications of every form of idea and opinion capable of transmission to others,” including about politics, public affairs, and human rights (General Comment No. 34, paras. 11-12). Moreover, the UN Human Rights Committee has stated that “free communication of information and ideas about public and political issues between citizens, candidates and elected representatives is essential” (General Comment No. 34, para. 20). In addition, restrictions on speech may not discriminate on the basis of political opinion (General Comment 34, para. 26). In these cases, all three pieces of content discuss abortion, a key political issue in the United States. Facebook and Instagram have become important sites for political discussion, and Meta has a responsibility to respect the freedom of expression of its users on controversial political issues. For Meta to meet its voluntary commitments to respect human rights, its rules and procedures must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). The Board has acknowledged that while the ICCPR does not create obligations for Meta as it does for states, Meta has committed to respect human rights as set out in the UNGPs ( A/74/486 , paras. 47-48). I. Legality The condition of legality, which requires that rules are clear and accessible to both the people subject to them and those enforcing them, is satisfied in these cases. The policy rationale of the Violence and Incitement policy makes it clear that non-threatening uses of violent language and casual statements are not prohibited. II. Legitimate aim Meta’s rules prohibiting threats are also addressed at achieving a legitimate aim. The Violence and Incitement policy aims to “prevent potential offline harm” by removing content that poses “a genuine risk of physical harm or direct threats to public safety.” This policy serves the legitimate aim of respecting the rights of others, such as the right to life (Article 6, ICCPR), as well as public order and national security (Article 19, para. 3, ICCPR). In the context of political speech, the policy may also pursue the legitimate aim of respecting others’ right to participate in public affairs (Article 25, ICCPR). III. 
Necessity and proportionality The Violence and Incitement policy can only comply with Meta’s human rights responsibilities if it meets the principle of necessity and proportionality, which requires that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” ( General Comment No. 34 , para. 34). The Board is concerned that the rhetorical use of violent words may be linked to disproportionately high rates of errors by human moderators. In response to the Board's questions, Meta said that the posts in these cases were sent for human review seven times, and in six of those, human moderators wrongly concluded that the post contained a death threat. The “Root Cause Analysis” that Meta carried out internally to determine why these cases were wrongly decided led the company to conclude that all the mistakes were simply a result of human error, with no indication that its protocols are deficient. In general, mistakes are inevitable among the hundreds of millions of posts that Meta moderates every month. In the first quarter of 2023, Meta either removed or placed behind a warning screen 12.4 million posts on Facebook and 7.7 million posts on Instagram under its Violence and Incitement policy. Approximately 315,000 of those Facebook posts and 96,000 of those Instagram posts were later restored, most following an appeal by the user. At this scale, even a very low error rate represents tens of thousands of errors. The true rate of mistakes is hard to estimate for a number of reasons, including that not all decisions will be appealed, not all appeals are correctly resolved, and violating posts that are not identified (false negatives) are not quantified. The Board expects that well-trained human moderators, with language proficiency and access to clear guidance, should not often make mistakes in clear situations like the content in these cases. However, the Board does not have adequate data from Meta to conclude whether the erroneous decisions taken by these human moderators represent a systemic problem for content discussing abortion. Rates of false positives and false negatives are usually inversely correlated. The more that Meta tries to ensure that all threats violating the Violence and Incitement policy are detected and removed, the more likely it is to make the wrong call on posts that use violent language rhetorically in a non-threatening way. Meta has told the Board that ""distinguishing between the literal and non-literal use of violent language is particularly challenging because it requires consideration of multiple factors like the user’s intent, market-specific language nuances, sarcasm and humor, the nature of the relationship between people, and the adoption of a third-party ‘voice’ to make a point.” While Meta explained that it recognizes the harm that over-enforcement can do to free expression, it believes that the risk to people’s safety that threatening speech poses justifies its approach. The Board agrees that Meta must be careful in making changes to its violent threats policy and enforcement processes. It could be potentially disastrous to allow more rhetorical uses of violent language (thereby reducing false positives) without understanding the impact on targets of veiled and explicit threats. 
This is particularly the case in the context of posts about abortion policy in the United States, where women, public figures like judges, and medical providers have reported experiencing serious abuse, threats, and violence. In these three cases, the removals were not necessary or proportional to the legitimate aim pursued. However, the Board does not yet have sufficient information to conclusively determine whether Meta’s Violence and Incitement policy and enforcement processes are necessary and proportionate. The Board has considered different explanations for the mistakes in these cases. These cases may represent, as Meta’s responses suggest, a small and potentially unavoidable subset of mistaken decisions on posts. It is also possible that the reviewers, who were not from the region where the content was posted, failed to understand the linguistic or political context, and to recognize non-violating content that used violent words. Meta’s guidance may also be lacking, as the company told the Board that it does not provide any specific guidance to its moderators on how to address abortion-related content as part of its Violence and Incitement policy. Given the scale of the risks to safety that are involved, the Board is wary of recommending major changes to Meta’s policies or enforcement processes without better understanding the distribution of errors and the likely human rights impacts of different options. However, the Board remains concerned by the potential wider implications raised by these apparently simple enforcement errors that are missed by Meta and appealed to the Board. Meta has not been able to provide the Board with sufficient assurance that the errors in these cases are outliers and do not represent a systemic pattern of inaccuracies in dealing with violent rhetoric in general or political speech about abortion in particular. The Board therefore requests and recommends that Meta engages in a collaborative process to identify and assemble more specific information that the Board can use in its ongoing work to help Meta align its policies with human rights norms. IV. A future of continuous improvement and oversight The Oversight Board expects Meta to demonstrate continual improvement in the accurate enforcement of its policies. Meta has said that these errors are not attributable to any shortcoming in its policies, training, or enforcement processes. If this is the case, then the Board suggests that these cases are useful examples of areas where improvements in Meta’s automated tools may help better align its enforcement processes with human rights. In general, in implementing technological improvements to content moderation, Meta should strive to reduce the number of false positives as much as possible without increasing the number of false negatives. Meta has explained to the Board that the classifiers used in these cases were not highly confident that the content was likely to violate its policies, and sent the content for human review. While the Board accepts that the automated interpretation of context and nuance, particularly sarcasm and satire, is difficult, this is an area where the pace of advancement across the industry is extraordinarily rapid. The Board believes that relatively simple errors like those in these cases are likely areas in which emerging machine learning techniques could lead to marked improvements. 
Meta told the Board that it has improved the sensitivity of its violent speech enforcement workflows to reduce the instances of over-enforcement when people jokingly use explicit threats with their friends. After assessing a sample of content automatically removed by the hostile speech classifier, Meta tested no longer proactively deleting some types of content that the sample showed resulted in the most over-enforcement and it is assessing the results. This sort of progress and improvement is exactly what social media companies should be doing on a continual basis. The Board expects to continue helping Meta identify potential areas of improvement and to help evaluate the human rights impacts of potential changes to its rules and enforcement processes. The Oversight Board expects Meta to share more data to enable the Board to assess improvements in performance over time, including regular detailed analyses of its experiments and the changes it makes in its efforts to improve. Previously, such as in the Two buttons meme decision, when the Board recommended that Meta develop better processes to assess sarcasm, the company stated that it implemented the recommendation without demonstrating the progress it claimed to have made. To assess the necessity and proportionality of content moderation at scale, the Board must be able to reliably evaluate whether Meta’s rules and processes are appropriate to achieving Meta’s aims. In this case, we have highlighted our concerns about the potential uneven enforcement of Meta’s Violence and Incitement policy on political speech and noted that moderation at scale involves complex challenges and difficult trade-offs. These cases raise concerns about the possibility of disproportionate removal of political speech, specifically about abortion, when posts are more likely to use words and phrases that present an increased risk of being mistaken for violent threats. In these cases, the Board recommends that Meta demonstrate its position with data sufficient to facilitate an analysis of the proportionality of the policy. As an achievable first step, the Board recommends that Meta begins to regularly share the data that it already holds and generates for its own internal evaluation processes, including the data it relied on to substantiate claims that its policies and procedures are necessary and proportionate. The Board expects Meta to engage in a collaborative process to identify the information that would enable the Board to analyze Meta’s policies in light of the trade-offs and likely impacts of potential alternatives. 9. Oversight Board decision The Oversight Board overturns Meta's original decisions to take down the content in all three cases. 10. Recommendations Enforcement 1. In order to inform future assessments and recommendations to the Violence and Incitement policy, and enable the Board to undertake its own necessity and proportionality analysis of the trade-offs in policy development, Meta should provide the Board with the data that it uses to evaluate its policy enforcement accuracy. This information should be sufficiently comprehensive to allow the Board to validate Meta’s arguments that the type of enforcement errors in these cases are not a result of any systemic problems with Meta’s enforcement processes. The Board expects Meta to collaborate with it to identify the necessary data (e.g., 500 pieces of content from Facebook and 500 from Instagram in English for US users) and develop the appropriate data sharing arrangements. 
The Board will consider this implemented when Meta provides the requested data. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Return to Case Decisions and Policy Advisory Opinions" ig-h3138h6s,Violence against women,https://www.oversightboard.com/decision/ig-h3138h6s/,"July 12, 2023",2023,,"TopicFreedom of expression, Sex and gender equalityCommunity StandardHate speech","Policies and TopicsTopicFreedom of expression, Sex and gender equalityCommunity StandardHate speech",Overturned,Sweden,The Oversight Board has overturned Meta's decisions to remove two Instagram posts which condemned gender-based violence.,46147,7173,"Overturned July 12, 2023 The Oversight Board has overturned Meta's decisions to remove two Instagram posts which condemned gender-based violence. Standard Topic Freedom of expression, Sex and gender equality Community Standard Hate speech Location Sweden Platform Instagram Public comments appendix The Oversight Board has overturned Meta’s decisions to remove two Instagram posts which condemned gender-based violence. The Board recommends that Meta include the exception for allowing content that condemns or raises awareness of gender-based violence in the public language of the Hate Speech policy, as well as update its internal guidance to reviewers to ensure such posts are not mistakenly removed. About the cases In this decision, the Board considers two posts from an Instagram user in Sweden together. Meta removed both posts for violating its Hate Speech Community Standard. After the Board identified the cases, Meta decided that the first post had been removed in error but maintained its decision on the second post. The first post contains a video with an audio recording and its transcription, both in Swedish, of a woman describing her experience in a violent intimate relationship, including how she felt unable to discuss the situation with her family. The caption notes that the woman in the audio recording consented to its publication, and that the voice has been modified. It says that there is a culture of blaming victims of gender-based violence, and little understanding of how difficult it is for women to leave a violent partner. The caption says, “men murder, rape and abuse women mentally and physically – all the time, every day.” It also shares information about support organizations for victims of intimate partner violence, mentions the International Day for the Elimination of Violence against Women, and says it hopes women reading the post will realize they are not alone. After one of Meta’s classifiers identified the content as potentially violating Meta’s rules on hate speech, two reviewers examined the post and removed it. This decision was then upheld by the same two reviewers on different levels of review. 
As a result of the Board selecting this case, Meta determined that it had removed the content in error, restoring the post. As the Board began to assess the first post, it received another appeal from the same user. The second post, also shared on Instagram, contains a video of a woman speaking in Swedish and pointing at words written in Swedish on a notepad. In the video, the speaker says that although she is a man-hater, she does not hate all men. She also states that she is a man-hater for condemning misogyny and that hating men is rooted in fear of violence. Meta removed the content for violating its rules on hate speech. The user appealed the removal to Meta, but the company upheld its original decision after human review. After being informed that the Board had selected this case, Meta did not change its position. Since at least 2017, digital campaigns have highlighted that Facebook’s hate speech policies result in the removal of phrases associated with calling attention to gender-based violence and harassment. For example, women and activists have coordinated posting phrases such as “men are trash” and “ men are scum ” and protested their subsequent removal on the grounds of being anti-men hate speech. Key findings The Board finds that neither of the two posts violates Meta’s rules on hate speech. On the first post, the Board finds that the statement “Men murder, rape and abuse women mentally and physically – all the time, every day” is a qualified statement which does not violate Meta’s Hate Speech policy. Given that the post refers to international campaigns against violence against women and provides local resources for organizations that work to help women victims, it is clear the language describes men who commit violence against women. In addition, the Board finds that the second post is not an expression of contempt towards men but condemns violence against women and discusses the roots of gender-based hate. While Meta argues that the user’s statement that she does not hate all men does not impact the assessment of other parts of the post, the Board disagrees and assesses the post as a whole. The Board finds that the other aspects of the post that Meta cited as potentially violating are not violating when read within the context of the post. Some Board Members disagreed that the posts in question did not violate Meta’s hate speech rules. The Board is concerned that Meta’s approach to enforcing gender-based hate speech may result in the disproportionate removal of content raising awareness of and condemning gender-based violence. Meta states, for example, that the first post should be allowed on its platforms and that the Hate Speech policy is “designed to allow room for raising awareness of gender-based violence.” However, neither the public-facing policy nor its internal guideline documents to moderators contain clear guidance to ensure that posts like these would not be mistakenly removed. The company’s confusing guidance makes it virtually impossible for moderators to reach the right conclusion. While Meta relied on contextual cues to determine the first post was not violating once it was identified by the Board, the company’s guidance for moderators limits the possibility of contextual analysis significantly. The Board finds that within this context, it is critical that statements that condemn and raise awareness of gender-based violence not be mistakenly removed. 
The Board’s concern that this may be happening is particularly pronounced given that an allowance for this type of content, while highlighted by Meta, is not communicated clearly to the public and the guidance provided to moderators is confusing. To address this, Meta should clarify its public rules and provide appropriate guidance to moderators that better reflects this allowance. The Oversight Board’s decision The Oversight Board overturns Meta’s decisions to remove both posts. The Board recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s decisions in two cases about Instagram posts condemning gender-based violence that Meta removed as anti-men hate speech. Meta has acknowledged that its initial decision in the first case was wrong but maintains the second post violates the Hate Speech policy. In both cases, the Board finds the posts do not violate the Hate Speech policy. It is also recommended that Meta should include a clearer exception to allow content that condemns or raises awareness of gender-based violence in the public language of the Hate Speech policy, as well as update its internal guidance so that moderators can effectively implement this exception. This would help ensure that Meta does not incorrectly remove content condemning or raising awareness about gender-based violence. 2. Case description and background This decision concerns two content decisions made by Meta, which the Oversight Board is addressing here together. An Instagram user in Sweden created two posts with videos and captions. Meta removed both posts for violating its Hate Speech Community Standard. After the Board identified the cases, Meta reversed its decision on the first post stating that it had been removed in error. However, it maintained its decision on the second post. In the first post, the user posted a video with an audio recording and its transcription, both in Swedish, of a woman describing her experience in a violent intimate relationship, including feeling unable to discuss her situation with her family. The audio does not contain graphic details of violence. The caption to the post notes the woman in the audio recording consented to it being published, and that the voice has been modified. It says that there is a culture of blaming victims of gender-based violence, and little understanding of how difficult it is for women to leave a violent partner. The caption says, “men murder, rape and abuse women mentally and physically - all the time, every day.” It also provides a helpline number, shares information about support organizations for victims of intimate partner violence, mentions the International Day for the Elimination of Violence against Women, and says it hopes women reading the post will realize they are not alone. The post was viewed around 10,000 times. On the same day, a Meta classifier identified the content as potentially violating the Hate Speech policy. Meta stated that due to a bug, the classifier created two review jobs. It then sent the content twice to two reviewers, who each decided twice that the content violated the Hate Speech policy. Meta removed the post and applied a “ strike ” to the user’s Instagram account. When Meta removes content, it sometimes applies “strikes,” which correspond to different penalties against an account as they accumulate. 
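As a purely illustrative aside, accumulated strikes mapping to escalating account penalties can be pictured with the following minimal Python sketch; the thresholds and penalty names are hypothetical assumptions and do not reflect Meta's actual strike schedule.

# Minimal sketch of an escalating strike-based penalty ladder.
# Thresholds and penalties are hypothetical, not Meta's actual schedule.

PENALTY_LADDER = [
    (1, "warning"),
    (3, "temporary feature limit, e.g. no live video"),
    (5, "time-limited posting restriction"),
    (7, "extended account-level restriction"),
]

def penalty_for(strike_count: int) -> str:
    """Return the most severe penalty whose threshold has been reached."""
    current = "no penalty"
    for threshold, penalty in PENALTY_LADDER:
        if strike_count >= threshold:
            current = penalty
    return current

print(penalty_for(3))  # -> "temporary feature limit, e.g. no live video"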
The content creator appealed Meta’s decision on the same day, and one of the reviewers who had already examined the content upheld the removal. After this, about an hour after the content had initially been posted, it was automatically sent to a High Impact False Positive Override (HIPO) channel, which aims to identify wrongfully removed content. This resulted in the content being sent to the same two reviewers who had originally examined the content. Both reviewers decided once again that the post violated the Hate Speech policy. In total, the content was examined seven times by the same two human reviewers who, on every occasion, found the content to be violating. As a result of the Board selecting this appeal, Meta reviewed the relevant post and determined that its decision to remove it was in error, restored it, and reversed the strike. While the Board began to assess the first post, it received another appeal from the same user. This concerned an Instagram video of a woman speaking in Swedish and pointing at words written in Swedish on a notepad. In the video, the speaker says that although she is a man-hater, she does not hate all men. She further explains that this means she talks about and condemns violence against women, and that these feelings of hate are rooted in fear of violence. Within this discussion of fear, the person in the video draws an analogy between venomous snakes and men who commit violence against women. She notes that although many snakes are not poisonous, the fact that some are impacts how people approach them in general, just as the fear towards men stems from a worldwide social problem of violence against women. In the caption of the post, the user calls on men who are “allies” to help women in their fight. The post was viewed around 150,000 times. Following user reports, Meta removed the content of the second post for violating the Hate Speech policy and again applied a strike against the account, preventing the user from creating live videos. On the same day, the content creator appealed Meta’s removal, but the company upheld its original decision after human review. After being informed that the Board had selected this case, Meta did not change its position. When assessing cases, the Board notes as relevant context research, reporting, and public comments that highlight similar issues. Since at least 2017, digital campaigns have highlighted that Facebook’s hate speech policies result in the removal of phrases associated with calling attention to gender-based violence and harassment. For example, women and activists have coordinated posting phrases such as “men are trash” and “ men are scum ” and protested their subsequent removal on the grounds of being anti-men hate speech. Meta, itself, has reflected on the complexities of its policy approach to gender-based hate speech. In 2019, Mark Zuckerberg explained his rationale for considering such posts hate speech, citing the challenges the company perceived in enforcing a policy that acknowledged power differences among different groups. Meta also held a policy forum in which it debated potentially modifying the Hate Speech policy, and ultimately decided to continue with its current approach. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). 
The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. When the Board selects cases like the first post, where Meta subsequently acknowledges that it made an error, the Board reviews the original decision to increase understanding of the content moderation process and to make recommendations to reduce errors and increase fairness for people who use Facebook and Instagram. The Board also aims to make recommendations to lessen the likelihood of future errors and treat users more fairly moving forward. When the Board identifies cases that raise similar issues, they may be assigned to the same panel so that it can deliberate on them together. Binding decisions will be made with respect to each piece of content. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies The Instagram Community Guidelines note that content containing hate speech may be removed and link to Facebook’s Hate Speech policy. The Hate Speech policy rationale defines hate speech as a direct attack against people on the basis of protected characteristics, including sex and gender. Meta does not allow Hate Speech on its platforms because it “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” The rules prohibit “violent” or “dehumanizing” speech and “expressions of contempt” against people based on these characteristics, including men. Tier 1 of the Hate Speech policy prohibits “dehumanizing speech,” which includes “comparisons, generalizations, or unqualified behavioral statements to or about ... violent and sexual criminals.” Meta’s internal policy guidelines define “qualified” and “unqualified” behavioral statements and provide examples. Under these guidelines, ‘qualified statements’ do not violate the policy, while ‘unqualified statements’ are violating and removed. Meta states qualified behavioral statements use statistics, reference individuals, or describe direct experience. According to Meta, unqualified behavioral statements “explicitly attribute a behavior to all or a majority of people defined by a protected characteristic.” Tier 2 of the Hate Speech policy prohibits direct attacks against people on the basis of protected characteristics with ""expressions of contempt,” which include “self-admission to intolerance on the basis of protected characteristics"" and “expressions of hate, including, but not limited to: despise, hate.” The Board’s analysis of the content policies was informed by Meta’s commitment to ""Voice,"" which the company describes as “paramount,” and its values of “Safety” and “Dignity.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses.
In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions In their first appeal to the Board, the content creator said that they wanted to show women who face domestic violence that they are not alone. They also stated that removing the post stops an important discussion and keeps people from learning, and possibly sharing the post. In their second appeal, they explained that it was clear that they do not hate all men but want to discuss the problem of men committing violence against women. 6. Meta’s submissions After the Board identified the first post, Meta determined that it had been removed in error and did not violate the Hate Speech policy. Meta, however, maintained that the second post violated the Hate Speech policy. With regards to the first post, Meta stated that the text in the caption that “men murder, rape and abuse women mentally and physically - all the time, every day” likely caused the removal. When read in isolation, Meta found it was an “unqualified behavioral statement"" about men comparing them to sexual predators or violent criminals, and therefore violated the Hate Speech policy. However, once the Board identified the case, Meta determined that, when read within the context of the post as whole, this was a “qualified behavioral statement.” Meta explained that an “unqualified behavioral statement” attributes a behavior to all or a majority of people defined by a protected characteristic, while a “qualified behavioral statement” does not. Meta further explained that it determined the statement was qualified by looking at several factors. These included: noting the International Day for the Elimination of Violence Against Women; that the user encourages sharing the post and provides information on helplines; and that the user shares a description of an experience of violence and describes it as a social problem. Meta concluded that “the user’s clear intent to raise awareness of violence against women provides further support that the content does not violate the Hate Speech policy.” Meta also stated that “although [the content] does not squarely fall into Meta’s allowance for raising awareness of or condemning someone else’s hate speech, [its] policy is designed to allow room for raising awareness of gender-based violence.” While Meta listed the user’s intent as a contextual factor in finding the content non-violating, in response to question asked by the Board, the company acknowledged that its policies generally do not grant reviewers discretion to consider intent. According to Meta, to ensure consistent and fair enforcement of its rules, it does not require at-scale reviewers “to infer intent or guess at what someone ‘is really saying’” because “divining intent for hate speech invites subjectivity, bias, and inequitable enforcement.” As Meta referenced criteria that are not in Meta’s internal guidelines to reviewers, the Board asked Meta for any existing guidance that would help reviewers reach the correct outcome in this case. Meta then referenced additional confidential internal guidance that focused on elements not relevant to this case. The company also stated that while this case did “not fit neatly into its policies,” it would expect its reviewers to understand that this content is non-violating. 
With regard to the second post, Meta found that “[u]nlike the content in [the first] case, this content contained an expression of hatred directed toward men, which violates [Meta’s] Hate Speech policy.” The company explained that its Hate Speech policy prohibits content targeting men with expressions of contempt, which it defines as “self-admission to intolerance on the basis of protected characteristics,” including expressions of hate. For Meta, the reference to being a man-hater is an expression of hate. Meta acknowledged that the user also said they do not hate all men but stated this did not negate the expression of hate. Meta further noted that the content may violate other elements of its Hate Speech policy but stressed that its removal decision was made based solely on this expression of hate. Meta stated there was an implicit generalization about men, as the user included a phrase about knowing what men are like in general. Meta also described the part of the post that described poisonous snakes as an “implicit comparison between men and snakes” and arguably violating. The Board asked Meta 14 questions in writing, all of which Meta answered. The questions addressed issues related to the criteria, internal guidelines and automated processes for distinguishing qualified and unqualified behavioral statements; how the accumulation of strikes impacts users on Instagram; internal escalation guidelines for at-scale reviewers; and how at-scale reviewers evaluate context, intent, and the accuracy of statements. 7. Public comments The Oversight Board received and considered 13 public comments related to these cases. One of the comments was submitted from Asia Pacific and Oceania; two were submitted from Central and South Asia; six from Europe; one from Latin America and the Caribbean; and three from the United States and Canada. The submissions covered the following themes: the significance of gender-based violence worldwide; the frequency of incorrect removals of content shared by women condemning gender-based violence and the need for change; the lack of clarity in Meta’s policies and the ineffectiveness of its appeals systems including automated moderation; and the lack of contextual approach to content governance. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board examined whether these posts should be restored by analyzing Meta's content policies, human rights responsibilities, and values. The Board also assessed the implications of these cases for Meta’s broader approach to content governance. The Board selected these appeals because they offer the potential to explore how Meta’s Hate Speech rules and their enforcement allow for condemnation and awareness raising of gender-based violence, an issue the Board is focusing on through its strategic priorities of gender, hate speech, and treating users fairly. 8.1 Compliance with Meta’s content policies I. Content rules The Board finds that the first post does not violate any Meta content policy. While the Community Guidelines apply to Instagram, Meta states that ""Facebook and Instagram share content policies. Content that is considered violating on Facebook is also considered violating on Instagram."" Meta ultimately found that this post was a qualified behavioral statement and did not violate Facebook’s Hate Speech policy. 
While the statement that “men murder, rape and abuse women mentally and physically - all the time, every day” may be susceptible to different interpretations, the Board agrees with Meta’s ultimate conclusion that the post taken as a whole is not violating. Rather than being a generalization about all men, or even the majority of men, the principal focus of the post is to reassure victims of gender-based violence that they are not to blame and encourage them to speak out. The user refers to the International Day for the Elimination of Violence Against Women and then provides a helpline number and shares information about local support organizations for victims of intimate partner violence. The post discusses that women have little space to speak about these experiences, that victims are not to blame, and that men perpetrate acts of violence against women. Within this broader language, the statement that ""Men murder, rape and abuse women mentally and physically - all the time, every day"" describes the actions of those men who commit violence against women. In this context, this statement is also better understood as assurance to other victims of domestic violence that they are not alone. It is therefore a non-violating qualified statement. For some Board Members, the global context of violence against women is also relevant to the analysis, as the content reflects and raises awareness of a broader worldwide societal phenomenon, further reinforcing that, read within the context of the post, the statement was not an assertion that all men are rapists or murderers. On the other hand, other Board Members do not believe that such broad and contested sociological considerations as root cause assessments or analysis of power differentials should be used to interpret the statement, believing that this could invite controversial interpretations of what constitutes hate speech. The majority of the Board, though cognizant of the societal phenomenon of violence against women and the debates around its root causes, did not rely on them in order to reach its conclusion that the statement was a ""qualified"" one. Some Board Members disagreed with the majority’s interpretation of this post. For these Members, the user posted a clearly unqualified behavioral statement that men “murder, rape and abuse” women “all the time, every day.” They therefore believe the post violates Meta’s hate speech rules. The Board also finds that the second post does not violate any Meta content policy. The Board finds that assessing the post as a whole, as Meta did with the first post, shows that this is not an “expression of contempt” against men, as prohibited by the Hate Speech policy. Meta argues that the user’s statement that she does not hate all men does not impact the assessment of other parts of the post. The Board disagrees. Again, when reading the post in its entirety, the content does not express contempt against all men, but expresses strong condemnation of violence against women and of the men who commit it. While the user states she is a ""man-hater,"" she both explains that this does not mean she hates all men and describes man-hating as being defined by discussing fear and condemning violence against women. The user’s analogy to the fear of venomous snakes, while disturbing on the surface, actually strengthens the Board’s conclusion that the post as a whole is not a condemnation of all men. Not all snakes are venomous; most are harmless.
But the user is pointing out that the fear of venomous snakes brushes off onto all snakes, causing many or most humans to be frightened of snakes as a class. Some Members disagreed and thought the second post was an expression of contempt, and thus a violation of Meta’s rules. A subset of these Members believed the post should remain off the platform and thus dissented from any decision to restore the post, whose language, they claim, could lead to negative unintended consequences for both men and women. The Board finds the second post to be more complex to assess than the first post. While the first post should have been more easily recognized as qualified, for the second post a nuanced analysis of the entire post and its language was key to understanding that it was not an expression of contempt. The Board agrees that the post is ultimately a condemnation of violence against women and a discussion of the roots of gender-based hate; a majority therefore decides to restore it to the platform. Finally, the Board agrees that the content of these posts does not create an environment of intimidation or promote offline violence, and consequently does not violate the Hate Speech policy rationale. The Board finds this post seeks to diminish offline violence against women and falls directly within Meta’s paramount value of “Voice.” For this reason, the Board also finds that removing the content was not consistent with Meta’s values. II. Enforcement action The Board notes that Meta’s review and appeal process for the first post used two at-scale reviewers seven times at different levels of review. In other words, the same two people were asked to review decisions that they themselves had taken earlier, rather than refer the secondary decisions to different reviewers. The Board is concerned that the effectiveness of the appeal and HIPO reviews here may have been undermined by this approach. In the “Wampum belt” case, the Board expressed concerns about Meta’s review and appeal system, and requested an evaluation of accuracy rates when content moderators are informed that they are engaged in secondary review and know that the initial determination was contested. Meta responded that it is still exploring the most efficient way to provide reviewers with additional information to maximize the accuracy of their reviews while ensuring consistency and scalability. Meta should consider adjusting its relevant protocol to send review jobs to different reviewers than those who previously assessed the content, to improve the accuracy of decisions made upon secondary review. The Board is further concerned about the pressure on at-scale reviewers to assess content that may require more complex policy assessment in a short amount of time, often mere seconds. The Board has previously expressed concern about the limited resources available to moderators and their capacity to prevent the kind of mistakes seen in these cases (“Wampum belt” case, “‘Two buttons’ meme” case). III. Transparency The Board welcomes recent changes Meta has made in response to the Board’s recommendations to make its account strikes and penalty system fairer and clearer. However, Meta does not provide information in its Transparency Center about the consequences of Instagram strikes specifically, as it currently does for Facebook strikes. An Instagram Help Center article shares some penalties Meta applies to Instagram accounts when they accumulate strikes, but this is less accessible.
It is also not comprehensive, as it does not mention limits on the ability to create live videos, for example. To treat users fairly, Meta should clearly explain and share Instagram-specific information in the Transparency Center alongside the information about Facebook strikes and penalties. 8.2 Compliance with Meta’s human rights responsibilities The Board finds that Meta's initial decisions to remove both posts are inconsistent with Meta's human rights responsibilities as a business. Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including about politics, public affairs, and human rights ( General Comment No. 34 (2011), Human Rights Committee, paras. 11-12). Moreover, “the Internet has become the new battleground in the struggle for women’s rights, amplifying opportunities for women to express themselves” ( A/76/258 para. 4). Empowering women to freely express themselves enables the realization of their human rights ( A/HRC/Res/23/2 ; ( A/76/258 para. 5). The Joint Declaration on Freedom of Expression and Gender Justice , a statement by the freedom of expression experts in the UN and regional human rights systems, discusses the importance of protecting speech that calls attention to gender-based violence. It states that “when women speak out about sexual and gender-based violence, states should ensure that such speech enjoys special protection, as the restriction of such speech can hinder the eradication of violence against women.” As social media is an important pathway to raise awareness about intimate partner violence and women's rights, and in alignment with its company values, the Board believes Meta should take a similar approach. Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although ""companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users' right to freedom of expression"" ( A/74/486 , para. 41). I. Legality (clarity and accessibility of the rules) The principle of legality requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, at para. 25). Legality standards further require that rules restricting expression “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” ( A/HRC/38/35 at para. 46). People using Meta's platforms should be able to access and understand the rules and content reviewers should have clear guidance on their enforcement. Meta’s approach to enforce its hate speech policy raises serious legality concerns with respect to both rules analyzed by the Board. 
The Board’s main concern is that Meta states that the Hate Speech policy is “designed to allow room for raising awareness of gender-based violence.” However, neither the public-facing policy nor its internal guideline documents contain clear guidance to ensure that posts like these would not be mistakenly removed. The public-facing policy rationale mentions that someone else’s hate speech can be shared to condemn it or raise awareness, but that does not apply here. The Board agrees that Meta’s policies should permit expression that condemns and raises awareness of gender-based violence, when the content does not create an environment of intimidation or promote offline violence, and recommends that its policies more clearly reflect this. For the Tier 1 hate speech rules around qualification relevant to the first post, Meta’s internal guidelines mean that at-scale moderators would find it almost impossible to reach the correct outcome. Meta relied on a series of contextual cues to determine the first post was non-violating once it was identified by the Board, but these are not included in its internal guidance for moderators. Meta informed the Board that “it can be difficult for at-scale content reviewers to distinguish between qualified and unqualified behavioral statements without taking a careful reading of context into account.” However, the guidance to reviewers, as currently drafted, limits the possibility of contextual analysis significantly, even when there are clear cues within the content itself that it raises awareness about gender-based violence. Further, Meta stated that because it is challenging to determine intent at scale, its internal guidelines instruct reviewers to default to removing behavioral statements about protected characteristic groups when the user has not made it clear whether the statement is qualified or unqualified. This further reinforces the Board’s concern that moderators would remove non-violating content that condemns or raises awareness of gender-based violence. Meta states that content such as the first post on anti-gender-based violence should be allowed on its platforms, but at the same time the company’s internal guidance to human reviewers seems to lead to the opposite outcome in practice. For the Tier 2 hate speech rules around expressions of contempt relevant to the second post, the Board finds the public guidance around expressions of hate to be clearer. However, it is similarly questionable how Meta allows for condemnation and awareness raising in relation to this rule. Meta again told the Board in its description of this case that it “allow[s] people to raise awareness of violence against women” and to “share their experiences or call out intolerance.” Meta’s position that additional language within the post that negated or nuanced the expression of contempt were not relevant reinforces the Board’s concern that there is no guidance in place to ensure that Meta’s described allowance of awareness raising of gender-based violence exists in practice. II. Legitimate aim Any state restriction on expression should pursue one of the legitimate aims listed in the ICCPR, which include the ""rights of others."" According to the Hate Speech policy rationale, it aims to protect users from an “environment of intimidation and exclusion” and to prevent offline violence. Therefore, Meta’s Hate Speech policy, which aims to protect people from the harm caused by hate speech, has a legitimate aim that is recognized by international human rights law standards. III. 
Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression ""must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). Social media companies should consider a range of possible responses to problematic content beyond deletion to ensure restrictions are narrowly tailored (A/74/486, para. 51). In previous hate speech cases, the Board has looked to the Rabat Plan of Action to assess the necessity and proportionality of removing hate speech. Although it focuses on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, the Board applies the Plan’s framework by analogy to gender-based discrimination. The Joint Declaration on Freedom of Expression and Gender Justice, for example, supports this approach, stating that “sex and gender should be recognized as protected characteristics for the prohibition of advocacy of hatred that constitutes incitement to discrimination, hostility or violence.” In both cases, the Board considered the six Rabat Plan factors (context, identity of speaker, intent of speaker, content, extent of expression, and likelihood of harm including its imminence). The Board finds that these posts pose no risk of imminent harm and thus removal of this content was not necessary. For both cases, the Board finds that the removal of this content was not necessary to protect men from harm. The Board finds both posts to be of public interest and non-violent, directly condemning and drawing attention to gender-based violence. The first post is a factual statement, reflecting that men commit gender-based violence. The second post contains a personal opinion and its rationale against the backdrop of global violence against women. Some of the Members who found the second post to violate the policy would nonetheless keep it on the platform for these reasons. For this minority of Members, while the second post violated Meta’s Hate Speech Standard, the strongly expressed views in question posed no risk of likely and imminent harm, and thus removing it was inconsistent with international human rights standards (A/68/362 at paras. 52-53). Therefore, both the removals and the strikes that resulted from Meta’s decisions were unnecessary. The Board is concerned that Meta’s enforcement approach to gender-based hate speech may result in the disproportionate removal of content raising awareness of and condemning gender-based violence and intimate partner violence against women, as seen here. The UN Special Rapporteur on freedom of expression has recommended that companies ensure that enforcement of hate speech rules involves an evaluation of context and the harm that the content imposes on users and the public (A/74/486, para. 58 lit. d). At the same time, the Rapporteur has noted that “the scale and complexity of addressing hateful expression presents long-term challenges and may lead companies to restrict such expression even if it is not clearly linked to adverse outcomes"" (A/HRC/38/35, para. 28).
While the Board understands that Meta’s approach to gender-based hate speech involves complex policy and enforcement questions, and that expression that creates an environment of intimidation or promotes offline violence could be removed, as stated in previous decisions, it is concerned that the company’s current approach inhibits the discussion and condemnation of gender-based violence in posts such as these. Meta should consider how the context and prevalence of gender-based violence should influence its policy and enforcement choices. According to UN Women, more than 640 million women have been subject to intimate partner violence. Most of that violence is perpetrated by current or former husbands or intimate partners, reflecting a societal power imbalance worldwide. Most of the 81,000 women and girls killed in 2020 died at the hands of an intimate partner or family member, which equates to a woman or girl being killed in her home every 11 minutes. Although most people who kill women are men, Meta prohibits the phrase “Men kill women” absent additional explanation. Multiple public comments raised the impact of gender-based violence in society worldwide (e.g., PC-11023 by Karisma Foundation (Colombia), PC-11012 by Digital Rights Foundation, PC-11026 by Women’s Support and Information Center). The public comment by the Digital Rights Foundation (PC-11012) states that “even in cases that use ‘all men,’ the intention is often to shed light on the gender hierarchy in society rather than literally condemning all men as violent perpetrators.” It also pointed to “the alarming prevalence of this phenomenon globally and that most violent crimes, including intimate partner violence towards all genders, are statistically largely perpetuated by men.” The Cyber Rights Organization stated (PC-11025) that many gender-based violence survivors who speak up and generate awareness see their discourse censored online. Additionally, the public comment by Dr. Carolina Are notes that content raising awareness of gender-based violence is often mistakenly removed while misogynistic content remains online, citing several studies (PC-10999). Experts consulted by the Board in this case state that sex- and gender-neutral social media policies prohibiting hate speech may inadvertently create challenges for raising awareness of violence against women, and that inadequate enforcement has a profound impact on victims, leading women to change their online behavior by limiting their interactions and self-censoring. The Board finds that within this context, it is critical that statements that condemn and raise awareness of gender-based violence, and do not create an environment of intimidation or promote offline violence, not be mistakenly removed. The Board’s concern that this may be happening is particularly pronounced given that an allowance for this type of content, while highlighted by Meta, is not communicated clearly to the public and the guidance provided to moderators is confusing. To address this, Meta should clarify its public rules and provide appropriate guidance to moderators that better reflects this allowance. 9. Oversight Board decision The Oversight Board overturns Meta’s decisions to remove both posts. 10. Recommendations A. Content policy 1. To allow users to condemn and raise awareness of gender-based violence, Meta should include the exception for allowing content that condemns or raises awareness of gender-based violence in the public language of the Hate Speech policy.
The Board will consider this recommendation implemented when the public-facing language of the Hate Speech Community Standard reflects the proposed change. B. Enforcement 2. To ensure that content condemning and raising awareness of gender-based violence is not removed in error, Meta should update guidance to its at-scale moderators with specific attention to rules around qualification. This is important because the current guidance makes it virtually impossible for moderators to make the correct decisions even when Meta states that the first post should be allowed on the platform. The Board will consider this recommendation implemented when Meta provides the Board with updated internal guidance that shows what indicators it provides to moderators to grant allowances when considering content that may otherwise be removed under the Hate Speech policy. 3. To improve the accuracy of decisions made upon secondary review, Meta should assess how its current review routing protocol impacts accuracy. The Board believes Meta would increase accuracy by sending secondary review jobs to different reviewers than those who previously assessed the content. The Board will consider this implemented when Meta publishes a decision, informed by research on the potential impact to accuracy, whether to adjust its secondary review routing. C. Transparency 4. To provide greater transparency to users and allow them to understand the consequences of their actions, Meta should update its Transparency Center with information on what penalties are associated with the accumulation of strikes on Instagram. The Board appreciates that Meta has provided additional information about strikes for Facebook users in response to Board recommendations. It believes this should be done for Instagram users as well. The Board will consider this implemented when the Transparency Center contains this information. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. 
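To illustrate the routing change contemplated in recommendation 3 above, the following is a minimal Python sketch in which secondary review jobs are assigned only to reviewers who have not already assessed the content. The reviewer identifiers, pool structure and fallback rule are assumptions for illustration, not a description of Meta's systems.

import random

# Minimal sketch of secondary-review routing that excludes prior reviewers.
# Reviewer identifiers and the fallback rule are illustrative assumptions.

def assign_secondary_reviewer(reviewer_pool, previous_reviewers):
    """Pick a reviewer who has not yet assessed this content, if one is available."""
    fresh = [r for r in reviewer_pool if r not in previous_reviewers]
    if not fresh:
        fresh = list(reviewer_pool)  # fall back to the full pool only if no fresh reviewer exists
    return random.choice(fresh)

pool = ["reviewer_a", "reviewer_b", "reviewer_c", "reviewer_d"]
already_reviewed = {"reviewer_a", "reviewer_b"}
print(assign_secondary_reviewer(pool, already_reviewed))  # -> reviewer_c or reviewer_d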
Return to Case Decisions and Policy Advisory Opinions" ig-i9dp23ib,Öcalan’s isolation,https://www.oversightboard.com/decision/ig-i9dp23ib/,"July 8, 2021",2021,,"TopicFreedom of expression, Marginalized communities, MisinformationCommunity StandardDangerous individuals and organizations","Policies and TopicsTopicFreedom of expression, Marginalized communities, MisinformationCommunity StandardDangerous individuals and organizations",Overturned,"Turkey, United States","The Oversight Board has overturned Facebook's original decision to remove an Instagram post encouraging people to discuss the solitary confinement of Abdullah Öcalan, a founding member of the Kurdistan Workers' Party (PKK).",40895,6282,"Overturned July 8, 2021 The Oversight Board has overturned Facebook's original decision to remove an Instagram post encouraging people to discuss the solitary confinement of Abdullah Öcalan, a founding member of the Kurdistan Workers' Party (PKK). Standard Topic Freedom of expression, Marginalized communities, Misinformation Community Standard Dangerous individuals and organizations Location Turkey, United States Platform Instagram 2021-006-IG-UA Public Comments To read the full decision in Northern Kurdish click here . Ji bo hûn ev biryar bi Kurdiya Bakur bixwînin, li vir bitikînin. The Oversight Board has overturned Facebook’s original decision to remove an Instagram post encouraging people to discuss the solitary confinement of Abdullah Öcalan, a founding member of the Kurdistan Workers’ Party (PKK). After the user appealed and the Board selected the case for review, Facebook concluded that the content was removed in error and restored it. The Board is concerned that Facebook misplaced an internal policy exception for three years and that this may have led to many other posts being wrongly removed. About the case This case relates to Abdullah Öcalan, a founding member of the PKK. This group has used violence in seeking to achieve its aim of establishing an independent Kurdish state. Both the PKK and Öcalan are designated as dangerous entities under Facebook’s Dangerous Individuals and Organizations policy. On January 25, 2021, an Instagram user in the United States posted a picture of Öcalan which included the words “y’all ready for this conversation” in English. In a caption, the user wrote that it was time to talk about ending Öcalan’s isolation in prison on Imrali island in Turkey. The user encouraged readers to engage in conversation about Öcalan’s imprisonment and the inhumane nature of solitary confinement. After being assessed by a moderator, the post was removed on February 12 under Facebook’s rules on Dangerous Individuals and Organizations as a call to action to support Öcalan and the PKK. When the user appealed this decision, they were told their appeal could not be reviewed because of a temporary reduction in Facebook’s review capacity due to COVID-19. However, a second moderator did carry out a review of the content and found that it violated the same policy. The user then appealed to the Oversight Board. After the Board selected this case and assigned it to panel, Facebook found that a piece of internal guidance on the Dangerous Individuals and Organizations policy was “inadvertently not transferred” to a new review system in 2018. This guidance, developed in 2017 partly in response to concern about the conditions of Öcalan’s imprisonment, allows discussion on the conditions of confinement for individuals designated as dangerous. 
In line with this guidance, Facebook restored the content to Instagram on April 23. Facebook told the Board that it is currently working on an update to its policies to allow users to discuss the human rights of designated dangerous individuals. The company asked the Board to provide insight and guidance on how to improve these policies. While Facebook updated its Community Standard on Dangerous Individuals and Organizations on June 23, 2021, these changes do not directly impact the guidance the company requested from the Board. Key findings The Board found that Facebook’s original decision to remove the content was not in line with the company’s Community Standards. As the misplaced internal guidance specifies that users can discuss the conditions of confinement of an individual who has been designated as dangerous, the post was permitted under Facebook’s rules. The Board is concerned that Facebook lost specific guidance on an important policy exception for three years. Facebook’s policy of defaulting towards removing content showing “support” for designated individuals, while keeping key exceptions hidden from the public, allowed this mistake to go unnoticed for an extended period. Facebook only learned that this policy was not being applied because of the user who decided to appeal the company’s decision to the Board. While Facebook told the Board that it is conducting a review of how it failed to transfer this guidance to its new review system, it also stated “it is not technically feasible to determine how many pieces of content were removed when this policy guidance was not available to reviewers.” The Board believes that Facebook’s mistake may have led to many other posts being wrongly removed and that Facebook’s transparency reporting is not sufficient to assess whether this type of error reflects a systemic problem. Facebook’s actions in this case indicate that the company is failing to respect the right to remedy, contravening its Corporate Human Rights Policy (Section 3). Even without the discovery of the misplaced guidance, the content should never have been removed. The user did not advocate violence in their post and did not express support for Öcalan’s ideology or the PKK. Instead, they sought to highlight human rights concerns about Öcalan’s prolonged solitary confinement which have also been raised by international bodies. As the post was unlikely to result in harm, its removal was not necessary or proportionate under international human rights standards. The Oversight Board’s decision The Oversight Board overturns Facebook’s original decision to remove the content. The Board notes that Facebook has already restored the content. In a policy advisory statement, the Board recommends that Facebook: 1. Decision summary The Oversight Board has overturned Facebook’s original decision to remove an Instagram post encouraging people to discuss the solitary confinement of Abdullah Öcalan, a person designated by Facebook as a dangerous individual. After the user appealed and the Board selected the case for review, Facebook concluded that the content was removed in error and restored the post to Instagram. Facebook explained that in 2018 it “inadvertently failed to transfer” a piece of internal policy guidance that allowed users to discuss conditions of confinement of designated dangerous individuals to a new review system. The Board believes that if Facebook were more transparent about its policies the harm from this mistake could have been mitigated or avoided altogether. 
Even without the misplaced internal policy guidance, the Board found that the content never should have been removed. It was simply a call to debate the necessity of Öcalan’s detention in solitary confinement and its removal did not serve the aim of the Dangerous Individuals and Organizations policy “to prevent and disrupt real-world harm.” Instead, the removal resulted in a restriction on freedom of expression about a human rights concern. 2. Case description The case concerns content related to Abdullah Öcalan, a founding member of the Kurdistan Workers' Party (PKK). The PKK was founded in the 1970s with the aim of establishing an independent Kurdish state in South-Eastern Turkey, Syria, and Iraq. The group uses violence in seeking to achieve its aim. The PKK has been designated as a terrorist organization by the United States, the EU, the UK, and Turkey, among others. Öcalan has been imprisoned on Imrali Island, Turkey, since his arrest and sentencing in 1999 for carrying out violent acts aimed at the secession of a part of Turkey’s territory ( Case of Ocalan v Turkey, European Court of Human Rights ). On January 25, 2021, an Instagram user in the United States posted a picture of Öcalan, which included the words ""y'all ready for this conversation"" in English. Below the picture, the user wrote that it was time to talk about ending Öcalan's isolation in prison on Imrali Island. The user encouraged readers to engage in conversation about Öcalan’s imprisonment and the inhumane nature of solitary confinement, including through hunger strikes, protests, legal action, op-eds, reading groups, and memes. The content did not call for Öcalan's release, nor did it mention the PKK or endorse violence. The post was automatically flagged by Facebook and, after being assessed by a moderator, was removed on February 12 for breaching the policy on Dangerous Individuals and Organizations. The user appealed the decision to Facebook and was informed that the decision was final and could not be reviewed because of a temporary reduction in Facebook’s review capacity due to COVID-19. However, a second moderator still carried out a review of the content, also finding a breach of the Dangerous Individuals and Organizations policy. The user received a notification explaining that the initial decision was upheld by a second review. The user then appealed to the Oversight Board. The Board selected the case for review and assigned it to a panel. As Facebook prepared its decision rationale for the Board, it found a piece of internal guidance on the Dangerous Individuals and Organizations policy that allows discussion or debate about the conditions of confinement for individuals designated as dangerous. This guidance was developed in 2017 partly in response to international concern about the conditions of Öcalan’s imprisonment. Facebook explained that in 2018 the guidance was “inadvertently not transferred” to a new review system. It was also not shared within Facebook’s policy team which sets the rules for what is allowed on the platform. While the guidance remained technically accessible to content moderators in a training annex, the company acknowledges that it was difficult to find during standard review procedures and that the reviewer in this case did not have access to it. This guidance is a strictly internal document designed to assist Facebook’s moderators and was not reflected in Facebook’s public-facing Community Standards or Instagram’s Community Guidelines. 
Facebook only learned that this policy was not being applied because of the user who decided to appeal Facebook’s decision to remove their content to the Board. If not for this user’s actions, it is possible this error would never have come to light. As of June 29, Facebook has yet to reinstate the misplaced internal policy into its guidance for content moderators. The company explained to the Board that it “will work to ensure that the guidance it provides to its content reviewers on this subject is clear and more readily accessible to help avoid future enforcement errors.” Facebook restored the content to Instagram on April 23 and notified the Board that it “is currently working on an update to its policies to make clear that users can debate or discuss the conditions of confinement of designated terrorist individuals or other violations of their human rights, while still prohibiting content that praises or supports those individuals’ violent actions.” The company welcomed “the Oversight Board’s insight and guidance into how to strike an appropriate balance between fostering expression on subjects of human rights concern while simultaneously ensuring that its platform is not used to spread content praising or supporting terrorists or violent actors.” 3. Authority and scope The Board has the power to review Facebook’s decision following an appeal from the user whose post was removed (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). The Board may uphold or reverse that decision (Charter Article 3, Section 5). In line with case decision 2020-004-IG-UA, Facebook reversing a decision a user appealed against does not exclude the case from review. The Board’s decisions are binding and may include policy advisory statements with recommendations. These recommendations are non-binding, but Facebook must respond to them (Charter Article 3, Section 4). The Board is an independent grievance mechanism to address disputes in a transparent and principled manner. 4. Relevant standards The Oversight Board considered the following standards in its decision: I. Facebook’s content policies: Instagram's Community Guidelines state that Instagram is not a place to support or praise terrorism, organized crime, or hate groups. This section of the Guidelines includes a link to Facebook’s Community Standard on Dangerous Individuals and Organizations (a change log reflecting the June 23 update to the Community Standards is here ). In response to a question from the Board, Facebook has confirmed that the Community Standards apply to Instagram in the same way they apply to Facebook. The Dangerous Individuals and Organizations Community Standard states that ""in an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook."" The Standard further stated, at the time it was enforced, that Facebook removes ""content that expresses support or praise for groups, leaders or individuals involved in these activities."" II. Facebook’s values: Facebook’s values are outlined in the introduction to the Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. 
Facebook limits “Voice” in the service of four values, and one is relevant here: “Safety”: We are committed to making Facebook a safe place. Expression that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. III. Human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In March 2021, Facebook announced its Corporate Human Rights Policy , where it recommitted to respecting human rights in accordance with the UNGPs. The Board's analysis in this case was informed by the following human rights standards: 5. User statement In their appeal to the Board, the user explained that they posted the content to spur discussion about Öcalan’s philosophy and to end his isolation. The user said that they believed banning communicating about Öcalan and his philosophy prevents discussions that could lead to a peaceful settlement for Kurdish people in the Middle East. They also stated that they did not wish to promote violence but believed there should not be a ban on posting pictures of Öcalan on Instagram. The user claimed that the association of Öcalan’s face with violent organizations is not based on fact, but rather is slander and an ongoing effort to silence an important conversation. They compared Öcalan’s imprisonment to that of former South African President Nelson Mandela, noting that the international community has a role in illuminating Öcalan’s imprisonment just as it did with Mandela. 6. Explanation of Facebook’s decision Facebook initially concluded that the content was a call to action to support Öcalan and the PKK, which violated the Dangerous Individuals and Organizations policy. Öcalan co-founded the PKK, which Facebook notes has been designated as a Foreign Terrorist Organization by the United States. Based on this designation of the organization, Facebook added Öcalan to its list of designated dangerous individuals. Under its Dangerous Individuals and Organizations policy, Facebook removes all content that it deems to support such individuals. After the Board selected this case for review, Facebook evaluated the content against its policies again and found that it developed internal guidance in this area in 2017. In explaining the situation, Facebook stated that it inadvertently failed to transfer this guidance when it switched to a new review system in 2018 and did not share it throughout its policy team. This guidance allows content where the poster is calling for the freedom of a terrorist when the context of the content is shared in a way that advocates for peace or debate of the terrorist’s incarceration. Applying that guidance, Facebook found that the content in this case fell squarely within it and restored the content. 7. Third-party submissions The Board received 12 public comments related to this case. Six came from the United States and Canada, four from Europe, and two from the Middle East and North Africa. The submissions covered themes including the lack of transparency around the Dangerous Individuals and Organizations policy as well as its inconsistency with international human rights law, and that calls for discussion of solitary confinement do not constitute praise or support. To read public comments submitted for this case, please click here . 8. 
Oversight Board analysis 8.1 Compliance with Facebook’s content policies The Board found that Facebook’s decision to remove the content was not in line with the company’s Community Standards. The Community Standard on Dangerous Individuals and Organizations did not define what constituted “support” for a designated dangerous individual or organization until it was updated on June 23, 2021. In January 2021, the Board recommended that Facebook publicly define praise, support, and representation, as well as provide more illustrative examples of how the policy is applied ( case 2020-005-FB-UA ). In February, Facebook committed to “add language to our Dangerous Individuals and Organizations Community Standard within a few weeks explaining that we may remove content if the intent is not made clear [as well as to] add definitions of “praise,” “support” and “representation” within a few months.” On June 23, 2021, Facebook updated this standard to include definitions. The Board also recommended that Facebook clarify the relationship between Instagram’s Community Guidelines and the Facebook Community Standards ( case 2020-004-IG-UA ). As of June 29, Facebook has yet to inform the Board of its actions to implement this commitment. In the present case, following a request from the Board, Facebook shared internal guidance for content moderators about the meaning of “support” of designated individuals and organizations. This guidance defines a “call to action in support” as a call to direct an audience to do something to further a designated dangerous organization or its cause. This language was not reflected in the public-facing Community Standards at the time this content was posted and is not included in the update published on June 23, 2021. Further to this, the misplaced and non-public guidance created in 2017 in response to Öcalan’s solitary confinement makes clear that discussions of the conditions of a designated dangerous individual’s confinement are permitted, and do not constitute support. In the absence of any other context, Facebook views statements calling for the freedom of a terrorist as support, and such content is removed. Again, this language is not reflected in the public-facing Community Standards. The Board is concerned that specific guidance for moderators on an important policy exception was lost for three years. This guidance makes clear that the content in this case did not violate Facebook’s rules. Had the Board not selected this case for review, the guidance would have remained unknown to content moderators, and a significant amount of expression in the public interest would have been removed. This case demonstrates why public rules are important for users: they not only inform them of what is expected, but also empower them to point out Facebook’s mistakes. The Board appreciates Facebook’s reluctance to disclose its internal content moderation rules in full, given its concern that some users could take advantage of them to spread harmful content. However, Facebook’s policy of defaulting towards removing content showing “support” for designated individuals, while keeping key exceptions hidden from the public, allowed this mistake to go unnoticed by the company for approximately three years without any accountability. The June 2021 update to the Dangerous Individuals and Organizations Community Standard provides more information on what Facebook considers to be “support” but does not explain to users what exceptions could be applied to these rules. 
Even without the discovery of the misplaced guidance, the content should not have been removed for “support.” This kind of call to action should not be construed as supporting the dangerous ends of the PKK. The user only encouraged people to discuss Öcalan’s solitary confinement, including through hunger strikes, protests, legal action, op-eds, reading groups, and memes. Accordingly, the removal of content in this case did not serve the policy’s stated aim of preventing and disrupting real-world harm. 8.2 Compliance with Facebook’s values The Board found that Facebook’s decision to remove this content did not comply with Facebook’s values of “Voice” and “Safety.” The user carefully crafted their post calling for a discussion about ending Öcalan’s isolation in prison on Imrali Island. They encouraged readers to discuss the inhumanity of solitary confinement and why it would be necessary to keep Öcalan confined in such a manner. The user advocated peaceful actions to provoke this discussion, and did not advocate for or incite violence in their post. They also did not express support for Öcalan’s ideology or the PKK. The Board found that expression which challenges human rights violations is central to the value of “Voice.” This is especially important with reference to the rights of detained people, who may be unable to effectively advocate in support of their own rights, particularly in countries with alleged mistreatment of prisoners and where human rights advocacy may be suppressed. The value of “Safety” was notionally engaged because the content concerned a designated dangerous individual. However, removing the content did not address any clear “Safety” concern. The content did not include language that incited or advocated for the use of violence. It did not have the potential to “intimidate, exclude, or silence other users.” Instead, Facebook’s decision illegitimately suppressed the voice of a person raising a human rights concern. 8.3 Compliance with Facebook’s human rights responsibilities Removing this content was inconsistent with the company’s commitment to respect human rights, as set out in its Corporate Human Rights Policy . In relation to terrorist content, Facebook is a signatory of the Christchurch Call , which aims to “eliminate” the dissemination of “terrorist and violent extremist content” online. While the Board is cognizant of human rights concerns raised by civil society about the Christchurch Call, the Call nevertheless requires social media companies to enforce their community standards ""in a manner consistent with human rights and fundamental freedoms."" I. Freedom of expression (Article 19 ICCPR) Article 19 states that everyone has the right to freedom of expression, which includes freedom to seek, receive and impart information. The right to freedom of expression includes discussion of human rights (General Comment 34, para. 11) and is a “necessary condition for the realization of the principles of transparency and accountability” (General Comment 34, para. 3). Furthermore, the UN Declaration on Human Rights Defenders provides that everyone has the right to “study, discuss, form and hold opinions on the observance, both in law and in practice, of all human rights and fundamental freedoms and, through these and other appropriate means, to draw public attention to those matters” (A/RES/53/144, Article 6(c)). The user sought to highlight concerns about an individual’s solitary and prolonged confinement. 
The Board notes that international bodies have raised human rights concerns about such practices. The Nelson Mandela Rules set out that states should prohibit indefinite as well as prolonged solitary confinement as a form of torture or cruel, inhuman, or degrading treatment or punishment ( A/Res/70/175 , Rule 43). Discussing the conditions of any individual’s detention and alleged violations of their human rights in custody falls squarely within the types of expression that Article 19 of the ICCPR protects and that the UN Declaration on Human Rights Defenders seeks to safeguard. While the right to freedom of expression is fundamental, it is not absolute. It may be restricted, but restrictions should meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). The Human Rights Committee has stated that restrictions on expression should not “put in jeopardy the right itself,” and has emphasized that “the relation between right and restriction and between norm and exception must not be reversed” (General Comment No. 34, para. 21). The UN Special Rapporteur on freedom of expression has emphasized that social media companies should seek to align their content moderation policies on Dangerous Individuals and Organizations with these principles (A/74/486, para. 58(b)). a. Legality (clarity and accessibility of the rules) Restrictions on expression should be formulated with sufficient precision so that individuals understand what is prohibited and act accordingly (General Comment 34, para. 25). Such rules should also be made accessible to the public. Precise rules are important for those enforcing them: to constrain discretion and prevent arbitrary decision-making, and also to safeguard against bias. The Board recommended in case 2020-005-FB-UA that the Community Standard on Dangerous Individuals and Organizations be amended to define “representation,” “praise,” and “support,” and reiterated these concerns in case 2021-003-FB-UA . The Board notes that Facebook has now publicly defined those terms. The UN Special Rapporteur on freedom of expression has described social media platforms’ prohibitions on both “praise” and “support” as “excessively vague” (A/HRC/38/35, para. 26; see also: General Comment No. 34, para. 46). In a public comment submitted to the Board (PC-10055), the UN Special Rapporteur on human rights and counter-terrorism noted that although Facebook has made some progress to clarify its rules in this area, “the Guidelines and Standard are [still] insufficiently consistent with international law and may function in practice to undermine certain fundamental rights, including but not limited to freedom of expression, association, participation in public affairs and non-discrimination.” Several public comments made similar observations. The Board noted that Facebook provides extensive internal and confidential guidance to reviewers to interpret the company’s public-facing content policies, to ensure consistent and non-arbitrary moderation. However, it is unacceptable that key rules on what is excluded from Facebook’s definition of support are not reflected in the public-facing Community Standards. b. Legitimate aim Restrictions on freedom of expression should pursue a legitimate aim. The ICCPR lists legitimate aims in Article 19, para. 3, which includes the protection of the rights of others. 
The Board notes that because Facebook reversed the decision the user appealed against, following the Board’s selection of that appeal, the company did not seek to justify the removal as pursuing a legitimate aim but instead framed the removal as an error. The Community Standards explain that the Dangerous Individuals and Organizations policy seeks to prevent and disrupt real-world harm. Facebook has previously informed the Board that the policy limits expression to protect “the rights of others,” which the Board has accepted ( case 2020-005-FB-UA ). c. Necessity and proportionality Restrictions on freedom of expression should be necessary and proportionate to achieve a legitimate aim. This requires there to be a direct connection between the expression and a clearly identified threat (General Comment 34, para. 35), and restrictions “must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). As Facebook implicitly acknowledged by reversing its decision following the Board’s selection of this case, the removal of this content was not necessary or proportionate. In the Board’s view, the breadth of the term “support” in the Community Standards, combined with the misplacement of internal guidance on what it excludes, meant that an unnecessary and disproportionate removal occurred. The content in this case called for a discussion about ending Öcalan’s isolation in prolonged solitary confinement. It spoke about Öcalan as a person and did not indicate any support for violent acts committed by him or by the PKK. There was no demonstrable intent to incite violence or likelihood that leaving this statement or others like it on the platform would result in harm. The Board is particularly concerned about Facebook removing content on matters in the public interest in countries where national legal and institutional protections for human rights, in particular freedom of expression, are weak (cases 2021-005-FB-UA and 2021-004-FB-UA ). The Board shares the concern articulated by the UN Special Rapporteur on human rights and counter-terrorism in her submissions, that the “sub-optimal protection of human rights on the platform [...] may be enormously consequential in terms of the global protection of certain rights, the narrowing of civic space, and the negative consolidation of trends on governance, accountability and rule of law in many national settings.” The Board notes that the UN Special Rapporteur on freedom of expression has expressed specific concerns in this regard about Turkey ( A/HRC/41/35/ADD.2 ). II. Right to remedy (Article 2 ICCPR) The right to remedy is a key component of international human rights law ( General Comment No. 31 ) and is the third pillar of the UN Guiding Principles on Business and Human Rights. The UN Special Rapporteur on freedom of expression has stated that the process of remediation “should include a transparent and accessible process for appealing platform decisions, with companies providing a reasoned response that should also be publicly accessible” (A/74/486, para. 53). In this case, the user was informed that an appeal was not available due to COVID-19. However, an appeal was then carried out. The Board once again stresses the need for Facebook to restore the appeals process in line with recommendations in cases 2020-004-IG-UA and 2021-003-FB-UA. 
While the user in this case had their content restored, the Board is concerned about what may be a significant number of removals that should not have happened because Facebook lost internal guidance allowing discussion of the conditions of confinement of designated individuals. Facebook informed the Board that it is undertaking a review of how it failed to transfer this guidance to its new review system, as well as whether any other policies were lost. However, in response to a Board question, the company said that “it is not technically feasible to determine how many pieces of content were removed when this policy guidance was not available to reviewers.” The Board is concerned that Facebook’s transparency reporting is not sufficient to meaningfully assess whether the type of error identified in this case reflects a systemic problem. In questions submitted to Facebook, the Board requested more information on its error rates for enforcing its rules on “praise” and “support” of dangerous individuals and organizations. Facebook explained that it did not collect error rates at the level of the individual rules within the Dangerous Individuals and Organizations policy, or in relation to the enforcement of specific exceptions contained only in its internal guidance. Facebook pointed the Board to publicly available information on the quantity of content restored after being incorrectly removed for violating its policy on Dangerous Individuals and Organizations. The Board notes that this does not provide the same kind of detail that would be reflected in internal audits and quality control to assess the accuracy of enforcement. While Facebook acknowledged that it internally breaks down error rates for enforcement decisions by moderators and by automation, it refused to provide this information to the Board on the basis that “the information is not reasonably required for decision-making in accordance with the intent of the Charter.” Furthermore, the Board asked Facebook whether content is appealable to the Board if it has been removed for violating the Community Standards after being flagged by a government. Facebook confirmed that such cases are appealable to the Board. This is distinct from content removed at the request of a government to comply with local law, which is excluded from review by Article 2, Section 1.2 of the Bylaws. While the Board does not have reason to believe that this content was the subject of a government referral, it is concerned that neither users whose content is removed on the basis of the Community Standards, nor the Board, are informed where there was government involvement in content removal. This may be particularly relevant for enforcement decisions that are later identified as errors, as well as where users suspect government involvement but there was none. Facebook’s transparency reporting is also limited in this regard. While it includes statistics on government legal requests for the removal of content based on local law, it does not include data on content that is removed for violating the Community Standards after being flagged by a government. This collection of concerns indicates that Facebook is failing to respect the right to remedy, in contravention of its Corporate Human Rights Policy (Section 3). 9. Oversight Board decision The Oversight Board overturns Facebook's original decision to take down the content, requiring the post to be restored. 
The Board notes that Facebook has accepted that its original decision was incorrect and has already restored the content. 10. Policy advisory statement As noted above, Facebook changed its Community Standard on Dangerous Individuals and Organizations after asking the Board to provide guidance on how this Community Standard should function. These recommendations take into account Facebook’s updates. The misplaced internal guidance Pending further changes to the public-facing Dangerous Individuals and Organizations policy, the Board recommends that Facebook take the following interim measures to reduce the erroneous enforcement of the existing policy: 1. Immediately restore the misplaced 2017 guidance to the Internal Implementation Standards and Known Questions (the internal guidance for content moderators), informing all content moderators that it exists and arranging immediate training on it. 2. Evaluate automated moderation processes for enforcement of the Dangerous Individuals and Organizations policy and, where necessary, update classifiers to exclude training data from prior enforcement errors that resulted from failures to apply the 2017 guidance. New training data should be added that reflects the restoration of this guidance. 3. Publish the results of the ongoing review process to determine whether any other policies were lost, including descriptions of all lost policies, the period for which the policies were lost, and steps taken to restore them. Updates to the Dangerous Individuals and Organizations policy Facebook notified the Board that it is currently working on an update to its policies to make clear that its rules on “praise” and “support” do not prohibit discussions on the conditions of confinement of designated individuals or other violations of their human rights. As an initial contribution to this policy development process, the Board recommends that Facebook should: 4. Reflect in the Dangerous Individuals and Organizations “policy rationale” that respect for human rights and freedom of expression, in particular open discussion about human rights violations and abuses that relate to terrorism and efforts to counter terrorism, can advance the value of “Safety,” and that it is important for the platform to provide a space for these discussions. While “Safety” and “Voice” may sometimes be in tension, the policy rationale should specify in greater detail the “real-world harms” the policy seeks to prevent and disrupt when “Voice” is suppressed. 5. Add to the Dangerous Individuals and Organizations policy a clear explanation of what “support” excludes. Users should be free to discuss alleged violations and abuses of the human rights of members of designated organizations. This should not be limited to detained individuals. It should include discussion of rights protected by the UN human rights conventions as cited in Facebook’s Corporate Human Rights Policy. This should allow, for example, discussions on allegations of torture or cruel, inhuman, or degrading treatment or punishment, violations of the right to a fair trial, as well as extrajudicial, summary, or arbitrary executions, enforced disappearance, extraordinary rendition and revocation of citizenship rendering a person stateless. Calls for accountability for human rights violations and abuses should also be protected. 
Content that incites acts of violence or recruits people to join or otherwise provide material support to Facebook-designated organizations should be excluded from protection even if the same content also discusses human rights concerns. The user’s intent, the broader context in which they post, and how other users understand their post are key to determining the likelihood of real-world harm that may result from such posts. 6. Explain in the Community Standards how users can make the intent behind their posts clear to Facebook. This would be assisted by implementing the Board’s existing recommendation to publicly disclose the company’s list of designated individuals and organizations (see: case 2020-005-FB-UA). Facebook should also provide illustrative examples to demonstrate the line between permitted and prohibited content, including in relation to the application of the rule clarifying what “support” excludes. 7. Ensure meaningful stakeholder engagement on the proposed policy change through Facebook’s Product Policy Forum , including through a public call for inputs. Facebook should conduct this engagement in multiple languages across regions, ensuring the effective participation of individuals most impacted by the harms this policy seeks to prevent. This engagement should also include human rights, civil society, and academic organizations with expert knowledge on those harms, as well as the harms that may result from over-enforcement of the existing policy. 8. Ensure that internal guidance and training are provided to content moderators on any new policy. Content moderators should be provided adequate resources to be able to understand the new policy, and adequate time to make decisions when enforcing the policy. Due process To enhance due process for users whose content is removed, Facebook should: 9. Ensure that users are notified when their content is removed. The notification should note whether the removal is due to a government request, a violation of the Community Standards, or a government claim that a national law has been violated (and the jurisdictional reach of any removal). 10. Clarify to Instagram users that Facebook’s Community Standards apply to Instagram in the same way they apply to Facebook, in line with the recommendation in case 2020-004-IG-UA . Transparency reporting To increase public understanding of how effectively the revised policy is being implemented, Facebook should: 11. Include information on the number of requests Facebook receives for content removals from governments that are based on Community Standards violations (as opposed to violations of national law), and the outcome of those requests. 12. Include more comprehensive information on error rates for enforcing rules on “praise” and “support” of dangerous individuals and organizations, broken down by region and language. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. An independent research institute headquartered at the University of Gothenburg and drawing on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world, provided expertise on socio-political and cultural context. 
Return to Case Decisions and Policy Advisory Opinions" ig-jlsbgadz,Elon Musk Satire,https://www.oversightboard.com/decision/ig-jlsbgadz/,"March 7, 2024",2024,,"TopicFreedom of expression, HumorCommunity StandardDangerous individuals and organizations",Dangerous individuals and organizations,Overturned,United States,A user appealed Meta’s decision to remove an Instagram post containing a fictional “X” thread that satirically depicts Elon Musk reacting to a post containing offensive content. The case highlights Meta’s shortcomings in accurately identifying satirical content on its platforms.,5733,865,"Overturned March 7, 2024 A user appealed Meta’s decision to remove an Instagram post containing a fictional “X” thread that satirically depicts Elon Musk reacting to a post containing offensive content. The case highlights Meta’s shortcomings in accurately identifying satirical content on its platforms. Summary Topic Freedom of expression, Humor Community Standard Dangerous individuals and organizations Location United States Platform Instagram This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors and inform the public about the impact of the Board’s work. They are approved by a Board Member panel, not the full Board. They do not consider public comments and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas in which the company could improve its policy enforcement. Case Summary A user appealed Meta’s decision to remove an Instagram post containing a fictional “X” (formerly Twitter) thread that satirically depicts Elon Musk reacting to a post containing offensive content. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. Case Description and Background In July 2023, a user posted an image on Instagram containing a fictional X thread that does not resemble X’s layout. In the thread, a fictitious user posted several inflammatory statements such as: “KKK never did anything wrong to black people,” “Hitler didn’t hate Jews,” and “LGBT are all pedophiles.” The thread featured Elon Musk replying to the user’s post by stating “Looking into this.…” This Instagram post received fewer than 500 views. The post was removed for violating Meta’s Dangerous Organizations and Individuals policy, which prohibits representation of and certain speech about the groups and people the company judges as linked to significant real-world harm. Meta designates both the Ku Klux Klan (KKK) and Hitler as dangerous entities under this policy. In certain cases, Meta will allow “content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.” In their appeal to the Board, the user emphasized that the post was not intended to endorse Hitler or the KKK, but rather to “call out and criticize one of the most influential men on the planet for engaging with extremists on his platform."" After the Board brought this case to Meta’s attention, the company determined that the content did not violate the Dangerous Organizations and Individuals policy and its removal was incorrect. 
The company then restored the content to Instagram. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). Where Meta acknowledges that it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook and Instagram users. Case Significance This case highlights Meta’s shortcomings in accurately identifying satirical content on its platforms. The Board has previously issued recommendations on Meta’s enforcement of satirical content. The Board has urged Meta to “make sure that it has adequate procedures in place to assess satirical content and relevant context properly. This includes providing content moderators with: (i) access to Facebook’s local operation teams to gather relevant cultural and background information; and (ii) sufficient time to consult with Facebook’s local operation teams and to make the assessment. Facebook should ensure that its policies for content moderators incentivize further investigation or escalation where a content moderator is not sure if a meme is satirical or not , ” ( Two Buttons Meme decision, recommendation no. 3). Meta reported implementation of this recommendation without publishing further information and thus its implementation cannot be verified. Furthermore, this case illustrates Meta's challenges in interpreting user intent. Previously, the Board has urged Meta to communicate to users how they can clarify the intent behind their post, particularly in relation to the Dangerous Organizations and Individuals policy. Meta partially implemented the Board's recommendation to “explain in the Community Standards how users can make the intent behind their posts clear to Facebook… Facebook should provide illustrative examples to demonstrate the line between permitted and prohibited content, including in relation to the application of the rule clarifying what ‘support’ excludes , ” ( Ocalan’s Isolation decision, recommendation no. 6). The Board emphasizes that full adoption of these recommendations, alongside Meta publishing information to demonstrate they have been successfully implemented, could reduce the number of enforcement errors of satirical content on Meta’s platforms. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" ig-kfly3526,India sexual harassment video,https://www.oversightboard.com/decision/ig-kfly3526/,"December 14, 2022",2022,December,"TopicFreedom of expression, Marginalized communities, News eventsCommunity StandardSexual exploitation of adults","Policies and TopicsTopicFreedom of expression, Marginalized communities, News eventsCommunity StandardSexual exploitation of adults",Upheld,India,The Board has upheld Meta’s decision to restore a post to Instagram containing a video of a woman being sexually assaulted by a group of men.,41958,6493,"Upheld December 14, 2022 The Board has upheld Meta’s decision to restore a post to Instagram containing a video of a woman being sexually assaulted by a group of men. 
Standard Topic Freedom of expression, Marginalized communities, News events Community Standard Sexual exploitation of adults Location India Platform Instagram India sexual harassment video - public comments Case summary The Board has upheld Meta’s decision to restore a post to Instagram containing a video of a woman being sexually assaulted by a group of men. The Board has found that Meta’s “newsworthiness allowance” is inadequate in resolving cases such as this at scale and that the company should introduce an exception to its Adult Sexual Exploitation policy. About the case In March 2022, an Instagram account describing itself as a platform for Dalit perspectives posted a video from India showing a woman being assaulted by a group of men. “Dalit” people have previously been referred to as “untouchables,” and have faced oppression under the caste system. The woman’s face is not visible in the video and there is no nudity. The text accompanying the video states that a ""tribal woman"" was sexually assaulted in public, and that the video went viral. “Tribal” refers to indigenous people in India, also referred to as Adivasi. After a user reported the post, Meta removed it for violating the Adult Sexual Exploitation policy, which prohibits content that “depicts, threatens or promotes sexual violence, sexual assault or sexual exploitation."" A Meta employee flagged the content removal via an internal reporting channel upon learning about it on Instagram. Meta's internal teams then reviewed the content and applied a “newsworthiness allowance.” This allows otherwise violating content to remain on Meta’s platforms if it is newsworthy and in the public interest. Meta restored the content, placing the video behind a warning screen which prevents anyone under the age of 18 from viewing it, and later referred the case to the Board. Key findings The Board finds that restoring the content to the platform, with the warning screen, is consistent with Meta’s values and human rights responsibilities. The Board recognizes that content depicting non-consensual sexual touching can lead to a significant risk of harm, both to individual victims and more widely, for example by emboldening perpetrators and increasing acceptance of violence. In India, Dalit and Adivasi people, especially women, suffer severe discrimination, and crime against them has been rising. Social media is an important means of documenting such violence and discrimination and the content in this case appears to have been posted to raise awareness. The post therefore has significant public interest value and enjoys a high degree of protection under international human rights standards. Given that the video does not include explicit content or nudity, and the majority of the Board finds the victim is not identifiable, a majority finds that the benefits of allowing the video to remain on the platform, behind a warning screen, outweigh the risk of harm. Where a victim is not identifiable, their risk of harm is reduced significantly. The warning screen, which prevents people under-18 from viewing the video, helps to protect the dignity of the victim, and protects children and victims of sexual harassment from exposure to disturbing or traumatizing content. The Board agrees the content violates Meta's Adult Sexual Exploitation policy and that the newsworthiness allowance could apply. 
However, echoing concerns raised in the Board’s “Sudan graphic video” case, the Board finds that the newsworthiness allowance is inadequate for dealing with cases such as this at scale. The newsworthiness allowance is rarely used. In the year ending June 1, 2022, Meta only applied it 68 times globally, a figure that was made public following a recommendation by the Board. Only a small portion of those were issued in relation to the Adult Sexual Exploitation Community Standard. The newsworthiness allowance can only be applied by Meta’s internal teams. However, this case shows that the process for escalating relevant content to those teams is not reliable. A Meta employee flagged the content removal via an internal reporting channel upon learning about it on Instagram. The newsworthiness allowance is vague, leaves considerable discretion to whoever applies it, and cannot ensure consistent application at scale. Nor does it include clear criteria to assess the potential harm caused by content that violates the Adult Sexual Exploitation policy. The Board finds that Meta’s human rights responsibilities require it to provide clearer standards and more effective enforcement processes for cases such as this one. What is needed is a policy exception, tailored to the Adult Sexual Exploitation policy, that can be applied at scale. This should provide clearer guidance to distinguish posts shared to raise awareness from those intended to perpetuate violence or discrimination, and help Meta to balance competing rights at scale. The Oversight Board's decision The Oversight Board upholds Meta's decision to restore the post with a warning screen. The Board also recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary A majority of the Board upholds Meta’s decision to restore the content to the platform and to apply a warning screen over it. The Board finds, however, that while the newsworthiness allowance could be applied in this case, it does not provide a clear standard or effective process to resolve cases such as this one at scale. In line with Meta’s values and human rights responsibilities, Meta should add a clearly defined exception in the Adult Sexual Exploitation policy that is applied in a more consistent and effective way than the newsworthiness allowance. The exception should be designed to protect content that raises awareness of public issues and to help Meta balance the specific risks of harm presented by content that violates the Adult Sexual Exploitation policy. The majority of the Board finds that, rather than relying on the newsworthiness allowance, it is preferable to apply the recommended exception to the Adult Sexual Exploitation policy, which would permit the content in this case. A minority finds that this exception does not apply to the content in this case. 2. Case description and background In March 2022, an Instagram account describing itself as a news platform for Dalit perspectives posted a video from India showing a woman being assaulted by a group of men. Dalits, previously referred to as “untouchables,” are socially segregated and economically marginalized in India due to the country’s caste system – a hierarchical system of social stratification. In the video, the woman's face is not visible and she is fully clothed. The text accompanying the video states in English that a ""tribal woman"" was sexually assaulted and harassed by a group of men in public, and that the video previously went viral. 
The term “tribal” refers to indigenous people in India, who are also referred to as Adivasi. The account that posted the video has around 30,000 followers, mostly located in India. Dalit and Adivasi women are frequently the target of assaults in the country (see section 8.3.). The content was reported by another Instagram user for sexual solicitation and sent for human review. Human reviewers determined that the content violated Meta's Adult Sexual Exploitation policy. Under this policy, Meta removes content ""that depicts, threatens or promotes sexual violence, sexual assault or sexual exploitation."" Following the removal of the content, Meta applied one standard strike (a strike that applies to all violation types), one severe strike (a strike that applies to the most egregious violations, including violations of the Adult Sexual Exploitation policy), and a 30-day feature limit to the content creator’s account. The feature limit prevented the user from starting any live video. On the day the original content was removed, a member of Meta’s Global Operations team saw a post on their personal Instagram account discussing the content’s removal, and escalated the original post. When content is escalated, it is reviewed by policy and safety experts within Meta. Upon escalation, Meta issued a newsworthiness allowance, reversed the strikes, restored the content, and placed a warning screen on the video alerting users that it may contain violent or graphic content. The warning screen prevents users under the age of 18 from viewing the content and requires all other users to click through the screen to view the video. A newsworthiness allowance permits content on Meta’s platforms that might otherwise violate its policies if the content is newsworthy and keeping it visible is in the public interest. It can only be applied by specialist teams within Meta, and not by human reviewers who review content at scale. Meta referred this case to the Board, stating that it demonstrates the challenge in striking “the appropriate balance between allowing content that condemns sexual exploitation and the harm in allowing visual depictions of sexual harassment to remain on [its] platforms.” 3. Oversight Board authority and scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). 4. Sources of authority The Oversight Board considered the following authorities and standards: I. Oversight Board decisions: The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies: Instagram Community Guidelines The Instagram Community Guidelines provide, under the heading “follow the law,” that Instagram has “zero tolerance when it comes to sharing sexual content involving minors or threatening to post intimate images of others.” The words “intimate images” include a link to the Facebook Community Standards on Adult Sexual Exploitation in Meta’s Transparency Center. The Community Guidelines do not expressly address depictions of non-consensual sexual images. 
Facebook Community Standards In the policy rationale for the Adult Sexual Exploitation policy, Meta recognizes “the importance of Facebook as a place to discuss and draw attention to sexual violence and exploitation.” Therefore, it “allow[s] victims to share their experiences, but remove[s] content that depicts, threatens or promotes sexual violence, sexual assault, or sexual exploitation.” To protect victims and survivors, Meta removes “images that depict incidents of sexual violence and intimate images shared without the consent of the person[s] pictured.” The “do not post” section of this Community Standard states that content consisting of “any form of non-consensual sexual touching,” such as “depictions (including real photos/videos except in a real-world art context)” are removed from the platform. This policy also states that Meta “may restrict visibility to people over the age of 18 and include a warning label on certain fictional videos […] that depict non-consensual sexual touching.” In the Transparency Center , Meta explains that whether a strike is applied “depends on the severity of the content, the context in which it was shared and when it was posted.” It aims for its strike system to be “fair and proportionate.” Newsworthiness allowance Defining the newsworthiness allowance in its Transparency Center , Meta explains that it allows “content that may violate Facebook’s Community Standards or Instagram Community Guidelines, if it is newsworthy and keeping it visible is in the public interest.” Meta only does this “after conducting a thorough review that weighs the public interest against the risk of harm” and looks to “international human rights standards, as reflected in our Corporate Human Rights Policy , to help make these judgments.” The policy states that “content from all sources, including news outlets, politicians, or other people, is eligible for a newsworthy allowance” and “[w]hile the speaker may factor into the balancing test, we do not presume that any person’s speech is inherently newsworthy.” When the newsworthiness allowance is applied and content is restored, but may be sensitive or disturbing, restoration may include a warning screen. When weighing public interest against the risk of harm, Meta takes the following factors into consideration: whether the content poses imminent threats to public health or safety; whether the content gives voice to perspectives currently being debated as part of a political process; country-specific circumstances (for example, whether there is an election underway, or the country is at war); the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether it has a free press. III. Meta’s values: Meta’s values are outlined in the introduction to Facebook’s Community Standards. The value of “Voice” is described as “paramount”: The goal of our Community Standards has always been to create a place for expression and give people a voice. […] We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits “Voice” in service of four values, the relevant ones in this case being “Safety,” “Privacy” and “Dignity”: “Safety”: We’re committed to making Facebook a safe place. Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook. “Privacy”: We’re committed to protecting personal privacy and information. 
Privacy gives people the freedom to be themselves, choose how and when to share on Facebook and connect more easily. “Dignity”: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. IV. International human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards: 5. User submissions Following Meta’s referral and the Board’s decision to accept the case, the user was sent a message notifying them of the Board’s review and providing them with an opportunity to submit a statement to the Board. The user did not submit a statement. 6. Meta’s submissions In the rationale provided for this case, Meta explained that the caption and background of the user posting the content indicated intent to condemn and raise awareness of violence against marginalized communities. However, there is no relevant exception for this in the Adult Sexual Exploitation policy. With regard to its decision to apply the newsworthiness allowance, Meta explained that the public interest value of the content was high because the content was shared by a news organization that highlights the stories of underrepresented and marginalized populations. Meta said the content appears to have been shared with the intent to condemn the behavior in the video and raise awareness of gender-based violence against tribal women. Adivasi and marginalized voices, Meta argued, have been historically repressed in India and would benefit from greater reach and visibility. Meta also argued that the risk of harm was limited as the depiction did not involve overt nudity or explicit sexual activity, and does not sensationalize. It argued that the “case was exceptional in that the victim’s face is not visible and her identity is not readily identifiable.” In response to questions asked by the Board, Meta further explained that “a user’s self-description as a news organization is a factor that is considered, but is not determinative, in deciding whether it is treated as a news organization.” Subject matter experts and regional market specialists decide which users qualify as news organizations, based on a variety of factors, including their market knowledge and previous classifications of the organizations. Meta argued that its decision and policy is in line with its values and human rights responsibilities. The Board asked Meta 15 questions in this case. Meta answered 14 fully and did not answer one. The Board asked Meta to share its Human Rights Impact Assessment Report for India with the Board, which Meta declined, citing security risks. Meta failed to provide a satisfactory explanation for why sharing the Human Rights Impact Assessment Report with the Board would entail security risks. 7. Public comments The Oversight Board considered 11 public comments related to this case. One of the comments was submitted from Asia Pacific and Oceania, four were submitted from Central and South Asia, three from Europe, and three from the United States and Canada. 
The submissions covered the following themes: marginalization of Adivasi in India; power relations in the caste system; the potential of depictions of violence to embolden perpetrators and contribute to violence; the difference between non-sexual violence and sexual violence and the importance of contextual assessments with regard to the latter; the risk of victims of sexual harassment being ostracized by their own communities; intersectionality; risks of trauma of sexual assault survivors; harmful effects of hate speech on social media in India; the importance of social media as a tool for raising awareness of violence against marginalized groups; and the importance of Meta keeping a highly secured cache of removed content that is accessible to law enforcement officials. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board looked at the question of whether this content should be restored through three lenses: Meta's content policies, the company's values and its human rights responsibilities. 8.1 Compliance with Meta’s content policies The Board believes the newsworthiness allowance could be applied in this case, but that it does not provide a clear standard or effective process to assess this kind of content at scale. The Board therefore recommends that, in addition to the newsworthiness allowance, which operates as a general exception to any policy, Meta include an exception to the Adult Sexual Exploitation policy, which would provide a clear and effective process for moderating content at scale. A majority of the Board believes the content in this case should be allowed under such an exception, while a minority believes no exception should apply to this specific content and that it should be removed from the platform. I. Content rules and enforcement The Board agrees with Meta’s assessment that the content in this case violates the prohibition in the Adult Sexual Exploitation Standard on depictions of non-consensual sexual touching. A majority of the Board agrees with the substance of Meta’s reasoning for reinstating the content and believes that the newsworthiness allowance could be applied in this case due to the content’s strong public interest value. However, the Board believes that the newsworthiness allowance does not provide an adequate standard or process to assess content such as the post in this case at scale, as it does not ensure effective and consistent application. The Board agrees with Meta’s assessment that there is strong public interest value in keeping this content on the platform, as it raises awareness of violence against a marginalized community. The Board also agrees that leaving content on the platform which depicts non-consensual sexual touching, including assault, can entail significant risks of harm (see section 8.3, below). The Board further agrees with Meta that in cases in which a victim of non-consensual sexual touching is identifiable, potential harm is too great and content generally should be removed, certainly if it is posted without the consent of the victim. In this case, however, the Board disagrees on whether the victim is identifiable. A majority of the Board believes that the victim is not identifiable. The victim’s face is not visible in the video, and the video is shot from a distance and generally of poor quality. The caption does not provide any information on the victim’s identity. 
A minority believes that the content should be removed from the platform on the basis that there is some possibility that the victim could be identified. Viewers of the video who have local knowledge of the area or the incident might be able to identify the victim even if their face is not visible. The likelihood of this, a minority believes, is especially high as the incident was widely reported by local news outlets. The majority acknowledges the minority’s concerns but does not believe that local awareness of an incident should, by itself, mean that a victim is “identifiable.” The Board recommends that Meta review its policies and processes based on its values and human rights responsibilities, as analyzed in sections 8.2 and 8.3 below, and introduce a clearly defined exception in the Adult Sexual Exploitation Standard which can be applied in a more consistent and effective way than the newsworthiness allowance. II. Transparency Following the Board’s recommendations in the “Colombia protests” and “Sudan graphic video” decisions, Meta has provided more information in its Transparency Center on the factors it considers in determining whether its newsworthiness allowance should be applied to a piece of content. It has not, however, developed and publicized “clear criteria for content reviewers to escalate for additional review public interest content that potentially violates the Community Standards but may be eligible for the newsworthiness allowance,” as recommended by the Board in its “Colombia protests” decision, and its policy advisory opinion on sharing private residential information. The Board reiterates its concern that Meta should provide more information on the escalation process in the context of the newsworthiness allowance. 8.2 Compliance with Meta’s values In this case, as in many, Meta's values of “Voice,” “Privacy,” “Safety,” and “Dignity” may point in different directions. Raising awareness about abuses against Adivasi serves the value of “Voice” and may also help to protect the safety and the dignity of Adivasi. On the other hand, publicity around sexual assault may be unwelcome for the victim or may normalize the conduct, creating negative impacts on the privacy, dignity, and safety of the victim or others in their community. Because the video in this case was not explicit and the majority of the Board considered that the victim was not identifiable, the majority believes that leaving the video on the platform is consistent with Meta's values, taken as a whole. A minority of the Board maintains, however, that even if the likelihood of the victim being identified were low, a real risk of identification persists. In their view, concerns for the privacy, dignity, and safety of the victim must prevail, and the video should be removed. 
The International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) also provides protection from discrimination in the exercise of the right to freedom of expression (Article 5). The Committee on the Elimination of Racial Discrimination has emphasized the importance of the right with respect to assisting “vulnerable groups in redressing the balance of power among the components of society” and offering “alternative views and counterpoints” in discussions (CERD Committee, General Recommendation 35, para. 29). The content in this case appears to have been shared to raise awareness of violence against Adivasi women in India, and, in line with the standards provided in General Comment No. 34, enjoys a high level of protection. Under Article 19, para. 3, ICCPR, restrictions on expression must (i) be provided for by law, (ii) pursue a legitimate aim, and (iii) be necessary and proportionate. The ICCPR does not create binding obligations for Meta as it does for states, but this three-part test has been proposed by the UN Special Rapporteur on Freedom of Expression as a framework to guide platforms’ content moderation practices (A/HRC/38/35). I. Legality (clarity and accessibility of the rules) Rules restricting expression must be clear and accessible so that those affected know the rules and may follow them (General Comment No. 34, paras. 24-25). Applied to Meta, users of its platforms and reviewers enforcing the rules should be able to understand what is allowed and what is prohibited. In this case, the Board concludes that Meta falls short of meeting that responsibility. The Board finds that the wording of the newsworthiness policy is vague and leaves significant discretion to whoever applies it. As the Board noted in the “Sudan graphic video” case (2022-002-FB-MR), vague standards invite arbitrary application and fail to ensure the adequate balancing of affected rights when moderating content. The Board has also repeatedly drawn attention to the lack of clarity for Instagram users about which policies apply to their content, particularly if and when Facebook policies apply (see, for example, the Board’s decisions in the “Breast cancer symptoms and nudity” case (2020-004-IG-UA) and the “Ayahuasca brew” case (2021-013-IG-UA)). The Board reiterates that concern here. The Board further reiterates the critical need for more information around the standards, internal guidance and processes that determine when content is escalated (see, for example, the Board’s decisions in the “Colombia protests” case, the “Sudan graphic video” case, and the “Knin cartoon” case). For users to understand whether and how the newsworthiness allowance will apply to the content, Meta must provide more detailed information on the escalation process. 
The warning that appears on the screen aims to protect victims of sexual harassment from being exposed to potentially retraumatizing, disturbing and graphic content (Article 12, ICESCR). The age restriction also pursues the legitimate aim of protecting children from harmful content (Art. 3, 17 and 19, CRC). III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment 34, para. 34). In this case, a majority finds that removal of the content would not be necessary and proportionate but that applying a warning screen and age restriction satisfies this test. A minority believes that removal would be necessary and proportionate. The Board finds that Meta’s human rights responsibilities require it to provide clearer standards and more effective enforcement processes to allow content on the platform in cases such as this one. It therefore recommends that Meta include a clearly defined exception in the Adult Sexual Exploitation Standard, which would be a better way of balancing competing rights at scale. As a general exception which is rarely applied, the newsworthiness allowance would still be available and would allow Meta to assess whether the public interest outweighs the risk of harm, including in cases where the victim is identifiable. Decision to leave the content up The Board recognizes that content such as this may lead to considerable harms. This includes harms to the victim, who, if identified, could suffer re-victimization, social stigmatization, doxing and other forms of abuse or harassment (see PC-10802 (Digital Rights Foundation)). The severity of potential harms is great when the victim is identified. Where the victim cannot be identified, their risk of harm is reduced significantly. In this case, the majority finds that the probability of the victim being identified is low. Even if the victim is not publicly identified, the victim may be harmed by interacting with the content on the platform, and by comments and reshares of that content. The majority believes that the application of the warning screen over the content addresses this concern. The Board also considered broader risks. Social media in India has been criticized for spreading caste-based hate speech (see the report on Caste-hate speech by the International Dalit Solidarity Network, March 2021). Online content can reflect and strengthen existing power structures, embolden perpetrators and motivate violence against vulnerable populations. Depictions of violence against women can lead to attitudes that are more accepting of such violence. Public comments highlighted that sexual harassment is an especially cruel form of harassment (see, for example, PC-10808 (SAFEnet), PC-10806 (IT for Change), PC-10802 (Digital Rights Foundation), PC-10805 (Media Matters for Democracy)). The majority balanced those risks against the fact that it is important for news organizations and activists to be able to rely on social media to raise awareness of violence against marginalized communities (see the public comments PC-10806 (IT for Change), PC-10802 (Digital Rights Foundation), PC-10808 (SAFEnet)), especially in a context where media freedom is under threat (see the report by Human Rights Watch). 
In India, Dalit and Adivasi people, especially women who fall at the intersection of caste and gender (see PC-10806 (IT for Change)), suffer severe discrimination, and crime against them has been on the rise. Civil society organizations report rising levels of ethnic-religious discrimination against non-Hindu and caste minorities, which undermines equal protection of the law (see Human Rights Watch report). With the government targeting independent news organizations, and public records underrepresenting crime against Adivasi and Dalit individuals and communities, social media has become an important means of documenting discrimination and violence (see the report by Human Rights Watch and the public comments cited above). The Board ultimately disagrees on the question of whether, in this particular case, there is any reasonable risk of identifying the victim. The majority believes that this risk is minimal and that the interest in keeping the content on the platform therefore outweighs the potential harms if a warning screen is applied. The minority believes that the remaining risk requires removing the content from the platform. Warning screen and age restriction The Board believes that applying a warning screen is a “lesser restriction” compared to removal. In this case, the majority believes that applying a warning screen would be the least intrusive way to mitigate potential harms inflicted by the content while protecting freedom of expression. As the Board found in the “Sudan graphic video” case, the warning screen “does not place an undue burden on those who wish to see the content while informing others about the nature of the content and allowing them to decide whether to see it or not.” In addition, it “adequately protects the dignity of the individual depicted and their family.” The minority believes that a warning screen does not sufficiently mitigate potential harms and that the severity of those harms requires the removal of the content. The warning screen also triggers an age restriction, which seeks to protect minors. General Comment No. 25 on Children’s Rights in Relation to the Digital Environment states that “parties should take all appropriate measures to protect children from risks to their right to life, survival and development. Risks relating to content ... encompass, among other things, violent and sexual content...” (para. 14). It further states that children should be “protected from all forms of violence in the digital environment” (paras. 54, 103). The majority of the Board agrees with Meta’s reasoning that an age restriction reconciles the objective of protecting minors with the objective of allowing content which is in the public interest to be seen. Design of policy and enforcement processes While the majority agrees with Meta’s ultimate decision to allow the content on the platform, the Board believes that the newsworthiness allowance is an ineffective mechanism to be applied at scale. The Board unanimously finds that allowing depictions of sexual violence against marginalized groups on the platform should be based on clear policies and accompanied by adequately nuanced enforcement. This should distinguish posts such as this one, which are being shared to raise awareness, from posts being shared to perpetuate violence or discrimination against these individuals and communities. It should include clear criteria to assess the risks of harm presented by such content, to help Meta balance competing rights at scale. 
The newsworthiness allowance is an ineffective mechanism for moderating content at scale (see the Board’s decision in the “Sudan graphic video” case). This is indicated by the fact that it is used so rarely. According to Meta, the newsworthiness allowance was applied just 68 times across all policies globally between June 1, 2021, and June 1, 2022. Only a small portion of those were issued in relation to the Adult Sexual Exploitation Community Standard. This case demonstrates that the internal escalation process for application of the newsworthiness allowance is not reliable: the content was not escalated by any of the at-scale human reviewers who initially reviewed the content, but by a member of the Global Operations team. Upon learning about the content removal on Instagram, they flagged the issue via an internal reporting channel. In Meta’s content moderation process, most content is reviewed by external at-scale reviewers rather than Meta’s internal specialized teams. Content can be escalated for additional review by these internal specialized teams where at-scale reviewers consider that the newsworthiness allowance may apply; however, the escalation process is only effective when at-scale human reviewers have clear guidance on when to escalate content. The newsworthiness allowance is a general exception that can be applied to content violating any of Meta’s policies. It therefore does not include criteria to assess or balance the harms presented by content violating the Adult Sexual Exploitation policy in particular. A more effective means of protecting freedom of expression and allowing people to raise awareness of the sexual harassment of marginalized groups, while protecting the rights of the victim and marginalized communities, would be to include an exception to the Adult Sexual Exploitation Standard, to be applied “at escalation.” In addition, at-scale reviewers should be instructed to escalate content when the exception potentially applies, rather than relying on the rarely applied newsworthiness allowance (for similar reasoning, see the Board’s decision in the “Sudan graphic video” case). The Board therefore recommends that an exception to the Adult Sexual Exploitation policy be introduced for depictions of non-consensual sexual touching. This would allow content violating the policy to remain on Meta’s platforms where, based on a contextual analysis, Meta judges that the content is shared to raise awareness, the victim is not identifiable, the content does not involve nudity and is not shared in a sensationalized context, and thus entails minimal risks of harm for the victim. This exception should be applied at escalation only, that is, by Meta’s specialist internal teams. Meta should also provide clear guidance to at-scale reviewers on when to escalate content which potentially falls under this exception. This exception does not preclude the application of the newsworthiness allowance. Including an exception to the Adult Sexual Exploitation policy and updating guidance to at-scale reviewers would ensure that an assessment on escalation becomes part of a standard procedure which can be triggered by at-scale moderators in every relevant case. At-scale reviewers would still remove content depicting non-consensual sexual touching but would escalate content in cases where the exception potentially applies. Building on regional expertise, policy and safety experts can then decide whether the exception applies. 
If it does not, they can decide whether strikes should be imposed, and, if so, which strikes, in line with Meta’s goal of applying strikes in a proportionate and fair manner. Limiting the application of the exception to specialized teams encourages consistency as well as adequate consideration of potential harms. Non-discrimination Meta has a responsibility to respect equality and non-discrimination on its platforms (Articles 2 and 26 ICCPR). In its General Recommendation No. 35, the Committee on the Elimination of Racial Discrimination highlighted the “contribution of speech to creating a climate of racial hatred and discrimination” (para. 5) and the potential of hate speech “leading to mass violations of human rights” (para. 3). The Board recognizes that there is a difficult tension between allowing content on the platform which raises awareness of violence against marginalized groups and removing content which might potentially harm the privacy and security of an individual who is part of those groups. The Board believes that a significant potential for individual harm could outweigh the benefits of raising awareness of harassment on Instagram. However, in this case, the majority believes that, as the victim is not identifiable and the risk of individual harm is low, the content should remain on the platform with a warning screen. A minority believes that the risk is not low enough and that the content should therefore be removed. 9. Oversight Board decision The Board upholds Meta’s decision to leave the content on the platform with a warning screen. 10. Policy advisory statement Policy 1. Meta should include an exception to the Adult Sexual Exploitation Community Standard for depictions of non-consensual sexual touching, where, based on a contextual analysis, Meta judges that the content is shared to raise awareness, the victim is not identifiable, the content does not involve nudity and is not shared in a sensationalized context, thus entailing minimal risks of harm for the victim. This exception should be applied at escalation only. The Board will consider this recommendation implemented when the text of the Adult Sexual Exploitation Community Standard has been changed. Enforcement 2. Meta should update its internal guidance to at-scale reviewers on when to escalate content reviewed under the Adult Sexual Exploitation Community Standard, including guidance to escalate content depicting non-consensual sexual touching that potentially falls under the above policy exception. The Board will consider this recommendation implemented when Meta shares with the Board the updated guidance to at-scale reviewers. *Procedural note: The Oversight Board's decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. 
Return to Case Decisions and Policy Advisory Opinions" ig-oznr5j1z,Video after Nigeria church attack,https://www.oversightboard.com/decision/ig-oznr5j1z/,"December 14, 2022",2022,December,"Mistreatment,Safety,War and conflict",Violent and graphic content,Overturned,Nigeria,The Board has overturned Meta’s decision to remove a video from Instagram showing the aftermath of a terrorist attack in Nigeria.,54038,8557,"Overturned December 14, 2022 The Board has overturned Meta’s decision to remove a video from Instagram showing the aftermath of a terrorist attack in Nigeria. Standard Topic Mistreatment, Safety, War and conflict Community Standard Violent and graphic content Location Nigeria Platform Instagram The Board has overturned Meta’s decision to remove a video from Instagram showing the aftermath of a terrorist attack in Nigeria. The Board found that restoring the post with a warning screen protects victims’ privacy while allowing for discussion of events that some states may seek to suppress. About the case On June 5, 2022, an Instagram user in Nigeria posted a video showing motionless, bloodied bodies on the floor. It appears to be the aftermath of a terrorist attack on a church in southwest Nigeria, in which at least 40 people were killed and many more injured. The content was posted on the same day as the attack. Comments on the post included prayers and statements about safety in Nigeria. Meta’s automated systems reviewed the content and applied a warning screen. However, the user was not alerted, as Instagram users do not receive notifications when warning screens are applied. The user later added a caption to the video. This described the incident as “sad,” and used multiple hashtags, including references to firearms collectors, allusions to the sound of gunfire, and the live-action game “airsoft” (where teams compete with mock weapons). The user had included similar hashtags on many other posts. Shortly after, one of Meta’s Media Matching Service banks, an “escalations bank,” identified the video and removed it. Media Matching Service banks can automatically match users’ posts to content that has previously been found violating. Content in an “escalations bank” has been found violating by Meta's specialist internal teams. Any matching content is identified and immediately removed. The user appealed the decision to Meta and a human reviewer upheld the removal. The user then appealed to the Board. When the Board accepted the case, Meta reviewed the content in the “escalations bank,” found it was non-violating, and removed it. However, it upheld its decision to remove the post in this case, saying the hashtags could be read as “glorifying violence and minimizing the suffering of the victims.” Meta found this violated multiple policies, including the Violent and Graphic Content policy, which prohibits sadistic remarks. Key findings A majority of the Board finds that restoring this content to Instagram is consistent with Meta’s Community Standards, values and human rights responsibilities. Nigeria is experiencing an ongoing series of terrorist attacks and the Nigerian government has suppressed coverage of some of them, though it does not appear to have done so in relation to the June 5 attack. The Board agrees that in such contexts freedom of expression is particularly important. 
When the hashtags are not considered, the Board is unanimous that a warning screen should be applied to the video. This would protect the privacy of the victims, some of whose faces are visible, while respecting freedom of expression. The Board distinguishes this video from the image in the “Russian poem” case, which was significantly less graphic, where the Board found a warning screen was not required. It also distinguishes it from the footage in the “Sudan graphic video” case, which was significantly more graphic, where the Board agreed with Meta’s decision to restore the content with a warning screen, applying a “newsworthiness allowance,” which permits otherwise violating content. A majority of the Board finds that the balance still weighs in favor of restoring the content when the hashtags are considered, as they are raising awareness and are not sadistic. Hashtags are commonly used to promote a post within a community. This is encouraged by Meta’s algorithms, so the company should be cautious in attributing ill intent to their use. The majority notes that Meta did not find that these hashtags are used as coded mockery. Users commenting on the post appeared to understand that it was intended to raise awareness, and responses from the post’s author were sympathetic to the victims. A minority of the Board finds that adding shooting-related hashtags to the footage appears sadistic, and could traumatize survivors or victims’ families. A warning screen would not reduce this effect. Given the context of terrorist violence in Nigeria, Meta is justified in exercising caution, particularly when victims are identifiable. The minority therefore finds this post should not be restored. The Board finds the Violent and Graphic Content policy should be clarified. The policy prohibits “sadistic remarks,” yet the definition of that term included in the internal guidance for moderators is broader than its common usage. The Board notes that the content was originally removed because it matched a video that had wrongly been added to the escalations bank. In the immediate aftermath of a crisis, Meta was likely attempting to ensure that violating content did not spread on its platforms. However, the company must now ensure that content mistakenly removed is restored, and that any resulting strikes are reversed. The Oversight Board's decision The Oversight Board overturns Meta's decision to remove the post and finds it should be restored to the platform with a “disturbing content” warning screen. The Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary Meta removed an Instagram post containing a captioned video depicting the aftermath of an attack at a church in Nigeria, for violating its policies on Violent and Graphic Content, Bullying and Harassment, and Dangerous Individuals and Organizations. A majority of the Board finds that the content should be restored to the platform with a “disturbing content” warning screen, requiring users to click through to see the video. A minority of the Board disagrees and would uphold Meta’s decision to remove the content. 2. Case description and background On June 5, 2022, terrorists attacked a Catholic church in Owo, southwestern Nigeria, killing at least 40 people and injuring approximately 90 others. 
Within hours of the attack, an Instagram user in Nigeria posted a video on their public account that appears to be of the aftermath, showing motionless and bloodied bodies on the church floor, some with their faces visible. Chaotic sounds, including people wailing and screaming, can be heard in the background. The video was initially posted without a caption. There were fewer than 50 comments. Those seen by the Board included prayers for victims, crying emojis, and statements about safety in Nigeria. The author of the post had responded to several showing solidarity with those sentiments. After the user posted the content, it was identified by one of Meta's Violent and Graphic Content Media Matching Service banks, which contained a substantially similar video. This bank automatically flags content which has previously been identified by human reviewers as violating the company's rules. In this case, the bank referred the user’s video to an automated content moderation tool called a classifier, which can assess how likely content is to violate a Meta policy. The classifier determined that the video should be allowed on Instagram. It also determined that the content likely contained imagery of violent deaths and, as a result, automatically applied a “disturbing content” warning screen as required by the Violent and Graphic Content policy. Meta did not notify the user that the warning screen had been applied. Over the following 48 hours, three users reported the content, including for depicting death and severe injury. At the same time, Meta’s staff were working to identify and deal with content arising from the attack. Meta’s policy team was alerted about the attack by regional staff and added videos of the incident to a different Media Matching Service bank, an “escalations bank.” Content in this escalations bank has been found violating by Meta's specialist internal teams, and any matching content is immediately removed. Videos added to the escalations bank after the incident included footage that showed visible human innards (which the video at issue in this case did not). Other teams at Meta were invited to refer potentially similar videos to the policy team. The policy team would then determine if they should also be added to the escalations bank. Three days after the attack, Meta added a video almost identical to the content in this case to the escalations bank. As a result, Meta’s systems compared that video to content already on the platform to check for matches. While this retroactive review was taking place, the user edited their original post, adding an English-language caption to the video. It states that the church was attacked by gunmen, that multiple people were killed, and describes the incident as “sad.” The caption included a large number of hashtags. The majority of these were about the live-action game “airsoft” (where teams compete to tag each other out of play using plastic projectiles shot with mock weapons). Another, according to Meta, alluded to the sound of gun-fire and is also used to market firearms. Other hashtags referenced people who collect firearms and firearm paraphernalia, as well as military simulations. Shortly after the caption was added, the escalations bank’s retroactive review matched the user’s post to the recently added near-identical video, and removed it from the platform. The user appealed. A human moderator reviewed the content and maintained the removal decision. The user then appealed to the Board. 
At this point, the three reports users had made on the content had still not been reviewed and were closed. Meta told the Board that the reports had mistakenly been assigned to a low-priority queue. In response to the Board selecting this case, Meta reviewed the near-identical video that had been placed in the escalations bank. Meta determined that it did not violate any policies because there were no “visible innards” and no sadistic caption, and removed it from the bank. However, Meta maintained its decision to remove the content in this case, as it stated that, while the narrative about the event and the user’s expression of sadness were not violating, the hashtags in the caption added by the user violated multiple policies. In response to questions from the Board, Meta analyzed the user’s posting history and found that the user had included similar hashtags on many of their recent Instagram posts. The Board notes as relevant context the recent history of violence and terrorist incidents in Nigeria. Experts consulted by the Board stated that the Nigerian government has at times suppressed domestic reporting of terror attacks but does not appear to have done so to a significant degree with regard to the June 5 attack, which was widely covered by traditional media. Graphic imagery of the attack and its victims was widely circulated on social media platforms, including Instagram and Facebook, but was not shown to the same extent by traditional media. In response to questions from the Board, Meta confirmed that the Nigerian government did not contact Meta regarding the attack or request that the content be taken down. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). 4. Source of authority The Oversight Board considered the following authorities and standards: I. Oversight Board decisions: II. Meta’s content policies: This case involves Instagram's Community Guidelines and Facebook's Community Standards. Meta’s third quarter transparency report states that “Facebook and Instagram share Content Policies. This means that if content is considered violating on Facebook, it is also considered violating on Instagram.” The Instagram Community Guidelines say that Meta “may remove videos of intense, graphic violence to make sure Instagram stays appropriate for everyone.” This links to the Facebook Violent and Graphic Content Community Standard, where the policy rationale states: To protect users from disturbing imagery, we remove content that is particularly violent or graphic, such as videos depicting dismemberment, visible innards or charred bodies. We also remove content that contains sadistic remarks towards imagery depicting the suffering of humans and animals. In the context of discussions about important issues such as human rights abuses, armed conflicts or acts of terrorism, we allow graphic content (with some limitations) to help people to condemn and raise awareness about these situations. 
The Violent and Graphic Content policy states that “imagery that shows the violent death of a person or people by accident or murder” will be placed behind a disturbing content warning screen. The “do not post” section of the rules explains that users cannot post sadistic remarks towards imagery that requires a warning screen under the policy. It also states that content will be removed if there are “visible innards.” Meta’s Bullying and Harassment policy rationale explains that it removes a variety of content “because it prevents people from feeling safe and respected on Facebook.” Under Tier 4 of the specific rules, the company prohibits content that “praises, celebrates or mocks the death or serious injury” of private individuals. Meta’s Dangerous Individuals and Organizations policy rationale explains that Meta prohibits “content that praises, substantively supports or represents events that Facebook designates as violating violent events – including terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence.” Under Tier 1 of the specific rules, Meta removes any praise of such events. III. Meta’s values: Meta's values are outlined in the introduction to Facebook's Community Standards and the company has confirmed that these values apply to Instagram. The value of “Voice” is described as “paramount”: The goal of our Community Standards is to create a place for expression and give people a voice. Meta wants people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits “Voice” in the service of four other values, three of which are relevant here: Safety: We're committed to making Facebook a safe place. We remove content that could contribute to a risk of harm to the physical security of persons. Content that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. Privacy: We're committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, choose how and when to share on Facebook and connect more easily. Dignity: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. IV. International human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards: 5. User submissions In their statement to the Board, the user explained that they shared the video to raise awareness of the attack and to let the world know what was happening in Nigeria. 6. Meta’s submissions Meta explained in its rationale that, under the Violent and Graphic Content policy, imagery, including video, that shows the violent death of people is usually placed behind a warning screen that indicates it may be disturbing. Adult users may click through to view the content, whereas minors do not have that option. However, Meta also explained that when such content is accompanied by sadistic remarks, it is removed. 
According to Meta, this is to stop people using the platforms to glorify violence or celebrate the suffering of others. Meta confirmed that, without a caption, the video would be permitted on Instagram behind a disturbing content warning screen. If the video had included “visible innards,” as other videos of the same incident had, it would be removed under the Violent and Graphic Content policy without the need for sadistic remarks. Initially, Meta told the Board that in this case the user was not notified of the warning screen, or the policy used to apply it, because of a technical error. However, after further questioning from the Board, Meta disclosed that while Facebook users generally receive notification of the addition of a warning screen and the reason, Instagram users receive no notification. Meta explained that its internal guidance for moderators, the Known Questions, define sadistic remarks as those that “are enjoying or deriving pleasure from the suffering/humiliation of a human or animal.” The Known Questions provide examples of remarks that qualify as sadistic, divided into those that show an “enjoyment of suffering” and “humorous responses.” Meta also confirmed that sadistic remarks can be expressed through hashtags as well as emojis. In its analysis of the hashtags used in this case, Meta explained that the reference to the sound of gunfire was a “humorous response” to violence that made light of the June 5 terror attack. Meta explained the same hashtag is also used to market weapons. Meta also stated that the gunfire hashtag, as well as the hashtag referring to individuals who collect firearms and firearm paraphernalia, “could be read as glorifying violence and minimizing the suffering of the victims by invoking humor and speaking positively about the weapons and gear used to perpetrate their death.” Meta also explained that the hashtag referring to military simulations compared the attack to a simulation, “minimizing the actual tragedy and real-world harm experienced by the victims and their community.” Meta also stated that the hashtags referring to “airsoft” compared the attack to a game in a way that glorifies violence as something done for pleasure. Meta explained that the user’s caption asserting that they do not support violence and that the attack was a sad day “do not clearly indicate that they are sharing the video to raise awareness of the attack.” The company also clarified that, even if the user showed intent to raise awareness, the use of “sadistic hashtags” would still result in removal. To support this position, Meta explained that some users attempt to evade moderation by including deceptive or contradictory language in their posts. Meta distinguished this from the Board’s decision in the “Sudan graphic video” case ( 2022-002-FB-FBR ) where the user made clear their intent to raise awareness while sharing disturbing content. In response to the Board’s questions, Meta informed the Board that the user included the same hashtags in most of their recent posts. Meta could not determine why this user was repeatedly using the same hashtags. Meta also stated that the user’s post violated the Bullying and Harassment policy which prohibits content that mocks the death of private individuals. In this case, the hashtag referencing the sound of gunfire was deemed to be a humorous response to the violence shown in the video. In response to questions from the Board, Meta also determined that the content violated the Dangerous Individuals and Organizations policy. 
Meta had designated the June 5 attack as a “multiple-victim violence event” and, as a result, any content deemed to praise, substantively support, or represent that event is prohibited under the Dangerous Individuals and Organizations policy. Meta explained that this was in line with its commitments under the Christchurch Call for Action , and that it brought the June 5 attack to the attention of industry partners in the Global Internet Forum to Counter Terrorism . Meta explained that, while it was a “close call,” the content in this case appears to mock the victims of the attack and speak positively about the weapons used, and therefore qualifies as praise of a designated event under its policy. Meta stated that removing the content in this case strikes the appropriate balance between its values. The user’s caption demonstrated a lack of respect for the dignity of the victims, their families, and the community impacted by the attack – all of which outweigh the value of the user’s own voice. In response to the Board’s questions, Meta confirmed that it did not issue any newsworthiness allowances in relation to content containing violating imagery related to the June 5 attack. Finally, Meta explained its actions were consistent with international human rights law, stating that its policy on sadistic remarks is clear and accessible, the policy aims to protect the rights of others, as well as public order and national security, and all actions short of removal would not adequately address the risk of harm. Meta pointed to the European Court of Human Rights decision in Hachette Filipacchi Associes v. France (2007), which held that journalists who published photos of someone’s violent death in a widely distributed magazine “intensified the trauma suffered by the relatives.” Meta also pointed to a 2010 article by Sam Gregory, “Cameras Everywhere: Ubiquitous Video Documentation of Human Rights, New Forms of Video Advocacy, and Considerations of Safety, Security, Dignity and Consent” in the Journal of Human Rights Practice, which explains that “the most graphic violations” such as violent attacks “most easily translate into a loss of dignity, privacy, and agency, and which carries with it the potential for real re-victimization.” Meta noted that its policy is not to remove graphic content, but to place it behind a warning screen, which limits its access to minors. It said that its policy to remove sadistic remarks “goes a step further” because “the value of dignity outweighs the value of voice.” The Board asked Meta 29 questions, 28 of which were answered fully. Meta was unable to answer a question on the percentage of user reports that are closed without review in the Sub-Saharan Africa market. 7. Public comments The Oversight Board considered nine public comments related to this case. One of the comments was submitted from Asia Pacific and Oceania, one from Central and South Asia, one from the Middle East and North Africa, one from Sub-Saharan Africa, and five from the United States and Canada. The submissions covered themes including the need to clarify the Violent and Graphic Content policy, and Nigeria-specific issues that the Board should be aware of while deciding this case. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board looked at the question of whether this content should be restored through three lenses: Meta's content policies, the company's values, and its human rights responsibilities. 
8.1 Compliance with Meta’s content policies The Board analyzed three of Meta’s content policies: Violent and Graphic Content; Bullying and Harassment; and Dangerous Individuals and Organizations. The majority of the Board finds that no policy was violated. I. Content rules Violent and Graphic Content The policy rationale states that Meta “removes content that contains sadistic remarks towards imagery depicting the suffering of humans and animals.” However, it also states that it allows graphic content with some limitations to help people condemn and raise awareness about “important issues such as human rights abuses, armed conflicts or acts of terrorism.” The policy also provides for warning screens to alert people that content may be disturbing, including where imagery shows violent deaths. In the rules immediately under the policy rationale, Meta explains that users cannot post “sadistic remarks towards imagery that is deleted or put behind a warning screen under this policy.” Meta does not provide any further public explanation or examples of sadistic remarks. The Board agrees with Meta that the video at issue in this case shows violent deaths and that, without a caption, it should have a warning screen. Distinct from the content in the Board’s “Sudan graphic video” decision, the video in this case, though depicting bloodied dead bodies, does not show visible innards, which would require removal under the policy. In the “Sudan graphic video” case, the hashtags indicated a clear intent to document human rights abuses, and the Board relied in part on the clear intent of those hashtags and on Meta’s newsworthiness allowance to restore the content. In this case, the Board’s assessment of the content against Meta’s content policies is based in part on the absence of any visible innards, dismemberment or charring in the video footage, as well as the hashtags used. The difference between the majority and minority positions turns on the purpose or meaning that should be attributed to the hashtags in this case. A majority of the Board premises its position on the common use of hashtags by users to promote a post within a certain community, and to associate with others who share common interests and signify relationships. When used in this way, they are not necessarily implying commentary on an image or issue. A majority of the Board finds that the hashtags in the caption are not sadistic, as they are not used in a way that shows the user is “enjoying or deriving pleasure” from the suffering of others. Interpreting the long list of idiosyncratic hashtags as commentary on the video is, in this case, misguided. This distinguishes this case from the Board’s decision in the “Sudan graphic video” case, in which hashtags clearly indicated the user’s intent in sharing a graphic video. The user’s inclusion of hashtags about the game airsoft, as well as those related to firearms and military simulations, should not have been read as “glorifying violence” (under the Dangerous Individuals and Organizations policy), still less as “mocking” (under the Bullying and Harassment policy) or showing that the user was “enjoying or deriving pleasure from the suffering of others” (under the Violent and Graphic Content policy). Many users of social media have and share interests in airsoft, firearms, or military simulation and may use hashtags to connect with others without in any way expressing support for terrorism or violence against individuals. 
The airsoft hashtags are more directly associated with enthusiasm for the game, and are, as a whole, incongruous with the content of the video and commentary the user shared immediately below it. This should have indicated to Meta that the user was trying to raise awareness amongst the people they normally communicate with on Instagram, and to reach others. As Instagram’s design incentivizes the liberal use of hashtags as a means to promote content and connect with new audiences, it is important that Meta is cautious before attributing ill intent to their use. Independent research commissioned by the Board confirmed that the hashtags used in this post are used widely among airsoft and firearm enthusiasts, and Meta did not find that these hashtags had been used as coded mockery to evade detection on its platforms. For the majority, it was also clear that the commentary the user added to the video, after the warning screen was applied, did not indicate they were enjoying or deriving pleasure from the attack. The user stated that the attack represented a “sad day,” and that they do not support violence. Comments on the post further indicated that the user’s followers understood the intent to be awareness raising, like the situation in the “Nazi quote” case. The user’s responses to those comments also showed further sympathy with victims. The Board accepts Meta’s argument that explicit user statements in content that they do not support violence should not always be accepted at face value, as some users may attempt to evade moderation by including them, contrary to the actual purpose of their posts. The Board recalls its finding that it should not be necessary for a user to expressly state condemnation when commenting on the activities of terrorist entities, and that expecting them to do so could severely limit expression in regions where such groups are active (see the Board’s decision in the “Mention of the Taliban in news reporting” case). A minority of the Board concludes that when assessed together, the juxtaposition of shooting-related hashtags against the footage appears sadistic, comparing the murder of those depicted to games and appearing to promote weapons imitating those used in the attack. This would appear sadistic to survivors of the attack and relatives of those deceased, and the potential for re-traumatization would not be reduced by placing the content behind a warning screen. Given the context of terrorist violence in Nigeria, the minority finds that Meta is justified to err on the side of caution where commentary on graphic violence appears sadistic, even if there is a degree of ambiguity. This is especially relevant for content like this video, where specific victims are identifiable as their faces are visible, and where escalating violence or further retaliation against survivors from attackers cannot be ruled out. A minority of the Board also finds that the statements in the caption in this case do not negate the sadistic effect of juxtaposing hashtags associated with gun enthusiasts with a video depicting the horrific aftermath of violence inflicted with guns. While hashtags may serve an associative purpose for members of a community, a minority of the Board believes that it is appropriate, in situations such as this, for Meta to apply its policies in a manner that considers the content from the perspective of survivors and the victims’ families. 
It is also important to consider how Meta can swiftly and consistently enforce its content policies in crisis situations, such as in the aftermath of terrorist acts where imagery quickly spreads across social media. The minority considered it pertinent that a moderator or casual reader would not know that the user had routinely included these hashtags on most of their recent posts. In a fast-moving situation, the minority finds that Meta was correct to interpret the use of firearms-related hashtags as indicating that the user is deriving enjoyment from the suffering depicted. The majority acknowledges that the removal of the content was a reasonable mistake and agrees with Meta that it was “a difficult call.” Nevertheless, the Board's independent analysis (assisted by experts providing contextual information on the shooting, on violence in Nigeria more generally and its relationship to social media, and on the meaning and use of the hashtags) leads the majority to conclude that it is an error to characterize these hashtags as sadistic merely because they are associated with users of firearms. Bullying and Harassment Tier 4 of the Bullying and Harassment policy prohibits content that mocks the death or serious injury of private individuals. For the same reasons set forth in the previous section, a majority of the Board finds that the content is not mockery, as the purpose of the hashtags is not an attempt at humor but an attempt to associate with others; this is confirmed by the responses to the post and the user’s engagement with them. Meta erred by presuming that a string of hashtags is commentary on the shared video. As noted above, that the user was not asking firearms enthusiasts to mock the victims appears to be confirmed by responses to the post expressing shock and sympathy, which Meta confirms were mostly from users in Nigeria, and the user’s engagement with those responses (see the Board decision in the “Nazi quote” case). While the majority agrees it is important to consider the perspectives of survivors and victims’ families, the responses to this content indicate that those perspectives do not necessarily weigh against keeping content on the platform, particularly given the frequency of attacks on Christians in Nigeria. A minority of the Board disagrees. By adding hashtags involving imitation firearms, it also appears that the user was intentionally directing a video depicting the victims of a shooting to firearms enthusiasts. Meta was correct to find that this appears mocking, and it is appropriate for the company to prioritize the perspective of survivors and the victims’ families in making this assessment. Dangerous Individuals and Organizations Tier 1 of the Dangerous Individuals and Organizations policy prohibits content that praises, substantively supports, or represents “multiple-victim violence.” The Board agrees that according to Meta’s definition of “multiple-victim violence,” the June 5 attack qualifies. However, the majority finds that the use of hashtags in the caption is not “praise” of the attack, for the same reasons it was not sadistic. The minority disagrees and finds that, while it is a close call, for the same reasons articulated in the previous sections, the juxtaposition between the hashtags and the content could be viewed as praise of the attack itself. II. Enforcement action Meta initially informed the Board that the user was not sent a message when their content was put behind a warning screen due to a technical issue. 
However, in response to questions from the Board, Meta investigated and determined that Instagram users are not notified when their content is placed behind a warning screen. In this case, the addition of a caption in which the user explicitly states that they do not support violence may have been an attempt to respond to the imposition of the warning screen. Meta should ensure that all users are notified when their content is placed behind warning screens, and told why this action has been taken. The Board notes that the content in this case was removed because the video matched a near-identical video that was mistakenly added to an escalations bank that automatically removes matched content. In the “Colombian police cartoon” case, the Board said Meta must ensure that it has robust systems and screening processes to assess content before it is added to any Media Matching Service banks that delete matches without further review. The Board understands that in the immediate aftermath of a crisis, Meta was likely attempting to ensure that violating content did not spread on its platforms. However, given the multiplying impacts of Media Matching Service banks, controls remain critical. Meta should ensure that all content mistakenly removed due to this wrongful banking is restored and any related strikes are reversed. The Board is concerned that the three user reports of the content were not reviewed in the five days before the content was removed. In response to questions from the Board, Meta explained that this was due to an unknown technical issue which it is investigating. In response to further questions, Meta stated that it is unable to ascertain what percentage of Instagram user reports in Sub-Saharan Africa are closed without review. 8.2 Compliance with Meta’s values The Board concludes that removing the content in this case was inconsistent with Meta’s value of “Voice.” The Board recognizes the competing interests in situations such as the one in this case. The content in this case implicates the dignity and privacy of the victims of the June 5 attack, as well as that of their families and communities. A number of the victims in the video have their faces visible and are likely identifiable. The Board recalls that in its “Sudan graphic video” case and its “Russian poem” case, it called for improvements to Meta’s policy on Violent and Graphic Content to align it with Meta’s values. The Board found the policy’s treatment of graphic content shared to “raise awareness” was insufficiently clear. In a number of cases, the Board has found that warning screens can be appropriate mechanisms to balance Meta’s values of “Voice,” “Privacy,” “Dignity,” and “Safety” (see the Board’s decisions in the “Sudan graphic video” and “Russian poem” cases). The Board is in agreement that, in contexts where civic space and media freedom are illegitimately restricted by the state, as is the case in Nigeria, Meta’s value of “Voice” becomes even more important (see its decision in the “Colombia protests” case). It also agrees that raising awareness of human rights abuses is a particularly important aspect of “Voice,” which can in turn advance “Safety” by ensuring access to information. Warning screens can further this exercise of “Voice,” though they may be inappropriate where the content is not sufficiently graphic, as they significantly reduce reach and engagement with content (see the Board’s decision in the “Russian poem” case). 
For essentially the same reason laid out in the preceding section, the majority and minority reached different conclusions about what Meta’s values require for interpreting the hashtags added to the video caption. The majority notes that there is a particular need to protect “Voice” where content draws attention to serious human rights violations and atrocities, including attacks on churches in Nigeria. The majority finds that these hashtags do not contradict the user’s stated sympathy for the victims, expressed in the caption, and that their use is consistent with the user’s efforts to raise awareness. As the caption is not “sadistic,” it is consistent with Meta’s values to restore the content with an age-gated warning screen. While the majority acknowledges that adding a warning screen may impact “Voice” by limiting the reach of content raising awareness of human rights abuses, given the identifiability of the victims, the screen is required to properly balance the values of “Dignity” and “Safety.” The minority finds removal of the content justified to protect the “Dignity” and “Safety” of the victims’ families and survivors, who are at high risk of re-traumatization from exposure to content that appears to be providing sadistic and mocking commentary on the killings of their loved ones. That several victims’ faces are visible and identifiable in the video and are not blurred is pertinent. In respect of “Voice,” the minority finds it relevant that similar content without hashtags was shared on the platform behind a warning screen, and it remained possible for this user to share similar content without firearm-related hashtags. Meta’s removal of this content therefore did not excessively hinder the efforts of the community in Nigeria to raise awareness of, or seek accountability for, these atrocities. 8.3 Compliance with Meta’s human rights responsibilities A majority of the Board finds that removing the content in this case is inconsistent with Meta’s human rights responsibilities. However, as in the “Sudan graphic video” case, the majority and minority agree that Meta should amend the Violent and Graphic Content policy in order to make clear which policy rules impact content that aims to raise awareness of human rights abuses and violations. Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides broad protection for freedom of expression, including the right to seek and receive information. However, the right may be restricted under certain specific conditions, as evaluated according to a three-part test of legality, legitimacy, and necessity and proportionality. The Board has adopted this framework to analyze Meta’s content policies and enforcement practices. The UN Special Rapporteur on freedom of expression has encouraged social media companies to be guided by these principles when moderating online expression, mindful that regulation of expression at scale by private companies may give rise to concerns particular to that context (A/HRC/38/35, paras. 45 and 70). I. Legality (clarity and accessibility of the rules) The principle of legality requires any restriction on the right to freedom of expression to be clear and accessible, so that individuals know what they can and cannot do (General Comment No. 34, paras. 25 and 26). Lack of specificity can lead to subjective interpretation of rules and their arbitrary enforcement.
The Santa Clara Principles on Transparency and Accountability in Content Moderation, which have been endorsed by Meta, are grounded in ensuring companies’ respect for human rights in line with international standards including freedom of expression. They provide that companies must have “understandable rules and policies,” including “detailed guidance and examples of permissible and impermissible content.” Before addressing each content policy, the Board notes its previous recommendations for Meta to clarify the relationship between the Instagram Community Guidelines and Facebook’s Community Standards (“Breast cancer symptoms and nudity” case, 2020-004-IG-UA-2, and “Öcalan’s isolation” case, 2021-006-IG-UA-10), and urges Meta to complete its implementation of this recommendation as soon as possible. Violent and Graphic Content The Board reiterates its concern that the Violent and Graphic Content policy is insufficiently clear with regard to how users may raise awareness of graphic violence under the policy. In this case, there are further concerns about the content captured under Meta’s definition of “sadistic.” In the “Sudan graphic video” case decision, the Board stated that: [T]he Violent and Graphic Content policy does not make clear how Meta permits users to share graphic content to raise awareness of or document abuses. The rationale for the Community Standard, which sets out the aims of the policy, does not align with the rules of the policy. The policy rationale states that Meta allows users to post graphic content “to help people raise awareness about” human rights abuses, but the policy prohibits all videos (whether shared to raise awareness or not) “of people or dead bodies in non-medical settings if they depict dismemberment.” The Board recommended that Meta amend the policy to specifically allow imagery of people and dead bodies to be shared to raise awareness or document human rights abuses. It also recommended that Meta develop criteria to identify videos shared for that purpose. Meta has stated that it is assessing the feasibility of those recommendations and will conduct a policy development process to determine whether they can be implemented. Meta has also updated the policy rationale “to ensure that it reflects the full range of enforcement actions covered in the policy and adds clarification about the deletion of exceptionally graphic content and sadistic remarks.” However, the Board notes that the rules on what can and cannot be posted under this policy still do not provide clarity on how otherwise prohibited content may be posted to “raise awareness.” The Board also notes that after it publicly announced its selection of this case and sent questions to the company, Meta updated the policy rationale to include a reference to its existing prohibition on “sadistic” remarks. However, the term is still not publicly defined, as the policy simply lists types of content that users cannot make sadistic remarks towards. The Board finds the common usage of the term “sadistic” has connotations of intentional depravity and seriousness, which do not align adequately with Meta’s internal guidance for moderators, the Known Questions. That internal guidance shows that Meta defines “sadistic” broadly, extending it to any humorous response or positive speech about human or animal suffering. This appears to set a lower bar for the removal of content than the public-facing policy communicates.
Bullying and Harassment Under Meta’s Bullying and Harassment policy, the company prohibits content that mocks the death or serious physical injury of private individuals. The Board did not find that the framing of this rule raised legality concerns in this case. Dangerous Individuals and Organizations Under Tier 1 of this policy, Meta prohibits praise of designated “violating violent events,” a category which includes terrorist attacks, “multiple-victim violence, and multiple murders.” The Board notes that Meta does not appear to have a consistent policy regarding when it publicly announces events that it has designated. Without this information, users in many scenarios may not know why their content was removed. II. Legitimate aim Restrictions on freedom of expression should pursue a legitimate aim, which includes the protection of the rights of others, such as the right to privacy of the identifiable victims, including those who are deceased, depicted in this content (General Comment 34, para. 28). The Board has previously assessed the three policies at issue in this case and determined that each pursues the legitimate aim of protecting the rights of others. The Violent and Graphic Content policy was assessed in the “Sudan graphic video” case, the Bullying and Harassment policy was assessed in the “Pro-Navalny protests in Russia” case, and the Dangerous Individuals and Organizations policy was assessed in the “Mention of the Taliban in news reporting” case. III. Necessity and proportionality Restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interests to be protected” (General Comment 34, para. 34). The Board has discussed whether warning screens are a proportionate restriction on expression in the “Sudan graphic video” decision and the “Russian poem” decision. The nature and severity of graphic violence have been determinative in those decisions, and Meta’s human rights responsibilities have at times been in tension with its stated content policies and their application. The “Russian poem” case concerned a picture, taken at a distance, of what appeared to be a dead body. The face was not visible, the person was not identifiable, and there were no visible graphic indicators of violence. In that case, the Board found that a warning screen was not necessary. By contrast, the “Sudan graphic video” case concerned a video showing dismemberment and visible innards, shot at closer range. In that case, the Board found that the content was sufficiently graphic to justify the application of a warning screen. The latter decision relied on the newsworthiness allowance, which permits otherwise violating content; this was necessary because the policy itself was not clear on how it could be applied to permit content raising awareness of human rights violations. In the present case, the Board agrees that, absent the hashtags, a warning screen was necessary to protect the privacy rights of victims and their families, primarily because the victims’ faces are visible and the location of the attack was known. This makes victims identifiable, and more directly engages their privacy rights and the rights of their families. The depictions of death are also significantly more graphic than in the “Russian poem” case, with bloodied bodies shown at much closer range.
However, there is no dismemberment and there are no “visible innards.” If either of these features had been present, the content would have had to be removed or given a newsworthiness allowance to remain on the platform. While a warning screen will reduce both reach and engagement with the content, it is a proportionate measure that respects expression while also respecting the rights of others. The majority of the Board finds that removing the content was not a necessary or proportionate restriction on the user’s freedom of expression, and that it should be restored with a “disturbing content” warning screen. The majority finds that the addition of the hashtags did not increase the risk of harming the privacy rights and dignity of victims, survivors or their families, as substantially similar footage is already on Instagram behind a warning screen. By drastically reducing the number of people who would see the content, the application of a warning screen in this case served to respect the victims’ privacy (as with other instances of similar videos), while also allowing for discussion of events that some states may seek to suppress. In contexts of ongoing insecurity, it is particularly important that users are able to raise awareness of recent developments, document human rights abuses, and promote accountability for atrocities. For a majority of the Board, the caption as a whole, including the hashtags, was not sadistic and would need to have more clearly demonstrated sadism, mockery, or glorification of the violence for removal of the content to be considered necessary and proportionate. A minority of the Board agrees with the majority’s analytical approach and overall view of Meta’s policies in this area, but disagrees with its interpretation of the hashtags, and therefore with the outcome of its human rights analysis. For the minority, removal of the post was in line with Meta’s human rights responsibilities and the principles of necessity and proportionality. When events like this attack occur, videos of this nature frequently go viral. The user in this case had a large number of followers. It is crucial that, in response to incidents like this, Meta acts quickly and at scale, including through collaboration with industry partners, to prevent and mitigate harms to the human rights of victims, survivors and their families. This also serves a broader public purpose of countering the widespread terror that perpetrators of such attacks seek to instill, knowing that social media will amplify their psychological impacts. For the minority, it is therefore less important in human rights terms whether the user in this case primarily intended to use the hashtags to connect with their community or increase their reach. The value of those associations, to the individuals concerned and the broader public, while not insignificant, is far outweighed by the importance of respecting the right to privacy and dignity of the survivors and victims. Victims’ faces are visible and identifiable at close range in the video, in a place of worship, with their bodies covered in blood. The juxtaposition between this and the militaristic hashtags about weapons in the caption is jarring and appears mocking. Exposing victims and their family members to such content would likely re-traumatize them, even if that is not what the posting user intended.
For the minority, this is distinct from the Board’s “Sudan graphic video” case, where the hashtags very clearly indicated intent to document human rights abuses. While explicit statements of intent to raise awareness should not be a policy requirement (see the Board’s “Wampum belt” and “Mention of the Taliban in news reporting” decisions), it is consistent with Meta’s human rights responsibilities to remove content in which hashtags non-critically evoke enthusiasm for weapons alongside identifiable imagery of persons killed by gunfire. In these circumstances, the minority believes Meta should err in favor of removal. 9. Oversight Board decision The Oversight Board overturns Meta’s decision to take down the content, requiring the post to be restored with a “mark as disturbing” warning screen. 10. Policy advisory statement Content policy 1. Meta should review the public-facing language in the Violent and Graphic Content policy to ensure that it is better aligned with the company’s internal guidance on how the policy is to be enforced. The Board will consider this recommendation implemented when the policy has been updated with a definition and examples, in the same way as Meta explains concepts such as “praise” in the Dangerous Individuals and Organizations policy. Enforcement 2. Meta should notify Instagram users when a warning screen is applied to their content and provide the specific policy rationale for doing so. The Board will consider this recommendation implemented when Meta confirms notifications are provided to Instagram users in all languages supported by the platform. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board from an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology, and Memetica, a digital investigations group providing risk advisory and threat intelligence services to mitigate online harms. Return to Case Decisions and Policy Advisory Opinions" ig-pt5wrtlw,UK drill music,https://www.oversightboard.com/decision/ig-pt5wrtlw/,"November 22, 2022",2022,,"Art / Writing / Poetry,Freedom of expression,Governments","Violence and incitement",Overturned,United Kingdom,The Oversight Board has overturned Meta’s decision to remove a UK drill music video clip from Instagram.,71692,11069,"Overturned November 22, 2022 The Oversight Board has overturned Meta’s decision to remove a UK drill music video clip from Instagram.
Topic Art / Writing / Poetry, Freedom of expression, Governments Community Standard Violence and incitement Location United Kingdom Platform Instagram October 2022 - Metropolitan Police Freedom of Information request response January 2023 - Updated Metropolitan Police Freedom of Information request response UK drill music - public comments UPDATE JANUARY 2023: In this case, the Oversight Board submitted a Freedom of Information request to the Metropolitan Police Service (MPS), with questions on the nature and volume of requests the MPS made to social media companies, including Meta, to review or remove drill music content over a one-year period. The MPS responded on October 7, 2022, providing figures on the number of requests sent, and the number that resulted in removals. The MPS response was published in full alongside the decision, and figures it contained were included in the Board's decision, published on November 22, 2022. On January 4, 2023, the MPS contacted the Board to say it had identified errors in its response, and corrected them. Notably, it corrected the figures to: all of the 992 requests [corrected from 286 requests] the Metropolitan Police made to social media companies and streaming services to review or remove content between June 2021 and May 2022 involved drill music; those requests resulted in 879 removals [corrected from 255 removals]; 28 requests related to Meta’s platforms [corrected from 21 requests], resulting in 24 removals [corrected from 14 removals]. The decision contains the original figures prior to the MPS corrections. This update does not change the Oversight Board’s analysis or decision in this case. The updated Freedom of Information response can be found here. Case summary The Oversight Board has overturned Meta’s decision to remove a UK drill music video clip from Instagram. Meta originally removed the content following a request from the Metropolitan Police. This case raises concerns about Meta’s relationship with law enforcement, which has the potential to amplify bias. The Board makes recommendations to improve respect for due process and transparency in these relationships. About the case In January 2022, an Instagram account that describes itself as publicizing British music posted content highlighting the release of the UK drill music track “Secrets Not Safe” by Chinx (OS), including a clip of the track’s music video. Shortly after, the Metropolitan Police, which is responsible for law enforcement in Greater London, emailed Meta requesting that the company review all content containing “Secrets Not Safe.” Meta also received additional context from the Metropolitan Police. According to Meta, this covered information on gang violence, including murders, in London, and the Police’s concern that the track could lead to further retaliatory violence. Meta’s specialist teams reviewed the content. Relying on the context provided by the Metropolitan Police, they found that it contained a “veiled threat” by referencing a 2017 shooting, which could potentially lead to further violence. The company removed the content from the account under review for violating its Violence and Incitement policy. It also removed 52 pieces of content containing the track “Secrets Not Safe” from other accounts, including Chinx (OS)’s. Meta’s automated systems later removed the content another 112 times. Meta referred this case to the Board. The Board requested that Meta also refer Chinx (OS)’s post of the content.
However, Meta said that this was impossible, as removing the “Secrets Not Safe” video from Chinx (OS)’s account ultimately led to the account being deleted, and its content was not preserved. Key findings The Board finds that removing this content does not align with Meta’s Community Standards, its values, or its human rights responsibilities. Meta lacked sufficient evidence to conclude that the content contained a credible threat, and the Board’s own review did not uncover evidence to support such a finding. In the absence of such evidence, Meta should have given more weight to the content’s artistic nature. This case raises concerns about Meta’s relationships with governments, particularly where law enforcement requests lead to lawful content being reviewed against the Community Standards and removed. While law enforcement can sometimes provide context and expertise, not every piece of content that law enforcement would prefer to have taken down should be taken down. It is therefore critical that Meta evaluates these requests independently, particularly when they relate to artistic expression from individuals in minority or marginalized groups for whom the risk of cultural bias against their content is acute. The channels through which law enforcement makes requests to Meta are haphazard and opaque. Law enforcement agencies are not asked to meet minimum criteria to justify their requests, and interactions therefore lack consistency. The data Meta publishes on government requests is also incomplete. The lack of transparency around Meta’s relationship with law enforcement creates the potential for the company to amplify bias. A freedom of information request made by the Board revealed that all of the 286 requests the Metropolitan Police made to social media companies and streaming services to review or remove musical content from June 2021 to May 2022 involved drill music, which is particularly popular among young Black British people. Of these, 255 requests resulted in platforms removing content; 21 related to Meta’s platforms, resulting in 14 content removals. The Board finds that, to honor its values and human rights responsibilities, Meta’s response to law enforcement requests must respect due process and be more transparent. This case also raises concerns around access to remedy. As part of this case, Meta told the Board that when the company takes content decisions “at escalation,” users cannot appeal to the Board. A decision taken “at escalation” is made by Meta’s internal specialist teams. According to Meta, all decisions on law enforcement requests are made “at escalation” (unless the request is made through a publicly available “in-product reporting tool”), as are decisions on certain policies that can only be applied by Meta's internal teams. This situation adds to concerns raised in preparing the Board’s policy advisory opinion on cross-check, where Meta revealed that, between May and June 2022, around a third of content in the cross-check system could not be escalated to the Board. Meta has referred escalated content to the Board on several occasions, including this one. However, the Board is concerned that users have been denied access to remedy when Meta makes some of its most consequential content decisions. The company must address this problem urgently. The Oversight Board's decision The Oversight Board overturns Meta's decision to remove the content. The Board recommends that Meta: *Case summaries provide an overview of the case and do not have precedential value. 1.
Decision summary The Oversight Board overturns Meta's decision to remove a clip from Instagram announcing the release of a UK drill music track by the artist Chinx (OS). Meta referred this case to the Board because it raises recurring questions about the appropriate treatment of artistic expression that references violence. It involves a balance between Meta’s values of “Voice,” in the form of artistic expression, and “Safety.” The Board finds that Meta lacked sufficient evidence to independently conclude that the content contained a credible veiled threat. In the Board’s assessment, the content should not have been removed in the absence of stronger evidence that it could lead to imminent harm. Meta should have more fully taken into account the artistic context of the content when assessing the credibility of the supposed threat. The Board finds that the content did not violate Meta’s Community Standard on Violence and Incitement, and its removal did not sufficiently protect Meta’s value of “Voice,” or meet Meta’s human rights responsibilities as a business. This case raises broader concerns about Meta’s relationship with governments, including where law enforcement requests Meta to assess whether lawful content complies with its Community Standards. The Board finds the channels through which governments can request such assessments to be opaque and haphazard. The absence of transparency and adequate safeguards around Meta’s relationship with law enforcement creates the potential for the company to exacerbate abusive or discriminatory government practices. This case also reveals that, for content moderation decisions Meta takes at escalation, users have been wrongly denied the opportunity to appeal to the Oversight Board. Decisions “at escalation” are those made by Meta’s internal, specialist teams rather than through the “at scale” content review process. This lack of appeal availability adds to concerns about access to the Board that will be addressed in the Board’s upcoming policy advisory opinion on cross-check. Combined, these concerns raise serious questions about users’ right of access to remedy when Meta makes some of its most consequential content decisions at escalation. The company must address this problem urgently. 2. Case description and background In January 2022, an Instagram account that describes itself as promoting British music posted a video with a short caption on its public account. The video was a 21-second clip of the music video for a UK drill music track called “Secrets Not Safe” by the rapper Chinx (OS). The caption tagged Chinx (OS) as well as an affiliated artist and highlighted that the track had just been released. The video clip is of the second verse of the track and ends by fading out to a black screen with the text “OUT NOW.” Drill is a subgenre of rap music popular in the UK, in particular among young Black people, with many drill artists and fans in London. The genre is hyper-local: drill collectives can be associated with areas as small as single housing estates. It is a grassroots genre, widely performed in English in an urban context, with a thin line separating professional and amateur artists. Artists often speak in granular detail about ongoing violent street conflicts, using a first-person narrative with imagery and lyrics that depict or describe violent acts. Potential claims of violence and performative bravado are considered to be part of the genre – a form of artistic expression where fact and fiction can blur.
Through these claims, artists compete for relevance and popularity. Whether drill music causes real-world violence is disputed, as is the reliability of evidential claims made in the debate. In recent years, recorded incidents of gun and knife violence in London have been high, with disproportionate effects on Black communities. The lyrics of the track excerpt are quoted below. The Board has added meanings for non-standard English terms in square brackets and redacted the names of individuals: Ay, broski [a close friend], wait there one sec (wait). You know the same mash [gun] that I showed [name redacted] was the same mash that [name redacted] got bun [shot] with. Hold up, I’m gonna leave somebody upset (Ah, fuck). I’m gonna have man fuming. He was with me putting loud on a Blue Slim [smoking cannabis] after he heard that [name redacted] got wounded. [Name redacted] got bun, he was loosing (bow, bow) [he was beaten]. Reverse that whip [car], confused him. They ain’t ever wheeled up a booting [a drive-by shooting] (Boom). Don’t hit man clean, he was moving. Beat [shoot] at the crowd, I ain’t picking and choosing (No, no). Leave man red [bleeding], but you know [track fades out]. Shortly after the video was posted, Meta received a request via email from the UK Metropolitan Police to review all content that included this Chinx (OS) track. Meta says law enforcement provided context of gang violence and related murders in London, and flagged that elements of the full track might increase the risk of retaliatory gang violence. Upon receiving the request from the Metropolitan Police, Meta escalated the content review to its internal Global Operations team and then to its Content Policy team. Meta’s Content Policy team makes removal decisions following input from subject matter experts and after specialized contextual reviews. Based on the additional context the Metropolitan Police provided, Meta took the view that the track excerpt referenced a shooting in 2017. It determined that the content violated the Violence and Incitement policy, specifically the prohibition on “coded statements where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit.” Meta believed that the track’s lyrics “acted as a threatening call to action that could contribute to a risk of imminent violence or physical harm, including retaliatory gang violence.” Meta therefore removed the content. Hours later, the content creator appealed the decision to Meta. Usually, users cannot appeal to Meta against content decisions the company takes through its escalation process. This is because user appeals to Meta are not routed to escalation teams but to at-scale reviewers. Without access to the additional context available at escalation, those reviewers would be at increased risk of making errors, and incorrectly reversing decisions made at escalation. In this case, however, due to human error, the user was able to appeal the escalated decision to Meta’s at-scale reviewers. An at-scale reviewer assessed the content as non-violating and restored it to Instagram. Eight days later, following a second request from the UK Metropolitan Police, Meta removed the content through its escalation process again. The account in this case has fewer than 1,000 followers, the majority of whom live in the UK. The user received notifications from Meta both times their content was removed but was not informed that the removals were initiated following a request from UK law enforcement.
Alongside removing the content under review, Meta identified and removed 52 pieces of content featuring the “Secrets Not Safe” track from other accounts, including Chinx (OS)’s account. Meta added the content at issue in this case to the Violence and Incitement Media Matching Service bank, marking it as violating. These banks automatically identify matching content and can remove it or prevent it from being uploaded to Facebook and Instagram. Adding the video to the Media Matching Service bank resulted in an additional 112 automated removals of matching content from other users. Meta referred this case to the Board. The content it referred was posted from an account that was not directly associated with Chinx (OS). Because Chinx (OS)’s music is at the center of this case, the Board requested Meta to additionally refer its decision to remove Chinx (OS)’s own post featuring the same track. Meta explained this was not possible, because the removal of the video from the artist’s account had resulted in a strike. This caused the account to exceed the threshold for being permanently disabled. After six months, and prior to the Board’s request for the additional referral, Meta permanently deleted Chinx (OS)’s disabled account, as part of a regular, automated process, despite the Oversight Board’s pending decision on this case. The action to delete Chinx (OS)’s account, and the content on it, was irreversible, making it impossible to refer the case to the Board. 3. Oversight Board authority and scope The Board has authority to review decisions that Meta refers for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include policy advisory statements with non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When the Board identifies cases that raise similar issues, they may be assigned to a panel together. In this case, the Board requested that Meta additionally refer content featuring the same track posted by the artist Chinx (OS). In the Board’s view, the difficulty of balancing safety and artistic expression could have been better addressed by Meta referring Chinx (OS)’s post of his music from his own account. This would also have allowed the Board to issue a binding decision in respect of the artist’s post. Meta’s actions in this case have effectively excluded the artist from formally participating in the Board’s processes, and have removed Chinx (OS)'s account from the platform without access to remedy. On several occasions, including this one, Meta has referred content that was escalated within Meta to the Board (see, for example, the “Tigray Communication Affairs Bureau” case, and the “Former President Trump’s suspension” case). When Meta takes a content decision “at escalation,” users are unable to appeal the decision to the company or to the Board. As Meta is able to refer cases decided at escalation to the Board, users who authored or reported the content should equally be entitled to appeal to the Board. Decisions made at escalation are likely to be among the most significant and difficult, where independent oversight is at its most important. 
The Board’s governing documents provide that all content moderation decisions that are within scope and not excluded by the Bylaws (Bylaws Article 2, Sections 1.2, 1.2.1), and that have exhausted Meta’s internal appeal process (Charter Article 2, Section 1), are eligible for people to appeal to the Board. 4. Sources of authority The Oversight Board considered the following authorities and standards: I. Oversight Board decisions: II. Meta’s content policies: This case involves Instagram's Community Guidelines and Facebook's Community Standards. The Instagram Community Guidelines say, under the heading “Respect other members of the Instagram Community,” that the company wants to “foster a positive, diverse community.” The company removes content that contains “credible threats,” with those words linked to the Facebook Violence and Incitement Community Standard. The Guidelines further set out that: Serious threats of harm to public and personal safety aren’t allowed. This includes specific threats of physical harm as well as threats of theft, vandalism and other financial harm. We carefully review reports of threats and consider many things when determining whether a threat is credible. The policy rationale for the Facebook Community Standard on Violence and Incitement states that it “aim[s] to prevent potential offline harm that may be related to content on Facebook” and that, while Meta “understand[s] that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, we remove language that incites or facilitates serious violence.” It further provides that Meta removes content, disables accounts and works with law enforcement “when [it] believe[s] there is a genuine risk of physical harm or direct threats to public safety.” Meta states it tries “to consider the language and context in order to distinguish casual statements from content that constitutes a credible threat.” Under a subheading stating that Meta requires “additional information and/or context to enforce,” the Community Standard provides that users should not post coded statements “where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit.” Those include “[r]eferences [to] historical or fictional incidents of violence” and where “[l]ocal context or subject matter expertise confirm that the statement in question could be threatening and/or could lead to imminent violence or physical harm.” III. Meta’s values: The value of “Voice” is described as “paramount”: The goal of our Community Standards is to create a place for expression and give people a voice. Meta wants people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. Meta limits “Voice” in the service of four values. “Safety” and “Dignity” are the most relevant in this case: Safety: We're committed to making Facebook a safe place. We remove content that could contribute to a risk of harm to the physical security of persons. Content that threatens people has the potential to intimidate, exclude or silence others and isn't allowed on Facebook. Dignity: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others. IV.
International human rights standards: The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following human rights standards: 5. User submissions Meta referred this case to the Board, and the Board selected it in mid-June 2022. Due to a technical error that has since been fixed, Meta did not successfully notify the user that the Board selected a case concerning content they had posted and did not invite them to submit a statement to the Board. At the end of August, Meta manually notified the user, but the user did not provide a statement within the 15-day deadline. 6. Meta’s submissions Meta explained to the Board that it removed the content because the Instagram post violated its Violence and Incitement policy by containing a veiled threat of violence. Meta argued that its decision comports with international human rights principles because the Community Standards explain that users may not post veiled threats of violence; because the application of this policy serves the legitimate aim of protecting the rights of others and public order; and because the removal of the content at issue was necessary and proportionate to accomplish those aims. Meta deems the decision particularly challenging because its Violence and Incitement policy does not have explicit exceptions for humor, satire, or artistic expression. The policy requires Meta to assess whether a threat is credible or merely a show of bravado or provocative, but ultimately nonviolent, expression. Meta considers this case to be significant because it raises recurring questions about the appropriate treatment of artistic expression that references violence. This assessment involves balancing its values of “Voice” and “Safety.” Meta told the Board that, when a creator’s work includes threats of violence or statements that could contribute to a risk of violence, it “err[s] on the side of removing it from our platforms.” Meta’s internal guidance for moderators, the Implementation Standards, sets out the “veiled threats analysis,” which Meta uses to determine the existence of veiled threats under the Violence and Incitement Community Standard. This explains that for content to qualify as a veiled threat there must be both a primary signal (such as a reference to a past act of violence) and a secondary signal. Secondary signals include local context or subject matter expertise indicating that the content is potentially threatening, or confirmation by the target that they view the content as threatening. According to Meta, local NGOs, law enforcement agencies, Meta’s Public Policy team, or other local experts provide secondary signals. The “veiled threats analysis” is only performed “at escalation,” meaning it cannot be performed by “at-scale” reviewers. It can only be conducted by Meta’s internal teams. In this case, Meta found the content contained a primary signal in referring to the rapper’s participation in an earlier shooting and indicating an intent to respond further.
According to Meta, the secondary signal was the UK Metropolitan Police’s confirmation that it viewed the content as potentially threatening or likely to contribute to imminent violence or physical harm. Law enforcement did not allege the content violated local law. Meta says it assesses law enforcement reports alongside political, cultural and linguistic expertise from Meta’s internal teams. Meta argued that drill music has often been connected to violence, citing a Policy Exchange report claiming that approximately one-quarter of London’s gang murders have been linked to drill music. However, Meta later acknowledged this report faced “some criticism from criminologists.” Meta quoted an open letter, signed by 49 criminologists, social scientists and professional organizations, which “dismiss[ed] the report as factually inaccurate, misleading and politically dangerous” and criticized it for “committing grave causation-correlation errors.” Certain Meta policies can only be enforced by Meta’s internal teams, through its internal escalation process. This is known as being decided “at escalation.” Meta provided a list of around 40 “escalation-only” rules in nine policy areas. The Board asked Meta how, in this case, an at-scale reviewer was able to restore a post that had been removed at escalation. Meta responded that “exceptionally in this case and due to a human error, the content creator was able to appeal the initial removal decision.” Meta disclosed that where content is actioned through Meta’s internal escalation process, it is not usually possible to appeal for a second examination by the company. This is to prevent “at-scale” reviewers from reversing decisions made “at escalation” without access to the context available in escalated review. At the Board’s request, Meta provided a briefing on “Government requests to review content for Community Standard violations.” Meta explained to the Board that governments can make requests to the company to remove content by email, post, or the help center, as well as through in-product reporting tools, which send the content to automated or “at scale” review. Meta explained that when it receives a request from law enforcement made outside the in-product tools, the content goes through an internal escalation process. Regardless of how an escalation is received, there is a standard scope and prioritization process to assess the urgency, sensitivity and complexity of the escalation. This process determines which of Meta’s teams will handle the request and the position of the request in the queue. Following a review request from a third party, including a government, Meta’s Global Operations team manually completes a form with signals. The prioritization model and pillars take into consideration signals related to legal, reputational, and regulatory risks, impact on the physical safety of Meta's community, the scope and viewership of the issue at hand, and the time sensitivity of the issue. These prioritization pillars are ranked in order of importance and a priority score is automatically calculated based on the signals entered. The model is dynamic and responsive to changes in the environment (e.g., offline events) and refinements Meta introduces. High-priority escalations, such as in this case, are sent to a specialist team within Meta’s Global Operations team. The team reviews content against the Community Standards, investigates, reaches out to stakeholders for additional assessment where necessary, and makes an enforcement decision.
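The intake and prioritization process described here lends itself to a simple illustration: signals are entered against ranked pillars and combined into a score that determines routing. The sketch below is a hypothetical weighted-sum model only; the pillar weights, the threshold and the function names are invented for illustration and are not Meta's actual prioritization model.

```python
# Hypothetical illustration of a priority score built from ranked pillars.
# Pillar names paraphrase the decision's description; the weights and the
# routing threshold are invented for illustration only.
PILLAR_WEIGHTS = {
    "legal_reputational_regulatory_risk": 5.0,  # ranked most important
    "physical_safety_impact": 4.0,
    "scope_and_viewership": 3.0,
    "time_sensitivity": 2.0,                    # ranked least important
}

def priority_score(signals: dict) -> float:
    """Combine 0-1 signals from the intake form into a single score."""
    return sum(
        weight * max(0.0, min(1.0, signals.get(pillar, 0.0)))
        for pillar, weight in PILLAR_WEIGHTS.items()
    )

def route_escalation(signals: dict, high_priority_threshold: float = 9.0) -> str:
    """Send high-priority escalations to a specialist queue, others to standard review."""
    if priority_score(signals) >= high_priority_threshold:
        return "specialist_team_queue"
    return "standard_queue"

if __name__ == "__main__":
    # Example: a law-enforcement report citing imminent-violence concerns.
    signals = {"physical_safety_impact": 0.9, "time_sensitivity": 0.8,
               "scope_and_viewership": 0.4}
    print(priority_score(signals), route_escalation(signals))
```

On this kind of model, the context a requester supplies moves the signal values and therefore the score, even if the requester's identity is not itself an input, which is consistent with Meta's explanation below.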
In some cases, including this one, content is then escalated to the Content Policy team for input. The action taken is communicated to the external person(s) who submitted the original request. Meta shared that its human rights team does not typically weigh in on individual applications of the veiled threats framework. Meta states that when it receives content removal requests from law enforcement agencies, it evaluates the content against the Community Standards in the same way it would for any other piece of content, regardless of how it was detected or escalated. Meta claims this means that requests are treated the same way in all countries. Meta explained that the priority score is affected by the context provided in relation to the pillars. While the identity of the requester does not directly affect prioritization, the context a requester provides does. To appeal to the Oversight Board, a user must have an appeal reference ID. Meta issues these IDs as part of its appeals process. In response to the Board’s questions, Meta confirmed it does this for content decisions that are eligible for internal appeal and after a second review has been exhausted. Therefore, in situations where Meta acts on content without allowing for an appeal, there is no opportunity to appeal to the Board. This is usually the case for decisions taken “at escalation,” such as those reviewed following government requests (made outside of the in-product reporting tools), and content reviewed under “escalation-only” policies. The Board formally submitted to Meta a total of 26 written questions, including three rounds of follow-up questions. These numbers exclude questions the Board asked during the in-person briefing Meta provided to the Board on how it handles government requests. Twenty-three of the written questions were answered fully and three requests were not fulfilled by Meta. Meta declined to provide data on law enforcement requests globally and in the UK focusing on “veiled threats,” drill music, or the proportion of requests resulting in removal for Community Standard violations. Further, Meta declined to provide a copy of the content review requests received from the Metropolitan Police in this case. However, the Metropolitan Police provided the Board with a copy of the first request sent to Meta, on condition that the content of the request remain confidential. 7. Public comments The Oversight Board considered ten public comments related to this case. One comment was submitted from each of the following regions: Europe; Middle East and North Africa; and Latin America and Caribbean. Two comments were submitted from Central and South Asia, and five from the United States and Canada. The Metropolitan Police provided a comment; it understood that the Board would disclose this fact, but did not give permission for the comment to be published. The Board requested the Metropolitan Police revisit that decision in the interests of transparency, but it declined. The Metropolitan Police indicated it may be able to provide consent at a later point in time. If that happens, the Board will share the public comment. The submissions covered the following themes: racial bias and disproportionate over-targeting of Black communities by law enforcement; the importance of socio-cultural context in assessing artistic expression; causal links between drill music and violence; and handling government-issued takedown requests.
The Board filed a Freedom of Information Act request (Reference No: 01/FOI/22/025946) to the Metropolitan Police to provide information about its policies and practice on making requests to social media and streaming companies to review and/or remove content. Its responses to that request inform the Board’s analysis below. To read public comments submitted for this case, please click here. To read the Metropolitan Police’s response to the Board’s Freedom of Information request, please click here. 8. Oversight Board analysis The Board looked at the question of whether this content should be restored through three lenses: Meta’s content policies, the company’s values and its human rights responsibilities. 8.1 Compliance with Meta’s content policies I. Content rules The Board finds that Meta’s removal of the content in this case did not comply with the Violence and Incitement Community Standard. Detecting and assessing threats at scale is challenging, in particular where they are veiled, and when specific cultural or linguistic expertise may be required to assess context (see the Oversight Board decisions “Protest in India against France” and “Knin cartoon”). Artistic expression can contain veiled threats, as can any other medium. The challenge of assessing the credibility of veiled threats in art is especially acute. Messages in art can be deliberately obscure in their intent and open to interpretation. Statements referencing violence can be coded, but also can be of a performative or satirical nature. They may even characterize certain art forms, such as drill music. Meta acknowledged these challenges when referring this case to the Board. The Board agrees with Meta that the lyrics in the video clip did not contain an overt threat of violence under the Violence and Incitement Community Standard. It is a more challenging question whether the video contains a veiled threat under the same policy. The policy rationale outlines that Meta seeks to “prevent potential offline harm,” and that language “that incites or facilitates serious violence” and that “poses a genuine risk of physical harm or direct threats to public safety” will be removed. An emphasis is placed on distinguishing credible threats from non-credible threats. Establishing a causal relationship between language and risk of harm requires a resource-intensive analysis. For content to constitute a “veiled or implicit” threat under the Violence and Incitement Community Standard, the method of violence or harm need not be clearly articulated. Meta uses its “veiled threats analysis,” set out in its non-public guidance to moderators, the internal Implementation Guidance, to assess whether a veiled threat is present. This requires that both a primary and secondary signal are identified for content to qualify as a veiled threat. The Board agrees with Meta that a primary signal is present in this case. The lyrics contain a reference to a “historical” incident of violence. Additional context is required to understand that this is a 2017 shooting between two rival gangs in London. For Meta, the excerpt in its entirety referred to these events. The Board’s conclusion that a primary signal is present, in agreement with Meta, is based on two independent third-party analyses of the lyrics that the Board sought. The Board notes that these analyses differed in substantial ways from Meta’s interpretation.
For example, Meta interpreted the term “mash” to mean “cannabis,” whereas experts the Board consulted interpreted this term to mean “gun.” Meta interpreted the term “bun” to mean “high,” whereas the Board’s experts interpreted this to mean “shot.” Identifying a veiled or implicit threat also requires a secondary signal showing that the reference “could be threatening and/or could lead to imminent violence or physical harm” [emphasis added]. That signal depends on local context, often provided by third parties such as law enforcement, confirming the content “is considered potentially threatening, or likely to contribute to imminent violence or physical harm” [emphasis added]. In this case, the UK Metropolitan Police provided this confirmation. Meta determined that Chinx (OS)’s reference to the 2017 shooting was potentially threatening, or likely to contribute to imminent violence or physical harm, and that it qualified as a veiled threat. Meta’s contextual assessment included the specific rivalry between gangs associated with the 2017 shooting, as well as the broader context of inter-gang violence and murders in London. It is appropriate that Meta draws upon local subject matter expertise to evaluate the relevant context and credibility of veiled threats. The Board notes that there is understandable anxiety around high levels of gun and knife violence in recent years in London, with disproportionate effects on Black communities. Law enforcement can sometimes provide such context and expertise. But not every piece of content that law enforcement would prefer to have taken down – and not even every piece of content that has the potential to lead to escalating violence – should be taken down. It is therefore critical that Meta evaluate these requests itself and reach an independent conclusion. The company says it does this. Independence is crucial, and the evaluation should require specific evidence of how the content could cause harm. This is particularly important in counteracting the potential for law enforcement to share information selectively, and the limited opportunity to gain counter-perspectives from other stakeholders. For artistic expression from individuals in minority or marginalized groups, the risk of cultural bias against their content is especially acute. In this case, Meta has not demonstrated that the lyrics in the content under review constituted a credible threat or risk of imminent harm, nor has the Board’s own review uncovered evidence to support such a finding. To establish that the reference to a shooting five years ago presents a risk of harm today requires additional probative evidence beyond the reference itself. The fact that the track references events that involve gangs engaged in a violent rivalry does not mean artistic references to that rivalry necessarily constitute a threat. In the absence of sufficient detail to make that causal relationship clearer, such as evidence of past lyrics materializing into violence or a report from the target of the purported threat that they were endangered, greater weight should have been afforded to the artistic nature of the alleged threat when evaluating its credibility. The fact that performative bravado is common within this musical genre was relevant context that should have informed Meta’s analysis of the likelihood that the track’s reference to past violence constituted a credible present threat.
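As described in this decision, the veiled threats framework is a conjunction: content qualifies only if a primary signal and a secondary signal are both present. The sketch below expresses that rule directly; the type names and fields are hypothetical, and the docstring note about the signal's source reflects the Board's concern that one reporting party should not supply both the report and all the context needed to assess it.

```python
from dataclasses import dataclass

@dataclass
class PrimarySignal:
    present: bool
    description: str = ""    # e.g. a reference to a past act of violence

@dataclass
class SecondarySignal:
    confirms_threat: bool
    source: str = ""          # e.g. "law_enforcement", "local_ngo", "target"

def qualifies_as_veiled_threat(primary: PrimarySignal,
                               secondary: SecondarySignal) -> bool:
    """Content qualifies as a veiled threat only when both signals are present,
    per the framework described in this decision. When the same party supplies
    both the report and the confirming context, the Board's analysis suggests
    seeking further corroboration before relying on this result."""
    return primary.present and secondary.confirms_threat

# Example reflecting this case: primary signal found in the lyrics,
# secondary signal supplied by a law-enforcement report.
print(qualifies_as_veiled_threat(
    PrimarySignal(present=True, description="reference to a 2017 shooting"),
    SecondarySignal(confirms_threat=True, source="law_enforcement"),
))
```

The sketch makes the Board's structural point visible: when one party is the source of both inputs, the check collapses into that party's own assessment unless additional evidence is sought.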
Third-party experts informed the Board that a line-by-line lyrical analysis to determine evidence of past wrongdoing or risk of future harm is notoriously inaccurate and that verifying supposedly factual statements within drill lyrics is challenging (Digital Rights Foundation, PC-10618). Public comments (e.g., Electronic Frontier Foundation, PC-10971) criticized law enforcement’s policing of lawful drill music, and research Meta cited in its submissions has been widely criticized by criminologists, as the company has acknowledged. In the Board’s view, this criticism should also have factored into Meta’s analysis, and prompted it to request additional information from law enforcement and/or from additional parties in relation to causation before removing the content. The Board further notes that the full Chinx (OS) track, which was excerpted in this case, remains available on music streaming platforms accessible in the UK, and the Board has seen no evidence that this has led to any act of violence. This context was not available to Meta at the time of its initial decision in this case, but it is nonetheless relevant to the Board’s independent review of the content. The Board acknowledges the deeply contextual nature of this kind of content decision that Meta has to make, and the associated time pressure when there may be risks of serious harm. Reasonable people might differ, as in this case, on whether a given piece of content constitutes a veiled threat. Still, the lack of transparency on Meta’s decisions to remove content that result from government requests makes it difficult to evaluate whether Meta’s mistake in an individual case reflects reasonable disagreement or is indicative of potential systemic bias that requires additional data and further investigation. Meta insists that its “veiled threats analysis” is independent, and the Board agrees that it should be, but in this context Meta’s say-so is not enough. Meta states that it evaluates the content against the Community Standards in the same way it would for any other piece of content, regardless of how it is detected. The “veiled threats analysis” places law enforcement in the position of both reporting the content (i.e., flagging a primary signal) and providing all the contextual information Meta needs to assess potential harm (i.e., providing the local knowledge needed for the secondary signal). While there may be good reasons to adopt a prioritization framework that ensures reports from law enforcement are assessed swiftly, that process needs to be designed to ensure that such reports include sufficient information to make independent assessment possible, including seeking further input from the requesting entity or other parties where necessary. The Board distinguishes this decision from its “Knin cartoon” decision. In that case, the additional context to enforce the “veiled threat” rule was the presence, in the cartoon, of hate speech against the group targeted in the prior violent incident. The Board primarily justified the removal based on the Hate Speech Community Standard. The Board’s finding that the content additionally violated the Violence and Incitement Community Standard relied on contextual knowledge to understand the historical incident (Operation Storm) referenced in the post. This contextual knowledge is well known within the region, both to Croatian-language speakers and ethnic Serbs.
This was evidenced by the sheer number of reports the content in that case received (almost 400), compared to the content in this case, which received no reports. Incitement of hatred against an ethnic group was immediately apparent to a casual observer. While understanding many of the references within the content relied on specific contextual knowledge, that knowledge could be gained without relying on external third parties. II. Enforcement action and transparency In this case, one request from law enforcement resulted in 52 manual removals and 112 automated removals of matching pieces of content (between January 28, 2022, and August 28, 2022). It is important to recognize that the actions Meta took in response to the request from the Metropolitan Police impacted not only the owner of the Instagram account in this case, but also Chinx (OS) and many others (see also: “Colombia police cartoon” case). The scale of these removals underscores the importance of due process and transparency around Meta’s relationship with law enforcement and the consequences of actions taken pursuant to that relationship (see also the Oversight Board decisions “Öcalan’s isolation” and “Shared Al Jazeera post”). To address these concerns, there needs to be a clear and uniform process with safeguards against abuse, including auditing; adequate notice to users of government involvement in the action taken against them; and transparency reporting on these interactions to the public. These three aspects are interconnected, and all must be addressed. a. Transparency to the public Meta publishes reports in its transparency center on government requests to remove content based on local law. It also publishes separate reports on governmental requests for user data. There is separate reporting on enforcement against the Community Standards. However, none of these reports differentiates data on content removed for violating content policies following a government request for review. Current transparency data on government removal requests therefore underrepresent the full extent of interactions between Meta and law enforcement on content removals. By focusing on the action Meta takes (removal for violating local law), reporting on government requests excludes all reports received from law enforcement that result in removal for violating content policies. Content reported by law enforcement that violates both local law and the content policies is not included. For this reason, the Board submitted a freedom of information request to the Metropolitan Police to understand more fully the issues in this case. Meta has claimed that transparency around government removal requests based on content policies is of limited use, since governments can (and do) also use in-product reporting tools. These tools do not distinguish between government requests and those made by other users. This case demonstrates the level of privileged access law enforcement has to Meta’s internal enforcement teams, as evidenced by correspondence the Board has seen, and how certain policies rely on interaction with third parties, such as law enforcement, for their enforcement. The way this relationship works for escalation-only policies, as in this case, brings into question Meta’s ability to independently assess government actors’ conclusions that lack detailed evidence. The Board acknowledges Meta has made progress in relation to transparency reporting since the Board’s first decisions addressing this topic. 
This includes conducting a scoping exercise on measuring content removed under the Community Standards following government requests, and contributing to Lumen, a Berkman Klein Center for Internet & Society research project on government removal requests. Further transparency efforts in this area will be immensely valuable to public discussion on the implications of the interactions between governments and social-media companies. b. Intake process for law enforcement requests Although Meta has disclosed publicly how it responds to government requests for takedowns based on local law violations, the channels through which governments can request review for violations of Meta’s content policies remain opaque. This case demonstrates that there are significant flaws in Meta’s system governing law enforcement requests, where these requests are not based on local law and are made outside of its in-product reporting tools (i.e., the functions all regular users have access to for flagging or reporting content). In the “Shared Al Jazeera post” decision, the Board recommended that Meta formalize a transparent process on how it receives and responds to all government requests for content removals. Law enforcement agencies make requests through various communication channels, which makes it challenging to standardize and centralize those requests and to collect data about them. The current intake system, where Meta fills in the intake form, focuses solely on prioritizing incoming requests. The system does not adequately ensure that third party requests meet minimum standards and does not allow for the accurate collection of data to enable the effects of this system to be properly monitored and audited. Some requests may refer to violations of Meta’s Community Standards, others to violations of national law, and others to generally stated concerns about potential harms without connecting this to allegations of unlawful activity or violations of platform policies. Law enforcement agencies are not asked to meet minimum criteria to fully contextualize and justify their requests, leading to unstructured, ad hoc, and inconsistent interactions with Meta. Minimum criteria might include, for example, an indication of which Meta policy law enforcement believes has been violated, why it has been violated, and a sufficiently detailed evidential basis for that conclusion. c. Notification to users In its Q2 2022 Quarterly Update on the Oversight Board, Meta disclosed that it is improving notifications to users by specifically indicating when content was removed for violating the Community Standards after being reported by a government entity (implementing the Board’s recommendation in the “Öcalan’s isolation” case decision). In this case, if those changes had been rolled out, all users who were subject to the 164 additional removals of content should have received notifications of this kind. Meta has acknowledged that only once it has set up the infrastructure required to collect more granular data from government requests will it be able to design and test sending more detailed user notifications. The Board therefore agrees with Meta that this work is dependent on tracking and providing more information on government requests, which can then be published in Meta’s public transparency reporting. 8.2 Compliance with Meta’s values The Board finds that removing the content did not comply with Meta’s values. 
This case demonstrates the challenges that Meta faces in balancing the values of “Voice” and “Safety” when seeking to address a high number of potential veiled threats in art, at a global scale and in a timely manner. However, Meta claims that “Voice” is its paramount value. As the Board stated in its “Wampum belt” decision, art is a particularly important and powerful expression of “Voice,” especially for people from marginalized groups creating art informed by their experiences. Meta did not have sufficient information to conclude that this content posed a risk to “Safety” that justified displacing “Voice.” The Board is concerned that, in light of doubts as to whether the content credibly risked harm, Meta says that it errs on the side of “Safety” rather than “Voice.” Where doubt arises, as in this case, from lack of specificity in the information law enforcement has provided about a piece of artistic expression, such an approach is inconsistent with Meta’s self-described values. The Board recognizes the importance of keeping people safe from violence, and that this is especially important for communities disproportionately impacted by such violence. The Board is also mindful that decisions about alleged threats often must be made quickly, without the benefit of extended reflection. However, a presumption against “Voice” may have a disproportionate impact on the voices of marginalized people. In practice, it may also significantly increase the power and leverage of law enforcement, who may claim knowledge that is difficult to verify through other sources. 8.3 Compliance with Meta’s human rights responsibilities The Board concludes that Meta did not meet its human rights responsibilities as a business in deciding to remove this post. The right to freedom of expression is guaranteed to all people without discrimination (Article 19, para. 2, ICCPR; Article 2, para. 1, ICCPR). This case further engages the rights of persons belonging to ethnic minorities to enjoy, in community with other members of their group, their own culture (Article 27, ICCPR) and the right to participate in cultural life (Article 15, ICESCR). The right of access to remedy is a key component of international human rights law (Article 2, para. 3, ICCPR; General Comment No. 31), and remedy is the third pillar of the UNGPs and a focus area in Meta’s corporate human rights policy. Freedom of expression (Article 19 ICCPR; Article 5 ICERD) Article 19 of the ICCPR gives specific mention to protecting expression “in the form of art.” International human rights standards reinforce the importance of artistic expression (General Comment 34, para. 11; Shin v. Republic of Korea, Human Rights Committee, communication No. 926/2000). The International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) protects the exercise of the right to freedom of expression without discrimination based on race (Article 5). The Committee on the Elimination of Racial Discrimination has emphasized the importance of freedom of expression to assist “vulnerable groups in redressing the balance of power among the components of society” and to offer “alternative views and counterpoints” in discussions (CERD Committee, General Recommendation 35, para. 29). Drill music offers young people, and particularly young Black people, a means of creative expression. 
Art is often political, and international standards recognize its unique and powerful role in challenging the status quo (UN Special Rapporteur in the field of cultural rights, A/HRC/23/34, at paras. 3-4). The internet, and social media platforms such as Facebook and Instagram in particular, have special value to artists in helping them to reach new and larger audiences. Artists’ livelihoods, and their social and economic rights, may depend on access to social platforms that dominate the internet. Drill music relies on boastful claims to violence to drive the commercial success of artists on social media. Such claims and performances are expected as part of the genre. As a result of Meta’s actions in this case, Chinx (OS) has been removed from Instagram permanently, which is likely to have a significant impact on his ability to reach his audience and find commercial success. ICCPR Article 19 requires that where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). The UN Special Rapporteur on freedom of expression has encouraged social media companies to be guided by these principles when moderating online expression, mindful that regulation of expression at scale by private companies may give rise to concerns particular to that context (A/HRC/38/35, paras. 45 and 70). The Board has employed the three-part test based on Article 19 of the ICCPR in all its decisions to date. I. Legality (clarity and accessibility of the rules) The principle of legality requires laws limiting expression to be clear and accessible, so people understand what is permitted and what is not. Furthermore, it requires those laws to be specific, to ensure that those charged with their enforcement are not given excessive discretion (General Comment 34, para. 25). The Board applies these principles to assess the clarity and accessibility of Meta’s content rules, and the guidance available to reviewers for making fair decisions based on those rules. The Board reiterates its previously stated concerns that the relationship between the Instagram Community Guidelines and Facebook Community Standards is unclear. In August 2022, Meta committed to implement the Board’s prior recommendations in this area (Q2 2022 Quarterly Update on the Oversight Board) and align the Community Standards and Guidelines in the long term. The differences between the public-facing Violence and Incitement Community Standard and Meta’s internal Implementation Standards are also a concern. Meta uses “signals” to determine whether content contains a veiled threat. The “signals” were added to the public-facing Community Standards as a result of the Board’s prior recommendations. However, the Community Standards do not explain that Meta divides these into primary and secondary signals, or that both a primary and a secondary signal are required to find a policy violation. Making this clear will be useful to those raising complaints about content on the platform, including trusted third parties and law enforcement. Clarity about signals is especially important, as the secondary signal validates the risk of harm resulting from the content and leads to the removal decision. Third parties who provide a primary signal without a secondary signal may be confused if the content they report is not actioned. II. Legitimate aim Restrictions on freedom of expression must pursue a legitimate aim. 
The Violence and Incitement Community Standard exists in part to prevent offline harm. This policy therefore serves the legitimate aim of the protection of the rights of others (the rights to life and security of person of those targeted by the post). III. Necessity and proportionality The Board finds the content removal was not necessary to achieve the aim of the policy. The principle of necessity and proportionality requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those that might achieve their protective function; they must be proportionate to the interest to be protected” (General Comment 34, para. 34). The form of expression at issue, such as expression in the form of art, must be taken into consideration (General Comment 34, para. 34). The UN Special Rapporteur on freedom of expression has observed the difficulties artists face in their use of social media, as such expression tends to have complex characteristics and can easily fall foul of platforms’ rules, with inadequate remedial mechanisms (A/HRC/44/49/Add.2, at paras. 44-46). This affirms broader observations the Special Rapporteur has made on deficiencies in contextual content moderation by platforms, including on issues requiring historical or cultural nuance (A/HRC/38/35, at para. 29). The complexity of artistic expression was emphasized by the UN Special Rapporteur in the field of cultural rights (A/HRC/23/34, at para. 37): An artwork differs from non-fictional statements, as it provides a far wider scope for assigning multiple meanings: assumptions about the message carried by an artwork are therefore extremely difficult to prove, and interpretations given to an artwork do not necessarily coincide with the author’s intended meaning. Artistic expressions and creations do not always carry, and should not be reduced to carrying, a specific message or information. In addition, the resort to fiction and the imaginary must be understood and respected as a crucial element of the freedom indispensable for creative activities and artistic expressions: representations of the real must not be confused with the real... Hence, artists should be able to explore the darker side of humanity, and to represent crimes… without being accused of promoting these. The Special Rapporteur’s observations do not exclude the possibility that art can be intended to cause harm and may achieve that objective. For a company in Meta’s position, making these assessments quickly, at scale, and globally is challenging. Meta’s creation of “escalation only” policies that require a fuller contextual analysis to remove content shows respect for the principle of necessity. Meta’s human rights responsibilities require Meta to prevent and mitigate risks to the right to life, and the right to security of person, for those who may be put in danger by posts that contain veiled threats. However, for reasons stated in Section 8.1 of this decision, that analysis requires a closer examination of causality and must be more nuanced in its assessment of art in order to meet the requirements of necessity. As the Board has not seen sufficient evidence to show a credible threat in this case, removal was not necessary. In this respect, the Violence and Incitement policy uses terminology that may be read to permit excessive removals of content. 
A “potential” threat, or content that “could” result in violence somewhere down the line, such as a taunt, provided too broad a basis for removal in this case to satisfy the requirements of necessity. In prior cases, the Board has not required that the risk of future violence be imminent for removal to be allowed (see, for example, the “Knin cartoon” case), since Meta’s human rights responsibilities may differ from those of a state imposing criminal or civil penalties (see, for example, the “South Africa slurs” case). The Board has, however, required a more substantial evidential basis that a threat was present and credible than appears in this case (see, for example, the “Protest in India against France” case). Non-discrimination and access to remedy (Article 2(1), ICCPR) The Human Rights Committee has made clear that any restrictions on expression must respect the principle of non-discrimination (General Comment No. 34, at para. 32). This principle informs the Board’s interpretation of Meta’s human rights responsibilities (UN Special Rapporteur on freedom of expression, A/HRC/38/35, para. 48). In its public comment, the Digital Rights Foundation argued that while some have portrayed drill music as a rallying call for gang violence, it serves as a medium for youth, in particular Black and Brown youth, to express their discontent with a system that perpetuates discrimination and exclusion (PC-10618). JUSTICE, in its report “Tackling Racial Injustice: Children and the Youth Justice System,” cites law enforcement’s misuse of drill music to secure convictions as an example of systemic racism. As the Board learned through its freedom of information request, all 286 requests the Metropolitan Police made to social media companies and streaming services to review music content from June 1, 2021 to May 31, 2022 involved drill music. Of those, 255 resulted in content being removed from the relevant platform. Twenty-one of the 286 requests related to Meta’s platforms, and 14 of those were actioned through removals. As outlined above, one request can result in multiple content removals. This intensive focus on one music genre among many that include reference to violence raises serious concerns of potential over-policing of certain communities. It is beyond the Board’s purview to say whether these requests represent sound police work, but it does fall to the Board to assess how Meta can honor its values and human rights responsibilities when it responds to such requests. Accordingly, and as described below, Meta’s response to law enforcement requests must, in addition to meeting minimal evidential requirements, be sufficiently systematized, audited, and transparent, to affected users and the broader public, to enable the company, the Board and others to assess the degree to which Meta is living up to its values and meeting its human rights responsibilities. Where a government actor is implicated in interference with an individual’s expression, as in this case, due process and transparency are key to empowering the affected users to assert their rights and even challenge that government actor. Meta should consider whether its processes as they currently stand enable or obstruct this. The company cannot allow its cooperation with law enforcement to be opaque to the point that it creates a barrier to users accessing remedies for potential human rights violations. It is also important that Meta provides its users with adequate access to remedy for the content decisions it takes that impact users’ rights. 
The UN Special Rapporteur on freedom of expression has addressed the responsibilities of social media companies in relation to artistic expression (A/HRC/44/49/Add.2, at para. 41 onwards). Their observations on the access to remedy of female artists are relevant to the situation of Black artists in the United Kingdom: Artists reportedly have experienced shutdowns of personal and professional Facebook and Twitter pages… Violations of vague community guidelines can leave artists without “counter-notice” procedures allowing challenges to removals of their art. The lack of procedural safeguards and access to remedies for users leaves artists without access to a platform to display their art, and without viewership to enjoy their art. In some cases, States work with companies to control what kinds of content is available online. This dangerous collaboration has the effect of silencing artists and preventing individuals… from receiving art as expression. The UN Special Rapporteur on freedom of expression has stated that the process of remediation for social media companies “should include a transparent and accessible process for appealing platform decisions, with companies providing a reasoned response that should also be publicly accessible” (A/74/486, para. 53). Even though the content under review in this case was posted by an Instagram account not belonging to Chinx (OS), the artist had posted the same video to his own account. This was removed at the same time as the content in this case, resulting in his account being first disabled, and then deleted. This shows how collaboration between law enforcement and Meta can result in significantly limiting the expression of artists, denying their audience access to art on the platform. As the Board’s freedom of information request confirms, this collaboration specifically and exclusively targets drill artists, who are mostly young Black men. The Board requested that Meta refer the removal of content from Chinx (OS)’s account for review, so that it could be examined alongside the content in this case. That was not technically possible because Meta had deleted the account. This raises significant concerns about the right to remedy, as does the fact that users cannot appeal decisions taken “at escalation” to the Oversight Board. This includes significant and difficult decisions concerning “additional context to enforce” policies, which are only decided “at escalation.” It also includes all government requests for removals (besides “in-product tool” usage), including lawful content, that are eligible for review and within scope under the Oversight Board Charter. The latter is especially concerning for individuals who belong to discriminated-against groups, who are likely to experience further barriers to accessing justice as a result of Meta’s product design choices. These concerns about the right to remedy add to those raised during the Board’s work on the upcoming policy advisory opinion on cross-check. Cross-check is the system Meta uses to reduce enforcement errors by providing additional layers of human review for certain posts initially identified as breaking its rules, before removing content. Meta has told the Board that, between May and June 2022, around a third of content decisions in the cross-check system could not be appealed by users to the Board. The Board will address this further in the cross-check policy advisory opinion. 
8.4 Identical content with parallel context The Board notes that this content was added to the Violence and Incitement Media Matching Service bank, which resulted in automated removals of matching content and potentially additional account-level actions on other accounts. Following this decision, Meta should ensure the content is removed from this bank, restore identical content it has wrongly removed where possible, and reverse any strikes or account-level penalties. It should remove any bar on Chinx (OS) re-establishing an account on Instagram or Facebook. 9. Oversight Board decision The Oversight Board overturns Meta’s decision to take down the content, requiring the post to be restored. 10. Policy advisory statement A. Content Policy 1. Meta’s description of its value of “Voice” should be updated to reflect the importance of artistic and creative expression. The Board will consider this recommendation implemented when Meta’s values have been updated. 2. Meta should clarify that for content to be removed as a “veiled threat” under the Violence and Incitement Community Standard, one primary and one secondary signal are required. The list of signals should be divided between primary and secondary signals, in line with the internal Implementation Standards. This will make Meta’s content policy in this area easier to understand, particularly for those reporting content as potentially violating. The Board will consider this recommendation implemented when the language in the Violence and Incitement Community Standard has been updated. B. Enforcement 3. Meta should provide users with the opportunity to appeal to the Oversight Board for any decisions made through Meta’s internal escalation process, including decisions to remove content and to leave content up. This is necessary to provide the possibility of access to remedy through the Board and to enable the Board to receive appeals of “escalation-only” enforcement decisions. This should also include appeals against removals made for Community Standard violations as a result of “trusted flagger” or government actor reports made outside of in-product tools. The Board will consider this implemented when it sees user appeals coming from decisions made on escalation and when Meta shares data with the Board showing that for 100% of eligible escalation decisions, users are receiving reference IDs to initiate appeals. 4. Meta should implement and ensure a globally consistent approach to receiving requests for content removals (outside of in-product reporting tools) from state actors by creating a standardized intake form asking for minimum criteria, for example, the violated policy line, why it has been violated, and a detailed evidential basis for that conclusion, before any such requests are actioned by Meta internally. This contributes to ensuring more organized information collection for transparency reporting purposes. The Board will consider this implemented when Meta discloses the internal guidelines that outline the standardized intake system to the Board and in the transparency center. 5. Meta should mark and preserve any accounts and content that were penalized or disabled for posting content that is subject to an open investigation by the Board. This prevents those accounts from being permanently deleted when the Board may wish to request that content be referred for decision or to ensure its decisions can apply to all identical content with parallel context that may have been wrongfully removed. 
The Board will consider this implemented when Board decisions are applicable to the aforementioned entities and Meta discloses the number of said entities affected for each Board decision. C. Transparency 6. Meta should create a section in its Transparency Center, alongside its “Community Standards Enforcement Report” and “Legal Requests for Content Restrictions Report,” to report on state actor requests to review content for Community Standard violations. It should include details on the number of review and removal requests by country and government agency, and the number of rejections by Meta. This is necessary to improve transparency. The Board will consider this implemented when Meta publishes a separate section in its “Community Standards Enforcement Report” on requests from state actors that led to removal for content policy violations. 7. Meta should regularly review the data on its content moderation decisions prompted by state actor content review requests to assess for any systemic biases. Meta should create a formal feedback loop to fix any biases and/or outsized impacts stemming from its decisions on government content takedowns. The Board will consider this recommendation implemented when Meta regularly publishes the general insights derived from these audits and the actions taken to mitigate systemic biases. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world." ig-rh16obg3,Call for women’s protest in Cuba,https://www.oversightboard.com/decision/ig-rh16obg3/,"October 3, 2023",2023,,"Freedom of expression, Protests, Sex and gender equality","Hate speech",Overturned,Cuba,The Oversight Board has overturned Meta’s decision to remove a video posted by a Cuban news platform on Instagram in which a woman calls for protests against the government.,55565,8618,"Overturned October 3, 2023 The Oversight Board has overturned Meta’s decision to remove a video posted by a Cuban news platform on Instagram in which a woman calls for protests against the government. 
Standard Topic Freedom of expression, Protests, Sex and gender equality Community Standard Hate speech Location Cuba Platform Instagram The Oversight Board has overturned Meta’s decision to remove a video posted by a Cuban news platform on Instagram in which a woman protests against the Cuban government, calls for other women to join her on the streets and criticizes men, by comparing them to animals culturally perceived as inferior, for failing to defend those who have been repressed. The Board finds the speech in the video to be a qualified behavioral statement that, under Meta’s Hate Speech Community Standard, should be allowed. Furthermore, in countries where there are strong restrictions on people’s rights to freedom of expression and peaceful assembly, it is critical that social media protects the users’ voice, especially in times of political protest. About the case In July 2022, a news platform, which describes itself as critical of the Cuban government, posted a video on its verified Instagram account. The video shows a woman calling on other women to join her on the streets to protest against the government. At a certain point, she describes Cuban men as “rats” and “mares” carrying urinal pots, because they cannot be counted on to defend people being repressed by the government. A caption in Spanish accompanying the video includes hashtags that refer to the “dictatorship” and “regime” in Cuba, and it calls for international attention on the situation in the country, by using #SOSCuba. The video was shared around the first anniversary of the nationwide protests that had taken place in July 2021 when Cubans took to the streets, in massive numbers, for their rights. State repression increased in response, continuing into 2022. The timing of the post was also significant because it was shared days after a young Cuban man was killed in an incident involving the police. The woman in the video appears to reference this when she mentions that “we cannot keep allowing the killing of our sons.” Text overlaying the video connects political change to women’s protests. The video was played more than 90,000 times and shared fewer than 1,000 times. Seven days after it was posted, a hostile speech classifier identified the content as potentially violating and sent it for human review. While a human moderator found the post violated Meta’s Hate Speech policy, the content remained online as it went through additional rounds of human review under the cross-check system. A seven-month gap between these rounds meant the post was removed in February 2023. On the same day in February, the user who shared the video appealed Meta’s decision. Meta upheld its decision, without escalating the content to its policy or subject matter experts. A standard strike was applied to the Instagram account, but no feature limit. Key findings The Board finds that, when read as a whole, the post does not intend to dehumanize men based on their sex, trigger violence against them or exclude them from conversations about the Cuban protests. The post unambiguously aims to call attention to the woman’s opinion about the behavior of Cuban men in the context of the historic demonstrations that began in July 2021. 
Because the woman uses language such as “rats” or “mares” to imply cowardice in that precise context, and to express her own personal frustration at their behavior, regional experts and public comments point to the post as a call to action directed at Cuban men. If taken out of context and given an overly literal reading, the stated comparison of men to animals culturally perceived as inferior could be seen as violating Meta’s Hate Speech policy. However, the post, when taken as a whole, is not a generalization that aims to dehumanize men, but instead a qualified behavioral statement, which is allowed under the policy. Consequently, the Board finds that the removal of the content is inconsistent with Meta’s Hate Speech policy. Furthermore, external experts flagged the hashtag #SOSCuba, which the user posted to draw attention to the economic, political and humanitarian crises facing Cubans; its use establishes the protests as an important point of historical reference. The Board is concerned about how contextual information is factored into Meta’s decisions on content that does benefit from additional human review. In this case, even though the content underwent escalated review, a process that is supposed to deliver better results, Meta still failed to get it right. Meta should ensure that both its automated systems and content reviewers are able to factor contextual information into their decision-making process. In this case, it was particularly important to protect the content. Cuba is characterized by closed civic spaces, so the risks associated with dissent are high, and access to the internet is very restricted. In this case, relevant context may not have been sufficiently considered as part of the escalation process. Meta should consider how context influences its policies and the way in which they are enforced. The Oversight Board’s decision The Oversight Board overturns Meta’s decision to remove the post. While the Board makes no new recommendations in this case, it reiterates relevant ones from previous decisions, for Meta to follow closely: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s decision to remove an Instagram post published around the first anniversary of the historic nationwide protests that occurred in July 2021 in Cuba. In the post, a woman protests against the government and compares Cuban men to different animals that are culturally perceived as inferior. She does so to assert that Cuban men are not to be trusted because they have not acted with the forcefulness required to defend those who are being repressed. The post calls for women to hit the streets and demonstrate to defend the lives of “our sons.” Under Meta’s Hate Speech policy, this is a qualified behavioral statement and, as such, should be allowed. In countries where there are strong restrictions on people’s rights to freedom of expression and peaceful assembly, it is critical that social media protects the users’ voice, especially in times of political protest. 2. Case description and background In July 2022, a news platform’s verified Instagram account, describing itself as critical of the government in Cuba, posted a video in which a woman calls on other women to join her in the streets to protest. A caption in Spanish includes quotes from the video, hashtags that refer to the “dictatorship” and “regime” in Cuba, and calls for international attention to the humanitarian situation in the country, including by using #SOSCuba. 
At one point in the video, the woman says that Cuban men are “rats” because they cannot be counted on to defend those who are being repressed by the government. At another point, she says that Cuban men are “mares” who carry urinal pots. The text overlaying the video connects political change to women’s protests. The video was played more than 90,000 times and shared fewer than 1,000 times. Public comments and experts familiar with the region, who the Board consulted, confirmed that these phrases are understood colloquially by Spanish speakers in Cuba to imply cowardice. One public comment (PC-13012) said that the terms, while insulting, “should not be interpreted as violent or dehumanizing speech.” External experts said the term for “mares” is frequently employed as a homophobic insult or to refer to people as unintelligent. However, when combined with the reference to urinal pots, experts reported that the phrase “takes on the connotation that men are ‘full of shit’ and [is] utilized here to show women’s discontent toward male figures” in the context of their inaction during political protests. In this sense, public comments point out that the woman does not disparage men by calling them “rats” or “mares,” but that she uses this language to mobilize men in her country. According to these comments, men are not her enemies: she is just trying to awaken the conscience of men. The post was shared around the first anniversary of the historic nationwide protests that occurred in July 2021 when Cubans took to the streets in what the Inter-American Commission on Human Rights (IACHR) described as “a peaceful protest to claim their civil liberties and demand changes to the country’s political structure.” The IACHR reported that Cubans “were also protesting the lack of access to economic, social, and cultural rights – especially because of persistent food and medicine shortages and the escalating impacts of the COVID-19 pandemic. According to civil society and international bodies – such as the European Parliament – the massive protest of July 11 was among the largest demonstrations in Cuba’s recent history. These protests triggered immediate state reactions against the demonstrators” (Inter-American Commission of Human Rights, 2022 Annual Report , para. 43). From July 2021 onwards and throughout 2022, state repression increased. The post was published in the context of this significant social tension. Additionally, it was shared days after a young Cuban man was killed in an incident involving the police. Some parts of that incident were documented on social media, and the woman speaking in the video appears to reference this when she says: “we cannot keep allowing the killing of our sons.” External experts who analyzed the social-media response found a broader pattern of users referring to the teenager’s killing as a way to articulate their criticism of the government and to call for civilian action: “the discourse in the comment sections of the largest Instagram posts centered around the common themes of dictatorship, police brutality, and the lack of action from bystanders.” External experts familiar with the region highlighted the importance of social-media campaigns that use hashtags such as #SOSCuba in raising awareness around the economic, political, and humanitarian crises faced by Cubans. In the wake of the 2021 protests, the government intensified its crackdown on virtually all forms of dissent and public criticism. 
The IACHR documented eight waves of repression by the Cuban state in which it observed “(1) the use of force and intimidation and smear campaigns; (2) arbitrary arrests, mistreatment, and deplorable prison conditions; (3) criminalization of protesters, judicial persecution, and violations of due process; (4) closure of democratic forums through repression and intimidation to discourage new social demonstrations; (5) ongoing incarceration, trials without due process guarantees, and harsh sentences; (6) legislative proposals aimed at curtailing, surveilling, and punishing dissent and criticism of the Government and at criminalizing the actions of independent civil society organizations; (7) harassment of relatives of persons detained and charged for taking part in the protests; and (8) deliberate cuts in Internet access” (IACHR, 2022 Annual Report , para. 44). The IACHR noted that, although the waves of repression began in the second half of 2021, they continued throughout 2022, and that dozens of people were injured by police through the disproportionate use of force (IACHR, 2022 Annual Report , para. 46). On July 11, 2022, the IACHR and its special rapporteurs condemned the persistent state repression of 2022 that occurred in response to the demonstrations of 2021. The legislative response to the July 2021 protests also included further criminalization of online speech, including new penal code regulation establishing heightened penalties for alleged offenses such as spreading “fake information” or offending someone’s “honor” on social media, or in online or offline media. This is supplementary to existing provisions of the penal code, which cover “public disorder,” “resistance,” and “contempt,” and have historically been used to stifle dissent and criminalize protests. According to the IACHR, “the new text imposes harsher penalties and uses broad, imprecise language to define offenses, such as sedition and crimes against constitutional order” (IACHR, 2022 Annual Report , para. 97). Despite these displays of force and legal actions by the government after July 2021, external experts familiar with the region documented several attempts to organize localized protests against the government, but noted the significant risks of participation. Near-complete government control of the internet’s technical infrastructure in Cuba, in addition to censorship, obstruction of communications , and the very high cost of accessing the internet , “prevents all but a small fraction of Cubans from reading independent news website and blogs” (IACHR, 2022 Annual Report , para. 69). The Board also makes note of the attempts by government-linked networks described by Meta in its February 2023 report on Adversarial Threat to “create the perception of widespread support for the Cuban government across many internet platforms, including Facebook, Instagram, Telegram, Twitter, YouTube and Picta, a Cuban social network.” According to Meta, the company’s investigation found links between the Cuban government and the people behind a network of 363 Facebook accounts, 270 pages, 229 groups and 72 accounts on Instagram, which violated Meta’s policy against coordinated inauthentic behavior. Seven days after the video was posted on the Instagram account in July 2022, a hostile speech classifier identified the content as potentially violating and sent it for human review. The following day, a human moderator reviewed the content and found the post violated Meta’s Hate Speech policy. 
Meta did not consider the woman depicted in the video to be a public figure. Based on the account’s cross-check status, the content in this case was then escalated for secondary review. The first moderator in the secondary review process assessed the content as violating on July 12, 2022. The second moderator assessed the content as violating on February 24, 2023. Meta then removed the content from Instagram on the same day, more than seven months after it was initially flagged by the company’s automated systems. The delay in the review was caused by a backlog in Meta’s review queues under the cross-check system . On the same day the content was removed, the user who shared the video appealed Meta’s decision. The content was again reviewed by a moderator who, on February 26, 2023, upheld the original decision to remove it. The content was not escalated to policy or subject matter experts for additional review at this time. According to Meta, a standard strike was applied to the user’s account. However, no feature limit was applied to the account in line with Meta’s account restriction protocols. The user then appealed the case to the Board. 3. Oversight Board authority and scope The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying the Board’s decision in respect to identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions The most relevant previous decisions of the Oversight Board include: II. Meta's content policies The Instagram Community Guidelines state that content containing hate speech will be removed. Under the heading “Respect other members of the Instagram community,” the guidelines state that it is “never OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases.” The Instagram Community Guidelines then link the words “hate speech” to the Facebook Hate Speech Community Standard . The Hate Speech policy rationale defines hate speech as a direct attack against people on the basis of protected characteristics, including sex, gender, and national origin. Meta does not allow Hate Speech on its platform because it “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” The rules prohibit “violent” or “dehumanizing” speech against people based on these characteristics, including men. Tier 1 of the Hate Speech policy prohibits “dehumanizing speech or imagery in the form of comparisons, generalizations, or unqualified behavioral statements (in written or visual form) to or about [...] 
[a]nimals in general or specific types of animals that are culturally perceived as intellectually or physically inferior.” Additionally, Meta’s internal guidelines to content reviewers on how to apply the policy define “qualified” and “unqualified” behavioral statements and provide examples. Under these guidelines, “qualified statements” do not violate the policy, while “unqualified statements” are violating and removed. Meta states qualified behavioral statements use statistics, reference individuals, or describe direct experience. Meta also states that, under the Hate Speech policy, the company allows people to post content containing qualified behavioral statements about protected characteristic groups when the statement discusses a specific historical event (for example, by referencing statistics or patterns). According to Meta, unqualified behavioral statements “explicitly attribute a behavior to all or a majority of people defined by a protected characteristic.” The Board’s analysis was informed by Meta’s commitment to “ Voice ,” which the company describes as “paramount,” and its values of “Safety” and “Dignity.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions In their appeal to the Board, the content creator called on social-media companies to better understand the “critical situation” in Cuba, flagging that the video makes references to the July 2021 protests. The content creator also explained that the woman in the video is calling on Cuban men to “do something to solve” the crisis. 6. Meta's submissions Meta removed the post under Tier 1 of its Hate Speech Community Standard because it attacked men by comparing them to rats and horses carrying human waste. The company explained that rats are a “classic example” of “animals that are culturally perceived as intellectually or physically inferior.” While the company is not aware of any specific trope or cultural tradition associated with “mares loaded with chamber pots or toilets,” the phrase violates the Hate Speech policy, according to Meta, because it compares men to the “repulsive image of animals that are presumably carrying human urine and feces.” Meta explained that “the comparisons to rats and toilet-laden horses dehumanizes men based on their sex.” Meta also said that “this excludes men from the conversation and could result in them feeling silenced.” In its response to the Board’s questions, Meta stated that the company considered applying a “spirit of the policy” allowance to this content. Meta makes “spirit of the policy” exceptions to allow content when a strict application of the relevant Community Standard produces results that are inconsistent with its rationale and objectives. However, Meta concluded that such an allowance was not appropriate because the content violates both the letter and the spirit of the policy. Meta further explained that under the Hate Speech Community Standard, it treats all groups defined by protected characteristics equally. 
According to the company, violating hate speech attacks by one marginalized protected characteristic group directed at another protected characteristic group will be removed. Meta explained that as part of its Hate Speech policy, the company approaches all protected characteristic groups in the same way, so that globally they receive equitable treatment and so the policy can be enforced at scale. Meta refers to this approach as being “protected characteristic-agnostic.” Meta stated that when content is escalated for additional review by human moderators, it does not allow hate speech or “spirit of the policy” allowances based on asymmetrical power dynamics (i.e., when the target of the hate speech is a more powerful group) “for the same reason we have a protected characteristic-agnostic policy.” Meta stated that it “cannot and should not rank which protected characteristic groups are more marginalized than others.” Instead, Meta focuses on “whether there is an attack against a group of people based on their protected characteristics.” Meta acknowledged that some stakeholders have said the Hate Speech policy should differentiate between content that is perceived to be “punching down,” which should be removed, versus content that is “punching up,” which should be allowed because it may imply themes of social justice. However, Meta said that “there is little consensus among stakeholders about what counts as ‘punching down vs. punching up.’” The Board also asked how contextual information, asymmetrical power dynamics between protected characteristic groups, and information about the political environment in which a post is made factor into the hostile speech classifier’s decision to send content for human review. In response, Meta said that “the context that a classifier takes into account is within the post itself” and that it “does not consider other contextual information from global events.” In this case, the hostile speech classifier identified the content as potentially violating Meta’s policies and sent it for human review. The Board asked 17 questions in writing. The questions addressed issues relating to Meta’s content-moderation approach in Cuba; the bearing that asymmetrical power dynamics have on the Hate Speech Community Standard, as well as its enforcement following automated and human review; and opportunities for context assessment, specifically within the part of Meta’s cross-check system called Early Response Secondary Review (ERSR). ERSR is a type of cross-check that provides additional levels of human review for certain posts initially identified as violating Meta’s policies while keeping the content online. All 17 questions were answered by Meta. 7. Public comments The Oversight Board received 19 public comments relevant to this case. Nine of the comments were submitted from the United States and Canada; three from Latin America and Caribbean; five from Europe; one from Asia Pacific; and one from the Middle East and North Africa. The submissions covered the following themes: the human rights situation in Cuba; the importance of an approach to content moderation that recognizes linguistic, cultural, and political nuances in calls for protest; gender-based power asymmetries in Cuba; the intersection of hate speech and calls for protest; and online and offline protest dynamics in Cuba. To read public comments submitted for this case, please click here . 8. 
Oversight Board analysis The Board examined whether this content should be restored by analyzing Meta’s content policies, human rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance. The Board selected this appeal because it provides an opportunity to better understand how Meta’s Hate Speech policy and its enforcement impact calls for protest in contexts characterized by restricted civic spaces. 8.1 Compliance with Meta's content policies I. Content rules The Board finds that the content in this case is not hate speech as per Meta’s Community Standards, but a qualified behavioral statement and, as such, is allowed under the Hate Speech policy. Consequently, the removal of the content is inconsistent with this policy. It is true that the statements in which men are compared with “rats” or “mares” loaded with urinal pots, in a literal reading and out of context, could be interpreted as violating Meta’s policy on hate speech. Nevertheless, the post, taken as a whole, is not a generalization that aims at dehumanizing or triggering violence against all men, or even the majority of men. The Board finds that the statements directed at men are qualified in the sense they unambiguously aim at calling attention to the behavior of Cuban men in the context of the historic demonstrations that began in July 2021 in Cuba, and which were followed by state repression that continued into 2022 in response to subsequent calls for protest. The content creator explicitly refers to those events in the post by using the #SOSCuba hashtag. The content is a commentary on how an identifiable group of people have acted, not a statement about character flaws that are inherent in a group. According to public comments and experts consulted by the Board, epithets such as “rats” or “mares” are used in the vernacular Spanish spoken in Cuba in heated discussions to imply cowardice. As such, the terms should not be read literally and do not indicate that men have inherently negative characteristics by virtue of being men. Rather, they mean that Cuban men have not acted with the necessary forcefulness to defend those who are being repressed by the government in the context of the protests. The post was shared in the context of a wave of state repression that took place around the first anniversary of the historic nationwide protests , which occurred in July 2021. External experts have flagged the hashtag #SOSCuba, used by the content creator, as an important one adopted by social-network campaigns to draw attention to the economic, political, and humanitarian crises facing Cubans. The use of the hashtag in addition to the woman’s warning statement in the video (“we cannot keep allowing the killing of our sons”) and her call “to the streets,” demonstrate how the events that began in July 2021 are established as an important point of historical reference for subsequent efforts by citizens to mobilize around social and political issues that continued into 2022. Therefore, the post reflects the user’s opinion about the behavior of a defined group of people, Cuban men, in the specific context of a historical event. In conclusion, the Board finds that, when read as a whole, the post does not intend to dehumanize men, generate violence against them or exclude them from conversations about the Cuban protests. 
On the contrary, the woman in the video is questioning what, in her opinion, the behavior of Cuban men has been in the precise context of the protests, and she aims to galvanize them to participate in such historic events. The content in this case is, therefore, a qualified behavioral statement on an issue of significant public interest related to the historic protests and the wave of repression that followed. In deciding this case, a minority of the Board questioned the agnostic enforcement of Meta’s Hate Speech policy, particularly in situations when such enforcement can lead to further silencing of historically marginalized groups. For these Board Members, a proportionate Hate Speech policy should acknowledge the existence of power asymmetries when such acknowledgment can prevent the suppression of under-represented voices. Finally, the Board agrees that the post falls directly within Meta’s paramount value of “Voice.” Therefore, its removal was not consistent with Meta’s values. A similar approach was taken by the Board in relation to one of the posts reviewed in the Violence against women cases, when the Board agreed with Meta’s ultimate conclusion that the content should be taken as a whole and assessed as a qualified behavioral statement. II. Enforcement action According to Meta, after a hostile speech classifier identified the content as potentially violating Meta’s policies, it was sent for human review. More than seven months elapsed between the first human review and first level of secondary review on July 12, 2022, both of which found the content to violate Meta’s Hate Speech policies, and the second level of secondary review on February 24, 2023, when an additional moderator found the content violating and removed the post. As described in Section 2, the delay in the review was caused by a backlog in Meta’s cross-check system. As part of the Board’s cross-check policy advisory opinion, Meta disclosed that the cross-check system had been operating with a backlog of content that delays decisions. In information that Meta provided to the Board, the longest time a piece of content remained in the ERSR queue was 222 days; the delay of more than seven months observed in this case is similar in length. According to Meta, as of June 13, 2023, the review of backlogged content in the ERSR program queue has been completed in response to recommendation no. 18 from the cross-check policy advisory opinion, which said that Meta should not operate this program with a backlog. The Board notes the seven-month delay in this case. The delay ultimately meant the content remained on the platform while waiting for the final stage of cross-check secondary review. The content remaining on the platform is an outcome in line with the Board’s analysis of the application of the Hate Speech Community Standard. However, that outcome was not in accordance with Meta’s understanding that the content was harmful. The enforcement history in this case also raises concerns about how contextual information is factored into decisions on content that does benefit from additional human review. The Board has previously acknowledged that assessing the use of hate speech and relevant context at scale is a difficult challenge (see Knin cartoon case). In particular, the Board has emphasized that dehumanizing discourse, through implicit or explicit discriminatory acts or speech, has, in some circumstances, resulted in atrocities (see Knin cartoon case).
The Board has also considered that, in certain circumstances, moderating content with the objective of addressing cumulative harms caused by hate speech at scale may be consistent with Meta's human rights responsibilities, even when specific pieces of content, seen in isolation, do not appear to directly incite violence or discrimination (see Depiction of Zwarte Piet case). In order to avoid inappropriately stifling public debate on highly relevant issues, such as violence against women (see Violence against women cases) or, as in this case, political speech on historical events, Meta has established exceptions such as the one on qualified behavioral statements. Making sure content reviewers are able to accurately distinguish between qualified and unqualified behavioral statements is therefore necessary for Meta to reduce false positive (mistaken removal of content that does not violate its policies) rates in the enforcement of the Hate Speech policy. For the same reason, it is important for Meta to ensure that both its automated systems, including the machine learning classifiers that screen content for what Meta considers “hostile speech,” and human content reviewers are able to factor contextual information into their determinations and decisions. This is especially important when, as in this case, Meta’s content reviewers fail to take context into account and remove a post at a moment when it is particularly urgent to protect it. Indeed, operational mechanisms and processes aimed at surfacing contextual insights are especially significant for countries or regions characterized by closed civic spaces, where the risks associated with dissent and criticism of the government are much higher, and access to the internet is very restricted. The Board also notes that reviews at escalation level are supposed to deliver better results, even in difficult cases, since better tools for assessing context are available. However, even after the content in this case underwent escalated review, Meta still failed to get it right and keep the post on Instagram. As part of the cross-check policy advisory opinion, Meta explained that, generally for ERSR, the markets team (which includes a mix of Meta full-time employees and full-time contractors) first reviews the content. This team has additional contextual and language knowledge about a specific geographic market. According to Meta, the Cuban market “is not a separate market and it is categorized in [Meta’s] general Spanish language ESLA queues (Español Latin),” meaning that content from Cuba is reviewed by reviewers covering Spanish-language content in general and not focusing specifically on that country. Meta said that “other countries are split” in queues for country-specific or region-specific review (e.g., Spain queues for Spain, VeCAM (Venezuela, Honduras, Nicaragua) queues for Venezuela and Central America). The Early Response team (an escalations team comprising Meta full-time employees only) may then review to confirm whether the content is violating. According to Meta, this team has “deeper policy expertise and the ability to factor in additional context” and may also apply Meta’s “newsworthiness” and “spirit of the policy” allowances. However, to assess the content, the Early Response team relies on translations and contextual information provided by the relevant Regional Market team and does not have language or regional expertise.
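The routing Meta described can be pictured with the following minimal, hypothetical sketch. The country codes, the queue names other than ESLA and VeCAM, and the shape of the escalation record are assumptions made for illustration; this is not Meta’s workflow, but it shows why content from Cuba lands in a general Spanish-language queue rather than a country-specific one, and why the escalation stage sees only whatever translation and context notes the market-level reviewers pass along.

# Hypothetical illustration of market-queue routing and escalation hand-off.
COUNTRY_QUEUES = {
    'ES': 'Spain',   # country-specific queue, per Meta's example
    'VE': 'VeCAM',   # Venezuela and Central America, per Meta's example
}
DEFAULT_SPANISH_QUEUE = 'ESLA'  # general Spanish-language queue

def market_queue(country_code: str, language: str) -> str:
    # Cuba ('CU') has no dedicated entry, so it falls through to ESLA.
    if country_code in COUNTRY_QUEUES:
        return COUNTRY_QUEUES[country_code]
    if language == 'es':
        return DEFAULT_SPANISH_QUEUE
    return 'GENERAL'

def escalate_to_early_response(market_review: dict) -> dict:
    # The escalation team works only from what the market team forwards:
    # a translation and free-text context notes, not its own regional expertise.
    return {
        'translation': market_review.get('translation', ''),
        'context_notes': market_review.get('context_notes', ''),
        'market_verdict': market_review.get('verdict'),
    }

if __name__ == '__main__':
    print(market_queue('CU', 'es'))  # prints ESLA, not a Cuba-specific queue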
Given Meta’s decision in this case, the Board is concerned that relevant contextual information – such as the whole content of the post, the #SOSCuba hashtag, the events around the one-year anniversary of the historic July 2021 protests, the wave of repression denounced by international organizations at the time the post was published and, among other things, the death of a young Cuban in an incident involving the police – may not have been sufficiently considered when assessing the content as part of the cross-check escalations process. In response, the Board reiterates recommendation no. 3 from the cross-check policy advisory opinion , which called on Meta to “improve how its workflow dedicated to meet Meta’s human rights responsibilities incorporates context and language expertise on enhanced review, specifically at decision making levels.” Meta has agreed to fully implement this recommendation. In Meta’s Q1 2023 update, the company stated it has already taken certain initiatives to incorporate context and language expertise at the ERSR level. The Board hopes that context and language expertise would help prevent future content like the post considered here from being removed. 8.2 Compliance with Meta's human rights responsibilities The Board finds that Meta’s decision to remove the content in this case was inconsistent with Meta’s human rights responsibilities. Freedom of expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of expression, including about politics, public affairs, and human rights, with expression about social or political concerns receiving heightened protection ( General Comment No. 34 , paras. 11-12). Article 21 of the ICCPR provides protection for freedom of peaceful assembly – and assemblies with a political message are accorded heightened protection ( General Comment No. 37 , paras. 32 and 49). Extreme restrictions on freedom of expression and assembly in Cuba make it especially crucial that Meta respect these rights, particularly in times of protest ( Colombia protests decision; Iran protest slogan decision; General Comment No. 37 , para. 31). Article 21’s protection extends to associated activities that take place online (Ibid., paras. 6 and 34). As highlighted by the UN Special Rapporteur (UNSR) on the right to freedom of expression, “the Internet has become the new battleground in the struggle for women’s rights, amplifying opportunities for women to express themselves” ( A/76/258 para. 4). The expression at issue in this case deserves “heightened protection” because it involves a woman’s call for protest to defend the rights of those who have been repressed, one which came at a significant political moment, almost one year after historic protests in Cuba in July 2021. Public anger and criticism of the Cuban government continued as Cuban authorities intensified their legal and physical crackdowns on expressions of dissent in the year following the July 2021 protests. 
According to experts, while those sentiments can manifest as smaller protests in response to local events (such as the death of the Cuban teenager in this case), the persistence of citizens’ concerns around the economy, governance, and fundamental freedoms, combined with internet connectivity (albeit constrained by high costs and state control of important infrastructure), has made it clear that protests are “here to stay.” When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UNSR on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users' right to freedom of expression” (A/74/486, para. 41). I. Legality (clarity and accessibility of the rules) The principle of legality requires rules that limit expression to be clear and publicly accessible (General Comment No. 34, para. 25). The Human Rights Committee has further noted that rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” (Ibid.). In the context of online speech, the UNSR on freedom of expression has stated that rules should be specific and clear (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance on their enforcement. Meta’s Hate Speech policy prohibits content attacking groups on the basis of protected characteristics. Meta defines attacks as “violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.” Dehumanizing speech includes comparisons, generalizations, or unqualified behavioral statements about or to animals culturally perceived as inferior. The same policy, however, allows qualified behavioral statements. Meta’s enforcement error in this case demonstrates that the policy’s language and the internal guidance provided to content reviewers are not sufficiently clear for reviewers to accurately determine when a qualified behavioral statement has been made. According to Meta, unqualified behavioral statements “explicitly attribute a behavior to all or a majority of people defined by a protected characteristic.” Meta further explained that the company allows qualified behavioral statements about protected characteristic groups when the statement discusses a specific historical event (for example, by referencing statistics or patterns). In the Violence against women case, Meta informed the Board that “it can be difficult for at-scale content reviewers to distinguish between qualified and unqualified behavioral statements without taking a careful reading of context into account.” However, the guidance to reviewers, as currently drafted, significantly limits their ability to perform an adequate contextual analysis, even when there are clear cues within the content itself that it includes a qualified behavioral statement.
Indeed, Meta stated that because it is challenging to determine intent at scale, its internal guidelines instruct reviewers to default to removing behavioral statements about protected characteristic groups when the user has not made it clear whether the statement is qualified or unqualified. In the present case, the post, read as a whole, unambiguously reflects the critical judgment of the woman in the video when she refers to the behavior of Cuban men in the specific context of the historic Cuban protests of 2021 and the wave of repression that followed in 2022. The whole content, including the hashtag #SOSCuba, and the events publicly known at the time of publication, make it clear that the post was, in fact, a statement discussing specific historical and conflict events through the reference to what the woman in the video understands as a pattern. As discussed in the Violence against women decision and in the Knin cartoon decision, content reviewers should have sufficient opportunities and resources to take contextual cues into account in order to accurately apply Meta’s policies. The Board finds that the language of the policy itself and the internal guidelines to content reviewers are not sufficiently clear to ensure that qualified behavioral statements are not wrongfully removed. The company’s confusing, or even contradictory, guidance makes it difficult for reviewers to reach a reliable, consistent and predictable conclusion. The Board reiterates recommendation no. 2 from the Violence against women decision, which urged Meta to “update guidance to its at-scale moderators with specific attention to rules around qualification.” II. Legitimate aim Any restriction on expression should pursue one of the legitimate aims listed in the ICCPR, which include the “rights of others.” In several decisions, the Board has found that Meta’s Hate Speech policy, which aims to protect people from the harm caused by hate speech, has a legitimate aim that is recognized by international human rights law standards (see, for example, Knin cartoon decision). III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). While the Board finds the content in this case is not hate speech and should remain on Instagram, the Board is not indifferent to the difficulties of moderating hate speech that includes comparisons to animals (see Knin cartoon decision). The UNSR on freedom of expression has noted that on social media, “the scale and complexity of addressing hateful expression presents long-term challenges” (A/HRC/38/35, para. 28). The Board, relying on the Special Rapporteur’s guidance, has previously explained that, although these restrictions would generally not be consistent with governmental human rights obligations (particularly if enforced through criminal or civil penalties), Meta may moderate such speech if it demonstrates the necessity and proportionality of the speech restriction (see South Africa slurs decision).
In the event of inconsistencies between company rules and international standards, the Special Rapporteur has called on social-media companies to “give a reasoned explanation of the policy difference in advance, in a way that articulates the variation” ( A/74/486 , para. 48). As previously mentioned in Section 8.1, Meta’s hate speech policy contains several exceptions, one of which is precisely at issue in this case: qualified statements about behavior. Meta understood that no exception was applicable and removed the content. The Board, however, found that in applying an overly literal reading of the content, Meta overlooked important context; disregarded a relevant carve-out from its own policy; and adopted a decision that was neither necessary nor proportionate to achieve the legitimate aim of the Hate Speech policy. In this case, the Board considered the Rabat Plan factors in its analysis (OHCHR, A/HRC/22/17/Add.4 , 2013) and took into account the differences between the international law obligations of states and the human rights responsibilities of Meta, as a social media company. In its analysis, the Board focused on the social and political context, the author, the content itself and form of the speech. As previously mentioned in this decision, the post was published in the context of high social tension characterized by a strong wave of repression arising from the historic protests in Cuba that began in 2021. The Board also notes the death of a young Cuban man in an incident involving the police as relevant context, as it catalyzed calls for protest against the government, such as the one in this case’s content. In the post, a woman issues a statement about what, in her opinion, has been the behavior of Cuban men during the protests and calls on women to take to the streets to defend the lives of “our sons.” The post includes explicit references to the protests and the #SOSCuba hashtag. Linguistic analysis of the post in its entirety and in the context in which it was published leaves no doubt as to its meaning and scope. The post does not attribute a behavior to all men nor to the majority of men. Nor does it purport to or contribute to dehumanizing all or most of a protected characteristic group. The post does not generate violence towards men, nor does it exclude them from public conversations. On the contrary, amid high social tension, it resorts to strong language to encourage Cuban men to participate in protests by saying they have not lived up to their responsibilities. However, despite the fact that the content does not contribute to the generation of any harm, its removal has a significant negative impact on the woman depicted in the video, on the user who shared it and, ultimately, on the political debate. Indeed, Meta’s decision to remove the post is likely to have had a disproportionate impact on the woman in the video who overcame many difficulties that exist in Cuba, including access to the internet and the risks of speaking out against the government. Additionally, the removal is likely to have placed an unnecessary burden on the user – the news platform – which has had to overcome barriers to disseminate information about what is happening in Cuba. The strike Meta applied to the user’s account following the post’s removal could have aggravated the situation, and potentially resulted in the account’s suspension. Finally, the Board also finds that the post is in the public interest and contains a call for protest that is passionate, but does not advocate violence. 
Therefore, the post’s removal also impacts the public debate in a place where it is already severely limited. The UNSR on freedom of expression has stated in relation to hate speech that the “evaluation of context may lead to a decision to make an exception in some instances, when the content must be protected as, for example, political speech” ( A/74/486 , para. 47 (d)). The Board has repeatedly affirmed the importance of this assertion. In the Colombia protests decision, the Board examined the challenges of assessing the political relevance and public interest of content containing a homophobic slur within a protest context. The Iran protest slogan decision acknowledged that “Meta’s current position is leading to over-removal of political expression in Iran at a historic moment and potentially creates more risks to human rights than it mitigates.” Finally, beyond contextual signals within the content itself, in the Pro-Navalny protests in Russia decision, the Board affirmed the importance of external context, saying, “context is key for assessing necessity and proportionality . . . Facebook should have considered the environment for freedom of expression in Russia generally and specifically government campaigns of disinformation against opponents and their supporters, including in the context of the January protests.” While that case concerned Meta’s Bullying and Harassment policy, the observations on the “environment for freedom of expression” and protests apply to this case on hate speech, too. The Board notes the significant constraints on freedom of expression in Cuba, as well as the physical and legal risks that come with speaking against the government (Section 2). These risks, along with the high cost of data and internet access in Cuba, raise the stakes of moderating content from dissenting voices in the country. One public comment (PC-13017) highlighted the importance of “safeguard[ing] the limited avenues for dissent and organization of protests.” Finally, the Board considered the IACHR’s 2022 report, which notes that the Commission was “informed of persecution, political violence, and sexual assaults against women by state agents in the context of social protests; this is reported to be even more severe in the case of female human rights activists and defenders” (IACHR, 2022 Annual Report , para. 166). Independent media coverage about Cuba has also highlighted the impact of government responses to the July 2021 protests on women, with some civil society organizations arguing that “the greatest manifestation of gender violence in the Cuban context is by the government, and is explicitly demonstrated with the update of the list of women deprived of their freedom for political reasons.” The Board urges Meta to exercise more care when assessing content from geographic contexts where political expression and peaceful assembly are pre-emptively suppressed or responded to with violence or threats of violence. Social-media platforms in Cuba offer a limited, but still significant, channel for government criticism and social activism in the face of authorities that have restricted basic civil liberties and opportunities for offline civic mobilization. While Meta said that it took several steps to mitigate risks to users during the July 2021 protests in Cuba, and again during mass protests planned for November 2021, it did not disclose any risk-mitigation measures at the time the case content was posted. 
To prepare for future occasions when calls for protests are expected to occur in places where protest will be responded to with violence or threats of violence from public authorities, and to ensure that such calls are reviewed and enforced accurately and with contextual nuance, Meta should consider how the political context could influence its policy and enforcement choices. In order to address these concerns about moderating content that comes from closed civic spaces, the Board reiterates recommendations no. 1 and no. 8 from the cross-check policy advisory opinion, noting their relevance to the Cuban context and content considered here. Recommendation no. 1 urged Meta to have a list-based over-enforcement prevention program to protect expression in line with Meta’s human rights responsibilities. Over-enforcement prevention lists afford the users included on them additional opportunities for human review of posts that are initially identified as violating Meta’s policies, with the aim of avoiding over-enforcement, or false positives. Recommendation no. 8 said that Meta should create such lists with local input. Meta has agreed to implement both recommendations in part, with implementation currently in progress. 9. Oversight Board decision The Oversight Board overturns Meta’s decision to remove the post. 10. Recommendations The Oversight Board decided not to issue new recommendations in this decision given the relevance of previous recommendations issued in other cases. The Board is aware of the cross-check status of the content creator’s account at the time the content was reviewed and removed. Nevertheless, the Board still found recommendations no. 1 and no. 8 from the cross-check policy advisory opinion, in which the Board provides Meta with guidance for putting cross-check lists together, to be of great importance in this case given the context in Cuba. The Board believes that Meta should follow that guidance closely so that other accounts sharing valuable political speech, like the one in this case, are added to the list in order to benefit from additional layers of content review. For accounts already included in the list, the Board highlights the importance of recommendation no. 3 from the cross-check policy advisory opinion, which aims to improve the accuracy of enhanced content review for accounts on the list. Extending the opportunity for additional layers of content review, and the possibility of contextual information being incorporated in content moderation decisions, to more accounts that merit inclusion in the list – from a human rights perspective – is especially important in closed civic spaces, such as the one considered in this case. The Oversight Board further reiterates guidance provided to Meta throughout this and previous decisions to make sure context is appropriately factored into content moderation decisions and policies are sufficiently clear, to both users and content reviewers (Violence against women cases). This includes updating internal guidance provided to content reviewers where relevant in order for the company to address any lack of clarity, gaps or inconsistencies that may result in enforcement errors, such as the one in this case. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board.
The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. Return to Case Decisions and Policy Advisory Opinions" ig-tom6ixvh,Promoting Ketamine for non-FDA approved treatments,https://www.oversightboard.com/decision/ig-tom6ixvh/,"August 17, 2023",2023,,"TopicFreedom of expression, HealthCommunity StandardRegulated goods","Policies and TopicsTopicFreedom of expression, HealthCommunity StandardRegulated goods",Overturned,United States,The Oversight Board has overturned Meta’s decision to leave up a user’s Instagram post discussing their experience using ketamine as a treatment for anxiety and depression.,61162,9356,"Overturned August 17, 2023 The Oversight Board has overturned Meta’s decision to leave up a user’s Instagram post discussing their experience using ketamine as a treatment for anxiety and depression. Standard Topic Freedom of expression, Health Community Standard Regulated goods Location United States Platform Instagram Promoting Ketamine for non-FDA approved treatments public comments appendix The Oversight Board has overturned Meta’s decision to leave up a user’s Instagram post discussing their experience using ketamine as a treatment for anxiety and depression. The Board finds that the content violated Meta’s Branded Content policies (which apply to content for which creators receive compensation from a third-party “business partner,” as opposed to advertising where Meta receives compensation to surface ads to users) and the company’s Restricted Goods and Services Community Standard. This case indicates that Meta’s strong restrictions on branded content promoting drugs and attempts to buy, sell, or trade drugs may be inconsistently enforced. About the case On December 29, 2022, a verified Instagram user posted 10 related images as part of a single post with a caption. A well-known ketamine therapy provider is tagged as the co-author of the post, which was labelled as a “paid partnership.” Under Meta’s Branded Content policies, Meta’s business partners must add such labels to their content to transparently disclose a commercial relationship with a third party. In the caption, the user stated that they were given ketamine as treatment for anxiety and depression at two of the ketamine therapy provider’s office locations in the United States. While the user described ketamine as medicine, the post contains no mention of a professional diagnosis; no clear evidence that treatment occurred at a licensed clinic; and nothing showing that the treatment took place under medical supervision. The post describes the user’s treatment as a “magical entry into another dimension.” The post also expressed a belief that “psychedelics” (a category that the post implied includes ketamine) are an important emerging mental health medicine. 
Ten drawings, some including psychedelic imagery, depict the user’s experience in a storyboard style, indicating the user received several “therapy sessions” for “treatment-resistant depression and anxiety.” The account of the user describing the experience has around 200,000 followers and the post was viewed around 85,000 times. Three users reported one or more of the images included in the post, and the content was removed and then restored three times under Meta’s Restricted Goods and Services Community Standard. After the third time the post was removed, the content creator brought it to Meta’s attention. The content was then escalated to policy or subject matter experts for an additional review and restored around six months after it was originally posted. Meta then referred the case to the Board. The content creator’s status as a “managed partner” helped to escalate the post within Meta. “Managed partners” are entities across different industries, including individuals such as celebrities and organizations such as businesses or charities. They receive varying levels of enhanced support, including access to a dedicated partner manager. Key findings As explained more fully below, this case indicates that Meta’s strong restrictions on branded content promoting drugs and attempts to buy, sell, or trade drugs on its platforms may be inconsistently enforced. Because the content in this case was posted as part of a paid partnership, the Branded Content policies should apply. The Board is concerned that Meta did not describe this aspect of the case as part of its referral or initial submissions. Instead, the Board only learned about the paid nature of the post after submitting questions to the company. Meta’s Branded Content policies state that “certain goods, services, or brands may not be promoted with branded content” including “drugs and drug-related products, including illegal or recreational drugs.” As the content in this case was part of a “paid partnership,” clearly promoted the use of ketamine, and was not covered by an exception, it violated these policies. In response to the Board’s questions, Meta acknowledged that not all content with a “paid partnership” label is reviewed against its Branded Content policies, that moderators reviewing content at scale cannot see this label, and that they cannot reroute content to the specialized team in charge of enforcing the Branded Content policies. This greatly increases the risk of under-enforcement against this kind of content. As such, the Board urges Meta to ensure that it reviews content against all relevant policies, including its Branded Content policies. The Board also finds that the content violated the Restricted Goods and Services Community Standard. This permits the promotion of “pharmaceutical drugs” (“drugs that require a prescription or medical professionals to administer”) but prohibits the promotion of “non-medical drugs” (“drugs or substances that are not being used for an intended medical purpose or are used to achieve a high”). As this case indicates, however, some drugs fall in both categories. This tension would be best resolved by emphasizing the essential role of medical professionals in prescribing or administering the drug. As noted in the preceding paragraph, paid content is subject to an even stricter standard. 
As the content in this case included statements that strongly indicated the use of a drug to achieve a “high” but made no direct reference to a medical diagnosis, nor references to medical staff (e.g., “doctor,” “nurse,” “psychiatrist”), the Board finds that the user in this case did not sufficiently demonstrate the use of ketamine occurred under medical supervision. Thus, the content violates this Community Standard and should be removed. The Board is also concerned about the possibility of inconsistent enforcement of Meta’s policies related to drugs. A recent investigation by the Wall Street Journal based on a review of ads for a four-week period in late 2022 discovered “more than 2,100 ads on Facebook and Instagram that described benefits of prescription drugs without citing risks, promoted drugs for unapproved uses or featured testimonials without disclosing whether they came from actors or company employees.” A public comment received by the Board from the National Association of Boards of Pharmacy (NABP) also notes that unambiguous violations of Meta’s Restricted Goods and Services Community Standard on Meta’s platforms may be common. The NABP noted that “with only a cursory search, less than 1 minute,” they found multiple posts featuring ketamine, clearly marked for recreational use. The Oversight Board’s decision The Oversight Board overturns Meta’s decision to leave up this content, requiring the post to be removed. The Board recommends that Meta: * Case summaries provide an overview of the case and do not have precedential value. 1. Decision summary The Oversight Board overturns Meta’s decision to leave up a user’s Instagram post discussing their experience using ketamine as a treatment for anxiety and depression at a ketamine therapy provider’s offices in the United States. The post included a “paid partnership” label, indicating the user had received compensation from a third party “business partner” for the post. Such posts must conform to Meta’s Branded Content policies. Those policies prohibit the promotion of “drugs and drug-related products, including illegal and recreational drugs,” except for the promotion of pharmacies and prescription drugs under strict requirements that the Board finds were not met in this case. For this reason, the Board concludes that the post violated the Branded Content policies. Even if this post were not a paid partnership, the Board’s view is that it would violate Meta’s Restricted Goods and Services Community Standard. The Standard allows users to promote pharmaceutical drugs but prohibits them from promoting drugs used to induce a “high.” Ketamine is a pharmaceutical drug that also can create a “high”; it has both important therapeutic uses and common recreational uses. The Board finds the Standard should be read to permit posts promoting ketamine, even when it produces a “high,” but only when the post makes clear that it was administered under medical supervision. The Board determines that in this case there was insufficient evidence to demonstrate the presence of medical supervision. In addition to overturning Meta’s decision, the Board recommends that Meta revise its Branded Content policies to clarify the meaning of the ""paid partnership"" label and ensure content reviewers are equipped to enforce Branded Content policies where applicable. 
The Board also recommends that Meta clarify the definition of non-medical drugs in its Restricted Goods and Services Community Standard to reflect that when a “high” accompanies the drug’s medical use, posts promoting that drug are permissible only when discussing uses where there is strong evidence of medical supervision. Finally, the Board expresses its interest in Meta’s Branded Content policies beyond this case, and asks Meta to share additional information on the enforcement of those policies and/or on business partners with the Board where relevant. 2. Case description and background On December 29, 2022, a verified Instagram user posted a series of 10 related images as part of a single post with a caption. A well-known ketamine therapy provider is tagged as the co-author of the post, meaning the post was shared to the followers of both accounts, and is visible as a permanent post on both accounts. The post was labelled as a “paid partnership.” Under Meta’s Branded Content policies Meta’s business partners must add such labels to their content to transparently disclose a commercial relationship with a third party. These labels appear directly below the username of the user posting the content as text that says “paid partnership with” followed by the name of the business partner. In a single caption below the image series, the user stated that they were given ketamine as treatment for anxiety and depression at two of the ketamine therapy provider’s office locations in the United States. The Instagram account of that provider is again tagged in the caption, allowing users to click through to the account. Although the user described ketamine as a medicine, the post contains no mention of a professional diagnosis, no clear evidence that treatment occurred at a licensed clinic, and nothing showing that the treatment occurred under medical supervision. The post describes the user’s treatment as a “magical entry into another dimension.” The post also expressed a belief that “psychedelics” (a category that the post implied includes ketamine) are an important emerging set of mental health medicines. The 10 images in the series were each professional quality drawings with individual text overlay conveying the user’s experience with the provider. The drawings depict the experience chronologically in a storyboard style, indicating the user received several “therapy sessions” for “treatment-resistant depression and anxiety.” Several drawings include psychedelic imagery, such as rainbows, stars and other objects appearing from heads, as well as day-to-day objects against a background of outer space. Part of the series reflected on the difficult period in the person’s life that coincided with them seeking therapy. Other images sequentially described preparation for the treatment (which involved a process of relaxation), the treatment itself (which consisted of two doses of ketamine), and “reintegration” (which involved a process of reflection following treatment). Another part of the series praised the treatment, including a description of “[t]he feeling of both being pulled out of myself while being brought closer to my inner essential core.” The user compared the treatment to “any good trip.” One image, which was the primary image for Meta’s referral, was a positive depiction and written description of the office, with an endorsement of the “extraordinary staff” who supported the user. 
The series did not, however, describe any formal medical supervision—for instance, it made no direct reference to a medical diagnosis of depression or anxiety, or to treatment conducted by medical professionals. It also did not specify whether the treatment provider was a licensed health clinic. The post had about 10,000 likes, fewer than 1,000 comments, and was viewed around 85,000 times. The account of the user who was speaking of their experience has about 200,000 followers. In total, three users reported one or more of the 10 images included in the post, and the content was removed and then restored three times under Meta’s Restricted Goods and Services Community Standard. Less than 30 minutes after the first report, the content was removed through human review. The user who posted the content appealed the removal. On appeal, a human reviewer restored the content less than five hours after it was originally removed. The content was reported a second time about one hour later, removed almost immediately, and restored again in less than half an hour, all through human review. Several weeks later, the content was reported once again. This third report was enforced on by an automated system that bases its actions on previous decisions made by content moderators. The automated system removed the content after determining that it violated Instagram’s Community Guidelines, specifically the Restricted Goods and Services Community Standard. The removals were solely based on Meta’s Restricted Goods and Services Community Standard. The Board asked Meta why the content was not removed as a violation of the Branded Content policies’ prohibition on paid promotion of drugs, as the public information available about these policies indicates that they should apply to all pieces of content with a “paid partnership” label. Meta responded that these policies were not applied because the company only applies them “to branded content disclosed via our ‘Paid partnership’ label that the brand partner has actually reviewed and approved.” Meta further explained that brands “may provide certain creators account-level permissions to tag them in branded content (eliminating the need to approve tags for each post),” which means that tags may be automatically approved without any kind of review by the relevant brand partner. In those instances, content is not reviewed by Meta’s specialized teams against the Branded Content policies. Still according to Meta, the “paid partnership” label is not visible to at scale content reviewers, who are not able to reroute content to specialized teams for review – and thus are not engaged in the enforcement of the Branded Content policies. After the third time the post was removed , the content creator brought it to Meta’s attention. The content was then escalated to policy or subject matter experts for an additional review, restored and referred to the Board. The third restoration of the content happened approximately six months after it was originally posted. The creator’s status as a “managed partner” facilitated this escalation. “Managed partners” are entities across different industries, including individuals such as celebrities and organizations such as businesses or charities. 
Such entities receive varying levels of enhanced support from Meta, including training on how to use Meta’s products and a dedicated partner manager who can work with them to “optimize their presence and maximize the value they generate from Meta’s platforms and services, to ensure that these relationships meet the strategic objectives of managed partners and Meta.” Meta referred the case to the Board, stating that the case is significant because of widespread discussions and increasing use of psychedelic drugs in the United States that blur the line between medical treatment, self-help, and recreation. According to Meta, such ambiguity makes it hard to ascertain whether this content promotes pharmaceutical drugs, which is generally allowed on Meta’s platforms, or describes the use of drugs for non-prescribed purposes or to achieve a “high,” which is generally not allowed. The Board noted the following context in reaching its decision in this case: 3. Oversight Board authority and scope The Board has authority to review decisions that Meta submits for review (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1.1). The Board reviews and decides on content in accordance with Meta’s content policies and values (Charter Article 2). The Bylaws define “Meta policies” as “Meta’s content policies and procedures that govern content on the platform (e.g., Community Standards or Community Guidelines).” The Board finds that Meta’s Branded Content policies fall within the definition of “Meta policies.” The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). Where Meta commits to act on recommendations, the Board monitors their implementation. 4. Sources of authority and guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions The most relevant previous decisions of the Oversight Board include: II. Meta’s content policies This case involves the Instagram Community Guidelines and the Facebook Community Standards, in addition to Meta’s Branded Content policies. Meta’s Community Standards Enforcement Report for Q1 2023 states that “Facebook and Instagram share Content Policies. This means that if content is considered violating on Facebook, it is also considered violating on Instagram.” The Instagram Community Guidelines state that “buying or selling non-medical or pharmaceutical drugs [is] not allowed.” It further states: “We also remove content that attempts to trade, co-ordinate the trade of, donate, gift, or ask for non-medical drugs, as well as content that either admits to personal use (unless in the recovery context) or coordinates or promotes the use of non-medical drugs.” The Guidelines continue: “Remember to always follow the law when offering to sell or buy other regulated goods.” The Guidelines then link to Facebook’s Community Standard on Restricted Goods and Services. 
The Facebook Restricted Goods and Services Community Standard “prohibits attempts by individuals, manufacturers, and retailers to purchase, sell, raffle, gift, transfer or trade certain goods and services.” Restricted goods include “pharmaceutical drugs (drugs that require a prescription or medical professionals to administer)” and “non-medical drugs (drugs or substances that are not being used for an intended medical purpose or are used to achieve a high).” Meta removes content about “non-medical drugs” that “admits to personal use without acknowledgement of or reference to recovery, treatment, or other assistance to combat usage. This content may not speak positively about, encourage use of, coordinate or provide instructions to make or use non-medical drugs.” According to the policy rationale, the Restricted Goods and Services Community Standard aims to “encourage safety and deter potentially harmful activities.” Meta’s Branded Content policies prohibit “violations of our Community Standards or Community Guidelines.” The list of “prohibited content” includes “drugs and drug-related products, including illegal or recreational drugs” and “unsafe products and supplements.” Additionally, branded content promoting “pharmacies” and “prescription drugs” requires that the “business partner sponsoring the branded content be authorized to promote their services.” Authorization for “pharmacies” requires that “the business partner must be certified with LegitScript and receive written permission from Facebook to promote pharmacies.” Authorization for “prescription drugs” requires that the “business partner must apply to Facebook to promote prescription drugs.” “Online pharmacies, telehealth providers, and pharmaceutical manufacturers” are the entities eligible to apply for permission from Facebook. Moreover, branded content posts promoting prescription drugs “must be restricted to people aged 18 or older and restricted to the United States, New Zealand, or Canada. Prescription drug promotion is prohibited outside of these locations.” It is important to note that the Branded Content policies apply to content where content creators receive compensation (“monetary payment or free gifts”) from a third-party “business partner”, as opposed to Meta’s Advertising Standards, which apply to content surfaced by Meta to users in exchange for compensation received by advertisers. The Board’s analysis of the content policies was informed by Meta’s value of “Voice,” which the company describes as “paramount,” as well as its values of “Safety” and “Dignity.” III. Meta’s human rights responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, where it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human rights responsibilities in this case was informed by the following international standards: 5. User submissions The author of the post was notified of the Board’s review and provided with an opportunity to submit a statement to the Board. The user did not submit a statement. 6.
Meta’s submissions When referring this case to the Board, Meta asserted that as the medically supervised use of mind-altering substances continues to grow, its policy line “may become less tenable as more people want to talk about their experiences using legal drugs on our platforms.” Expecting future cases like this one, Meta requested “the Board’s help in finding the right way forward in this area.” Meta explained that in its Community Standard on Restricted Goods and Services, the definitions of “non-medical drugs” and “pharmaceutical drugs” “conflict when a drug is legally administered by medical professionals for treating mental illness in which an altered mental state can be a goal.” Internal guidelines on how to apply the Restricted Goods and Services Community Standard allow for content in which a user admits to using or promotes the use of pharmaceutical drugs in a supervised medical setting. According to Meta, the user’s content in this case discusses their experience “with a safe, legal, medical treatment for depression and anxiety.” According to Meta, three portions of the post indicated the use of a drug to “achieve a ‘high’ or altered mental state”: 1) the description of ketamine treatment as providing a “magical entry into another dimension”; 2) the description of “[t]he feeling of both being pulled out of myself while being brought closer to my inner essential core”; and 3) the description of the treatment as a “good trip.” Meta believes that the experience described by the user exemplifies the “conflict” between the definitions of “pharmaceutical drugs” and “non-medical drugs” mentioned above. Meta stressed the importance of content discussing new treatments considering rising rates of depression and anxiety worldwide, in particular following the COVID-19 pandemic. Meta emphasized how rapidly scientific and regulatory responses to the use of hallucinogens, including ketamine, to treat depression are progressing. According to a 2022 review Meta cited, no known cases of overdose or death resulted from the use of ketamine as an antidepressant in a therapeutic setting in the United States. For Meta, the content “falls squarely within the category of content we wish to allow under our policy.” The company recognizes, however, that “it is possible that the endorsement of legal ketamine use could tempt some to try ketamine illegally.” Meta concluded that the user’s description of their experience with “medically administered ketamine” did not pose a threat to their safety or to the safety of others. For that reason, Meta determined that the content did not violate the Restricted Goods and Services Community Standard. Meta acknowledged that its decision in this case is in tension with the Standard’s general prohibition on content that promotes the use of drugs (whether pharmaceutical or non-medical) to “achieve a high or altered mental state.” Nevertheless, it deemed the decision to be consistent with the purpose of the Standard. Meta further explained that “the admission or promotion of ketamine as a medically administered pharmaceutical is allowed because it is in line with [the company’s] overall policy of promoting discussion about medical treatment.” According to Meta, the decision to keep the content on Instagram is not “an exception, abrogation or contradiction to the policy.” Moreover, it is a decision the company would expect “any reviewer to make,” whether assessing the content at scale or on escalation. The Board asked Meta 22 questions in writing. 
Questions related to the managed partner status and channels available for appealing moderation decisions; the nature of the collaboration between the user and the ketamine clinic; the role of automation in the enforcement of relevant content policies; Meta’s assessment of the content and the context in light of relevant content policies; the “spirit of the policy” allowance; and Meta’s Branded Content policies. Meta answered all questions. 7. Public comments The Oversight Board received five public comments relevant to this case. These were all submitted from the United States and Canada. Three comments focused on the medical benefits of ketamine therapy and the importance of allowing discussions about it on Meta’s platforms. Two emphasized the dangers of recreational ketamine use. The Board received a comment from the National Association of Boards of Pharmacy (NABP), a US-based non-profit organization whose members include the 50 state pharmacy boards, as well as pharmacy regulators in the District of Columbia, Guam, Puerto Rico, the Virgin Islands, Bahamas and 10 Canadian provinces. The organization stressed that “with only a cursory search, less than 1 minute,” it found many posts featuring “ketamine, clearly marked for recreational use.” The NABP made a similar point about the need to address unambiguous violations of the Restricted Goods and Services Community Standard in the Board’s Asking for Adderall case (PC-11235). There it flagged “instances where content attempting to sell [Adderall and Xanax] has remained on Facebook.” In this case, the organization urged Meta to “prioritize taking action in bright-line cases rather than spending resources on edge cases” such as this one. The Board also received a comment from ketamine therapy provider Mindbloom (PC-11234) about the extent of the mental health crisis in the United States. Mindbloom’s comment noted research on the inefficacy of current treatments for depression. It also noted that the ability to share information about new treatments such as ketamine is essential because many people are not aware that ketamine therapy is an option despite significant published research. To read public comments submitted for this case, please click here . 8. Oversight Board analysis The Board selected this Meta-referred case as an opportunity to examine and clarify Meta’s policy on Restricted Goods and Services in the context of the legalization and normalization of certain drugs, specifically for medical uses. After further review, the Board found that the case also raised important issues involving “paid partnership” content relating to the promotion of pharmaceuticals. The Board examined whether this content should be removed by analyzing Meta’s content policies, which includes the company’s Branded Content policies, the Instagram Community Guidelines and Facebook Community Standards, in addition to Meta’s values and human rights responsibilities. 8.1 Compliance with Meta’s content policies I. Content rules The Board finds that the content in this case violates Meta’s Branded Content policies, which apply if the content is part of a “paid partnership.” The Board also finds that this content would violate the Restricted Goods and Services Community Standard, even if it were not part of a “paid partnership.” Branded Content policies As the content in this case was posted as part of a “paid partnership,” the Branded Content policies should have been applied. 
The Board is concerned that Meta did not describe this aspect of the case as part of its referral or initial submissions. Instead, the Board only received information about this dimension of the case after it elicited details about the paid nature of the post through rounds of questioning. The Board appreciates Meta’s engagement with those questions, and welcomes the opportunity to address managed partners’ use of Instagram to engage in paid promotion of medical treatments. As mentioned above under Section 2, the special challenges posed by paid content promoting drugs have drawn attention in both medical and legal circles. The Board expresses interest in further cases on these topics, and asks Meta to share all relevant information about cases under consideration for selection by the Board, including information on the Branded Content policies and/or business partners when relevant. Meta’s Branded Content policies state that “certain goods, services, or brands may not be promoted with branded content.” The policies list “drugs and drug-related products, including illegal or recreational drugs” as prohibited goods. The content in this case clearly promoted the use of ketamine. While the experience and treatment the user described appear lawful in the United States, Meta’s policies are clear that such content cannot be promoted on Instagram through a paid partnership. The status of ketamine as a “pharmaceutical” drug or “non-medical” drug in this context is not relevant for the purpose of the Branded Content policies. The removal of the content should therefore have occurred without Meta needing to grapple, in this particular case, with the tensions in its Restricted Goods and Services Community Standard. The Branded Content policies note that some categories of content, including those promoting “pharmacies” or “prescription drugs,” require the business partner sponsoring the content to be “authorized” by Meta to promote their services. This carve-out applies only in a small group of jurisdictions, including the United States, and only online pharmacies, telehealth providers and pharmaceutical manufacturers may apply for such authorization. In this case, Meta confirmed that the business partner sponsoring the content did not have this authorization. Therefore, the post is violating, since it does not fall within the limited exceptions to the general ban on promoting “pharmacies” or “prescription drugs” under the Branded Content policies. Restricted Goods and Services Community Standard As Meta acknowledges, the Restricted Goods and Services Community Standard contains a tension. On the one hand, it permits promotion of pharmaceutical drugs. On the other hand, it prohibits the promotion of drugs used to produce a “high” or altered mental state. When a pharmaceutical drug causes a “high,” these standards cut in opposite directions. In this case, leaving aside the issues surrounding the creation of the content as part of a paid partnership, Meta submitted that its value of “Voice,” combined with a low likelihood of harm, favors treating this case as permitted discussion of pharmaceutical drugs. The Board finds that the tension in these circumstances would best be resolved with reference to a “supervised medical setting” as per Meta’s internal guidelines to content reviewers.
Meta’s Restricted Goods and Services Community Standard defines pharmaceutical drugs as drugs that “require a prescription or medical professionals to administer.” It defines “non-medical drugs” as “drugs or substances that are not being used for an intended medical purpose or are used to achieve a high.” According to these definitions, the supervision of the drug usage by the medical profession, either by prescribing it or administering it onsite, is the key distinction between these two kinds of drugs. To be sure, the use of the disjunctive “or” in the definition of “non-medical drugs” means that any substance that can be used to achieve a “high” will be characterized as a “non-medical drug,” even if it is also a “pharmaceutical drug.” Yet the Board finds that the categories of “pharmaceutical” and “non-medical” drugs were meant to be conceptually distinct, and are distinguished by the supervision of the medical profession. The logical consequence of this view is that drugs that can be used to achieve a “high” should still be deemed “pharmaceutical drugs” if their use is supervised by the medical profession. Meta should resolve the conflict within the Community Standard on Restricted Goods and Services by amending it in line with this decision. The Standard should more expressly allow unpaid content admitting to the use of drugs that create a “high,” so long as such drugs are administered under medical supervision. The Standard should explain that medical supervision can be demonstrated by indicators such as a direct mention of a medical diagnosis, a reference to the health service provider’s license, or a reference to medical staff. Applying that standard to this content, the Board finds that the content should be taken down. This would be consistent with Meta’s internal guidance to reviewers, which only allows content in which a user admits to using or promotes the use of pharmaceutical drugs in a supervised medical setting, and with guidance to treat drugs that provide “highs” or an “altered mental state” as non-medical drugs. In its submissions to the Board, Meta claimed that the user in this case described their experience with “medically administered ketamine.” The Board, however, disagrees with Meta, as it does not find sufficient indicators in the post to confirm that the use of ketamine in this case occurred under medical supervision, i.e., that the drug was administered by a health professional. Specifically, there were not sufficient indicators in the post itself that the user had received a medical diagnosis of depression, that the office was a licensed clinic for the administration of ketamine as a treatment for depression, or that the treatment was conducted by medical professionals (there were no direct references to “doctors,” “nurses” or “psychiatrists,” only to “staff”). The Board believes it would be important for Meta to provide this additional guidance to content reviewers enforcing Meta’s Restricted Goods and Services Community Standard. II. Enforcement According to Meta, the automation tool assessed the content on January 15, 2023, after a third user report, and determined that it was in violation of the Restricted Goods and Services Community Standard, making its assessment “[based] on previous enforcement actions on this content.” Meta said that the automation in question in this case is a “Restricted & Regulated Goods” classifier. Machine learning classifiers are trained to identify violations of Meta’s Community Standards.
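By way of illustration only, the following minimal sketch shows the general mechanism at issue in this enforcement discussion: a classifier assigns a confidence score to a piece of content, the content is actioned automatically only when that score clears a removal threshold, and appeal outcomes reach the model only when the next training set is assembled. The sketch is written in Python; every name, score, threshold and function in it is a hypothetical placeholder and does not describe Meta’s actual systems, data or code.

# Hypothetical sketch of threshold-based automated enforcement.
# All names, scores and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def classifier_score(post: Post) -> float:
    # Stand-in for a trained restricted-goods classifier; a real system
    # would return a model-derived probability rather than a constant.
    return 0.72

def enforce(post: Post, removal_threshold: float = 0.80) -> str:
    # Content is removed automatically only when the classifier's
    # confidence clears the threshold. Lowering the threshold removes
    # more borderline content and increases mistaken removals.
    return "remove" if classifier_score(post) >= removal_threshold else "keep"

def next_training_set(previous_labels: dict, appeal_outcomes: dict) -> dict:
    # Appeal outcomes (post_id -> corrected label) are folded into the
    # labels used for the next retraining cycle; until that retraining
    # happens, the live classifier never sees the corrections.
    corrected = dict(previous_labels)
    corrected.update(appeal_outcomes)
    return corrected

if __name__ == "__main__":
    post = Post("123", "describes a ketamine treatment session")
    print(enforce(post))                          # "keep" at the default threshold
    print(enforce(post, removal_threshold=0.70))  # "remove" once the threshold is lowered

The point of the sketch is the lag it makes visible: a successful appeal changes the training labels only when the dataset is next rebuilt, which is why the retraining delay discussed immediately below matters.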
Meta explained to the Board that its “Restricted & Regulated Goods” classifiers are retrained every six months using “the latest training dataset which takes into account appeal outcomes.” In this case, Meta’s classifiers had not yet been retrained with the appeal outcomes, which assessed the case content as non-violating. For that reason, the successful appeals did not factor into the automation’s decision to remove the content whereas, according to Meta, the earlier takedown decisions did. The Board notes with concern the six-month delay and urges Meta to ensure its automated processes take account of successful appeals as quickly as possible while maintaining the integrity of datasets. Even though automation ultimately made a decision in line with the Board’s analysis of the application of the Restricted Goods and Services Community Standard, that decision was not in accordance with Meta’s own interpretation of the policy at the time the decision was made. When responding to the Board’s questions about the applicability of Meta’s Branded Content policies, the company acknowledged that not all content with a “paid partnership” label is reviewed against those policies, and that in fact at scale human moderators cannot even see this label or reroute content to the specialized team in charge of enforcing the Branded Content policies. Meta explained that content is not assessed under the Branded Content policies when a “paid partnership” label attached to it has not been previously reviewed and approved by the brand partner. The Board urges Meta to ensure that its enforcement processes equip automated and at scale human moderators to review content against all relevant policies, including Meta’s Branded Content policies where applicable. This case concerns a failure to enforce the Branded Content policies with regard to paid content promoting ketamine. However, more broadly, the Board notes that this case appears to be an instance of under-enforcement of Meta’s drug policies. A recent investigation by the Wall Street Journal, based on a review of ads over a four-week period in late 2022, discovered “more than 2,100 ads on Facebook and Instagram that described benefits of prescription drugs without citing risks, promoted drugs for unapproved uses or featured testimonials without disclosing whether they came from actors or company employees.” The comment received by the Board from the National Association of Boards of Pharmacy (NABP) also notes that violations of Meta’s Restricted Goods and Services Community Standard on Meta’s platforms may be common. In this case, the NABP pointed out that ketamine, “clearly marketed for recreational use, remains widely available for sale on Instagram.” Nor is this the first time the NABP has raised this concern. In the 2021 Asking for Adderall case, the NABP pointed out that “content attempting to sell [Adderall and Xanax] has remained on Facebook.” Finally, the U.S. Drug Enforcement Administration recently noted that drug cartels are using social media platforms to sell their goods. A recurring theme is that Meta should closely examine the enforcement of its policies with regard to the sale or paid promotion of drugs. III.
Transparency In seeking to understand whether the content in this case was branded, the Board asked Meta several clarifying questions, through which it discovered that this content was part of a “paid partnership.” Meta explained that the “paid partnership” label indicates that “the post is branded content for which the creator has been compensated, either with money or something else of value, by a business partner. Creators must tag the relevant brand or business partner when posting branded content, whether posting from a creator, business or personal account.” However, after further questioning, Meta clarified that the presence of a “paid partnership” label does not indicate that the tagged business partner necessarily approved the label, because they “may provide certain creators account-level permissions to tag them in branded content (eliminating the need to approve tags for each post).” This may lead to confusion for users. Meta pointed the Board to a Meta Business Help Center article on this topic, but finding this article from Instagram’s policies requires several steps, and the explanation of how to use the “paid partnership” label in the Instagram Help Center implies that all labels are approved. The Board recommends that Meta clarify the meaning of the “paid partnership” label throughout its Transparency Center, Help Center articles and other spaces where Meta’s policies are explained to users in clear, understandable language. 8.2 Compliance with Meta’s human rights responsibilities The Board found that Meta’s strong restrictions on branded content promoting drugs and on content attempting to buy, sell, trade, co-ordinate the trade of, donate, gift or ask for non-medical drugs are compatible with the company’s human rights responsibilities to “avoid causing or contributing to human rights impacts” and to “seek to prevent or to mitigate adverse human rights impacts” under the UNGPs (Principle 13). This is especially pertinent given the risk that posts like the one under analysis pose to the right to health and the right to information about health-related matters. In the analysis below, the Board assesses this speech restriction in light of Meta’s responsibility to protect freedom of expression (ICCPR, Article 19). Freedom of expression (Article 19 ICCPR) Article 19, para. 2 of the ICCPR provides broad protection for expression. This right includes “freedom to seek, receive and impart information and ideas of all kinds.” The Human Rights Committee, in General Comment No. 34, lists specific forms of expression included under Article 19, and notes that the right to freedom of expression “may also include commercial advertising” (para. 11, emphasis added). The Board finds that the paid nature of the content in this case makes it analogous to advertising, and that Meta should consider respect for paid content, including both advertising and branded content, as part of its human rights responsibilities. Article 21 of the CRPD specifies freedom of expression protections for persons with disabilities, who according to Article 1 include “those who have long-term physical, mental, intellectual or sensory impairments, which in interaction with various barriers, may hinder their full and effective participation in society on an equal basis with others.” The CRPD ensures that they can exercise this freedom “on an equal basis with others and through all forms of communication of their choice” (Article 21, CRPD).
The UN Committee on Economic, Social and Cultural Rights makes clear that “access to health-related education and information” is a critical part of the right to health enshrined in Article 12 of the ICESCR (General Comment No. 14, para. 11). This is particularly important in the context of increasing rates of depression and other mental health conditions worldwide. As the Board has noted in prior decisions, social media companies should respect freedom of expression around pharmaceutical and non-medical drugs (see Sri Lanka Pharmaceuticals; Asking for Adderall®; Ayahuasca Brew). Where restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test,” and also apply to restrictions on commercial speech or advertising. The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and in relation to Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, para. 41). I. Legality (clarity and accessibility of the rules) The principle of legality under international human rights law requires rules limiting expression to be clear and publicly accessible (General Comment No. 34, para. 25). Rules restricting expression “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not” (Ibid.). Applied to rules that govern online speech, the UN Special Rapporteur on freedom of expression has said they should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance regarding their enforcement. Branded Content policies The Board finds that Meta’s Branded Content policies are sufficiently clear and accessible to users who wish to participate in “paid partnerships,” enabling them to understand the conditions under which this is permitted. It is clear to the Board that the prohibition on branded content for “drugs and drug-related products” would encompass services where drugs are administered. Given the proliferation of ketamine treatment for depression, clarity could be aided by specifying that drug-based treatments and therapies are prohibited from “paid partnership” content. The policy also makes it clear that prescription drugs and pharmacies may be part of paid partnership content only if the “business partner is authorized to promote their services.” The listing is narrow and clear, though the relationship between the general rule and this apparent exception could be better explained. The Board notes that two versions of the Branded Content policies appear online, one in Meta’s “Business Help Center,” which seems to apply to both Facebook and Instagram (linked from the Transparency Center listing of content policies), and a second on Instagram’s help page, which seems to apply to Instagram.
While these rules appear to be consistent, removing this duplication would further aid clarity. The Board is deeply concerned that at scale content reviewers, when assessing “paid partnership” content, are not able to see that it is part of a “paid partnership.” For this reason, reviewers are not able to determine whether a piece of content requires a Branded Content policies assessment, in addition to a Community Standards-based one. This approach makes it far more likely for Meta’s Branded Content policies to be under-enforced. The several-month saga the content creator in this case experienced might have been avoided had the content been properly reviewed against the Branded Content policies when first reported in late December 2022. All branded content in the context of promoting pharmaceuticals should be proactively assessed prior to or closely following posting. Restricted Goods and Services policy The Board finds that the definitions of “non-medical drugs” and “pharmaceutical drugs” adopted by the Restricted Goods and Services policy do not meet the legality requirement. As outlined above, two rules appear to contradict each other when applied to medically supervised use of prescription drugs where those drugs may create a “high” or “altered mental state.” The rules on “pharmaceutical drugs” appear to permit such content, whereas the rules on “non-medical drugs” seem to prohibit the same content. The Board finds that the rules are unclear and that reviewers need better guidance. Clear rules are primarily important for the people whose speech may be restricted, but they are also important for those who must impose the rules. Reviewers, who must reach their decisions swiftly, must be given rules they can apply with confidence. The Board is also deeply concerned about the possibility of inconsistent enforcement of Meta’s Restricted Goods and Services Community Standard, which generally prohibits “attempts to buy, sell or trade” non-medical or pharmaceutical drugs. As the Board highlighted in its Asking for Adderall case, when violating content is left online “inconsistency in enforcement could result in confusion as to what is permitted on Facebook.” II. Legitimate aim Under Article 19, para. 3 of the ICCPR, speech may be restricted for a defined and limited list of reasons. In this case, the Board finds that both the Branded Content policies and Restricted Goods and Services Community Standard policy lines on the promotion of non-medical drugs and on the attempts to buy, sell, or trade non-medical and pharmaceutical drugs serve the legitimate aim of protecting public health. They also protect the rights of others, including the right to health and the right to information about health-related matters (see Sri Lanka pharmaceuticals case). III. Necessity and proportionality The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” ( General Comment No. 34 , para. 34 ). As explained under Section 2 above, instances of depression have increased worldwide. Partially in response, the use of ketamine to treat depression is also on the rise. While promising, these treatments are still nascent. Moreover, the abuse of ketamine for recreational purposes also appears to be on the upswing. 
In this context, removing the content is a necessary and proportionate limitation on expression in order to protect public health and people’s rights to health and to information about health-related matters. Branded Content policies The Board finds the WHO’s ethical criteria for medicinal drug promotion to be illuminating. These criteria state that to “fight drug addiction and dependency,” drugs (in particular narcotic and psychotropic drugs) “should not be advertised to the general public” (para. 14). While they predate social media by decades, these criteria appear even more relevant today. Additional context from the United States helped the Board reflect on the necessity and proportionality of the Branded Content policies’ ban on the promotion of drugs. According to a Wall Street Journal report, two telehealth companies that used to advertise extensively on Meta platforms both face U.S. Department of Justice investigations “after the Journal reported that some clinicians felt pressured to prescribe stimulants” and that some patients and employees “said their marketing practices contributed to the abuse of controlled substances.” Therefore, the Board finds that restricting “paid partnership” promotion of ketamine therapy to address the risk of promoting recreational ketamine use amongst Facebook and Instagram users is necessary. Paid partnership content involving health information, especially as it relates to drugs that can easily be abused, has the potential to undermine the user’s right to access health information and their right to health. These risks are heightened when, on social media, influencers are offered significant incentives to provide companies with access to wide audiences who may be in vulnerable health situations. Where paid promotion touches on either necessary medical treatment or illicit recreational uses, Meta has a responsibility to recognize the potential of its platform for abuse. In the Board’s view, commercial speech promoting a particular drug or service should be distinguished from non-commercial speech. Restrictions that may be considered disproportionate for unpaid content discussing “pharmaceutical” and “non-medical” drugs may be proportionate when applied to paid content promoting the same products or services. The Board considered whether it would be more proportionate to limit this kind of paid content through less intrusive means, such as restricting views to users above a certain age, as Meta does for alcohol and tobacco (substances that also alter individuals’ mental states). However, the risks associated with this kind of content are not restricted to young audiences. Adults may also be susceptible to “patient influencer” testimonials, particularly when they glamorize certain medical treatments that may not be appropriate for all people, and lack appropriate safety warnings or minimize risks. In keeping with the WHO’s ethical criteria, the Board finds that Meta’s strong restrictions on “paid partnerships” for the content in this case are proportionate. While the Board finds that the post should not have been permitted as a “paid partnership,” it also has concerns about the treatment of such partnerships in the narrow set of circumstances where content can promote pharmaceuticals or prescription drugs. Specifically, the Board notes that all a “paid partnership” label needs to do in this context is to disclose an economic relationship.
An influencer may apply the same label to a post promoting a new restaurant as they would to a new or experimental medical treatment. In the latter scenario, the Board is concerned about the lack of prominence of these labels, and the lack of any tailored information to either highlight risks or point to additional resources related to those risks. The Board notes that the approach to “paid partnership” labels is in stark contrast to Meta’s approach to “inform treatments” on certain categories of health misinformation. Those labels link to further resources from fact-checkers or public health authorities, for example (see Policy Advisory Opinion on COVID-19 Misinformation). The Board further notes that allowing users to like and comment on such posts may place them at risk of being targeted on Meta’s platforms by persons marketing illicit ketamine or other substances. Some of these individuals may be living with depression and/or have limited access to effective treatments. As such, they may be especially vulnerable to this exploitation. Restricted Goods and Services Community Standard The Board finds that its reading of the Community Standard places a restriction on speech that is necessary and proportionate to its aim of preventing drug abuse. The Board distinguishes this case from its previous Asking for Adderall case, where the Board found no direct or immediate connection between the content and the possibility of harm. In that case, the user simply wanted advice about how to communicate with their doctor about a treatment, and had no intention of selling, illegally obtaining or promoting Adderall. Conversely, in this case, the user is actively seeking to promote the use of ketamine without emphasizing the need for medical supervision, which creates substantial risks to users’ safety, especially when aggregated with similar pieces of content at scale. Meta’s Restricted Goods and Services Community Standard should permit content describing the use of ketamine, but only when that use occurs under medical supervision. Unlike Meta, the Board did not find sufficient indicators in the body of the post in this case to confirm that the use of ketamine occurred under medical supervision. The Board considered whether that restriction should be more permissive, allowing for content describing use under “therapeutic” supervision. It took note of the review cited by Meta reporting that there were no cases of overdose or death that arose from the use of ketamine as an antidepressant in a “therapeutic setting” in the United States. The Board rejects that more permissive position, however, for several reasons. First, it observes that there is evidence that illicit use of ketamine is on the rise, making the status quo in 2022 an unreliable baseline for thinking about how to combat abuse. Second, it notes that the elasticity of the word “therapeutic” would make it difficult for reviewers to enforce the policy. Finally, the Board gives weight to the reliance on the medical profession implicit in Meta’s definitions of “pharmaceutical” and “non-medical” drugs, as explained under Section 8.1 above. The Board also considered whether the “medical supervision” restriction conflicted with the Board’s previous decision in the Ayahuasca Brew case. In that decision, the Board recommended that Meta change its rules to allow users to discuss the traditional or religious uses of non-medical drugs in a positive way. The Board did not require that the use of the drug occur under medical supervision.
The Board finds that the “medical supervision” restriction is consistent with the analysis in the Ayahuasca Brew case. Traditional and religious uses of a drug usually have a history behind them that operates as its own safeguard against harm. According to experts, “regarding traditional ethnobotanicals, safety and efficacy are demonstrated by the long history of use.” Moreover, as the Board intimated in the Ayahuasca Brew case, these rituals have a dignitary aspect because of their connection to the spiritual and traditional identity of certain communities. 9. Oversight Board decision The Oversight Board overturns Meta’s decision to leave up this paid partnership content, requiring the post to be removed. 10. Recommendations Content policy 1. Meta should clarify the meaning of the “paid partnership” labels in its Transparency Center and Instagram’s Help Center. That includes explaining the role of business partners in the approval of “paid partnership” labels. The Board will consider this recommendation implemented when Meta’s Branded Content policies are updated to reflect these clarifications. 2. Meta should clarify in the language of the Restricted Goods and Services Community Standard that content that “admits to using or promotes the use of pharmaceutical drugs” is allowed even where that use may result in a “high,” in the context of a “supervised medical setting.” Meta should also define what a “supervised medical setting” is and explain under the Restricted Goods and Services Community Standard that medical supervision can be demonstrated by indicators such as a direct mention of a medical diagnosis, a reference to the health service provider’s license, or a reference to medical staff. The Board will consider this recommendation implemented when Meta’s Restricted Goods and Services Community Standard has been updated to reflect these clarifications. Enforcement 3. Meta should improve its review process to ensure that content created as part of a “paid partnership” is properly reviewed against all applicable policies (i.e., Community Standards and Branded Content policies), given that Meta does not currently review all branded content under the Branded Content policies. In particular, Meta should establish a pathway for at scale content reviewers to route content potentially violating the Branded Content policies to Meta’s specialized teams or automated systems that are able and trained to apply Meta’s Branded Content policies when implicated. The Board will consider this implemented when Meta shares its improved review routing logic, showing how it allows for all relevant platform/content policies to be applied when there is a high likelihood of potential violation of any of the aforementioned policies. 4. Meta should audit the enforcement of policy lines from its Branded Content policies (“we prohibit the promotion of the following [...] 4. Drugs and drug-related products, including illegal or recreational drugs”) and Restricted Goods and Services Community Standard (“do not post content that attempts to buy, sell, trade, co-ordinate the trade of, donate, gift, or asks for non-medical drugs”). The Board finds that Meta has clear and defensible approaches that impose strong restrictions on the paid promotion of drugs (under its Branded Content policies) and attempts to buy, sell or trade drugs (under its Restricted Goods and Services Community Standard). However, the Board finds some indication that these policies could be inconsistently enforced.
To clarify whether this is indeed the case, Meta should engage in an audit of how its Branded Content policies and its Restricted Goods and Services Community Standard are being enforced with regard to pharmaceutical and non-medical drugs. It should then close any gaps in enforcement. The Board will consider this implemented when Meta shares the methodology and results of this audit and discloses how it will close any gaps in enforcement revealed by that audit. *Procedural note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis. Return to Case Decisions and Policy Advisory Opinions" ig-wbzenhg7,Statement About the Chinese Communist Party,https://www.oversightboard.com/decision/ig-wbzenhg7/,"August 1, 2024",2024,,"Freedom of expression, Governments, Protests",Violence and incitement,Overturned,China,A user appealed Meta’s decision to remove an Instagram comment calling for the “death” of the Chinese Communist Party.,6115,949,"Overturned August 1, 2024 A user appealed Meta’s decision to remove an Instagram comment calling for the “death” of the Chinese Communist Party. Summary Topic Freedom of expression, Governments, Protests Community Standard Violence and incitement Location China Platform Instagram Summary decisions examine cases in which Meta has reversed its original decision on a piece of content after the Board brought it to the company’s attention and include information about Meta’s acknowledged errors. They are approved by a Board Member panel, rather than the full Board, do not involve public comments and do not have precedential value for the Board. Summary decisions directly bring about changes to Meta’s decisions, providing transparency on these corrections, while identifying where Meta could improve its enforcement. Summary A user appealed Meta’s decision to remove an Instagram comment calling for the “death” of the Chinese Communist Party. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post. About the Case In March 2024, an Instagram user posted a comment saying, “Death to the Chinese Communist Party!” followed by skull emojis. This was in response to a post from a news outlet’s account, featuring a video of Wang Wenbin, a former spokesperson for China’s Ministry of Foreign Affairs, condemning the passing of a bill in the United States House of Representatives that could impact TikTok’s presence in the country. Meta initially removed the user’s comment from Instagram under its Violence and Incitement Community Standard, which prohibits “threats of violence.” The company explained that the prohibition includes “certain calls for death if they contain a target and method of violence.” When the Board brought this case to Meta’s attention, the company determined that removal of the comment was incorrect and it restored the content to Instagram.
Meta explained that, under its internal guidelines to content reviewers, calls for the death of an institution like the Chinese Communist Party are treated as non-violating. Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1). When Meta acknowledges that it made an error and reverses its decision on a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, reduce errors and increase fairness for Facebook, Instagram and Threads users. Significance of Case This case highlights an inconsistency in how Meta enforces its Violence and Incitement policy against metaphorical or figurative statements in a political context, which can disproportionately impact political speech that is critical of states as well as governmental institutions. The case underlines the importance of Meta taking into consideration the target of the speech (in this case, a political party), as well as people’s use of hyperbolic, rhetorical, ironic and satirical speech to criticize institutions, when designing its moderation systems. On rhetorical discourse, the Board in the Russian Poem case observed that excerpts with violent language in the poem (“Kill him!”) may be read as “describing, not encouraging, a state of mind.” The Board determined that the language was employed as a rhetorical device to convey the user’s message and that, as a result, that part of the content was permitted by Meta’s internal guidelines on its Violence and Incitement policy. Although it addresses a different Community Standard (Hate Speech) from the one at issue in this case (Violence and Incitement), the Myanmar Bot decision is relevant because it also concerns speech directed at states or political institutions. There, the Board concluded that since the profanity in the post did not target people based on race, ethnicity or national origin, but rather a state, it did not violate the Hate Speech Community Standard. The Board emphasized: “It is crucial to ensure that prohibitions on targeting people based on protected characteristics not be construed in a manner that shields governments or institutions from criticism.” The Board has previously urged Meta to put in place adequate procedures for evaluating content in its relevant context (“Two Buttons” Meme, recommendation no. 3). It has also recommended: “To better inform users of the types of statements that are prohibited, Meta should amend the Violence and Incitement Community Standard to (i) explain that rhetorical threats like “death to X” statements are generally permitted, except when the target of the threat is a high-risk person…” (Iran Protest Slogan, recommendation no. 1); and “Meta should err on the side of issuing scaled allowances where (i) this is not likely to lead to violence; (ii) when potentially violating content is used in protest contexts; and (iii) where public interest is high” (Iran Protest Slogan, recommendation no. 2). Meta reported implementation of the “Two Buttons” Meme recommendation and recommendation no. 2 from the Iran Protest Slogan decision, but did not publish information to demonstrate this. For recommendation no.
1 from Iran Protest Slogan, in its Q4 2023 Quarterly Update on the Board, Meta stated: “We have updated our Violence and Incitement Community Standards by providing further details about what constitutes a ‘threat’ and distinguishing our enforcement based on target. As part of this work, we also updated internal guidance.” The Board believes that full implementation of these recommendations could contribute to decreasing the number of enforcement errors under the Violence and Incitement policy. Decision The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention. Return to Case Decisions and Policy Advisory Opinions" ig-wuc3649n,Al-Shifa Hospital,https://www.oversightboard.com/decision/ig-wuc3649n/,"December 19, 2023",2023,December,"Safety, Violence, War and conflict",Violent and graphic content,Overturned,"Israel, Palestinian Territories","The Board overturns Meta’s original decision to remove the content from Instagram. It finds that restoring the content to the platform, with a “mark as disturbing” warning screen, is consistent with Meta’s content policies, values and human-rights responsibilities.",29297,4526,"Overturned December 19, 2023 The Board overturns Meta’s original decision to remove the content from Instagram. It finds that restoring the content to the platform, with a “mark as disturbing” warning screen, is consistent with Meta’s content policies, values and human-rights responsibilities. Expedited Topic Safety, Violence, War and conflict Community Standard Violent and graphic content Location Israel, Palestinian Territories Platform Instagram Hebrew translation.pdf In the weeks following the publication of this decision, we will upload a translation in Hebrew here and an Arabic translation will become available through the ‘language’ tab accessed in the menu at the top of this screen. To read this decision in Hebrew, click here. 1. Summary This case involves an emotionally powerful video of the aftermath of a strike on or near Al-Shifa hospital in Gaza during Israel’s ground offensive, with a caption condemning the attack. Meta’s automated systems removed the post for violating its Violent and Graphic Content Community Standard. After unsuccessfully contesting this decision with Meta, the user appealed to the Oversight Board. After the Board identified the case for review, Meta reversed its decision and restored the content with a warning screen. The Board holds that the original decision to remove the content did not comply with Meta’s content policies or the company’s human-rights responsibilities. The Board approves the decision to restore the content with a warning screen but disapproves of the associated demotion of the content barring it from recommendations. This case and Hostages Kidnapped From Israel (2023-050-FB-UA) are the Board’s first cases decided under its expedited review procedures. 2. Context and Meta’s Response On October 7, 2023, Hamas, a designated Tier 1 organization under Meta’s Dangerous Organizations and Individuals Community Standard, led unprecedented terrorist attacks on Israel from Gaza that killed an estimated 1,200 people and resulted in roughly 240 people being taken hostage (Ministry of Foreign Affairs, Government of Israel). Israel immediately undertook a military campaign in Gaza in response to the attacks.
Israel’s military action has killed more than 18,000 people in Gaza as of mid-December 2023 (UN Office for the Coordination of Humanitarian Affairs, drawing on data from the Ministry of Health in Gaza), in a conflict where both sides have been accused of violating international law. Both the terrorist attacks and Israel’s subsequent military actions have been the subjects of intense worldwide publicity, debate, scrutiny, and controversy, much of which has taken place on social media platforms, including Instagram and Facebook. Meta immediately designated the events of October 7 a terrorist attack under its Dangerous Organizations and Individuals policy. Under its Community Standards, this means that Meta would remove any content on its platforms that “praises, substantively supports or represents” the October 7 attacks or the perpetrators of them. In reaction to an exceptional surge in violent and graphic content being posted to its platforms following the terrorist attacks and military response, Meta put in place several temporary measures, including a reduction of the confidence thresholds for its Graphic and Violent Content automatic classification system (classifier) to identify and remove content. Meta informed the Board that these measures applied to content originating in Israel and Gaza across all languages. The change meant that content was automatically removed even where the classifier had a lower confidence score that the content violated Meta’s policies. In other words, Meta used its automated tools more aggressively to remove content that might violate its policies. Meta did this to prioritize its value of safety, with more content removed than would have occurred under the higher confidence threshold in place prior to October 7. While this reduced the likelihood that Meta would fail to remove violating content that might otherwise evade detection or where capacity for human review was limited, it also increased the likelihood of Meta mistakenly removing non-violating content related to the conflict. When escalation teams assessed videos as violating its Violent and Graphic Content, Violence and Incitement and Dangerous Organizations and Individuals policies, Meta relied on Media Matching Service banks to automatically remove matching videos. This approach raised concerns about over-enforcement, including people facing restrictions on or suspensions of their accounts following multiple violations of Meta’s content policies (sometimes referred to as “Facebook jail”). To mitigate this concern, Meta withheld “strikes” that would ordinarily accompany content removals that occur automatically based on Media Matching Service banks (as Meta announced in its newsroom post). Meta’s changes to the classifier confidence threshold and its strike policy are limited to the Israel-Gaza conflict and intended to be temporary. As of December 11, 2023, Meta had not restored confidence thresholds to pre-October 7 levels. 3. Case Description The content in this case involves a video posted on Instagram in the second week of November, showing what appears to be the aftermath of a strike on or near Al-Shifa Hospital in Gaza City during Israel’s ground offensive in the north of the Gaza Strip. The Instagram post in this case shows people, including children, lying on the ground lifeless or injured and crying. One child appears to be dead, with a severe head injury.
A caption in Arabic and English below the video states that the hospital has been targeted by the “usurping occupation,” a reference to the Israeli army, and tags human rights and news organizations. Meta’s Violent and Graphic Content Community Standard, which applies to content on Facebook and Instagram, prohibits “[v]ideos of people or dead bodies in non-medical settings if they depict … [v]isible internal organs.” At the time of posting, the policy allowed “[i]magery that shows the violent death of a person or people by accident or murder,” provided that such content was placed behind a “mark as disturbing” warning screen and was only visible to people over the age of 18. This rule was updated on November 29, after the content in this case was restored, to clarify that the rule applies to the “moment of death or the aftermath” as well as imagery of “a person experiencing a life-threatening event.” Meta’s automated systems removed the content in this case for violating the Violent and Graphic Content Community Standard. The user’s appeal against that decision was automatically rejected because Meta’s classifiers indicated “a high confidence level” that the content was violating. The user then appealed Meta’s decision to the Oversight Board. Following the Board’s selection of this case, Meta said it could not conclusively determine that the video showed visible internal organs. Meta therefore concluded that it should not have removed this content, though it was on the “borderline” of violating. Meta further explained that even if internal organs had been visible, the post should have been kept up with a “mark as disturbing” warning screen as it was shared to raise awareness. The company reiterated that, in line with the Graphic and Violent Content policy rationale, such content is permitted when shared to raise awareness “about important issues such as human-rights abuses, armed conflicts or acts of terrorism.” Meta therefore reversed its original decision and restored the content with a warning screen. The warning screen tells users that the content may be disturbing. Adult users can click through to see the post, but Meta removes these posts from the feeds of Instagram users under 18 and also removes them from recommendations to adult Instagram users. Meta also added a separate instance of the same video to a Media Matching Service bank, so other videos identical to this one would be automatically kept up with a warning screen and would only be visible to people over the age of 18. 4. Justification for Expedited Review The Oversight Board’s Bylaws provide for expedited review in “exceptional circumstances, including when content could result in urgent real-world consequences,” and decisions are binding on Meta (Charter, Art. 3, section 7.2; Bylaws, Art. 2, section 2.1.2). The expedited process precludes the level of extensive research, external consultation or public comments that would be undertaken in cases reviewed on ordinary timelines. The case is decided on the information available to the Board at the time of deliberation and is decided by a five-member panel without a full vote of the Board. The Oversight Board selected this case and one other case, Hostages Kidnapped From Israel (2023-050-FB-UA), because of the importance of freedom of expression in conflict situations, which has been imperiled in the context of the Israel-Hamas conflict. 
Both cases are representative of the types of appeals users in the region have been submitting to the Board since the October 7 attacks and Israel’s subsequent military action. Both cases fall within the Oversight Board’s crisis and conflict situations priority. Meta’s decisions in both cases meet the standard of “urgent real-world consequences” to justify expedited review, and accordingly the Board and Meta agreed to proceed under the Board’s expedited procedures. In its submissions to the Board, Meta recognized that “the decision on how to treat this content is difficult and involves competing values and trade-offs,” welcoming the Board’s input on this issue. 5. User Submissions The author of the post stated in their appeal to the Board that they did not incite any violence, but shared content showing the suffering of Palestinians, including children. The user added that the removal was biased against the suffering of the Palestinians. The user was notified of the Board’s review of their appeal. 6. Decision While members of the Board have disagreements about Israel’s military response and its justification, they unanimously agree on the importance of Meta respecting the right to freedom of expression and other human rights of all those impacted by these events, and their ability to communicate in this crisis. The Board overturns Meta’s original decision to remove the content from Instagram. It finds that restoring the content to the platform, with a “mark as disturbing” warning screen, is consistent with Meta’s content policies, values and human-rights responsibilities. However, the Board also concludes that Meta’s demotion of the restored content, in the form of its exclusion from the possibility of being recommended, does not accord with the company’s responsibilities to respect freedom of expression. 6.1 Compliance With Meta’s Content Policies The Board agrees with Meta that it is difficult to determine whether the video in this case shows “[v]isible internal organs.” Given the context of this case, where there is exceptionally high public interest value in protecting access to information and providing avenues for raising awareness of the impact of the conflict, content that is “on the borderline” of violating the Violent and Graphic Content policy should not be removed. As the content includes imagery that shows a person’s violent death, depicting a bloody head injury, Meta should have applied a warning screen and made it available only to people over the age of 18 in line with its policies. The Board also agrees with Meta’s subsequent determination that even if this video had included visible internal organs, the post’s language condemning or raising awareness of the violence also means that it should have been left up with a “mark as disturbing” warning screen, and not be available to users under 18. The Community Standard does not provide for warning screens in relation to the applicable policy line (“[v]ideos of people or dead bodies in a non-medical setting if they depict […] [v]isible internal organs”). In the Sudan Graphic Video case, the Board explained that Meta instructs reviewers to follow the letter of its “do not post” policies.
The rationale states that “[i]n the context of discussions about important issues such as human-rights abuses, armed conflicts or acts of terrorism, we allow graphic content (with some limitations) to help people to condemn and raise awareness about these situations.” The Community Standard rule, however, prohibits all videos depicting “visible internal organs” in a non-medical context, without providing reviewers the option of adding a warning screen where the policy rationale exception is engaged. Meta’s automated systems do not appear to be configured to apply warning screens to videos depicting graphic content where there is context condemning or raising awareness of the violence. It is also not clear that where this context is present, the applicable classifiers would be able to send the content to human reviewers for further assessment. 6.2 Compliance With Meta’s Human-Rights Responsibilities In line with its human-rights responsibilities, Meta’s moderation of violent and graphic content must respect the right to freedom of expression, which includes freedom to seek, receive and impart information (Art. 19, para. 2, International Covenant on Civil and Political Rights (ICCPR)). As the Board stated in the Armenian Prisoners of War Video case, the protections for freedom of expression under Article 19 of the ICCPR “remain engaged during armed conflicts, and should continue to inform Meta’s human rights responsibilities, alongside the mutually reinforcing and complementary rules of international humanitarian law that apply during such conflicts.” The UN Guiding Principles on Business and Human Rights impose a heightened responsibility on businesses operating in a conflict setting (“Business, human rights and conflict-affected regions: towards heightened action,” A/75/212). The Board has emphasized in previous cases that social media platforms like Facebook and Instagram are an important vehicle for transmitting information in real time about violent events, including news reporting (see e.g. Mention of the Taliban in News Reporting). They play an especially important role in contexts of armed conflict, especially where there is limited access for journalists. Furthermore, content depicting violent attacks and human rights abuses is of great public interest (see Sudan Graphic Video). When restrictions on expression are imposed by a state, under international human rights law they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human-rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. In doing so, the Board attempts to be sensitive to how those rights may be different as applied to a private social media company than as applied to a government. Nonetheless, as the UN Special Rapporteur on freedom of expression has stated, while companies do not have the obligations of governments, “their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (report A/74/486, para. 41). Legality requires that any restriction on freedom of expression should be accessible and clear enough to provide guidance as to what is permitted and what is not.
The Board has previously expressed concern that the rules of the Violent and Graphic Content Community Standard do not align fully with the rationale of the policy, which sets out the aims of the policy (see Sudan Graphic Video and Video After Nigeria Church Attack). The Board reiterates the importance of recommendations no. 1 and no. 2 in the Sudan Graphic Video case, which called on Meta to amend its Violent and Graphic Content Community Standard to allow videos of people or dead bodies when shared for the purpose of raising awareness of or documenting human-rights abuses (that case concerned visible dismemberment). Meta has conducted a policy development process in response to these recommendations and intends to report on its progress in its next quarterly update to the Board. In the Board’s view, this recommendation should apply to the rules for videos showing visible internal organs, and specifically provide for warning screens as an enforcement measure where the raising awareness (including factual reporting) and condemnation exception is engaged. Under Article 19, para. 3 of the ICCPR, expression may be restricted for a defined and limited list of reasons. The Board has previously found that the Violent and Graphic Content policy legitimately aims to protect the rights of others, including the privacy of the depicted individual (see Sudan Graphic Video and Video After Nigeria Church Attack). The present case demonstrates, additionally, that restricting access to the content for people under 18 served the legitimate aim of protecting the right to health of minors (Convention on the Rights of the Child, Article 24). The principle of necessity and proportionality provides that any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). The Board has previously found in relation to violent and graphic content that a warning screen “does not place an undue burden on those who wish to see the content while informing others about the nature of the content and allowing them to decide whether to see it or not” (see Sudan Graphic Video). Warning screens prevent users from unwillingly seeing potentially disturbing content. Victims’ rights are further protected by Meta’s policy to remove videos and photos that show the violent death of someone (or its immediate aftermath) when a family member requests this. The content in this case can be distinguished from that in the Russian Poem case, which showed a still image of a body lying on the ground at long range, where the face of the victim was not visible, and where there were no clear visual indicators of violence. Applying a warning screen in that case was inconsistent with Meta’s guidance to reviewers and not a necessary or proportionate restriction on expression. The content in this case is more similar to the content in the Video After Nigeria Church Attack decision, showing dead and injured people at close range, with very clear visual indicators of violence. In this case, the depiction of injured and lifeless children makes the video especially distressing. In circumstances like these, providing users with the choice of whether to see disturbing content is a necessary and proportionate measure (see also Armenian Prisoners of War Video).
The Board finds that excluding content raising awareness of potential human-rights abuses and violations of the laws of war, conflicts or acts of terrorism from recommendations reaching adults is not a necessary or proportionate restriction on freedom of expression, in view of the very high public interest in such content. Warning screens and removal from recommendations serve separate functions, and should in some instances be decoupled, in particular in crisis situations. Recommendations on Instagram are generated by automated systems that suggest content to users based on users’ predicted interests. Removing content from recommendation systems means reducing the reach that this content would otherwise get. The Board finds this practice interferes with freedom of expression in disproportionate ways in so far as it applies to content that is already limited to adult users and that is posted to raise awareness, condemn, or report on matters of public interest such as the development of a violent conflict. The Board recognizes that immediate responses to a crisis can require exceptional temporary measures, and that in some contexts it is legitimate to prioritize safety concerns and to temporarily and proportionally place greater restrictions on freedom of expression. Some of these are outlined, for example, in the commitments to counter “terrorist and violent extremist content” established in the Christchurch Call . However, the Board notes that the Christchurch Call emphasizes the need to respond to such content in a manner consistent with human rights and fundamental freedoms. The Board believes that safety concerns do not justify erring on the side of removing graphic content that has the purpose of raising awareness about or condemning potential war crimes, crimes against humanity, or grave violations of human rights. Such restrictions can even obstruct information necessary for the safety of people on the ground in those conflicts. Measures such as not imposing strikes do help to mitigate the potentially disproportionate adverse effects of enforcement errors due to emergency measures like reducing confidence thresholds for removal of content during conflict situations. They are, however, not sufficient to protect the ability of users to share content that raises awareness about potential human-rights abuses and violations of humanitarian law, and other critical information in conflict situations. The Board has repeatedly highlighted the need to develop a principled and transparent framework for content moderation during crises and in conflict zones (See Haitian Police Station Video and Tigray Communication Affairs Bureau ). It is precisely at times of rapidly changing conflict that large social media companies must devote the resources necessary to ensure that freedom of expression is not needlessly curtailed. At such times, journalistic sources are often subject to physical and other attacks, making news reporting by ordinary citizens on social media especially essential. The Board has also previously observed that in contexts of war or political unrest, there will be more graphic and violent content captured by users and shared on the platform for the purpose of raising awareness of or documenting abuses (See Sudan Graphic Video ). In contexts such as the Israel-Gaza conflict, where there is an alarming number of civilians killed or injured, a high proportion of children among them, amid a worsening humanitarian crisis, these kinds of allowances are especially important. 
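To illustrate the decoupling the Board describes, the following is a minimal sketch, in Python, of how a warning screen, an age restriction and recommendation eligibility could be treated as independent enforcement actions for graphic content shared to raise awareness or condemn violence. All names and the decision logic are hypothetical illustrations of the Board’s reasoning, not a description of Meta’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class GraphicContentItem:
    """Hypothetical representation of a post flagged under a graphic-content rule."""
    shows_internal_organs: bool
    medical_context: bool
    awareness_or_condemnation_context: bool  # e.g., documenting abuses, citizen reporting

@dataclass
class EnforcementOutcome:
    remove: bool
    warning_screen: bool
    age_restricted: bool           # hidden from users under 18
    recommendable_to_adults: bool  # still eligible for recommendation surfaces for adults

def enforce(item: GraphicContentItem) -> EnforcementOutcome:
    """Sketch of the decoupled outcome the Board proposes, not Meta's actual logic."""
    if not item.shows_internal_organs or item.medical_context:
        return EnforcementOutcome(remove=False, warning_screen=False,
                                  age_restricted=False, recommendable_to_adults=True)
    if item.awareness_or_condemnation_context:
        # Keep the content up behind a warning screen and limit it to adults,
        # but do not automatically exclude it from recommendations to adults.
        return EnforcementOutcome(remove=False, warning_screen=True,
                                  age_restricted=True, recommendable_to_adults=True)
    # Default rule in the Community Standard: remove in non-medical contexts.
    return EnforcementOutcome(remove=True, warning_screen=False,
                              age_restricted=False, recommendable_to_adults=False)
```

On a sketch like this, the video at issue would remain on the platform behind a warning screen, visible and recommendable only to adults, rather than being removed or demoted outright.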
While acknowledging Meta’s ongoing policy development process on its Violent and Graphic Content policy, the Board would expect Meta to be ready to rapidly deploy temporary measures to allow this kind of content with warning screens, and not remove it from recommendations. The Board notes that the situation in Gaza at the time this content was posted did not engage the same set of challenges for Meta as the October 7 attacks. In Gaza, there have been difficulties in attaining information from people on the ground, while journalist access to the territory is limited and Internet connectivity has been disrupted. Moreover, unlike the early aftermath of the October 7 attacks, the Gaza situation presented in this case did not involve terrorists using social media to broadcast their atrocities. In the context of armed conflict, by contrast, Meta should be ensuring that its actions are not making it more difficult for people to share content that provides information that raises awareness about harms against civilians, and may be relevant to determining whether violations of international humanitarian law and international human rights law have occurred. The question of whether content was shared to raise awareness of or condemn events on the ground should be the starting point for any reviewer assessing such content, and Meta’s automated systems should be designed to avoid incorrectly removing content that should benefit from applicable exceptions. This case further illustrates that insufficient human oversight of automated moderation in the context of a crisis response can lead to erroneous removal of speech that may be of significant public interest. Both the initial decision to remove this content as well as the rejection of the user’s appeal were taken automatically based on a classifier score, without any human review. This, in turn, may have been exacerbated by Meta’s crisis response of lowering the removal threshold of content under the Violent and Graphic Content policy following the October 7 attacks. This means that even if the classifier gives a relatively lower score to the likelihood of violation than would usually be required, Meta removes that content. For Meta to employ its automated systems in a manner compatible with its human-rights commitments, the Board reminds Meta of recommendation no. 1 in the Colombia Police Cartoon case. In that case, the Board called on Meta to ensure that content with high rates of appeal and high rates of successful appeal be reassessed for possible removal from its Media Matching Service banks. In response to this recommendation, Meta established a designated working group committed to governance improvements across its Media Matching Service banks (See Meta's most recent updates on this here ). The Board notes that it is important for this group to pay particular attention to the use of Media Matching Services in the context of armed conflicts. In the Breast Cancer Symptoms and Nudity case (recommendation no. 3 and no. 6), the Board recommended that Meta inform users when automation is used to take enforcement action against their content, and to disclose data on the number of automated removal decisions per Community Standard and the proportion of those decisions subsequently reversed following human review. This is particularly important when the confidence thresholds for content that is likely violating have reportedly been significantly lowered. The Board urges Meta to make further progress in the implementation of recommendation no. 
6 and share evidence of implementation with the Board for recommendation no. 3. Restrictions on freedom of expression must be non-discriminatory, including on the basis of nationality, ethnicity, religion or belief, or political or other opinion (Article 2, para. 1, and Article 26, ICCPR). Discriminatory enforcement of the Community Standards undermines this fundamental aspect of freedom of expression. In the Shared Al Jazeera Post case, the Board raised serious concerns that errors in Meta’s content moderation in Israel and the Occupied Palestinian Territories may be unequally distributed, and called for an independent investigation (Shared Al Jazeera Post decision, recommendation no. 3). The Business for Social Responsibility (BSR) Human Rights Impact Assessment, which Meta commissioned in response to that recommendation, identified “various instances of unintentional bias where Meta policy and practice, combined with broader external dynamics, does lead to different human-rights impacts on Palestinian and Arabic-speaking users.” The Board encourages Meta to deliver on commitments it made in response to the BSR report. Finally, Meta has a responsibility to preserve evidence of potential human-rights violations and violations of international humanitarian law, as also recommended in the BSR report (recommendation 21) and advocated by civil society groups. Even when content is removed from Meta’s platforms, it is vital to preserve such evidence in the interest of future accountability (See Sudan Graphic Video and Armenian Prisoners of War Video). While Meta explained that it retains all content that violates its Community Standards for a period of one year, the Board urges that content specifically related to potential war crimes, crimes against humanity, and grave violations of human rights be identified and preserved in a more enduring and accessible way for purposes of longer-term accountability. The Board notes that Meta has agreed to implement recommendation no. 1 in the Armenian Prisoners of War Video case. This called on Meta to develop a protocol to preserve and, where appropriate, share with competent authorities, information to assist in investigations and legal processes to remedy or prosecute atrocity crimes or grave human-rights violations. Meta has informed the Board that it is in the final stages of developing a “consistent approach to retaining potential evidence of atrocity crimes and serious violations of international human rights law” and expects to provide the Board with a briefing about its approach soon. The Board expects Meta to fully implement the above recommendation. *Procedural Note: The Oversight Board's expedited decisions are prepared by panels of five members and are not subject to majority approval of the full Board. Board decisions do not necessarily represent the personal views of all members. 
Return to Case Decisions and Policy Advisory Opinions" ig-wxhs8uei,Pakistan Political Candidate Accused of Blasphemy,https://www.oversightboard.com/decision/ig-wxhs8uei/,"September 19, 2024",2024,,"TopicElections, ReligionCommunity StandardCoordinating harm and publicizing crime","Policies and TopicsTopicElections, ReligionCommunity StandardCoordinating harm and publicizing crime",Upheld,Pakistan,"The Board has upheld Meta’s decision to remove a post containing an accusation of blasphemy against a political candidate, given the potential for imminent harm in the immediate run-up to Pakistan’s 2024 elections.",39621,6135,"Upheld September 19, 2024 The Board has upheld Meta’s decision to remove a post containing an accusation of blasphemy against a political candidate, given the potential for imminent harm in the immediate run-up to Pakistan’s 2024 elections. Standard Topic Elections, Religion Community Standard Coordinating harm and publicizing crime Location Pakistan Platform Instagram Urdu translation Pakistan Political Candidate Accused of Blasphemy Decision PDF To read the decision in Urdu, click here . مکمل فیصلہ اردو میں پڑھنے کے لیے، یہاں پر کلک کریں The Board has upheld Meta’s decision to remove a post containing an accusation of blasphemy against a political candidate. In the immediate run-up to Pakistan’s 2024 elections, there was potential for imminent harm. However, the Board finds it is not clear the relevant rule under the Coordinating Harm and Promoting Crime policy, which prevents users from revealing the identity of a person in an “outing-risk group,” extends to public figures accused of blasphemy in Pakistan or elsewhere. It is concerning this framing does not easily translate across cultures and languages, creating confusion for users trying to understand the rules. Meta should update its policy to make clear that users must not post accusations of blasphemy against identifiable individuals in locations where blasphemy is a crime and/or where there are significant safety risks to those accused. About the Case In January 2024, an Instagram user posted a six-second video of a candidate in Pakistan’s February 2024 elections giving a speech. In the clip, the candidate praises former Prime Minister Nawaz Sharif, stating that “the person after God is Nawaz Sharif.” The video had text overlay in which the user criticizes this praise for “crossing all limits of kufr,” alleging he is a non-believer according to the teachings of Islam. Three Instagram users reported the content the day after it was posted and a human reviewer found it did not violate Meta’s Community Standards. The users who reported the content did not appeal that decision. Several other users reported the post over the following days but Meta maintained the content did not violate its rules, following both human review and automatic closing of some reports. In February 2024, Meta’s High Risk Early Review Operations (HERO) system identified the content for further review based on indications it was highly likely to go viral. The content was escalated to Meta’s policy experts who removed it for violating the Coordinating Harm and Promoting Crime policy rule based on “outing.” Meta defines “outing” as “exposing the identity or locations affiliated with anyone who is alleged to be a member of an outing-risk group.” According to Meta’s internal guidance to reviewers, an outing-risk group includes people accused of blasphemy in Pakistan. 
When the video was flagged by HERO and removed, it had been viewed 48,000 times and shared more than 14,000 times. In March 2024, Meta referred the case to the Oversight Board. Offenses relating to religion are against the law in Pakistan and the country’s social media rules mandate the removal of “blasphemous” online content. Key Findings The Board finds that, given the risks associated with blasphemy accusations in Pakistan, removing the content was in line with the Coordinating Harm and Promoting Crime policy’s rationale to prevent “offline harm.” It is not intuitive to users that risks facing members of certain religious or belief minorities relate to “outing,” as commonly understood (in other words, risks resulting from a private status being publicly disclosed). The use of the term “outing” in this context is confusing, both in English and Urdu. Neither is it clear that people accused of blasphemy would consider themselves members of a “group” at risk of “outing,” or that politicians would fall within an “outing-risk group” for speeches given in public, especially during elections. In short, the policy simply does not make it clear to users that the video would be violating. Furthermore, the policy does not specify which contexts are covered by its line against outing and which groups are considered at risk. It also does not explicitly state that those accused of blasphemy are protected in locations where such accusations pose an imminent risk of harm. Meta explained that while it has an internal list of outing-risk groups, it does not publicly provide this list so that bad actors cannot get around the rules. The Board does not agree that this reason justifies the policy’s overall lack of clarity. Clearly defining outing contexts and at-risk groups would inform potential targets of blasphemy allegations that such allegations are explicitly against Meta’s rules and will be removed. This, in turn, could strengthen reporting by users accused of blasphemy in contexts where blasphemy poses legal and safety risks, including Pakistan. Greater specificity in the public rule may also lead to more accurate enforcement by human reviewers. The Board is also concerned that several reviewers found the content to be non-violating even though users repeatedly reported it and Meta’s internal guidance, which is clearer, explicitly includes people accused of blasphemy in Pakistan in its outing-risk groups. It was only when Meta’s HERO system identified the content, seemingly after it had gone viral, that it was escalated to internal policy experts and found to be violating. As such, Meta’s at-scale reviewers should receive more tailored training, especially in contexts like Pakistan. The Oversight Board’s Decision The Oversight Board upholds Meta’s decision to remove the content. The Board recommends that Meta: *Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background At the end of January 2024, an Instagram user posted a six-second video clip on Instagram that shows a candidate in Pakistan’s February 2024 elections giving a speech in Urdu. In the clip, the candidate praises former Prime Minister Nawaz Sharif stating that “the person after God is Nawaz Sharif.” The video has text overlay in which the user criticizes this praise for “crossing all limits of ‘kufr,’” with “kufr” meaning not believing in Allah according to the teachings of Islam. 
Three Instagram users reported the content the day after it was posted and a human reviewer found it did not violate Meta’s Community Standards. The reporting users did not appeal. A day later, two additional users reported the content. Reviewers decided the content was non-violating in all instances. Three days after it was posted, another user reported the content. Reviewers actioned the reports and found it did not violate Meta’s rules. The content was then reported by different users nine more times in the following days but Meta automatically closed these reports based on the prior decisions. All user reports were reviewed within the same day of reporting. In early February, five days after it was posted, Meta’s High Risk Early Review Operations (HERO) system identified the content for further review based on signals indicating it was highly likely to go viral. The HERO system escalated the content to Meta’s policy experts. They removed it for violating the Coordinating Harm and Promoting Crime Community Standard for “outing: exposing the identity or locations affiliated with anyone who is alleged to be a member of an outing-risk group.” Based on Meta’s internal guidance to reviewers, an outing-risk group includes people accused of blasphemy in Pakistan. By the time the video clip was removed, it had been viewed approximately 48,000 times and shared more than 14,000 times. In late March 2024, Meta referred the case to the Oversight Board. The Board noted the following context in reaching its decision. The video clip was posted in the lead-up to Pakistan’s February 2024 elections, in which former Prime Minister Nawaz Sharif’s brother, Shehbaz Sharif, was elected Prime Minister for another term. The political candidate praising Nawaz Sharif in the video belongs to the same political party as the brothers. Based on research commissioned by the Board, there were many posts online featuring the video echoing the blasphemy allegation. At the same time, there are other posts sharing the six-second video clip but which counter the accusation of “kufr,” claiming the edited clip takes the candidate’s full speech out of context. A different post that featured 60 seconds of the same speech, recorded by a different camera, was shared more than 1,000 times and viewed approximately 100,000 times. This longer video provides fuller context to the election candidate’s reference to Allah, which the text overlay to the video clip in this case had claimed is blasphemous. Pakistan criminalizes offenses relating to religion under Sections 295-298 of the Pakistan Penal Code , including for defiling the Quran, derogatory remarks about the Prophet Muhammad, and the deliberate and malicious intention of outraging “religious feelings.” Pakistan’s social media rules (2021) mandate the removal of online content if it is “blasphemous,” according to the penal code. This has led to people, often religious minorities and perceived critics of Islam being convicted of blasphemy for online posts and sentenced to death. According to the UN Special Rapporteur on freedom of religion or belief, religious minorities are often the target of blasphemy laws and broadly designated as “blasphemers” or “apostates,” ( A/HRC/55/47 , para. 14). In Pakistan, Ahmadiyya Muslims and Christians are among those targets ( A/HRC/40/58 , para. 37). 
The UN Special Rapporteur has also noted that even those belonging to major religious denominations, including within Islam, who actively oppose maligning their religion through blasphemy laws also bear “an increased risk of being accused of ‘betrayal’ or ‘blasphemy’ and having retaliatory penalties inflicted upon themselves,” ( A/HRC/28/66 , para. 7; see also public comment PC-29617 ). Blasphemy charges are also used to intimidate political opponents. Blasphemy accusations have also led to mob lynchings in Pakistan, which have occurred in the country for decades, although not always implicating social media. Recent incidents include: Politicians have also been the target of blasphemy-related violence. One of the most high-profile incidents involved former Governor of Punjab Salman Taseer, who was killed by his own bodyguard in 2011. Taseer had advocated for the repeal of Pakistan’s blasphemy laws. The bodyguard was sentenced to death , with crowds taking to the streets to protest. After the bodyguard’s execution, protestors erected a shrine around his grave. Another incident in 2011 involved unidentified perpetrators who killed the Federal Minister for Minorities Affairs Shahbaz Bhatti. Like Taseer, Bhatti had been critical of Pakistan’s blasphemy laws. In addition to UN special procedures, human rights and religious freedom organizations as well as other governments have all condemned the blasphemy-related mob violence in Pakistan resulting from accusations of blasphemy. Experts consulted by the Board confirmed that filing a police report against an accused person for blasphemy can result in their arrest to protect them from mobs. However, as the June 2024 incident has shown, police custody can be insufficient to protect accused blasphemers from mob violence. Despite this, blasphemy prosecutions continue in Pakistan, and social media posts have formed the basis for conviction. For instance, a professor is facing the death penalty and has been imprisoned for more than 10 years for an allegedly blasphemous Facebook post in 2013. His lawyer was killed in 2014 for defending him. In 2020, police filed a blasphemy case against a human rights defender for a social media post. In March 2024, a 22-year-old student was convicted of blasphemy and sentenced to death for allegedly sending derogatory images about the Prophet Muhammad and his wives on WhatsApp. The Pakistani government has a history of monitoring online content for blasphemy and has ordered social media companies to restrict access to posts it considers blasphemous. The government has also met with Meta about posts it considers blasphemous. Meta reported in its July 2023 – December 2023 transparency report that it restricted access in Pakistan to over 2,500 posts reported by the Pakistan Telecommunication Authority for allegedly violating local laws, including posts for blasphemy and “anti-religious sentiment.” These reports only cover content Meta removes on the basis of a government request that do not otherwise violate Meta’s content policies (i.e., posts flagged by the government that the company removes for violating Meta’s rule on “outing” would not be included in this data). Based on the information provided by Meta on this case, there appears to be no indication that the government requested the review or removal of this content. As a member of the Global Network initiative, Meta has committed to respecting freedom of expression when faced with overbroad government restrictions on content. 2. 
User Submissions The user did not provide a statement for this case. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies Coordinating Harm and Promoting Crime The Coordinating Harm and Promoting Crime policy aims to “prevent and disrupt offline harm and copycat behavior” by prohibiting “facilitating, organizing, promoting, or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals.” Two policy lines in the Community Standards address “outing”: the first is applied at scale, and the second requires “additional context to enforce” (which means that it is only enforced following escalation). The first policy line applies to this case. It specifically prohibits: “outing: exposing the identity or locations affiliated with anyone who is alleged to be a member of an outing-risk group.” This policy line does not explain which groups are considered to be “outing-risk groups.” The second policy line, which is only enforced on escalation, also prohibits “outing: exposing the identity of a person and putting them at risk of harm” for a specific list of vulnerable groups, including LGBTQIA+ members, unveiled women, activists and prisoners of war. Persons accused of blasphemy are not among the groups listed. Based on Meta’s internal guidance provided to reviewers, “outing-risk groups” under the first outing policy line include people accused of blasphemy in Pakistan and in other specified locations. Moreover, “outing” must be involuntary; a person cannot out themselves (for example, by declaring themselves to be a member of an outing-risk group). To violate the policy under Meta’s internal guidance, it is immaterial whether the blasphemy allegation is substantiated or whether the content misrepresents blasphemy. A mere allegation is sufficient to put the person accused within the “at-risk” group and for content to be removed. Spirit of the Policy Exception Meta may apply a “spirit of the policy” allowance to content when the policy rationale (the text introducing each Community Standard) and Meta’s values demand a different outcome than a strict reading of the rules (set out in the “do not post” section and in the list of prohibited content). In previous decisions, the Board has recommended that Meta provide a public explanation of this policy allowance (Sri Lanka Pharmaceuticals, recommendation no. 1; Communal Violence in Indian State of Odisha). The relevant recommendations were accepted by Meta and are either fully implemented or in progress, according to the latest assessment by the Board. II. Meta’s Submissions Meta explained that people accused of blasphemy in Pakistan were added to its internal list of “outing-risk groups” under the Coordinating Harm and Promoting Crime policy in late 2017, following violence related to blasphemy allegations in the country. As part of its election integrity efforts for Pakistan’s 2024 elections, Meta prioritized the monitoring of content containing accusations of blasphemy given the high risk of offline harm, including extrajudicial violence, resulting from such allegations. Meta claims these integrity efforts resulted in the identification of the content in this case. In its case referral, Meta noted the tension between voice and safety in this kind of content during an election period. 
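As a rough illustration of how the at-scale “outing” rule and internal guidance described above might be represented, the following Python sketch encodes outing-risk categories by location, treats a mere allegation as sufficient and excludes voluntary self-disclosure. The structure, category names and country codes are hypothetical and simplified for illustration; they are not Meta’s internal rule representation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified encoding of the internal guidance described above:
# "outing-risk" categories enforced at scale, keyed to the locations they cover.
OUTING_RISK_GROUPS_AT_SCALE = {
    "accused_of_blasphemy": {"PK"},  # people accused of blasphemy in Pakistan (and other listed locations)
}

@dataclass
class Post:
    country: str                     # location relevant to the allegation
    alleged_group: Optional[str]     # e.g., "accused_of_blasphemy", or None
    identifies_target: bool          # exposes the person's identity or location
    voluntary_self_disclosure: bool  # the person revealed this about themselves

def violates_outing_rule(post: Post) -> bool:
    """A mere allegation suffices and its truth is immaterial, but voluntary
    self-disclosure does not count as 'outing'."""
    if post.alleged_group is None or not post.identifies_target:
        return False
    if post.voluntary_self_disclosure:
        return False
    return post.country in OUTING_RISK_GROUPS_AT_SCALE.get(post.alleged_group, set())
```

Under this sketch, the post in this case would be flagged because it identifies a named individual as accused of blasphemy in Pakistan, regardless of whether the accusation is accurate.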
Meta noted the public interest value in criticism of political candidates while acknowledging the various risks to safety posed by accusations of blasphemy in Pakistan, such as violence against politicians, including killings. Meta found that the video clip’s text overlay, which stated the electoral candidate had “crossed all limits of kufr,” constituted a blasphemy allegation. For Meta, such language either suggests the political candidate has committed “shirk” – in other words, the belief in more than one God or holding up anything or anyone as equal to God – or it accuses the political candidate of violating Pakistan’s blasphemy laws. In either case, Meta determined the risk of offline harm outweighed the potential expressive value of the video. The company clarified that had the video not included the text overlay, it would have remained on the platform. Meta also provided an explanation of the HERO system it uses to detect high-risk content (in addition to user reports): whether a particular piece of content is treated as high risk depends on the likelihood that it will go viral. Meta uses various signals to predict if content will go viral. These signals include whether a piece of content is visible, even partially, on a user’s screen, the post’s language and the top country in which it is being shared when it is detected. HERO is not tied to a particular Community Standard. Moreover, Meta does not have a static or set definition for “high virality.” Instead, high virality is informed by factors that vary across markets. As a result, the way that Meta weighs high-virality signals during high-risk events, which may include election periods, varies. Meta’s internal teams may leverage view count during election periods to respond to specific risks. As such, HERO identifies content under any policy regardless of the likelihood of policy violation. The Board asked Meta questions on the “outing-risk groups” rule in the Coordinating Harm and Promoting Crime policy and its enforcement, the company’s election integrity efforts for Pakistan, and government requests for content takedowns under Pakistan’s blasphemy laws and Meta’s Community Standards. Meta responded to all the questions. 4. Public Comments The Oversight Board received three public comments that met the terms for submission. Two of the comments were submitted from Europe and one from Central and South Asia. To read public comments submitted with consent to publish, click here. The submissions covered the following themes: Meta’s content moderation of posts containing blasphemy allegations, the human rights impact of such allegations and related prosecutions in Pakistan, and the role that blasphemy accusations against public figures play in Pakistan and other countries. 5. Oversight Board Analysis This case highlights the tension between Meta’s values of protecting voice, including political criticism during elections, and of ensuring the safety of people accused of blasphemy, given threats to life and liberty that such accusations can carry in Pakistan. The Board analyzed Meta’s decision in this case against Meta’s content policies, values and human rights responsibilities. The Board also assessed the implications of this case for Meta’s broader approach to content governance. 5.1 Compliance with Meta’s Content Policies I. 
Content Rules The Board finds that the rule in Meta’s policy that prohibits revealing the identity of anyone alleged to be a member of an “outing-risk group” was not violated because it is unclear this extends to public figures accused of blasphemy in Pakistan or elsewhere. In Pakistan, while members of certain religious or belief minorities may be considered “groups at risk” of harm, it is not intuitive that these risks relate to “outing” as commonly understood (i.e., risks resulting from a private status being publicly disclosed). Similarly, people accused of blasphemy do not necessarily consider themselves members of a “group” (compared to individuals who share a protected characteristic, which would include religious minorities). Moreover, it is not intuitive that politicians fall within an “outing-risk group” for information revealed in public speeches, especially in an election context. Indeed, other parts of Meta’s rules in this area distinguish “political figures” and do not provide them protection for certain forms of “outing.” The Board finds that even if the internal guidance for reviewers contains more specific enforcement guidance listing the “outing groups” (or more accurately, contexts) covered by the policy, the public-facing policy does not contain the basic elements that would clearly prohibit the content in this case. However, in exercising its adjudication and oversight function, the Board finds that reading the Coordinating Harm and Promoting Crime policy’s prohibition in light of the policy rationale warrants the removal of the content, a conclusion that is reinforced by the human rights analysis below. According to the policy rationale, the Coordinating Harm and Promoting Crime Community Standard aims to “prevent and disrupt offline harm,” including by forbidding people from “facilitating, organizing, promoting or admitting to certain criminal or harmful activities targeted at people.” Meta allows for debating the legality or raising awareness of criminal or harmful activity as long as the post does not advocate for or coordinate harm. In this case, the Board finds that removing the content serves the policy rationale to prevent offline harm given the legal and safety risks that blasphemy accusations can carry in Pakistan. The user’s post cannot be interpreted as raising awareness or discussing the legality of blasphemy in Pakistan. Rather, it does the opposite: it accuses someone of engaging in blasphemy in a location where they could face prosecution and/or safety risks. The accusation against the political candidate was in the immediate run-up to the February 2024 elections, when the candidate would have been actively engaged in campaigning. The potential for imminent harm, such as vigilante violence and criminal prosecution, was present. This amounted to “facilitating” a criminal or harmful activity prohibited by the Coordinating Harm and Promoting Crime policy. A minority of the Board finds that the content should be removed on the basis of the spirit of the policy. For the minority, this policy exception should only be used on a very exceptional basis, especially to remove content. However, there are situations, such as those in this case, where it is necessary to address situations of heightened risk of harm that are not expressly prohibited in Meta’s specific “do not post” rules. This is the case here, because the Coordinating Harm and Promoting Crime Community Standard does not expressly provide that blasphemy accusations in Pakistan are prohibited. 
However, removal conforms to the spirit of the policy overall and to its aim of reducing harm. The minority considers that when Meta removes content based on the “spirit of the policy,” this should be documented, so that its use can be tracked, and then form a basis for identifying policy gaps that ought to be addressed. 5.2 Compliance with Meta’s Human Rights Responsibilities The Board finds that removing the content from the platform was consistent with Meta’s human rights responsibilities, though Meta must address concerns about the clarity of its rules in this area and the speed of its enforcement. Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR encompasses the freedom to “seek, receive and impart information and ideas of all kinds” and provides broad protection for expression, including “political discourse” and commentary on “public affairs.” This includes ideas and views that may be controversial or deeply offensive (General Comment 34, para. 11). The value of expression is particularly high when discussing matters of public concern, and freedom of expression is considered an “essential condition” for the effective exercise of one’s right to vote during elections (General Comment 25, para. 12). All public figures, including those exercising the highest political authority such as heads of state and government, are legitimately subject to criticism and political opposition (General Comment 34, para. 38). Blasphemy laws are incompatible with Article 19 of the ICCPR (General Comment 34, para. 48). According to the UN High Commissioner for Human Rights, the right to freedom of religion or belief does not include the right to have a religion or a belief free from criticism or ridicule. On this basis, blasphemy laws should be repealed (see General Comment 34, para. 48 and A/HRC/31/18, paras. 59-60; Rabat Plan of Action, report A/HRC/22/17/Add.4, at para. 19). Indeed, blasphemy laws often cultivate religious intolerance and lead to persecution of religious minorities and dissenters. Rather than criminalizing blasphemy and speech that reflects religious intolerance, in 2011, the international community rallied around UN Human Rights Council resolution 16/18, which set forth a useful toolkit of time-proven measures to combat religious intolerance, resorting to speech bans only where there is a risk of imminent violence. When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR; General Comment 34, para. 22 and 34). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights, which Meta itself has committed to in its Corporate Human Rights Policy. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users' right to freedom of expression” (A/74/486, para. 41). I. 
Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” ( Ibid. ). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules and content reviewers should have clear guidance regarding their enforcement. The Board finds that the Coordinating Harm and Promoting Crime policy that prohibits identifying a member of an outing-risk group is not clear to users. First, the Board considers the use of the term “outing” to be confusing, both in English and in various languages the rule is translated into, including Urdu. Though “outing” generally refers to the non-consensual revelation of another person’s private status and is commonly used in the context of the nonconsensual disclosure of a person’s sexual orientation or gender identity, this term is less commonly used in other contexts, such as religious affiliation or belief. Persons accused of blasphemy typically do not consider themselves at risk of “outing.” In addition, the translation of the phrase “outing-risk group” into other languages is problematic. For example, the Urdu translation of the public version of the outing policy is especially unclear. An Urdu speaker in Pakistan would not understand that “شناخت ظاہر کرنا” means “outing risk.” The translation also does not specify what “outing-risk” means. The Board is concerned that the current framing is not easily translated across various cultural contexts, creating potential confusion for users seeking to understand the rules. The lack of transparency is exacerbated by the fact that the Instagram Community Guidelines do not have a clear link to Meta’s Coordinating Harm and Promoting Crime policy, which would make it harder for the user to know which rules apply to content accusing someone of blasphemy. Second, the (public facing) policy does not specify which contexts are covered by this policy line and which groups are considered at risk. This policy line does not expressly state that those accused of blasphemy, in locations where such accusations pose an imminent risk of harm, are protected. This is especially problematic for members of religious minorities who are most often the targets of blasphemy allegations, in particular where individuals may for safety reasons keep their religious affiliations or beliefs discreet and be vulnerable to “outing.” It is important for these communities that the rules give them confidence that content directly endangering their safety is prohibited. While internal guidance to reviewers is clearer, the repeated failure of reviewers in this case to correctly identify that the user’s post infringed that guidance indicates that it is still insufficient. Meta explained it does not publicly specify the list of outing-risk groups covered by the policy so that bad actors cannot get around the rules. 
The Board does not agree that this consideration justifies the policy’s lack of clarity. Clearly defining the outing contexts and at-risk groups covered by this policy would inform potential targets of blasphemy allegations that such allegations are explicitly against Meta’s rules and will be removed. In fact, Meta already does this in the other policy line against “outing” that requires additional context to enforce. That policy line, which was immaterial to this case, lists various at-risk outing groups (for example, LGBTQIA+ members, unveiled women, defectors and prisoners of war) that fall within the scope of that policy line. Applying the same approach in relation to blasphemy allegations would therefore not depart from Meta’s current approach on clarity for its users on other comparable policy lines where the risks and tradeoffs appear to be similar. Providing this clarity could in turn strengthen reporting by users accused of blasphemy in contexts where this poses legal and safety risks, including in Pakistan. Greater specificity in the public-facing rules may also lead to more accurate enforcement by human reviewers. The Board strongly urges Meta to specify in its public-facing rules that accusations of blasphemy and apostasy against individuals are prohibited in certain locations where they pose legal and safety risks. This would be not only much clearer in a standalone rule, separated from the concept of “outing” but also consistent with Meta’s approach when listing at-risk groups in other parts of the Coordinating Harm and Promoting Crime policy. The Board does not contemplate every single detail to be laid out in the public-facing language of the Coordinating Harm and Promoting Crime Community Standard. But as a bare minimum, the relevant elements of the prohibited content, such as the types of groups that are protected, the types of locations the rule applies to, and the types of expression that fall under the prohibition, would give more clarity to users. This would address Meta’s concern about bad actors trying to evade the rules while making potential targets of blasphemy accusations aware that this type of content is prohibited. II. Legitimate Aim Any restriction on freedom of expression should also pursue one or more of the legitimate aims listed in the ICCPR, which includes protecting the rights of others (Article 19, para. 3, ICCPR ). This includes the rights to life, liberty and security of persons (Articles 6 and 9, ICCPR). The Board also recognizes that protecting people from offense is not considered a legitimate aim under international human rights standards. The Board has previously recognized that Meta’s Coordinating Harm and Promoting Crime policy pursues the legitimate aim of protecting the rights of others in the context of elections, such as the right to vote ( Australian Electoral Commission Voting Rules ). The Board finds that the policy’s aim to “prevent and disrupt offline harm” is consistent with the legitimate aim of protecting the rights to life, liberty and security of persons. III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality requires that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). 
The Board finds that removing the content in this case is consistent with the principle of necessity and proportionality, and finds the six factors for assessing incitement to violence and discrimination in the Rabat Plan of Action instructive to assess the likelihood of harm resulting from this post. Those factors are the social and political context, the content and form of the expression, the speaker’s intent, the identity of the speaker, the reach of the expression, and the likelihood and imminence of harm. In relation to the content and form of the expression and the speaker’s intent, as outlined above, the content in the post clearly communicates a desire to accuse the political candidate of blasphemy, doing so without indication of intent to raise awareness about or debate the legality of such speech. In relation to the identity of the speaker and the reach of their content, the Board notes that the user is not a public figure with influence over others, and has relatively few followers. Nevertheless, the account’s privacy settings were set to public at the time the content was posted, and the content was shared approximately 14,000 times by about 9,000 users. This shows that speech from non-public figures can still be disseminated very broadly on social media, and that virality is challenging to predict. The reach of this post increased its potential for harm, notwithstanding that the person who posted it does not seem to be in a position of authority. The Board considers that there is a likelihood of imminent harm given the context around blasphemy allegations in Pakistan, where such accusations pose a serious risk of physical harm and even death. That context of national legal prohibitions, related prosecutions and violence from would-be vigilantes is set out in section 1 above (see also public comments PC-29615 and PC-29617). Given the examination of the six Rabat factors, the Board finds it was necessary and proportionate to remove the post in question. Moreover, the Board is particularly concerned that numerous reviewers enforcing Meta’s Community Standards all found the content to be non-violating. This was despite repeated reports from users and Meta’s claimed prioritization of enforcement against content of this kind as part of its election integrity efforts in Pakistan, given the high risk of offline harm it could pose. It was only when Meta’s HERO system identified the content days later, seemingly after it had gone viral, that it was escalated to internal policy experts and found to violate the Coordinating Harm and Promoting Crime Community Standard. Various human reviewers missed earlier opportunities to accurately enforce against the content, indicating a need for tailored training for reviewers to understand how to spot violations in contexts like Pakistan. This is especially important in election contexts, where tensions may escalate and accurate enforcement is essential to guard against unnecessary restrictions on speech and prevent offline harm. In evaluating whether its election integrity efforts in Pakistan were successful, Meta should consider why so many reviewers failed to accurately enforce against this post, and how to ensure more effective election integrity efforts in countries with similar risks in future. While blasphemy accusations can create significant risks for politicians in countries where blasphemy is criminalized, there can nevertheless be important discussions about blasphemy, in particular in the context of an election. 
Meta must be cautious to avoid over-enforcement of the policy against content that is not accusing individuals of blasphemy, but rather engaging in political discussions. Such over-removal would be particularly concerning in contexts where political speech is already subject to excessive government restrictions that do not comply with international human rights law. Not all content that uses the term “kufr” will be a blasphemy accusation, as demonstrated by the variety of similar videos shared of the same events as in this case. Therefore, the Board reminds Meta that its human rights responsibilities require it to respect political expression when the content is shared to counter allegations of blasphemy or engage in discussions about blasphemy without placing individuals at risk. It is important that training to moderators emphasizes the importance of freedom of expression in this context, and allows them to escalate decisions to more specialized teams when more contextual analysis may be needed to reach a correct decision. 6. The Oversight Board’s Decision The Oversight Board upholds Meta’s decision to take down the content. 7. Recommendations Content Policy 1. To ensure safety for targets of blasphemy accusations, Meta should update the Coordinating Harm and Promoting Crime policy to make clear that users must not post accusations of blasphemy against identifiable individuals in locations where blasphemy is a crime and/or there are significant safety risks to persons accused of blasphemy. The Board will consider this recommendation implemented when Meta updates its public-facing Coordinating Harm and Promoting Crime Community Standard to reflect the change. Enforcement 2. To ensure adequate enforcement of the Coordinating Harm and Promoting Crime policy line against blasphemy accusations in locations where such accusations pose an imminent risk of harm to the person accused, Meta should train at-scale reviewers covering such locations and provide them with more specific enforcement guidance to effectively identify and consider nuance and context in posts containing blasphemy allegations. The Board will consider this recommendation implemented when Meta provides updated internal documents demonstrating that the training of at-scale reviewers to better detect this type of content occurred. *Procedural Note: Return to Case Decisions and Policy Advisory Opinions" ig-zj7j6d28,Holocaust Denial,https://www.oversightboard.com/decision/ig-zj7j6d28/,"January 23, 2024",2024,January,"TopicDiscrimination, Freedom of expressionCommunity StandardHate speech","Policies and TopicsTopicDiscrimination, Freedom of expressionCommunity StandardHate speech",Overturned,"Canada, Germany",The Oversight Board has overturned Meta’s original decision to leave up an Instagram post containing false and distorted claims about the Holocaust.,56620,8704,"Overturned January 23, 2024 The Oversight Board has overturned Meta’s original decision to leave up an Instagram post containing false and distorted claims about the Holocaust. Standard Topic Discrimination, Freedom of expression Community Standard Hate speech Location Canada, Germany Platform Instagram Holocaust Denial Public Comments Appendix Hebrew Translation Holocaust Denial Decision PDF To read this decision in Hebrew, click here . לקריאת החלטה זו בעברית יש ללחוץ כאן . The Oversight Board has overturned Meta’s original decision to leave up an Instagram post containing false and distorted claims about the Holocaust. 
The Board finds that the content violated Meta’s Hate Speech Community Standard, which bans Holocaust denial. This prohibition is consistent with Meta’s human-rights responsibilities. The Board is concerned about Meta’s failure to remove this content and has questions about the effectiveness of the company’s enforcement. The Board recommends Meta take steps to ensure it is systematically measuring the accuracy of its enforcement of Holocaust denial content, at a more granular level. About the Case On September 8, 2020, an Instagram user posted a meme of Squidward – a cartoon character from the television series SpongeBob SquarePants. This includes a speech bubble entitled “Fun Facts About The Holocaust,” which contains false and distorted claims about the Holocaust. The claims, in English, question the number of victims of the Holocaust, suggesting it is not possible that six million Jewish people could have been murdered based on supposed population numbers the user quotes for before and after the Second World War. The post also questions the existence of crematoria at Auschwitz by claiming the chimneys were built after the war, and that world leaders at the time did not acknowledge the Holocaust in their memoirs. On October 12, 2020, several weeks after the content was posted, Meta revised its Hate Speech Community Standard to explicitly prohibit Holocaust denial or distortion. Since the content was posted in September 2020, users reported it six times for violating Meta’s Hate Speech policy. Four of these reports were reviewed by Meta’s automated systems that either assessed the content as non-violating or automatically closed the reports due to the company’s COVID-19 automation policies. These policies, introduced at the beginning of the pandemic in 2020, automatically closed certain review jobs to reduce the volume of reports being sent to human reviewers, while keeping open potentially “high-risk” reports. Two of the six reports from users led to human reviewers assessing the content as non-violating. A user who reported the post in May 2023, after Meta announced it would no longer allow Holocaust denial, appealed the company’s decision to leave the content up. However, this was also automatically closed due to Meta’s COVID-19 automation policies, which were still in force in May 2023. They then appealed to the Oversight Board. Key Findings The Board finds that this content violates Meta’s Hate Speech Community Standard, which prohibits Holocaust denial on Facebook and Instagram. Experts consulted by the Board confirmed that all the post’s claims about the Holocaust were either blatantly untrue or misrepresented historical facts. The Board finds that Meta’s policy banning Holocaust denial is consistent with its human-rights responsibilities. Additionally, the Board is concerned that Meta did not remove this content even after the company changed its policies to explicitly prohibit Holocaust denial, despite human and automated reviews. As part of this decision, the Board commissioned an assessment of Holocaust denial content on Meta’s platforms, which revealed use of the Squidward meme format to spread various types of antisemitic narratives. While the assessment showed a marked decline since October 2020 in content using terms like “Holohoax,” it found that there are gaps in Meta’s removal of Holocaust denial content. 
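As an illustration of the COVID-19 auto-close behaviour described above, the following Python sketch shows a triage rule that closes most reports without human review and keeps open only those scored as potentially high risk. The threshold, field names and scoring are hypothetical; this is a sketch of the described behaviour, not Meta’s implementation.

```python
from dataclasses import dataclass

HIGH_RISK_THRESHOLD = 0.8  # illustrative value, not a real Meta parameter

@dataclass
class UserReport:
    report_id: str
    policy_area: str   # e.g., "hate_speech"
    risk_score: float  # hypothetical score from an upstream classifier, 0.0 to 1.0

def triage_report(report: UserReport) -> str:
    """Close most reports automatically; keep only potentially high-risk
    reports open for human review."""
    if report.risk_score >= HIGH_RISK_THRESHOLD:
        return "queue_for_human_review"
    return "auto_close"

# Example: a report scored just below the threshold is closed without review.
print(triage_report(UserReport("r1", "hate_speech", 0.75)))  # -> "auto_close"
```

In a setup like this, a report on violating content that an imperfect classifier scores below the threshold is simply closed and the post stays up, which is the failure mode at issue in this case.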
The assessment showed that content denying the Holocaust can still be found on Meta’s platforms, potentially because some users try to evade enforcement in alternative ways, such as by replacing vowels in words with symbols, or creating implicit narratives about Holocaust denial using memes and cartoons. It is important to understand Holocaust denial as an element of antisemitism, which is discriminatory in its consequences. The Board has questions about the effectiveness and accuracy of Meta’s moderation systems in removing Holocaust denial content from its platforms. Meta’s human reviewers are not provided the opportunity to label enforcement data in a granular way (i.e., violating content is labelled as “hate speech” rather than “Holocaust denial”). Based on insight gained from questions posed to Meta in this and previous cases, the Board understands these challenges are technically surmountable, if resource intensive. Meta should build systems to label enforcement data at a more granular level, especially in view of the real-world consequences of Holocaust denial. This would potentially improve its accuracy in moderating content that denies the Holocaust by providing better training materials for classifiers and human reviewers. As Meta increases its reliance on artificial intelligence to moderate content, the Board is interested in how the development of such systems can be shaped to prioritize more accurate enforcement of hate speech at a granular policy level. The Board is also concerned that, as of May 2023, Meta was still applying its COVID-19 automation policies. In response to questions from the Board, Meta revealed that it automatically closed the user’s appeal against its decision to leave this content on Instagram in May 2023, more than three years after the pandemic began and shortly after both the World Health Organization and United States declared that COVID-19 was no longer a “public health emergency of international concern.” There was a pressing need for Meta to prioritize the removal of hate speech and it is concerning that measures introduced as a pandemic contingency could endure long after circumstances reasonably justified them. The Oversight Board’s Decision The Oversight Board overturns Meta’s original decision to leave up the content. The Board recommends that Meta: * Case summaries provide an overview of cases and do not have precedential value. 1. Decision Summary The Oversight Board overturns Meta’s original decision to leave up an Instagram post including false and distorted claims about the Holocaust. The Board finds that the content violated Meta’s Hate Speech Community Standard, which prohibits Holocaust denial. After the Board selected the case for review, Meta determined that its original decision to leave up the content was in error and removed the post. 2. Case Description and Background On September 8, 2020, an Instagram user posted a meme of Squidward – a cartoon character from the television series, SpongeBob SquarePants – which includes a speech bubble entitled “Fun Facts About The Holocaust,” containing false and distorted claims about the Holocaust. The post, in English, calls into question the number of victims of the Holocaust, suggesting it is not possible that six million Jewish people could have been murdered based on supposed population numbers the user quotes for before and after the Second World War. 
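A minimal sketch of the more granular enforcement labelling the Board recommends might look like the following, where a reviewer records both the Community Standard and a sub-policy label so that accuracy can be measured per rule. The taxonomy and names shown are hypothetical and illustrative only.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Policy(Enum):
    HATE_SPEECH = "hate_speech"
    VIOLENT_AND_GRAPHIC_CONTENT = "violent_and_graphic_content"

class HateSpeechSubLabel(Enum):
    HOLOCAUST_DENIAL = "holocaust_denial"
    DEHUMANIZING_COMPARISON = "dehumanizing_comparison"
    HARMFUL_STEREOTYPE = "harmful_stereotype"

@dataclass
class EnforcementLabel:
    """Two-level label a reviewer could attach to a violating post, so that
    enforcement data can be aggregated per sub-policy rather than only per
    Community Standard."""
    policy: Policy
    sub_label: Optional[HateSpeechSubLabel] = None

# Example: the post in this case would be recorded as Holocaust denial
# specifically, not only as generic hate speech.
label = EnforcementLabel(Policy.HATE_SPEECH, HateSpeechSubLabel.HOLOCAUST_DENIAL)
```

Aggregating enforcement data over sub-labels of this kind would let Meta measure, for example, how often Holocaust denial specifically is missed by reviewers and classifiers, and would supply more targeted training material for both.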
The post also questions the existence of crematoria at Auschwitz by claiming that the chimneys were built after the war, and claims that world leaders at the time did not acknowledge the Holocaust in their memoirs. The caption below the image includes several tags relating to memes, some of which target specific geographical audiences. The user who posted the content had about 10,000 followers and was not considered a public figure by Meta. In comments on their own post responding to criticism from others, the user reiterated that the false claims were “real history.” The post was viewed under 500 times and had fewer than 100 likes. On October 12, 2020, several weeks after the content was originally posted, Meta announced revisions to its Hate Speech Community Standard to explicitly prohibit Holocaust denial or distortion, noting that “organizations that study trends in hate speech are reporting increases in online attacks against many groups worldwide,” and that their decision was “supported by the well-documented rise in anti-Semitism globally and the alarming level of ignorance about the Holocaust.” Meta added “denying or distorting information about the Holocaust” to its list of “designated dehumanizing comparisons, generalizations, or behavioural statements” within the Community Standard (Tier 1). Two years later, on November 23, 2022, Meta reorganized the Hate Speech Community Standard to remove the word “distortion” and list “Holocaust denial” under Tier 1 as an example of prohibited “harmful stereotypes historically linked to intimidation, exclusion, or violence on the basis of a protected characteristic.” Since the content was posted in September 2020, users reported it six times for hate speech. Four of these reports were made before Meta’s October 12, 2020 policy change and two came after. Of the six reports, four were reviewed by automation and were either assessed as non-violating or auto-closed due to Meta’s “COVID-19 automation policies,” with the post left up on Instagram. According to Meta, its COVID-19 automation policies, introduced at the beginning of the pandemic in 2020, “auto-closed review jobs based on a variety of criteria” to reduce the volume of reports being sent to human reviewers, while keeping open potentially “high-risk” reports. Two of the six reports led to human reviewers assessing the content as non-violating, one prior to the October 2020 policy change and one after, in May 2023. In both instances, the reviewers determined the content did not violate Meta’s content policies and they did not remove the post from Instagram. The user who reported the content in May 2023 appealed Meta’s decision to leave the content up, but that appeal was also auto-closed due to Meta’s COVID-19-related automation policies, which were still in force at the time. The same user then appealed to the Board, noting in their submission that it was “quite frankly shocking that this [content] is allowed to remain up.” The Board notes the following background in relation to antisemitism and Holocaust denial in reaching its decision in this case. In January 2022, the UN General Assembly adopted by consensus resolution 76/250 , which reaffirmed the importance of remembering the nearly six million victims of the Holocaust and expressed concern at the spread of Holocaust denial on online platforms. 
It also noted the concerns of the UN Special Rapporteur on contemporary forms of racism (report A/74/253 ) that the frequency of antisemitic incidents appears to be increasing in magnitude in several regions, especially in North America and Europe. The resolution emphasizes that Holocaust denial is a form of antisemitism, and explains that “Holocaust denial refers specifically to any attempt to claim that the Holocaust did not take place, and may include publicly denying or calling into doubt the use of principal mechanisms of destruction (such as gas chambers, mass shooting, starvation, and torture) or the intentionality of the genocide of the Jewish people.” The UN Special Rapporteur on freedom of religion or belief also emphasized in 2019 the growing use of antisemitic tropes, including “slogans, images, stereotypes and conspiracy theories meant to incite and justify hostility, discrimination and violence against Jews” (report A/74/358, at para. 30.) The Board notes that Holocaust denial and distortion are forms of conspiracy theory and reinforce harmful antisemitic stereotypes, in particular the dangerous idea that Jewish people invented the Holocaust as fiction to advance purported plans of world domination. Organizations such as the Anti-Defamation League (ADL) and the American Jewish Committee have reported a sharp increase in antisemitic incidents. The ADL researches and documents antisemitic content online, most recently in its 2022 Online Holocaust Denial Report Card and two August 2023 investigations into antisemitic content on major platforms. It gave Meta a score of C on the report card, based on an alphabetical grading scale of A to F, with A being the highest score. In its investigations, ADL pointed out that “Facebook and Instagram, in fact, continue hosting some hate groups that parent company Meta has previously banned as ‘dangerous organizations’.” The investigation also emphasized that the problem was particularly bad on Instagram, with the platform having “recommended accounts spreading the most virulent and graphic antisemitism identified in the study to a 14-year-old persona” created for the investigation. 3. Oversight Board Authority and Scope The Board has authority to review Meta’s decision following an appeal from the person who previously reported the content that was left up (Charter Article 2, Section 1; Bylaws Article 3, Section 1). The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. When the Board selects a case like this one, in which Meta subsequently acknowledges that it made an error, the Board reviews the original decision to increase understanding of the content moderation process, and to make recommendations to reduce errors and increase fairness for people who use Facebook and Instagram. 4. Sources of Authority and Guidance The following standards and precedents informed the Board’s analysis in this case: I. Oversight Board decisions II. 
Meta’s Content Policies The Instagram Community Guidelines state: “It’s never OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities or diseases. When hate speech is being shared to challenge it or to raise awareness, we may allow it. In those instances, we ask that you express your intent clearly.” Instagram’s Community Guidelines direct users to Facebook’s Hate Speech Community Standard , which states that hate speech is not allowed on the platform “because it creates an environment of intimidation and exclusion and, in some cases, may promote real-world violence.” Facebook’s Hate Speech Community Standard defines hate speech as a direct attack against people on the basis of protected characteristics, including race, ethnicity and/or national origin, and describes three tiers of attack. When the content was posted in September 2020, the Community Standards did not explicitly prohibit Holocaust denial in any tier. Tier 1 did, however, prohibit: “Mocking the concept, events or victims of hate crimes even if no real person is depicted in an image.” In response to questions from the Board, Meta explained that its Internal Implementation Standards currently list the Holocaust as a specific example of what it considers a “hate crime,” but that it does not keep logs of the changes to Implementation Standards and Known Questions in the same way it logs changes to Community Standards in the Transparency Center. On October 12, 2020, Meta announced it was updating its Hate Speech policy “to prohibit any content that denies or distorts the Holocaust,” citing “the well-documented rise in anti-Semitism globally and the alarming level of ignorance about the Holocaust, especially among young people.” It also cited a recent survey of adults in the United States aged between 18 and 39, which showed that “almost a quarter said they believed the Holocaust was a myth, that it had been exaggerated or they weren’t sure.” On the same day, Tier 1 of the policy was updated, adding “Denying or distorting information about the Holocaust” to a list of 10 other examples of “[d]esignated dehumanizing comparisons, generalization or behavioral statements (in written or visual form).” On November 23, 2022, Meta updated its policy. It now prohibits content targeting a person or group of people based on protected characteristic(s) with “dehumanizing speech or imagery in the form of comparisons, generalizations or unqualified behavioral statements (in written or visual form) to or about: [...] Harmful stereotypes historically linked to intimidation, exclusion, or violence on the basis of a protected characteristic, such as [...] Holocaust denial.” The Board’s analysis of the content policies was also informed by Meta's commitment to voice which the company describes as “paramount,” and its values of safety and dignity. III. Meta’s Human-Rights Responsibilities The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human-rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy , in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board's analysis of Meta’s human-rights responsibilities in this case was informed by the following international standards: 5. 
User Submissions The Board received two submissions from users in this case. The first was from the person who reported the content, with their submission forming part of their appeal to the Board. The second submission was from the person who posted the content, who was invited to submit a comment after the Board selected this case, following Meta taking action to reverse its prior decision and remove the content. In their appeal to the Board, the reporting user (who appealed Meta’s decision to leave up the content) stated it was shocking for the company to keep up the content because it was “blatantly using neonazi holocaust denial arguments.” Noting that “millions of Jews, Roma, Disabled, and LGBTQ people were murdered by the Nazi regime,” the reporting user emphasized that this content is hate speech and illegal in Germany. The user who posted the content claimed in their submission to the Board that they were an “LGBT comedian” who was on a mission to parody the talking points and beliefs of the “alt-right [alternative right].” They said they believed the post was removed for making fun of the “alt right’s beliefs in Holocaust denial” and that their mission was to “uplift marginalized communities.” 6. Meta Submissions After the Board selected this case, Meta reviewed its original decision and ultimately decided to remove the content for violating its Hate Speech policy. It did not apply a standard strike to the content creator’s account as the content had been posted more than 90 days previously. This is in accordance with Meta’s strike policy . Meta explained to the Board that the specific prohibition of Holocaust denial in the Hate Speech policy was added approximately one month after the user posted the content in question. Meta explained that the second human review of the content, on May 25, 2023, erroneously found the content non-violating as it happened after the policy change. In response to questions from the Board, Meta confirmed that prior to the change, Holocaust denial content would not have been removed, but if it had been coupled with additional hate speech or another violation of the Community Standards, it would have been removed. Meta said the content in this case did not contain any additional hate speech or violations. Meta noted in its submission to the Board that the content violated the current Hate Speech policy by “denying the existence of the Holocaust.” First, it questions the number of victims, suggesting it is not possible that six million Jewish people were murdered based on supposed population numbers. It also calls into question the existence of crematoria at Auschwitz. The Board asked Meta 13 questions. These related to the company’s COVID-19 automation policies that led to reports being auto-closed; the policy development process that led to Holocaust denial being prohibited; its enforcement practices related to Holocaust denial content; and the measures that Meta is taking to provide reliable information about the Holocaust and the harms of antisemitism. All questions were answered. 7. Public Comments The Oversight Board received 35 public comments relevant to this case. Seven comments were submitted from Asia Pacific and Oceania; three from Central and South Asia; four from Europe; one from Latin America and the Caribbean; five from Middle East and North Africa; and 15 from the United States and Canada. 
The submissions covered the following themes: the online and offline harms resulting from antisemitic hate speech; social media platforms’ Holocaust denial policies and their enforcement; and how international human-rights standards on limiting expression should be applied to moderation of Holocaust denial content. To read public comments submitted for this case, please click here . 8. Oversight Board Analysis The Board examined whether this content should be removed by analyzing Meta’s content policies, human-rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance. The Board selected this case because it provided an opportunity to examine the structural issues that could contribute to this type of content evading detection and removal, and the issue of content removal due to changes in Meta’s policy. It also enabled the Board to evaluate the merits of the Holocaust denial policy in general, under applicable human-rights standards. 8.1 Compliance With Meta’s Content Policies I. Content Rules The Board finds that the content in this post violates Meta’s Hate Speech Community Standard, which prohibits Holocaust denial on Facebook and Instagram. The Board reached out to external experts to clarify how the forms of denial and distortion in this case content fit into racist and antisemitic narratives on Meta’s platforms and more broadly. Experts confirmed that all of the claims in the post were forms of Holocaust denial or distortion: while some of the claims were blatantly untrue, others misrepresented historical facts. Experts also noted that the claims in the content are common antisemitic Holocaust denial tropes on social media. Finally, and as the Brandeis Center noted in their public comment, “[t]he Holocaust was proven beyond a reasonable doubt in front of a duly constituted international court. In its judgment in the case against Major War Criminals of the Nazi regime, the Nuremberg Tribunal considered that the Holocaust had been ‘proved in the greatest detail’” (PC-15024, Louis D. Brandeis Center for Human Rights Under Law). The Board also commissioned an assessment of Holocaust denial content on Meta’s platforms to understand its prevalence and nature, and the assessment revealed the use of the Squidward meme format to spread various types of antisemitic narratives. The assessment primarily used CrowdTangle, a social media research tool, and was limited to publicly available content. Nonetheless, it provided helpful insight into potential user exposure to Holocaust denial content and confirmed that the content in this case fits into dominant Holocaust denial narratives. In its Hate Speech Community Standard, Meta explains that it may allow content that would otherwise be prohibited for purposes of “condemnation” or “raising awareness,” or if it is “used self-referentially” or in an “empowering way.” Meta explains that to benefit from these exceptions, it requires people to “clearly indicate their intent.” The Board finds that none of these exceptions applied to this case content. Additionally, under a heading requiring “additional information and/or context to enforce,” there is also an exception for satire, introduced as the result of a Board recommendation in the Two Buttons Meme case. 
This exception only applies “if the violating elements of the content are being satirized or attributed to something or someone else in order to mock or criticize them.” The content creator in this case claims their post was intended to “parody talking points of the alt-right,” and “uplift marginalized communities.” However, the Board finds no evidence of this stated intent in the post itself. There is none of the exaggeration characteristic of satire in the meme, which replicates typical claims made by Holocaust deniers. Similarly, the cartoon meme style in which the claims are presented is the same as that of typical Holocaust denial content deployed to attack Jewish people. The assessment the Board commissioned noted that “children’s television cartoon characters are often co-opted, particularly in meme formats, in order to bypass content moderation systems and target younger audiences.” As noted above, Squidward is a children’s cartoon character that is used in multiple antisemitic meme formats. Moreover, the hashtags used do not denote satirical intent, but rather appear to be a further attempt to increase the reach of the content. Finally, the content creator’s comment on their own post, in response to criticism from other users, that the content is “real history” indicates that others did not understand the post to be satirical and shows the user doubling down on the false claims. The first human review of this content occurred on October 7, 2020, while the September 23, 2020, Hate Speech policy was still in place and prior to the explicit Holocaust denial prohibition being introduced. The content was also later reviewed by a human reviewer after the prohibition had been introduced, on May 25, 2023. Given that none of the exceptions applied, Meta should have found in the second review that the content violated the current policy on Holocaust denial. As Meta now accepts, the content disputed the number of victims of the Holocaust and the existence of crematoria at Auschwitz. The Board additionally finds that the content calls into question the fact that the Holocaust happened by claiming world leaders’ memoirs did not mention it, and that this claim also violates the prohibition on Holocaust denial. Under the Hate Speech policy prior to the October 2020 changes, the content should still have been removed, as it also violated the pre-existing prohibition on “mocking the concept, events or victims of hate crimes.” To deny and distort key facts of the Holocaust using a cartoon character in the style of a meme was inherently mocking, as it ridicules the Holocaust as a “hate crime” and mocks the memory of its victims. II. Enforcement Action The assessment commissioned by the Board reviewed Holocaust denial content on Meta’s platforms and found that determined users try to evade enforcement in various ways, such as by replacing vowels in words with symbols or creating implicit narratives about Holocaust denial that use memes, cartoons and other tropes to relay the same sentiment without directly saying, for example, “the Holocaust didn’t happen.” It also found that “while searches for neutral or factual terms… did yield credible results, other searches for more charged terms led to Holocaust denial content.” The report confirmed the prevalence of claims minimizing the number of Jewish people who were murdered in the Holocaust. Finally, the report noted that Holocaust denial-related content is easier to find and gets more interaction on Instagram than on Facebook.
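The symbol-substitution evasion described in the assessment can be illustrated with a small sketch. This is purely illustrative: the substitution map, watchlist term and function names below are hypothetical examples and do not describe Meta’s actual detection systems.

```python
# Illustrative only: a toy normalization step for catching simple
# symbol-for-vowel substitutions (e.g., "h0l0hoax") before matching
# against a hypothetical list of denial-related terms.

SYMBOL_TO_VOWEL = {"0": "o", "3": "e", "1": "i", "@": "a", "4": "a", "!": "i"}

# Hypothetical watchlist; a real system would rely on far richer signals
# than keyword matching, which is why implicit, meme-based denial is
# harder to detect.
DENIAL_TERMS = {"holohoax"}

def normalize(text: str) -> str:
    """Lower-case the text and map common symbol substitutions back to vowels."""
    return "".join(SYMBOL_TO_VOWEL.get(ch, ch) for ch in text.lower())

def contains_watchlisted_term(text: str) -> bool:
    """Return True if any watchlisted term appears after normalization."""
    normalized = normalize(text)
    return any(term in normalized for term in DENIAL_TERMS)

if __name__ == "__main__":
    print(contains_watchlisted_term("h0l0hoax exposed"))  # True
    print(contains_watchlisted_term("history lecture"))   # False
```

A sketch of this kind only addresses lexical tricks; it cannot capture the implicit, meme-based narratives the assessment found, which is why the Board’s concerns focus on classifier training and granular measurement rather than keyword lists alone.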
The assessment also shows a marked decline in content using terms like “Holohoax” and the name of a neo-Nazi propaganda film, “Europa, the Last Battle,” since October 2020, but also that there are still gaps in Meta’s removal of Holocaust denial content. As noted by the ADL in its public comment to the Board, “Holocaust denial and distortion continues to be broadcast in mainstream spaces, both on and offline. Despite clear policies that prohibit Holocaust denial and distortion, this antisemitic conspiracy theory still percolates across social media” (PC-15004, Anti-Defamation League). The Board notes with concern that the content in this case evaded removal even after Meta changed its policies to explicitly prohibit Holocaust denial, despite two reports being made after the policy change and one being reviewed by a human moderator. As explained below, COVID-19 automation policies led to the automatic closure of one of the reports made on this content after the policy change. Furthermore, as Meta does not require its at-scale reviewers to document the reasons for finding content non-violating, there is no further information about why the human reviewer who reviewed the May 25, 2023 report incorrectly kept the content on the platform. The Board emphasizes that when Meta changes its policies, it is responsible for ensuring that human and automated enforcement of those policies is properly and promptly updated. If content posted prior to policy changes is reported by another user or detected by automation after a policy change that impacts that content, as happened in this case, it should be actioned in accordance with the new policy. That requires updating training materials for human reviewers, as well as classifiers or any other automated tool used to review content on Meta’s platforms, and ensuring systems are in place to measure the effectiveness of these interventions in operationalizing updates to the Community Standards. When the Board asked Meta how effective its moderation systems are at removing Holocaust denial content, Meta was not able to provide the Board with data. The Board takes note of Meta’s claimed capacity limitations in measuring both the amount of violating Holocaust denial content on its platforms and the accuracy of its enforcement, but also understands that these challenges are technically surmountable, if resource intensive. Currently, human reviewers are not given the opportunity to label enforcement data with any granularity. For example, violating content is labelled as “hate speech” rather than as “Holocaust denial.” The Board recommends that Meta build systems to label enforcement data, including false positives (mistaken removal of non-violating posts) of Holocaust denial content, at a more granular level, especially in view of the real-world consequences of Holocaust denial identified by Meta when it made its policy change. This would enable Meta to measure and report on enforcement accuracy, increasing transparency and potentially improving enforcement. With the limits of human and automated moderation, and the increasing reliance on artificial intelligence to aid content moderation, the Board is interested in how the development of such systems can be shaped to prioritize more accurate enforcement at a more granular policy level. In response to the Board's recommendation no.
5 in the Mention of the Taliban in News Reporting case, Meta said it would develop new tools that would allow it to “gather more granular details about our enforcement of the [Dangerous Organizations and Individuals] news reporting policy allowance.” In the Board’s view, this should also be extended to enforcement of the Hate Speech policy. The Board is also concerned about the application of Meta’s COVID-19 automation policies that were still in force as of May 2023. These led to the automatic closure of one of the reports made on this content after the Hate Speech policy was changed, as well as the automatic closure of the appeal that led to the Board taking on this case. Meta first announced that it would be sending content reviewers home due to the COVID-19 pandemic in March 2020. In response to questions from the Board, Meta explained that the “policy was created at the beginning of the COVID-19 pandemic in 2020 due to a temporary reduction in human reviewer capacity. This automation policy auto-closed review jobs based on a variety of conditions and criteria to reduce the volume of reports for human reviewers but kept open [for review] reports that are potentially high risk.” The user’s appeal was auto-closed in May 2023, more than three years after the COVID-19 pandemic began, and shortly after the WHO declared that COVID-19 was no longer a “public health emergency of international concern” and the United States ended its own COVID-19 public health emergency. The Board is concerned that measures Meta introduced to handle the pandemic at its outset, which significantly reduced the availability of access to appeal and careful human review, became a new and permanent modus operandi, enduring long after circumstances reasonably justified it. During the COVID-19 pandemic, antisemitism increased and conspiracy theories circulated claiming that Jewish people were purposefully spreading the virus. There was a pressing need for Meta to prioritize the review and removal of hate speech, given the severe impacts of such speech on individuals’ rights, as soon as the circumstances of this emergency allowed. The Board is concerned that a measure introduced as a pandemic contingency could remain in place for a significant period without Meta demonstrating the necessity of such a substantial scaling back of the careful human review needed to implement its detailed and sensitive policies. The Board recommends that Meta restore review of content moderation decisions as soon as possible and publish information in its Transparency Center when it does so. 8.2 Compliance With Meta’s Human-Rights Responsibilities Freedom of Expression (Article 19 ICCPR) Article 19 of the ICCPR provides for broad protection of the right to freedom of expression, including discussions on matters of history. The Human Rights Committee has said that the scope of this right “embraces even expression that may be regarded as deeply offensive, although such expression may be restricted in accordance with the provisions of article 19, paragraph 3 and article 20,” (General Comment No. 34, para. 11). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human-rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance.
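To make concrete the granular enforcement labelling and accuracy measurement discussed in the enforcement analysis above (section 8.1), the following sketch shows one possible shape such data could take. It is purely illustrative: the label names, record fields and sample data are hypothetical and do not describe Meta’s systems.

```python
# Illustrative only: if each enforcement action carried a granular policy
# label (e.g., "hate_speech/holocaust_denial" rather than just "hate_speech")
# plus a later audit verdict, precision and false-positive counts could be
# reported for that specific sub-policy. All names and data are hypothetical.

from dataclasses import dataclass

@dataclass
class EnforcementRecord:
    content_id: str
    policy_label: str      # granular label applied at enforcement time
    removed: bool          # whether the content was actioned
    audit_violating: bool  # ground-truth verdict from a quality audit

def sub_policy_precision(records: list[EnforcementRecord], label: str) -> tuple[float, int]:
    """Return (precision, false_positive_count) for removals under one granular label."""
    removals = [r for r in records if r.removed and r.policy_label == label]
    if not removals:
        return 0.0, 0
    true_positives = sum(r.audit_violating for r in removals)
    false_positives = len(removals) - true_positives
    return true_positives / len(removals), false_positives

if __name__ == "__main__":
    sample = [
        EnforcementRecord("a1", "hate_speech/holocaust_denial", True, True),
        EnforcementRecord("a2", "hate_speech/holocaust_denial", True, False),  # mistaken removal
        EnforcementRecord("a3", "hate_speech/other", True, True),
    ]
    print(sub_policy_precision(sample, "hate_speech/holocaust_denial"))  # (0.5, 1)
```

Without the granular label, the two Holocaust denial records above would be indistinguishable from other hate speech enforcement, which is precisely the measurement gap the Board’s recommendation targets.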
Additionally, the ICCPR requires states to prohibit advocacy of racial hatred that constitutes incitement to hostility, discrimination or violence (Article 20, para. 2, ICCPR). As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” ( A/74/486, para. 41 ). Meta has the responsibility to prevent and mitigate incitement on its platforms. Public comments in this case reflect diverging views on how international human-rights standards on limiting expression should be applied to the moderation of Holocaust denial content online. Several public comments argued that Meta’s human-rights responsibilities require such content to be removed (see PC-15023, American Jewish Committee and its Jacob Blaustein Institute for the Advancement of Human Rights; PC-15024, Louis D. Brandeis Center for Human Rights Under Law; and PC-15018, Prof. Yuval Shany of the Hebrew University of Jerusalem Faculty of Law). Others argued that Meta should address the lack of specificity in the policy by defining Holocaust denial, making clearer the prohibition aims at addressing antisemitism, as well as improve training of human reviewers (see PC-15034, University of California, Irvine – International Justice Clinic). Finally, some public comments argued that Holocaust denial content should only be removed when it constitutes direct incitement to violence under Article 20, para. 2 of the ICCPR (see PC-15022, Future of Free Speech Project). I. Legality (Clarity and Accessibility of the Rules) The Board finds that the current Hate Speech policy prohibition on Holocaust denial is sufficiently clear to satisfy the legality standard. Since its revision in October 2020, the Hate Speech Community Standard clearly states that content denying the Holocaust is not allowed. However, the Board notes that the language is less clear than when it was first introduced, in two ways. First, as noted above, UN resolution 76/250 specifically urges social media companies to address Holocaust denial or distortion [emphasis added]. That means the removal of the word “distortion” (originally included alongside “denial”) in 2022 lessened the policy’s conformity with UN recommendations. Second, the placement of the policy line under the prohibition on “Harmful stereotypes historically linked to intimidation, exclusion, or violence on the basis of a protected characteristic,” reduces the policy’s clarity. Holocaust denial is linked to antisemitic stereotypes, but not all instances will necessarily be an example of direct stereotyping. The Hate Speech policy prior to the October 2020 revisions did not expressly prohibit Holocaust denial, but the prohibition on “mocking the concept, events or victims of hate crimes” did, in the Board’s view, cover most instances of Holocaust denial, even if it did not address fully the nature of the Holocaust. Notwithstanding that the current policy on Holocaust denial is expressly included in the Facebook Community Standards, the same is not true of the Instagram Community Guidelines, in which Holocaust denial is not mentioned at all. The Board emphasizes that it has asked Meta in several recommendations to align its Instagram and Facebook standards and distinguish where there are inconsistencies. 
Meta has committed to implementing these recommendations fully, but it also explains in its Transparency Center that it does “not believe adding a short explanation to the Community Guidelines introduction will fully address the board’s recommendation and may lead to further confusion. Instead, we are working to update the Instagram Community Guidelines so that they are consistent with the Facebook Community Standards in all of the shared policy areas.” In its quarterly update, Meta said this is a key priority but has had to be deprioritized because of regulatory compliance work. Meta will not complete this recommendation this year and expects to have an update on the progress in Q2 2024. Noting that its commissioned research and civil society investigations indicate that Holocaust denial is more prevalent on Instagram, the Board reiterates its prior recommendation and urges Meta to continue to communicate any delays and implement any short-term policy solutions available to bring more clarity to Instagram users, in particular on the issue of Holocaust denial. Content is accessible on Meta’s platforms on a continuing basis and content moderation policies are applied on a continuing basis. Therefore, Meta removing old posts still hosted on Facebook or Instagram, after a rule change that clearly prohibits that content, does not violate the requirements of legality. Rather, continuous publication of the content that Meta hosts after a substantive policy change or clarification when it comes to Tier 1 (and in other situations where human life is at risk) necessitates removal, even for posts that pre-date the introduction of new rules. Meta does not count strikes “on violating content posted over 90 days ago for most violations or over 4 years ago for more severe violations.” This means that in most cases, there would also be no penalty for previously permitted content that later comes to violate new rules. However, the strikes policy means that users could incur penalties where Meta changes a rule and subsequently enforces it against content posted up to 90 days prior to the rule change. The Board emphasizes that while it is consistent with the principle of legality to remove content after a rule change in the specific context of social media, for the reasons outlined above, it is not appropriate to apply retroactive punishment in the form of strikes when removing content that was permitted when it was posted. II. Legitimate Aim In numerous cases, the Board has recognized that Meta’s Hate Speech Community Standard pursues the legitimate aim of protecting the rights of others. Meta explicitly states that it does not allow hate speech because it “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.” Meta also noted similar aims when it announced the introduction of a specific prohibition on Holocaust denial. Numerous public comments noted this addition was necessary to safeguard against increased incitement to violence and hostility (PC-15023, American Jewish Committee and its Jacob Blaustein Institute for the Advancement of Human Rights), emphasizing that Holocaust denial and distortion amounts to a discriminatory attack against Jewish people and promotes antisemitic stereotypes, often connected to and spread during antisemitic hate crimes. It is important to understand Holocaust denial as a constitutive element of antisemitism that is discriminatory in its consequences. 
The denial of the Holocaust amounts to the denial of “barbarous acts which have outraged the conscience of mankind,” as described by the Universal Declaration of Human Rights (see also UN General Assembly Resolution 76/250 ). The Hate Speech Community Standard and its prohibition on Holocaust denial pursues the legitimate aim of respecting the rights to equality and non-discrimination of Jewish people as well as their right to freedom of expression. Allowing such hate speech would create an environment of intimidation that effectively excludes Jewish people from Meta’s platforms [see, e.g., PC-15021, Monika Hübscher, noting that “individuals impacted by antisemitic hate speech on social media describe the attacks in a language that equals the depictions of physical acts. Exposure to hate on social networks can lead to feelings of fear, insecurity, heightened anxiety, and even sleep disturbance”]. Meta’s prohibition on Holocaust denial also serves the legitimate aim of respecting the right to reputation and the dignity and memory of those who perished in the most inhumane circumstances and the rights of their relatives. Such hate speech is a fundamental attack on the dignity of human beings (see also Universal Declaration of Human Rights, Article 1). III. Necessity and Proportionality Meta's decision to ban Holocaust denial is consistent with its human-rights responsibilities. The Board notes that Meta’s responsibilities to remove hate speech in the form of Holocaust denial can be considered necessary and proportionate in numerous ways. Under ICCPR Article 19, para. 3, necessity requires that restrictions on expression “must be appropriate to achieve their protective function.” The removal of the content would not be necessary “if the protection could be achieved in other ways that do not restrict freedom of expression” ( General Comment No. 34 , para. 33). Proportionality requires that any restriction “must be the least intrusive instrument amongst those which might achieve their protective function” ( General Comment No. 34 , para. 34). The Board considers that there are different ways to approach content that denies the Holocaust. While the majority of Board members consider – for various reasons explained below – that the prohibition on Holocaust denial satisfies the principle of necessity and proportionality, a minority considers that Meta did not meet the conditions for establishing this prohibition. For the majority, UN General Comment No. 34 does not invalidate prohibitions on Holocaust denial that are specific to the regulation of hate speech, as Meta’s prohibition is, when such denial is understood as an attack against a protected group. Meta’s rule expressly prohibiting Holocaust denial as hate speech was a response to an alarming rise in the dissemination of such antisemitic content online that was internationally denounced; the on and offline harm that such hate speech causes; and the staggering ignorance about the commission of these heinous crimes of the Holocaust that offend the conscience of humanity and whose veracity has been conclusively demonstrated. 
The Board notes that the prohibition is also responsive to UN General Assembly Resolution 76/250, which “urges ... social media companies to take active measures to combat antisemitism and Holocaust denial or distortion by means of information and communications technologies and to facilitate reporting of such content.” The majority notes that Meta’s prohibition is also not absolute, as specific exceptions exist to allow condemnation, awareness raising and satire, as well as broader exceptions such as the newsworthiness allowance. The ban on Holocaust denial is therefore in conformity with the ICCPR and the obligations expressed in Article 4 (a) of the International Convention on the Elimination of All Forms of Racial Discrimination. Holocaust denial is “a dissemination of ideas based on racial hatred,” given that “Holocaust denial in its various forms is an expression of antisemitism” (see also UN General Assembly Resolution 76/250). Furthermore, in the above-mentioned context, the post, by denying the facts of the Holocaust, may contribute to the creation of an extremely hostile environment on the platform, causing exclusion of the impacted communities, and profound pain and suffering. Therefore, there is “a direct and immediate connection between the expression and the threat” to the voice, dignity, safety and reputation of others that justifies the prohibition in the sense required by General Comment 34, at para. 35. For some members of the majority, there are additional reasons to support Meta’s prohibition. A legally proven fact cannot be the subject matter of divergent opinions when such lies have directly harmful consequences on others’ rights to be protected from violence and discrimination. The presentation of Holocaust denial as opinion about “historical facts” is therefore an abuse of the right to freedom of expression. These same members note that in Faurisson v. France (550/1993), the UN Human Rights Committee found that a ban on Holocaust denial complied with the requirements of Article 19, para. 3. The Committee came to this conclusion in the context of the French Gayssot Act, which made it illegal to question the existence or size of the crimes against humanity recognized in the Charter of the Nuremberg Tribunal. The Committee’s decision, which relates to enforcement of a law that would seemingly prohibit the content under consideration in this case, supports the Board’s conclusion that Meta’s eventual removal of the post was permissible under international human rights law. For other members of the majority, who do not consider the Faurisson case to remain valid doctrine, the company's decision is consistent with the principles of necessity and proportionality for different reasons, summarized below and arising from the Board's precedents. In previous cases, the Board has agreed with the UN Special Rapporteur on freedom of expression that although some restrictions (such as general bans on certain speech) would generally not be consistent with governmental human rights obligations (particularly if enforced through criminal or civil penalties), Meta may prohibit such speech provided that it demonstrates the necessity and proportionality of the restriction (see South Africa Slurs decision and Zwarte Piet decision). In these cases, companies should “give a reasoned explanation of the policy difference in advance, in a way that articulates the variation” (A/74/486, para. 48; A/HRC/38/35, para. 28).
For a prohibition of this kind to be compatible with Meta's responsibilities, it is necessary that it be based on a human rights analysis demonstrating that the policy pursues a legitimate aim and is useful, necessary and proportionate to achieve that aim (see South Africa Slurs decision), and that the prohibition is periodically reviewed to ensure that the need persists (see Removal of COVID-19 Misinformation policy advisory opinion). For these members of the majority, these conditions are met, as demonstrated by the evidence the Board found and summarized in earlier sections of this decision, particularly the alarming rise of antisemitism globally and the growth of antisemitic violence online and offline. The Board agrees that there are different forms of intervention that social media platforms such as Meta can deploy besides content removal to address hate speech against Jewish people. The UN Special Rapporteur on freedom of opinion and expression has recommended that social media companies should consider a range of possible responses to problematic content beyond removal to ensure restrictions are narrowly tailored, including geo-blocking, reducing amplification, warning labels and promoting counter-speech (A/74/486, para. 51). The Board welcomes various initiatives from Meta to counter antisemitism, in addition to removal of violating content, including educating people about the Holocaust, directing people to credible information off Facebook if they search for terms associated with the Holocaust or its denial on its platforms, and engaging with organizations and institutions that work on combating hate and antisemitism. The Board encourages Meta to roll these initiatives out uniformly across Instagram and explore targeting them at people who violate the Holocaust denial policy. For the majority, given the evidence of the negative impact of Holocaust denial on users of Meta's platforms, these measures, while valuable, cannot fully protect Jewish people from discrimination and violence. As public comments also note, “less severe interventions than removal of Holocaust denial content, such as labels, warning screens, or other measures to reduce dissemination, may be useful but would not provide the same protection [as removal]” (PC-15023, American Jewish Committee and its Jacob Blaustein Institute for the Advancement of Human Rights). In the absence of less intrusive means to effectively combat hate speech against Jewish people, the majority finds the Holocaust denial prohibition meets the requirements of necessity and proportionality. While a minority of Board Members also firmly condemns Holocaust denial and believes it should be addressed by social media companies, they find the majority’s necessity and proportionality analysis is out of step with the UN human rights mechanisms’ approach to freedom of expression over the last 10 years. First, with regard to reliance on the Human Rights Committee’s 1996 Faurisson case in justifying the removal as necessary and proportionate, the minority highlighted (as did PC-15022, Future of Free Speech Project) that the lead drafter of General Comment 34 confirmed that Faurisson was effectively overruled by General Comment 34, which was adopted in 2011 (Michael O’Flaherty, Freedom of Expression: Article 19 of the ICCPR and the Human Rights Committee’s General Comment 34, 12 Hum. Rts. L. Rev. 627, 653 (2012)).
Paragraph 49 of General Comment 34 states that the ICCPR does not permit the general prohibition of expressions of erroneous opinions about historical facts. Any restrictions on expression must meet the strict tests of necessity and proportionality, which require considering likely and imminent harm. The minority finds that the reliance on Article 4 of the ICERD is misplaced, as the Committee on the Elimination of Racial Discrimination (CERD, which is charged with monitoring implementation of the ICERD) specifically addressed the topic of genocide denial, stating it should only be banned when the statements “clearly constitute incitement to racial violence or hatred. The Committee also underlines that ‘the expression of opinions about historical facts’ should not be prohibited or punished” (CERD General Recommendation No. 35, para. 14) [emphasis added]. This minority of Board Members is not convinced that content removal is the least intrusive means available to Meta to address antisemitism, and considers that Meta’s failure to demonstrate otherwise means the requirement of necessity and proportionality is not satisfied. The Special Rapporteur has stated that “just as States should evaluate whether a limitation on speech is the least restrictive approach, so too should companies carry out this kind of evaluation. And, in carrying out the evaluation, companies should bear the burden of publicly demonstrating necessity and proportionality” (A/74/486, para. 51) [emphasis added]. For this minority, Meta should have publicly demonstrated why removal of such posts is the least intrusive means among the many tools it has at its disposal to avert likely near-term harms, such as discrimination or violence. If it cannot provide such a justification, then Meta should be transparent in acknowledging that its speech rules depart from UN human-rights standards and provide a public justification for doing so. The minority believes that the Board would then be positioned to consider Meta’s public justification, and a public dialogue would ensue without distorting existing UN human-rights standards. 9. Oversight Board Decision The Oversight Board overturns Meta's original decision to leave up the content. 10. Recommendations Enforcement 1. To ensure that the Holocaust denial policy is accurately enforced, Meta should take the technical steps needed to ensure that it is sufficiently and systematically measuring the accuracy of its enforcement of Holocaust denial content. This includes gathering more granular details about its enforcement of this content, as Meta has done in implementing the Mention of the Taliban in News Reporting recommendation no. 5. The Board will consider this recommendation implemented when Meta provides the Board with its first analysis of enforcement accuracy of Holocaust denial content. Transparency 2. To provide greater transparency about whether Meta’s appeals capacity has been restored to pre-pandemic levels, Meta should publicly confirm whether it has fully ended all COVID-19 automation policies put in place during the COVID-19 pandemic. The Board will consider this recommendation implemented when Meta publishes information publicly on each COVID-19 automation policy, including when each was or will be ended. The Oversight Board also reiterates the importance of its previous recommendations calling for alignment of the Instagram Community Guidelines and Facebook Community Standards, noting the relevance of these recommendations to the issue of Holocaust denial (recommendation no.
7 and 9 from the Breast Cancer Symptoms and Nudity case; recommendation no. 10 from the Öcalan’s Isolation case; no. 1 from the Ayahuasca Brew case; and recommendation no. 9 from the Sharing Private Residential Information policy advisory opinion). In line with those recommendations, Meta should continue to communicate delays in aligning these rules, and it should implement any short-term solutions to bring clarity to Instagram users. *Procedural Note: The Oversight Board’s decisions are prepared by panels of five Members and approved by a majority of the Board. Board decisions do not necessarily represent the personal views of all Members. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by an independent research institute headquartered at the University of Gothenburg, which draws on a team of over 50 social scientists on six continents, as well as more than 3,200 country experts from around the world. The Board was also assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis." th-nc063kad,Statements About the Japanese Prime Minister,https://www.oversightboard.com/decision/th-nc063kad/,"September 10, 2024",2024,,"Elections,Freedom of expression",Violence and incitement,Overturned,Japan,"In the case of a user’s reply to a Threads post about the Japanese Prime Minister and a tax fraud scandal, it was neither necessary nor consistent with Meta’s human rights responsibilities for the content to be removed.",42938,6557,"Overturned September 10, 2024 In the case of a user’s reply to a Threads post about the Japanese Prime Minister and a tax fraud scandal, it was neither necessary nor consistent with Meta’s human rights responsibilities for the content to be removed. Standard Topic Elections, Freedom of expression Community Standard Violence and incitement Location Japan Platform Threads Japanese Translation Statements About the Japanese Prime Minister Decision PDF To read the decision in Japanese, click here. In the case of a user’s reply to a Threads post about the Japanese Prime Minister and a tax fraud scandal, it was neither necessary nor consistent with Meta’s human rights responsibilities for the content to be removed. This case grapples with the issue of how Meta should distinguish between figurative and actual threats of violence. The Board has repeatedly highlighted over-enforcement against figurative threats. It is concerning that Meta’s Violence and Incitement policy still does not clearly distinguish literal from figurative threats. In this case, the threat against a political leader was intended as non-literal political criticism calling attention to alleged corruption, using strong language, which is not unusual on Japanese social media. It was unlikely to cause harm. Even though the two moderators involved spoke Japanese and understood the local sociopolitical context, they still removed the content in error. Therefore, Meta should provide additional guidance to its reviewers on how to evaluate language and local context, and ensure its internal guidelines are consistent with the policy rationale.
About the Case In January 2024, a Threads post was shared that shows a news article about the Japanese Prime Minister Fumio Kishida and his response to fundraising irregularities involving his party. The post’s caption criticizes the Prime Minister for tax evasion. A user replied publicly to that post, calling for an explanation to be given to Japan’s legislative body followed by the word “hah,” and referring to the Prime Minister as a tax evader by using the phrase “死ね,” which translates as “drop dead/die” in English. The phrase is included in several hashtags and the user’s reply also includes derogatory language about a person who wears glasses. The user’s reply to the Threads post did not receive any likes and was reported once under Meta’s Bullying and Harassment rules. Three weeks later, a human reviewer determined the content broke the Violence and Incitement rules instead. When the user appealed, another human reviewer decided once more that the content was violating. The user then appealed to the Board. After the Board selected the case, Meta decided its original decision was wrong and restored the user’s reply to Threads. Around the time of the original Threads post and the user’s reply, Japanese politicians from the Liberal Democratic Party had been charged with underreporting fundraising incomes, although this did not include Prime Minister Kishida. Since 2022, when former Prime Minister Shinzo Abe was assassinated, there has been some concern about political violence in Japan. Fumio Kishida recently announced he will not seek re-election as leader of Japan’s Liberal Democratic Party on September 27, 2024 and is to step down as Prime Minister. Key Findings The Board finds that the phrase “drop dead/die” (translated from the original “死ね”) was not a credible threat and did not break the Violence and Incitement rule that prohibits “threats of violence that could lead to death.” Experts confirmed the phrase is broadly used in a figurative sense as a statement of dislike and disapproval. The content also points to this figurative use, with inclusion of the word “hah” expressing amusement or irony. However, Meta’s Violence and Incitement rule that prohibits calls for death using the phrase “death to” against high-risk persons is not clear enough. Meta’s policy rationale suggests that context matters when evaluating threats but, as has been noted by the Board in a previous case, Meta’s at-scale human reviewers are not empowered to assess the intent or credibility of a threat, so if a post includes threatening statements like “death to” and a target (i.e., “a call for violence against a target”), it is removed. Repeating a 2022 recommendation , the Board calls on Meta to include an explanation in the policy’s public language that rhetorical threats using the phrase “death to” are generally allowed, except when directed at high-risk individuals, and to provide criteria on when threatening statements directed at heads of state are permitted to protect rhetorical political speech. It is also confusing how this policy differs in its treatment of “public figures” and “high-risk persons.” Currently, medium severity violence threats against public figures are only removed when “credible,” compared with content removal “regardless of credibility” for other individuals. More confusingly still, there is another line in this policy that gives “additional protections” to high-risk persons. 
Internal guidance on this to reviewers, which is not available publicly, specifically indicates that “death to” content against such high-risk people should be removed. When asked by the Board, Meta said its policy offers greater protection for users’ speech involving medium severity threats at public figures because people often use hyperbolic language to express their disdain, without intending any violence. However, threats of high-severity violence, including death calls against high-risk persons, carry a greater risk of potential offline harm. In this case, Meta identified the Japanese Prime Minister as falling into both categories. The Board has real concerns about the policy’s definitions of “public figures” and “high-risk persons” not being clear enough to users, especially when the two categories interact. In response to the Board’s previous recommendations, Meta has completed some policy work to strike a better balance between violent speech and political expression, but it has not yet publicly clarified who “high-risk persons” are. The Board believes providing a general definition with illustrative examples in the Community Standards would allow users to understand that this protection is based on the person’s occupation, political activity or public service. The Board offered such a list in the 2022 Iran Protest Slogan case. The Oversight Board’s Decision The Board overturns Meta’s original decision to take down the content. The Board recommends that Meta: *Case summaries provide an overview of cases and do not have precedential value. 1. Case Description and Background In January 2024, a user replied publicly to a Threads post containing a screenshot of a news article. The article included a statement by Prime Minister Fumio Kishida about unreported fundraising revenues involving members of his Liberal Democratic Party. In the statement, Kishida said the amount “remained intact and was not a slush fund.” The main Threads post included an image of the Prime Minister and a caption criticizing him for tax evasion. The user’s response to the post calls for an explanation to be given to Japan’s legislative body and includes the interjection “hah.” It also includes several hashtags using the phrase “死ね” (transliterated as “shi-ne” and translated as “drop dead/die”) to refer to the Prime Minister as a tax evader as well as derogatory language for a person who wears glasses, such as #dietaxevasionglasses and #diefilthshitglasses (translated from Japanese). All the content is in Japanese. Both the post and reply were made around the time of the Prime Minister’s parliamentary statement addressing his party’s alleged underreporting of this revenue. Fumio Kishida, who has served as Japan’s Prime Minister since October 2021, recently announced he will not seek re-election in the Liberal Democratic Party’s leadership election, to be held on September 27, 2024. The user’s reply did not receive any likes or responses. It was reported once under the Bullying and Harassment policy for “calls for death” towards a public figure. Due to a backlog, a human moderator reviewed the content approximately three weeks later, determining that it violated Meta’s Violence and Incitement policy and removing it from Threads. The user then appealed to Meta. A second human reviewer also found that the content violated the Violence and Incitement policy. Finally, the user appealed to the Board. 
After the Board selected this case, Meta determined that its original decision to remove the content was an error and restored it on Threads. The Oversight Board considered the following context in coming to its decision. When the reply to the Threads post was disseminated in January 2024, prosecutors had recently indicted Japanese politicians belonging to the Liberal Democratic Party for underreporting fundraising incomes. Prime Minister Kishida himself was not indicted. Research commissioned by the Board identified a general sentiment of disapproval and criticism on Threads towards the Prime Minister in relation to the tax fraud allegations, with other posts containing the phrase “死ね” (drop dead/die). Experts consulted by the Board noted that people in Japan use social media frequently to post political criticism. In the past, online message boards have served as anonymous platforms to express social discontent without fear of consequences (see also public comments, PC-29594 and PC-29589). According to experts consulted by the Board, political violence in Japan has been rare in recent decades. For this reason, the nation was shocked in 2022 when former Prime Minister Shinzo Abe was assassinated while campaigning. Concerns about political violence rose in April 2023 when a man used a pipe bomb during a campaign speech by Prime Minister Kishida, wounding two bystanders but not harming the Prime Minister. According to linguistic experts consulted by the Board, the phrases used in the post are offensive and widely used to convey severe disapproval or frustration. While the phrase “死ね” (drop dead/die) may in some instances be used literally as a threat, it is generally used figuratively to express anger without being a genuine threat (see also public comment by Ayako Hatano, PC-29588). In 2017, the UN Special Rapporteur on Freedom of Expression voiced concerns about freedom of expression in Japan. These concerns related to the use of direct and indirect pressure by government officials on media, the limited capacity to debate historical events and the increased restrictions on information access based on assertions of national security. In its 2024 Global Expression Report, Article 19 placed Japan at 30 out of 161 countries. Freedom House classified Japan as “Free” in its 2023 Freedom on the Net evaluation, but raised concerns about government intervention in the online media ecosystem, the lack of independent regulatory bodies and the lack of clear definitions in the recent legislative amendments regulating online insults. However, the organization’s Freedom in the World report gave the country 96 out of 100 points for political and civil liberties. Japan also consistently receives high ratings in democracy and rule of law indices. In 2023, the World Justice Project’s Rule of Law Index ranked Japan 14th out of 142 countries. 2. User Submissions In their statement to the Board, the user who posted the reply claimed they were merely criticizing the Liberal Democratic Party government for its alleged acts of condoning and abetting tax evasion. They said that Meta’s removal of their post contributed to the obstruction of freedom of speech in Japan by prohibiting criticism of a public figure. 3. Meta’s Content Policies and Submissions I. Meta’s Content Policies The Board’s analysis was informed by Meta’s commitment to voice, which the company describes as “paramount,” and its value of safety.
Meta assessed the content under its Violence and Incitement and Bullying and Harassment policies and initially removed it under the Violence and Incitement policy. After the Board identified this case for review, the company determined the content did not violate either policy. Violence and Incitement Community Standard The policy rationale for the Violence and Incitement Community Standard explains that Meta intends to “prevent potential offline harm that may be related to content on [its] platforms” while acknowledging that “people commonly express disdain or disagreement by threatening or calling for violence in non-serious and casual ways.” It acknowledges: “Context matters, so [Meta] consider[s] various factors such as condemnation or awareness raising of violent threats, … or the public visibility and vulnerability of the target of the threats.” This policy provides universal protection for everyone against “threats of violence that could lead to death (or other forms of high-severity violence).” Threats include “statements or visuals representing an intention, aspiration or call for violence against a target.” Before April 2024, the policy prohibited “threats that could lead to serious injury (mid-severity violence) and admission of past violence” towards certain people and groups, including high-risk persons. In April 2024, Meta updated this policy to provide universal protection against such threats for everyone regardless of credibility, except for threats “against public figures,” which the policy requires to be “credible.” The only mention of “high-risk persons” left in the current version of the policy relates to low-severity threats, where the policy still allows for “[a]dditional protections for Private Adults, All Children, high-risk persons and persons or groups based on their protected characteristics ...” The public-facing language of the policy does not define the term “high-risk persons.” However, Meta’s internal guidelines to reviewers contain a list of high-risk persons that includes heads of state; former heads of state; candidates and former candidates for head of state; candidates in national and supranational elections for up to 30 days after election if not elected; people with a history of assassination attempts; activists and journalists (see Iran Protest Slogan decision). Bullying and Harassment Community Standard Meta’s Bullying and Harassment Community Standard prohibits various forms of abuse directed against individuals, including “making threats,” and “distinguishes between public figures and private individuals” to “allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience.” This policy prohibits “severe” attacks on public figures, as well as certain attacks where the public figure is “purposefully exposed,” defined as “directly tagg[ing] [a public figure] in the post or comment.” The policy defines a “public figure” to include “state and national level government officials, political candidates for those offices, people with over one million fans or followers on social media and people who receive substantial news coverage.” II. Meta’s Submissions Meta informed the Board that the term “死ね” (drop dead/die) in the hashtags did not violate its policies in this case. Meta regards this use as a political statement that contains figurative speech, rather than a credible call for death.
The company explained that it often cannot distinguish at-scale between statements containing credible death threats and figurative language intended to make a political point, which is why it initially removed the content. Meta told the Board that Prime Minister Kishida is considered a public figure under the company’s Violence and Incitement and Bullying and Harassment policies, while the user replying to the post was not considered a public figure. Meta also informed the Board that Prime Minister Kishida is considered a “high-risk person” under the Violence and Incitement Community Standard. Violence and Incitement Community Standard Under its Violence and Incitement policy, Meta prohibits: “Threats of violence that could lead to death (or other forms of high-severity violence).” In its non-public guidelines to human reviewers, Meta notes that it removes calls for death of a high-risk person if those calls use the words “death to.” Meta told the Board that the concept of a high-risk person is limited to this policy and includes political leaders, who may be at higher risk of assassination or other violence. Meta acknowledged that it is challenging to maintain a distinction between the phrases “death to” and “die” in every case, particularly when the meaning of the phrases may overlap in the original language. In this case, the content uses the phrase “die,” not “death to,” in the hashtags #dietaxevasionglasses and #diefilthshitglasses (translated from Japanese). In addition, Meta noted that even if it treated “die” and “death to” similarly (as a call for death), the company would not remove this content on escalation because it is a non-literal threat that does not violate the spirit of the policy. The “spirit of the policy” allowance permits content when a strict interpretation of a policy produces an outcome that is at odds with that policy’s intent (see Sri Lanka Pharmaceuticals decision). Meta deemed the threat to be non-literal because the other words of the hashtags and of the reply itself are about political accountability through hearings before Japan’s legislature. As such, the call to have a political leader held to account before a legislative body indicated that the death threat was figurative rather than literal. For these reasons, Meta determined that the content does not violate the Violence and Incitement policy. Bullying and Harassment Community Standard Meta informed the Board that the content did not violate its Bullying and Harassment policy because the content did not “purposefully expose” a public figure. The user did not tag or reply to a comment by Prime Minister Kishida and did not post the content on the Prime Minister’s page. Meta therefore determined the content did not purposefully expose Prime Minister Kishida and would not violate the Bullying and Harassment policy even if the threat were literal. The Board asked Meta 19 questions in writing. The questions related to Meta’s enforcement practices and resources in Japan, the training provided for at-scale human reviewers and how it incorporates local context, the process for escalating at-scale policy lines, the feasibility of enforcing the policy prohibiting death threats against high-risk persons only on escalation, Meta’s review backlog on Threads and its automated detection capacities. Meta answered 17 questions in full and two questions in part. The company partially answered the questions related to the review backlog and governmental requests to take down content in Japan.
4. Public Comments The Oversight Board received 20 public comments that met the terms for submission: 13 from Asia Pacific and Oceania, three from the United States and Canada, three from Europe and one from Central and South Asia. To read public comments submitted with consent to publish, click here. The submissions covered the following themes: the sociopolitical context in Japan; online threats of violence against politicians and limitations on freedom of expression; the use of rhetorical threats or calls for violence in Japanese political discourse; the linguistic context of the phrase “drop dead/die”; and Meta’s choice not to recommend political content on Threads for pages not followed by users. 5. Oversight Board Analysis The Board examined whether this content should be removed by analyzing Meta’s content policies, human rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance. 5.1 Compliance with Meta’s Content Policies I. Content Rules Violence and Incitement Community Standard The Board finds that the content in this case does not violate the Violence and Incitement policy prohibiting “threats of violence that could lead to death (or other forms of high-severity violence).” The phrase “死ね” (drop dead/die) was used in a non-literal way and was not a credible threat. Linguistic experts consulted by the Board explained that although this phrase can sometimes be used to threaten someone’s life literally, it is broadly used in a figurative sense as a statement of dislike and disapproval. The experts found that the use of the term in this content fell into the figurative category. Data experts, who examined the incidence of the phrase on Threads and other platforms, concluded that the term is commonly used figuratively or ironically. This includes examples of users reporting that they are “dying” of pain or wishing that other users would “die” because of a comment those users made. The reply itself also suggests that the phrase was meant figuratively. The user’s reply to the Threads post called for the head of the National Tax Agency to appear before the national legislative body and explain the fraud allegations. The reply also included the interjection “hah.” In the Board’s view, the word “hah,” which usually expresses amusement or irony, suggests a non-literal meaning of the term “死ね” (drop dead/die). Similarly, the Board agrees with Meta’s assessment that the user’s proposed remedy – that Kishida be held to account by the country’s legislative body – suggests the content was political criticism, rather than a literal call for death. The Board acknowledges that recent events in Japan have heightened sensitivities around any call for the death of a political leader. The assassination of former Prime Minister Abe in 2022 and the use of a pipe bomb near Prime Minister Kishida in 2023 underscore the critical importance of taking credible death threats seriously. In this case, however, the call for death was simply not credible. Bullying and Harassment Community Standard The Board finds that the content in this case does not violate the Bullying and Harassment policy. The Board agrees with Meta that while Prime Minister Kishida meets the policy criteria for public figures, he was not “purposefully exposed” by the content. The user did not post the reply directly to Prime Minister Kishida’s page and did not tag him, and thus the content did not directly address the Prime Minister.
5.2 Compliance with Meta’s Human Rights Responsibilities The Board finds that removing the content from the platform was not consistent with Meta’s human rights responsibilities. Freedom of Expression (Article 19 ICCPR) Article 19 of the International Covenant on Civil and Political Rights (ICCPR) provides “particularly high” protection for “public debate concerning public figures in the political domain and public institutions,” (General Comment No. 34, para. 38). When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights, which Meta itself has committed to in its Corporate Human Rights Policy. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” (A/74/486, para. 41). The Board has recognized the importance of political speech against a head of state, even when it is offensive, as such leaders are legitimately subject to criticism and political opposition (see Iran Protest Slogan and Colombia Protests decisions; General Comment No. 34, at paras 11 and 38). I. Legality (Clarity and Accessibility of the Rules) The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” (Ibid.). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance regarding their enforcement. The Board finds that the prohibition on calls for death using the phrase “death to” against high-risk persons is not sufficiently clear and accessible to users. The policy rationale allows context to be considered when evaluating the credibility of a threat, such as whether content is posted for condemnation or awareness raising, or is a non-serious or casual threat. However, the policy rationale does not specify how non-literal statements are to be distinguished from credible threats. As noted by the Board in the Iran Protest Slogan case, at-scale human reviewers follow specific guidelines based on signals or criteria such as calls to death against a target. They are not empowered to assess the intent or the credibility of a threat, so if a post includes threatening statements like “death to” or “drop dead” (as in this case) and a target, it is removed. The Board therefore reiterates recommendation no. 1 from the Iran Protest Slogan case,
which states that Meta should include an explanation in the public-facing language of the Violence and Incitement policy that rhetorical threats using the phrase “death to” are generally allowed, except when directed at high-risk individuals, and provide criteria for when threatening statements directed at heads of state are permitted to protect clearly rhetorical political speech. The policy is also not sufficiently clear about its treatment of “public figures” and “high-risk persons.” The policy currently gives less protection to public figures, noting that threats of medium-severity violence towards public figures are removed only when “credible,” while such threats are removed “regardless of credibility” for other individuals. In contrast, the policy gives more protection to high-risk persons, through a policy line citing “additional protections” for such groups. As noted above, internal guidance also offers more protection to high-risk persons, calling for removal of “death to” statements when directed at high-risk persons. In response to the Board’s question, Meta explained that the policy offers greater protection for speech containing medium-severity threats directed at public figures because people frequently express disdain or disagreement with adult public figures using hyperbolic language but often do not intend to incite violence. In contrast, threats of high-severity violence, including calls for the death of high-risk persons, carry a greater risk of potential offline harm. In this case, Meta deemed Prime Minister Kishida to fall into both categories. The Board is concerned that the Violence and Incitement policy definitions of “public figures” and “high-risk persons” do not provide sufficient clarity for users to understand either category, much less what happens when the two categories interact. In the Iran Protest Slogan case, the Board recommended that Meta amend the Violence and Incitement Community Standard to include an illustrative list of high-risk persons, explaining that the category may include heads of state. Since the publication of that decision, Meta has initiated a policy development process to strike a better balance between violent speech and political expression. Nevertheless, the company has not yet publicly clarified who is a high-risk person. During its work on this case, the Board held a briefing session with Meta where the company explained that publishing its internal definition of high-risk persons could lead some users to circumvent existing policies and enforcement guidelines. The Board acknowledges Meta’s concern that publishing detailed guidelines could allow certain users to evade established enforcement rules. However, the Board believes that Meta should not take an all-or-nothing approach. Instead, Meta should publish a general definition of high-risk persons and an illustrative list of examples. Such an approach would allow users to understand that the protection of these persons is based on their occupation, political activity, public service or other risk-related activity. The Board believes that such an approach would not impede enforcement efficiency.
Indeed, the Board has already offered such a list with Meta’s agreement in the Iran Protest Slogan case, noting: “In addition to heads of state, other examples of high-risk persons include: former heads of state; candidates and former candidates for head of state; candidates in national and supranational elections for up to 30 days after election if not elected; people with a history of assassination attempts; activists and journalists.” Given that these examples are already in the public domain, they should be reflected in the Community Standard itself. Building on the Board’s findings in the Iran Protest Slogan case and the updates that Meta has already implemented to the Violence and Incitement policy, the Board recommends that Meta provide a general definition for high-risk persons clarifying that high-risk persons encompass people, like political leaders, who may be at higher risk of assassination or other violence, and provide illustrative examples, such as those discussed in the Iran Protest Slogan case. II. Legitimate Aim Any restriction on freedom of expression should also pursue one or more of the legitimate aims listed in the ICCPR. The Violence and Incitement Community Standard aims to “prevent potential offline harm” by removing content that poses “a genuine risk of physical harm or direct threats to public safety.” This policy serves the legitimate aim of protecting the right to life and the right to security of person (Article 6, ICCPR; Article 9, ICCPR). III. Necessity and Proportionality Under ICCPR Article 19(3), necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34). The Board finds that Meta’s original decision to remove the content under its Violence and Incitement policy was not necessary, as it was not the least intrusive measure to protect the safety of Prime Minister Kishida. This analysis is the crux of this case, as it again grapples with the challenging issue of how Meta should distinguish between rhetorical and actual threats. The Board has repeatedly expressed concern about over-enforcement against figurative threats in the Iran Protest Slogan, Iranian Woman Confronted on Street and Reporting on Pakistani Parliament Speech cases. These cases might be distinguished from the case in question because they concerned a slogan, a coordinated protest movement, or an impending election. Yet the core issue is the same: the restriction of political speech due to a non-credible threat of violence. The Board believes that Meta should enable such discussions and ensure that users can express their political views, including dislike or disapproval of politicians’ actions and behavior, without creating unnecessary barriers. However, the Board is concerned that Meta’s Violence and Incitement policy still does not clearly distinguish between literal and figurative threats. This problem is further emphasized by the fact that the content in this case was marked as violating by two human moderators who, according to Meta, spoke Japanese and were familiar with the local sociopolitical context. The six factors described in the Rabat Plan of Action (context, speaker, intent, content of the speech, extent of the speech and likelihood of imminent harm) provide valuable guidance in assessing the credibility of the threats.
Although the Rabat framework was created to assess incitement to national, racial or religious hatred, the six-factor test is useful for evaluating incitement to violence generally (see, for example, Iran Protest Slogan and Call for Women’s Protest in Cuba decisions). Given Meta’s original assumption that removing the content was necessary to protect the safety of Prime Minister Kishida, the Board used the six factors to assess the credibility of the alleged threat in this case. The content was posted during the 2023 tax fraud scandal involving Prime Minister Kishida’s party. Experts consulted by the Board explained that while political criticism online has increased in Japan, there is no clear link between online threats and recent violence against Japanese politicians. The user had fewer than 1,000 followers and was not a public figure, while the content received no views or likes, reflecting the low interest in the reply. The user’s intent appeared to be political criticism, calling attention to alleged political corruption using strong language that is not unusual on Japanese social media (see public comments PC-29589 and PC-29594), and the content was unlikely to cause imminent harm. The Board acknowledges that assessing the credibility of threats of violence is a context-specific and very difficult exercise, especially when enforcing against content on a global scale. The Board also understands that Meta can conduct a more accurate evaluation of the credibility of threats on-escalation. The Board considered recommending that Meta enforce the policy prohibiting threats using the phrase “death to” against high-risk persons on-escalation only. Escalation-only policies require additional context to enforce against content, with decisions made by subject matter experts as opposed to the at-scale human moderators who initially review the content. The Board understands that the number of Meta’s subject matter experts is significantly lower than the number of at-scale human reviewers, and thus the former’s capacity is limited. As such, escalation-only enforcement of this policy may lead to a significant amount of content not being reviewed due to lower expert capacity. Moreover, escalation-only rules can only be enforced if content is brought to Meta’s attention by some other means, for example, by Trusted Partners or where content receives significant press coverage (see Sudan’s Rapid Support Forces Video Captive decision). This means that Meta would be able to review threats of death using the phrase “death to” only when flagged through certain channels. The Board ultimately determined that this would likely result in under-enforcement and more death threats remaining on Meta’s platforms. Moreover, because Meta could not provide validated data about the prevalence of such content on its platforms, the Board could not assess the magnitude of such under-enforcement. The Board therefore is of the view that, to effectively protect political speech, Meta should provide additional guidance to its reviewers to evaluate language and local context, ensuring the guidelines it issues for moderators are consistent with the underlying policy rationale. In its previous cases on similar issues (see Iran Protest Slogan, Iranian Woman Confronted on Street and Reporting on Pakistani Parliament Speech), the Board explored policy and enforcement solutions, often time-sensitive and narrowly tailored to the specific context, including elections, crises and conflicts.
This has allowed Meta to adjust its enforcement practices and account for specific context by using mechanisms such as the Crisis Policy Protocol (CPP) and the Integrity Product Operations Center (IPOC). In this case, Meta informed the Board that it did not establish any special enforcement measures. Meta stated that a single incident such as the assassination of former Prime Minister Shinzo Abe, while tragic, is generally not sufficient to trigger such mechanisms, unless there are additional signals of wider risk or instability. Instead, Meta designated the assassination under its “Violating Violent Event” protocol, limited to content related to that instance of violence only. In these circumstances, Meta can rely only on its general policy and enforcement practices. Therefore, developing a scalable solution to distinguish credible from figurative threats is the only way to effectively protect political expression. Moreover, if Meta chooses to continue enforcing this policy at scale, the accuracy of its automated systems will continue to be affected by the quality of training data provided by human moderators. The Board reiterates its findings from the Iranian Woman Confronted on Street decision that when human moderators remove figurative statements based on the rigid enforcement of a rule, that mistake is likely to be reproduced and amplified through automation, leading to over-enforcement. Based on the Board’s findings that calls for death require a context-driven assessment of the probability that a threat will result in real-world harm, this could require more nuanced enforcement guidelines for at-scale human reviewers than those currently available. Meta’s internal guidelines instruct reviewers to remove calls for death using the specific phrase “death to” when directed against high-risk individuals. These guidelines do not reflect the Violence and Incitement policy rationale, which states that “context matters” and that it accounts for non-serious and casual ways of threatening or calling for violence to express disdain or disagreement. The Board, therefore, finds that Meta should update its internal guidelines and specific instructions to reviewers to explicitly allow for consideration of local context and language, and to account for “non-serious and casual ways” of threatening or calling for violence to express such disdain or disagreement. Finally, the Board is also concerned about Meta’s ability to handle context-sensitive content on Threads. Meta informed the Board that the review of the content in this case was delayed for about three weeks due to a backlog. Meta explained that at the time of enforcement, content moderation on Threads relied exclusively on human reviewers, whereas the company typically uses multiple techniques to prevent backlogs from accumulating, such as automatic closure of reports. Automatic closing of reports after 48 hours means that, unless there are mechanisms to keep them open, the reports will be closed without review, leaving users without an effective remedy. 6. The Oversight Board’s Decision The Oversight Board overturns Meta’s original decision to take down the content. 7. Recommendations Content Policy 1. Meta should update the Violence and Incitement policy to provide a general definition for “high-risk persons,” clarifying that high-risk persons encompass people, like political leaders, who may be at higher risk of assassination or other violence, and provide illustrative examples.
The Board will consider this recommendation implemented when the public-facing language of the Violence and Incitement policy reflects the proposed change. Enforcement 2. Meta should update its internal guidelines to at-scale reviewers about calls for death using the specific phrase “death to” when directed against high-risk persons. This update should allow posts that, in the local context and language, express disdain or disagreement through non-serious and casual ways of threatening violence. The Board will consider this recommendation implemented when Meta shares relevant data on the reduction of false positive identification of content containing calls for death using the specific phrase “death to” when directed against high-risk persons. Content Policy 3. Meta should hyperlink to its Bullying and Harassment definition of public figures in the Violence and Incitement policy, and in any other Community Standards where public figures are referenced, to allow users to distinguish public figures from high-risk persons. The Board will consider this recommendation implemented when the public-facing language of the Violence and Incitement policy, and of Meta’s Community Standards more broadly, reflects the proposed change. *Procedural Note: The Oversight Board’s decisions are made by panels of five Members and approved by a majority vote of the full Board. Board decisions do not necessarily represent the views of all Members. Under its Charter, the Oversight Board may review appeals from users whose content Meta removed, appeals from users who reported content that Meta left up, and decisions that Meta refers to it (Charter Article 2, Section 1). The Board has binding authority to uphold or overturn Meta’s content decisions (Charter Article 3, Section 5; Charter Article 4). The Board may issue non-binding recommendations that Meta is required to respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, a digital investigations group providing risk advisory and threat intelligence services to mitigate online harms, also provided research. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 350 languages and work from 5,000 cities across the world. **Translation Note: The translation process for the announcement of the case concerning Statements About the Japanese Prime Minister led to the use of a phrase in the Japanese version of the announcement that had a similar meaning, but differed from the original phrase used. Instead of the original phrase “死ね” (shi-ne), the announcement used the term “くたばれ” (kutabare) as the translation of “drop dead.” We understand that this may have caused confusion. Please be assured that the Board’s deliberation and decision were based on the original wording of “死ね” (shi-ne) and that we are committed to ensuring accuracy in our translation processes.