The advisory group that reviews Facebook and Instagram’s content moderation decisions issued its first annual report Wednesday, capping off its first year in operation.
The Oversight Board received more than one million appeals from Facebook and Instagram users in 2021, according to the report. Most of those appeals asked the board to restore content that had been removed from Meta’s apps for breaking rules against hate speech, violence and bullying. The board issued decisions and explanations on 20 cases it calls “significant.” In 70% of the cases the group reviewed, it overturned Meta’s initial determination.
“There was clearly enormous pent-up demand among Facebook and Instagram users for some way to appeal content moderation decisions Meta made, to an organization independent from Meta,” the board writes in the report.
The Oversight Board’s most prominent decision to date concerned the reinstatement of former President Donald Trump, who was removed from Facebook after encouraging the insurrection at the U.S. Capitol. The board upheld the suspension but asked Meta to clarify the rules it used to kick the former president off the platform to begin with. “In applying this penalty, Facebook did not follow a clear, published procedure,” the board wrote at the time, adding that Facebook did not have a rule for “indefinite” suspensions like the one issued to Trump.
Beyond its decisions, which set a kind of precedent for future policy enforcement, the board also makes broader recommendations to Meta about how the company should approach particular aspects of content moderation and what rules it should put in place.
In less high-profile instances, the board recommended that Meta tighten Facebook and Instagram’s rules against doxing, requested that the company publish a transparency report on how well it has enforced its COVID-19-related rules and asked it to prioritize fact-checking when governments share health misinformation through official channels.
The Oversight Board made 86 policy recommendations in its first year. Meta has implemented a few of the board’s suggestions for better moderation transparency, including giving users more insight when they violate the platform’s hate speech rules and telling them whether AI or human review led to an enforcement decision; other recommendations it has ignored outright. Those outcomes are tracked in the annual report, which does shed some light on how much impact the group really has and how often Meta implements or glosses over its recommendations.
The Oversight Board reviews content moderation cases from all around the world, at times sorting through linguistic and cultural nuances that Meta itself has failed to integrate into its moderation decisions, automated or not. Facebook whistleblower Frances Haugen has repeatedly raised alarms about the company’s ability to monitor its social platforms in non-English-speaking markets. According to the report, half of the Oversight Board’s decisions pertained to countries in the Global South, including some in Latin America and Africa.
Initially, the board only reviewed cases in which users were requesting that removed content be restored to Instagram and Facebook, but a few months in, the group expanded its remit to consider requests that content be taken down. Still, the Oversight Board’s decision making is limited to questions about individual posts, not the many other features that people use on Instagram and Facebook.
The board writes that it wants to expand the scope of its powers to advise Meta on moderation affecting accounts and groups across its platforms, not just individual posts. The Oversight Board is currently “in dialogue” with the company, which still has the final word on what the semi-independent advisory group can actually do.