In December 2019, Facebook announced an initial commitment of $130 million to launch its new Facebook Oversight Board. The Board, which has been called Facebook's Supreme Court, is designed as a way for users to appeal decisions made by Facebook about its enforcement of community standards, which prohibit "activity and content like violence and criminal behavior, pornography and other objectionable content and behavior, as well as ads on the service." The Oversight Board Trust was created as a "non-charitable purpose trust" under Delaware law, and the initial $130 million commitment will cover operating costs.
Facebook is currently vetting candidates for up to 40 spots on the Oversight Board. The Board will have its own staff, which will operate independently from Facebook; Facebook anticipates that the staff will include a director, case managers, and additional staff members or contractors.
The Board is seen as a way for Facebook to build user trust and position itself proactively against potential government regulation. Facebook believes that the Board should be focused on human rights principles, including freedom of expression, privacy, and remedy. It worked with BSR, an independent nonprofit organization with "expertise in human rights practices and policies," to commission an impact assessment on how best to respect and promote these human rights principles. The recommendations in the impact assessment include "diversity of board members, remedies, user support, transparent communications and privacy-protective tools," which Facebook states have helped inform its bylaws and its charter.
In Volume 21, Issue 1 of the North Carolina Journal of Law and Technology, Evelyn Douek published a piece entitled "Facebook's 'Oversight Board': Move Fast with Stable Infrastructure and Humility," in which she analyzes the value the Board can bring to Facebook's content moderation decisions. First, Douek proposes that the Board will be able to highlight weaknesses in Facebook's policy formation process, which would help improve the quasi-legislative process of creating its Community Standards. Second, Douek argues that the Board could serve as an important forum for the public discussion "necessary for persons in a pluralistic community to come to accept the rules that govern them, even if they disagree with the substance."
Douek highlights how the Facebook Oversight Board is intended to help with the impossible challenge of content moderation, made impossible by the sheer scale of a platform like Facebook, which has over 2 billion monthly active users and 2.5 billion pieces of content shared every day. Content is moderated to align with Facebook's Community Standards, and Facebook takes action on any content that breaches these rules. Facebook also re-reviews decisions that users appeal; in 2019, that amounted to nearly 25 million requests for appeal. Facebook has employed a two-pronged approach consisting of: (1) an "industrial" decision factory that approaches decisions with consistency and aims to reduce the Community Standards to bright-line rules, and (2) the use of artificial intelligence, which took down over 95% of content violating the Community Standards before it was even reported by a user.
However, it is difficult to train AI to appreciate the infinite spectrum of human nuance, particularly in instances of bullying, harassment, and hate speech. AI does not give a person who has a decision made against them the feeling of being heard, nor does it provide public reasoning for its determinations. Nor can AI alone determine all of the values that should be encoded into the detection algorithms, because, for example, what constitutes hate speech varies around the world. Among the benefits of outsourcing this role to an independent body like the Facebook Oversight Board (FOB), then, are "greater transparency and reason-giving provided by Facebook employees and policy-makers within the content moderation ecosystem."
Since the FOB will be resolving disputes revolving around Facebook's exercise of power over free speech and expression, Douek argues that the FOB will ultimately be resolving disputes more analogous to public law. Thus, the FOB's decisions will need to take into account a conception of the "public interest," rather than just each immediate case. To achieve this balance, the FOB will adopt a "Weak-Form Review" as a "Judicial-Style Check on Policy Making": it will only review individual decisions under the Community Standards and "will not decide cases where reversing Facebook's decision would violate the law."
Moreover, Douek highlights two potential limits to the legitimacy of the FOB. First, Facebook will always retain the "power to overrule" the decisions made by the FOB. Second, there will be some difficult cases where the FOB will not have a "right answer." Despite these limitations, the FOB can be a valuable way to fill the loopholes and blind spots in Facebook's policies. Many of Facebook's policies have been made haphazardly in response to high-profile controversies and scandals. Having a judicial-style check will give the FOB a chance to account for a range of perspectives and an opportunity to review the practical and unintended consequences of a rule's application. This will lead Facebook to have better policies, and will also encourage users to take responsibility for their actions: knowing they are accountable and will be given reasons for the decisions made against them, users will likely avoid the conduct that led to such decisions.
Although the FOB will face challenges in providing practical public reasoning for its decisions, given the lack of global norms and justifications governing online free speech, it remains an important and promising innovation. By attempting to regulate and set standards for online discourse, it could become an independent source of widely accepted free speech norms.
Madiha Chhotani & Meredith Richards