# Meta’s Oversight Board Received 400K Appeals in 2023

Meta’s Oversight Board has always been an experiment, an example of how external, independent oversight of social platform moderation decisions could provide a more equitable way forward for social media apps.

Yet, four years on, it doesn’t seem like anyone else is going to take up the cause, despite the Oversight Board influencing various Meta policies and outcomes, which have improved the company’s systems for dealing with common issues and concerns.

Which again underlines why social platform moderation is so difficult, and why, without uniform rules in place, to which all platforms need to adhere, the process will continue to be a mishmash of approaches, with varying levels of effect.

Today, under the cloud of recent funding cuts, the Oversight Board has published its annual report, which shows how its decisions have impacted Meta policies, and what it’s been able to achieve, on a small scale, in the social moderation space.
As per the Board:
“2023 was a year of impact and innovation for the Board. Our recommendations continued to improve how people experience Meta’s platforms and, by publishing more decisions in new formats, we tackled more hard questions of content moderation than ever before. From protest slogans in Iran to criticism of gender-based violence, our decisions continued to protect important voices on Facebook and Instagram.”

Indeed, according to the Oversight Board, it issued more than 50 decisions in 2023, overturning Meta’s original ruling in around 90% of cases.

Which, at Meta’s scale, really isn’t that much. But still, it’s something, and those decisions have had an impact on Meta’s broader policies.

Yet, even so, the Board is only able to operate at a small scale, while demand for reviews of Meta’s moderation decisions remains high.

As detailed here, the Board received almost 400k appeals in 2023, but was only able to issue 53 decisions. Now, that’s not a direct measure of impact, as such, because, as the Board notes, it aims to hear cases that will have broader relevance, so any changes made as a result will reach beyond each case in isolation. For example, a single policy change could affect thousands of these cases, and see them resolved, or addressed, without having to hear them individually.

Even so, 400k appeals, four years in, shows that there’s clearly demand for an umpire or arbitrator of some kind to hear appeals against platform moderation decisions.

Which is the whole point of the Oversight Board project, in that it’s intended to show regulators that an external appeals process is needed, in order to take these decisions out of the hands of Meta management. Yet nobody seems to want to push this case. Lawmakers and regulators continue to hold committee hearings and reviews, but there’s been no significant push to create a broader, more universal ruling body over digital platform decisions.
That still seems like the better, more equitable path, but at the same time, you’d also effectively need bodies of this kind in every region, in order to cater to different legal principles and approaches.

That seems unlikely, so while the Oversight Board has arguably proven its use case, and the value of having independent review of moderation calls and processes, it seems unlikely to change the broader thinking of government-appointed groups on such.

And with the Board losing funding, and scaling back, it seems like eventually it will be gone as well, leaving these decisions solely in the hands of platform management. Which everyone will complain about, and CEOs will continue to be hauled before Congress every six months or so to answer for their failures.

Yet, the solution is seemingly too complex, or too risky, to implement. So we’ll just rely on fines and public shaming to keep the platforms in line, which historically hasn’t been effective.

And in the fast-evolving age of AI, this seems like an even less workable situation, but again, despite the Oversight Board showing the way, nobody seems to be taking up the mantle as yet.
You can check out the Oversight Board’s full 2023 report here.
Andrew Hutchinson