Facebook has published its latest Community Standards Enforcement Report, covering the content removals and enforcement actions the platform carried out over the second quarter of 2021.

Among the report’s more notable data points are some significant trends, as well as advances in Facebook’s detection systems. First off, Facebook says that it removed more than 20 million pieces of content across Facebook and Instagram for breaching its policies on COVID-19 misinformation, a critical area of focus right now.

“Because of their repeated violations of our policies against spreading COVID-19 and vaccine misinformation, we have removed over 3,000 accounts, pages, and groups. We also displayed warnings on more than 190 million pieces of COVID-related content on Facebook that our third-party fact-checking partners rated as false, partially false, altered, or missing context.”

Countering these elements is critical as the vaccine roll-out continues, and given Facebook’s massive reach, this is a key area of focus for the platform. Facebook has, of course, been widely criticized for facilitating the spread of health misinformation in the first place, but these numbers show that it is actively working to counter such material on a variety of fronts, with the aim of limiting its impact.

In terms of other significant trends, Facebook says that its efforts to combat hate speech are continuing to deliver results:

“The prevalence of hate speech on Facebook has decreased for the third quarter in a row. In the second quarter, it was 0.05 percent, or 5 views per 10,000 views, down from 0.05-0.06 percent, or 5 to 6 views per 10,000 views, in the first quarter.”

Facebook hate speech stats

At Facebook’s scale, five views per 10,000 views still means that a considerable amount of hate speech is getting through to users, but, as the trend shows, Facebook’s systems are improving, which should further reduce the impact of hate speech over time.

At the same time, it is worth noting that Instagram has seen an increase in hate speech removals.

Instagram hate speech stats

Taking more action on these fronts is a positive development, but it may also point to a shift in the type of material being shared, which is also reflected in the steady rise in detections of dangerous organizations in the app.

Instagram dangerous organizations stats

As the chart above shows, it is encouraging to see Instagram taking more enforcement action in these categories over time, but the rise may also indicate that usage patterns in the app are changing, which could be a bigger concern for Instagram in the long run.

Another concerning element is the significant increase in removals of suicide and self-injury content.

Facebook suicide and self injury stats

According to Facebook, this increase is largely the result of a technical update that enabled its systems to ‘go back and detect infringing content we missed’. Even so, it is a concerning trend to keep an eye on, and it will be worth watching whether these numbers continue to rise.

Also, fake accounts on Facebook are still at 5% of overall profile numbers – the same rate that Facebook has been reporting for years.

Facebook says that it took action against 1.7 billion fake profiles in the period.

Simply put, despite Facebook’s ongoing efforts to detect and remove fake accounts, there are still a lot of them. And based on the data, the relative rate of fake accounts doesn’t appear to change, even as Facebook’s detection capabilities improve, so there will seemingly always be many millions of fake profiles active on Facebook at any given time.

You would think this is something that could be solved, but evidently, it’s not that simple.

Nonetheless, Facebook says that its automated detection systems are becoming more effective:

“In 12 out of 13 policy areas on Facebook and nine out of 11 policy areas on Instagram, our proactive rate (the proportion of content we took action on before a user reported it to us) is greater than 90 percent.”

So Facebook is limiting exposure to violating and offensive content, even if it’s not improving its detection rates across the board. What that means in practical terms is hard to say, because for the most part, Facebook’s raw enforcement numbers are fairly steady in most areas, even as its overall usage continues to grow.

You would assume, then, that overall enforcement numbers would rise in step. But aside from the elements noted above, most of the other stats have remained steady.

That could mean Facebook is getting better at detecting violations, or it’s getting worse, or it’s staying about the same. It’s impossible to say definitively, but either way, Facebook is taking action on a large amount of content, and it’s detecting many violations before anyone ever sees them.

That seems like positive news, but there’s nothing in this data that points to any major improvement in the most recent quarter.

You can read Facebook’s full Community Standards Enforcement Report for Q2 2021 here.