Hate speech on Facebook is here to stay – Quartz
Our team is the first group of Australian sociologists to receive funding through a Facebook content policy research award, which we used to investigate hate speech on the pages of the LGBTQI+ community in five Asia-Pacific countries: India, Myanmar, Indonesia, the Philippines and Australia.
We looked at three aspects of the regulation of hate speech in the Asia-Pacific region over an 18-month period. First, we mapped hate speech law in our case study countries, in order to understand how this problem could be legally countered. We also examined whether Facebook’s definition of “hate speech” includes all recognized forms and contexts for this disturbing behavior.
Additionally, we mapped Facebook’s content regulation teams, discussing with staff how company policies and procedures were working to identify emerging forms of hate.
Even though Facebook funded our study, it said that for privacy reasons it could not give us access to a dataset of the hate speech it removes. We were therefore unable to test how effectively its internal moderators classify hate.
Instead, we captured posts and comments from each country’s top three LGBTQI+ public Facebook pages, to find hate speech that had been missed by the platform’s artificial intelligence filters or by human moderators.
Page admins feel let down
We asked the admins of these pages about their experiences with moderating hate and what they thought Facebook could do to help them reduce abuse.
They told us that Facebook would often dismiss their hate speech reports, even when a post clearly violated its community standards. In some cases, posts that had originally been removed would later be reinstated on appeal.
Most page admins said the “flagging” process rarely worked and found it exhausting. They wanted Facebook to consult them more, to get a better sense of the types of abuse they see and why it constitutes hate speech in their cultural context.
Defining hate speech is not the problem
However, during our study, we were encouraged to find that Facebook broadened its definition of hate speech, which now encompasses a wider range of hateful behavior. It also explicitly recognizes that what happens online can trigger violence offline.
It should be noted that in the countries we focused on, “hate speech” is rarely specifically prohibited by law. We found that other regulations, such as cybersecurity or religious tolerance laws, could be used to tackle hate speech, but instead tended to be used to suppress political dissent.
We concluded that Facebook’s problem is not with defining hate, but with being unable to identify certain types of it, such as hate posted in minority languages and regional dialects. It also often fails to respond appropriately to user reports of hateful content.
Where the hate was worst
Media reports have shown Facebook struggles to automatically identify hate published in minority languages. It has also failed to provide training materials to its own moderators in local languages, although many of them come from Asia-Pacific countries where English is not the first language.
In the Philippines and Indonesia in particular, we found that LGBTQI+ groups are exposed to an unacceptable level of discrimination and intimidation. This includes death threats, the targeting of Muslims, and threats of stoning or beheading.
On the Indian pages, Facebook’s filters failed to catch the vomiting emojis posted in response to gay wedding photos, and the company dismissed some very clear vilification reports.
In Australia, by contrast, we found no unmoderated hate speech, only other types of insensitive and inappropriate comments. This could indicate that less abuse is posted, or that moderation in English, whether by Facebook or by the page admins, is more effective.
Similarly, in Myanmar, LGBTQI+ groups experienced very little hate speech. But we are aware that Facebook has been working hard to reduce hate speech on its platform there, following the platform’s use to persecute the Rohingya Muslim minority.
Facebook has taken some important steps towards combating hate speech. However, we are concerned that COVID-19 has forced the platform to become more dependent on machine moderation, at a time when it can only automatically identify hate in about fifty languages, even though thousands are spoken every day across the region.
What we recommend
Our report to Facebook offers several key recommendations to help improve its approach to tackling hate on its platform. Overall, we urged the company to meet more regularly with persecuted groups in the region, so that they can learn more about hate in their local contexts and languages.
This must happen alongside an increase in the number of its national policy specialists and internal moderators with expertise in minority languages.
Mirroring its efforts in Europe, Facebook must also develop and promote its trusted partner channel. This provides visible, officially recognized partner organizations through which people can report hateful activity directly to Facebook during crises such as the Christchurch mosque attacks.
More generally, we would like to see governments and NGOs cooperate to set up an Asian regional hate speech monitoring trial, similar to one organized by the European Union.
Following the EU’s lead, such an initiative could help identify urgent trends in hate speech in the region, strengthen Facebook’s local reporting partnerships, and reduce the overall incidence of hateful content on the platform.