Facebook’s Uneven Enforcement of Hate Speech Rules in India Highlighted in New Study

A research report by advocacy group Equality Labs has concluded that the social networking giant has done little to effectively moderate speech that violates the company’s own guidelines.

New Delhi: Facebook failed to permanently delete hundreds of posts that targeted caste and religious minorities in India even after they were reported to the social networking giant, a new research report by a South Asian American human rights organisation has claimed.

Equality Labs, an advocacy group that focuses on technology and human rights, spent four months in 2018 studying what happens to such posts after they are reported.

A team of 20 international researchers – including Dalits, Muslims, Christians, Buddhists and others – systematically recorded 1,000 Facebook posts that they found to be in violation of the platform’s community standards.

Their findings? Over 40% of the posts that were removed after being reported were restored after an average of 90 days. An overwhelming majority of the restored posts were Islamophobic in nature.

The research group selected posts that they deemed to be characteristic of ‘Tier 1’ hate speech, which constitutes grounds for immediate removal from the platform.

Facebook defines Tier 1 hate speech as:

“Attacks, which target a person or group of people who share one of the above-listed characteristics or immigration status (including all subsets except those described as having carried out violent crimes or sexual offences), where attack is defined as any violent speech or support in written or visual form.

Dehumanising speech such as reference or comparison to:
– Insects
– Animals that are culturally perceived as intellectually or physically inferior
– Filth, bacteria, disease and faeces
– Sexual predator
– Subhumanity
– Violent and sexual criminals
– Other criminals (including but not limited to “thieves”, “bank robbers” or saying that “all [protected characteristic or quasi-protected characteristic] are ‘criminals’”)

Mocking the concept, events or victims of hate crimes, even if no real person is depicted in an image

Designated dehumanising comparisons in both written and visual form”

Despite these clearly stated guidelines, here is one example of a post that was initially removed but later restored:

Translation: “Those illegitimate children whose mothers took their salwars off after seeing swords in the hands of Mughals today proudly claim to be Muslim. Do you agree with Yogiji’s statement?”

“All community standards violations identified were reported to Facebook using the user reporting mechanisms and Facebook’s response systematically tracked. Through this approach, we created a data set of over 1000 violating Facebook posts, spanning 4 key Indian languages,” one of the researchers told The Wire.

The report states that, astonishingly, over 90% of the reported hate speech posts continue to exist on the platform, even though these posts advocate violence, use slurs and are characteristic of the Tier 1 hate speech standards mentioned above.

“By tracking Facebook’s response to our violation reports, we were also able to gain significant insights into Facebook’s moderation performance. Our review of a 1000+ moderation decisions suggests that there are significant issues with the moderation process as it affects India and makes Indian caste, religious, gender, and queer minorities as well as civil society activists and journalists extremely vulnerable on these platforms,” the researcher added.

Not enough languages

Another problem highlighted by the report – which is titled ‘Facebook India, Towards the Tipping Point of Violence’ – is that the platform’s hate speech guidelines have not been translated into the local languages commonly used in India, even though the organisation had engaged with Facebook on the issue of localisation on earlier occasions.

At present, the Facebook pages that lay out community standards in Indian languages often present just the headings in the regional language, while the rest of the text remains in English.

The report slams Facebook for its failure to protect users: “How can Facebook guarantee the safety of all of its users if the basic community standards are not available for all to read? … Safety cannot be an afterthought — it must be central to the production workflow.”

Casteism is another area that the study examines. According to the report, the rate of removal for reported posts was lowest in the casteism category.

Here’s an instance of a group that continues to exist on Facebook despite being reported repeatedly:

An example of content that targets a specific caste on Facebook. Credit: Equality Labs

Misogyny and posts that promote online violence against women continue to plague the platform, as do transphobic and homophobic posts. There are also posts targeting Christian minorities in India, the report notes.

“This report is a snapshot of our advocacy that was meant to uncover what is going on with their (Facebook’s) content moderation. The report is the beginning of a necessary conversation to allow more Indians more insight into how Facebook works, how so much hate speech has become normalised, and the categories of hate speech that are now commonplace. There is so much of this content in many of our languages that it was not hard to find; it is omnipresent. Our study was analysing Facebook’s response. Our secondary goal was to provide the content analysis,” said Thenmozhi Soundararajan, executive director of Equality Labs.

“This data set also provides a unique window into the type of problematic content that circulates on Facebook during the pre-election period in India, and the type of rhetoric and attacks that Indian minorities have become normalised against,” she added.

In a separate response to The Wire, Equality Labs said that although their research is only a small window into the larger problem of moderation, the findings raise critical concerns which they believe would warrant a thorough audit of the company’s moderation process in the country.

“This audit team must have clear competencies in caste, religious, and gender/queer minorities and include members of Indian minorities in its composition,” the organisation noted.

Mariya Salim is a researcher and women’s rights activist.