New Delhi: New research suggests that the spread of fake news in India that sparks mob lynchings is driven largely by “reasons of prejudice and ideology” rather than by “ignorance or digital literacy”.
“It can be seen…that assuming most misinformation spreads through rural and/or illiterate users and targeting functional digital and information literacy interventions primarily at these groups would be both inaccurate and ineffective,” a new qualitative study put out by researchers at the London School of Economics (LSE) notes.
This goes against, in part, the conventional wisdom that all WhatsApp needs to do is conduct a series of workshops across the country to impart digital literacy and teach users how to spot fake news.
Furthermore, a “typology” of users constructed by the study also suggests that if a WhatsApp user is “male, urban or rural, young or middle-aged, technologically literate, Hindu, upper or middle class”, they are more likely to share (certain types of) fake news or hate-speech.
“Some user narratives in our fieldwork go as far as to suggest that this type of technologically-literate, male, Hindu user is also more likely to create and administer the groups responsible for ideologically charged misinformation, disinformation and hate-speech on WhatsApp in the first place,” the study notes.
“On the other hand, if a WhatsApp user is lower caste, Dalit, or Muslim and/or a woman and/or rural, particularly with lower levels of technological literacy, then such a user is less likely to create and curate and unlikely even to forward ideologically-charged misinformation and disinformation,” it adds.
This research is part of a group of 20 academic projects that were funded by WhatsApp in 2018 in response to widespread criticism that the company was doing little to examine the role it played in the spread of fake news and misinformation in a wide range of fields.
The LSE study, which conducted extended qualitative interviews with over 250 users across four states (Karnataka, Maharashtra, Madhya Pradesh and Uttar Pradesh) in 2019, doesn’t seek to make sweeping generalisations but is instead more interested in examining the “social and psychological formation of ‘WhatsApp vigilante’ groupings”.
WhatsApp lynchings
In the last two years, a string of murders and lynchings have been linked by the mainstream media to messages spread on the Facebook-owned messaging application.
Not all of the killings are the same – some are over Muslims and cattle-smuggling, while others revolve around rumours of child-snatching – but all of them have nevertheless thrust WhatsApp into a controversial debate over security and privacy.
In July 2018, the Narendra Modi government specifically took aim at WhatsApp’s role in the issue and asked it to curb the spread of hate speech and identify users who were responsible for spreading rumours.
When it comes to spreading fake news or rumours, one school of thought has advocated that some of this is due to a lack of digital literacy (‘people are unaware of what they are forwarding’) or a lack of education (‘ignorant villagers believe in ignorant things’).
The LSE study – which is authored by Shakuntala Banaji, Ram Bhat, Anushi Agrawal, Nihal Passanha and Mukti Sadhana Pravin – strongly argues that a section of rural and urban upper and middle-caste Hindu men and women are predisposed to simply believe disinformation against discriminated groups.
It notes:
“A key finding is that in the case of violence against a specific group (Muslims, Christians, Dalits, Adivasis, etc.) there exists widespread, simmering distrust, hatred, contempt and suspicion towards Pakistanis, Muslims, Dalits and critical or dissenting citizens amongst a section of rural and urban upper and middle caste Hindu men and women.
WhatsApp users in these demographics are predisposed both to believe disinformation and to share misinformation about discriminated groups in face-to-face and WhatsApp networks. Regardless of the inaccuracy of sources or of the WhatsApp posts, this type of user appears to derive confidence in (mis)information and/or hate-speech from the correspondence of message content with their own set of prejudiced ideological positions and discriminatory beliefs.” [Emphasis added].
Education and media literacy levels don’t really matter in this regard, the study found through its focus groups with WhatsApp users.
“Particularly amongst well-educated users who are aware of the need for a politically correct stance on disinformation, we found a tendency to position themselves as alert, ethical, responsible and savvy media users. However…even very educated and media literate users are often not aware of the contradictions in their beliefs and behaviours,” it says.
WhatsApp ‘reporters’ and group behaviour
The internal dynamics of a WhatsApp group also create the necessary social infrastructure for disinformation to thrive.
For instance, the study notes that in its fieldwork it came across the role of a ‘WhatsApp reporter’ in many groups – a few members who want to “post first” or “forward first”. These people do so in the hope of gaining social capital and acquiring a reputation for being very knowledgeable and informed.
While these users may not be ideologically biased, they are often less concerned with the reliability of the news they forward, yet ironically retain a reputation for being accurate and well informed.
“Positioning themselves as amateur reporters in the context of their own communities, the users who want to ‘post first’ or ‘forward first’ are less concerned with reliability or the potential to start dangerous rumours than with immediacy and impact. Our analysis was also confirmed by other users who commented on those in their groups who always “post first” or from whom many forwards emanate, suggesting that the proportion of inaccurate or false information might be larger from these people but that they still had a reputation as very knowledgeable and informed,” the study notes.
“Our analysis of the data suggests that for many amateur “WhatsApp reporters”, the internet appears to be a vast repository of stock messages. Given the limited time in which to gain first mover advantage, they take what they can get. All of this connects to the strong affective element at play in the use of WhatsApp for the circulation of content that is apparently informational. Amidst the flow of hundreds of messages, the ones which stand out are those that convey a sense of immediacy, and those that can shock,” it adds.
Other behaviour that plays a role in spreading misinformation is the lack of “reflexivity” in “receiving, decoding and forwarding information and misinformation”. Or, put simply, forwarding messages without checking what the actual information is about.
“They claim that the preview is sufficient to give them a sense of the value of the message and its importance,” the study notes.
Another form of WhatsApp messaging that is closely linked to this behaviour is blind bulk-forwarding of messages.
As the research notes, sheer exhaustion simply leads to bulk deletion and bulk forwarding of potentially dangerous messages:
“The cognitive, physical and emotional labour that would be required to process, and decide whether to respond, amend, delete, forward-to-many or forward-to-some the bulk of messages received on any given day is significant – and in some cases would (and does) comprise a full-time, paid job. In the face of such relentless flows of sayings, greetings, prayers, comments, information, misinformation and disinformation, ordinary users who are not paid to administer social media spaces demonstrate two key tendencies associated with the need for speed – bulk deletion and/or bulk forwarding of messages.”
What’s the solution?
The research project suggests a mixture of regulatory and technological steps – but warns grimly that even current forwarding restrictions are being circumvented in various parts of the country.
“Further, we have learnt that to circumvent the restrictions on how many forwards at a time, and how many members are allowed in groups, many users are downloading and using outdated and/or unauthorised versions of the app (such as WhatsApp GB and WhatsApp Plus) which are available on Android operating systems which enable them to bypass some of the recent changes in the application,” it notes.
The need of the hour, according to the study, is a set of methods that will allow WhatsApp to “identify and block the phone numbers” of users who are responsible for posting hate-speech, and to take down their posts.
Secondly, it recommends that the Facebook-owned company set up a “beacon feature” that can broadcast a warning or advisory to users in specific locations about specific issues – say, an area in which rumours of vigilante violence, cattle-killing or child-kidnapping have been shown to spread within minutes.
“Finally, we also recommend that WhatsApp introduce a mechanism whereby users, especially women and sexual minorities are able to report hate speech, misogyny, sexual violence etc. on a separate fasttracked route in partnership with local or state-level law enforcement,” the study says.
Note: This article has been updated at 5:16 PM to add all the names of the study’s authors.