Governments all over the world blame digital technologies for various social problems such as fake news, lynchings and electoral manipulation.
This is a truly perverse situation because these governments are often run by political parties that encourage their supporters, in subtle and not-so-subtle ways, to participate in these very activities. In some cases, the official information technology cells of political parties are directly involved.
Thus, many political entities end up indulging in behaviours they officially condemn – creating fake news and actively propagating it through their websites, Facebook pages, WhatsApp campaigns and Twitter handles – and, in turn, blaming technology companies for it. In addition, we have armies of internet trolls who unleash threats of rape and murder upon political and ideological opponents, with overt or tacit support from political entities, using these very apps.
It is reasonable to expect technology companies to put ethical policies and regulatory mechanisms in place to manage the hosting and transmission of controversial material. However, absolutist expectations that they should be held responsible for “everything” are often promoted by official agencies.
The “technology-as-a-villain” narrative deflects attention from the bigger issue of identifying and punishing those who manufacture audio-visual lies, deliver hate speeches and carry out lynchings. This blame-shift suggests that the buck stops with technology, and that people will go about doing their dirty work in any case.
It also makes for convenient “nationalist” sentiment: a “foreign” company is messing around with “us” by enabling these nefarious activities. Technology then becomes another prop used to normalise social polarisation. Deeper still, this masking renders invisible the fundamental question of why so much polarisation and hatred for the “other” exists in the first place – be it towards an immigrant or a person of another faith.
What kind of social and economic policies have resulted in this state of permanent civil strife?
It almost seems as if everything was going fine until the internet came along and erected these hate-spewing machines. True, the internet made it easy for everyone to “publish” whatever they wished, gave them anonymity, destroyed empathy for those who held different views, spread information at great speed and made it much harder to track who said what.
Yet, the larger truth is that it is real people who manufacture alternative facts with clear motives to deceive. It is real people who are the bullies and trolls.
The WhatsApp problem
WhatsApp, a messaging app, is used by more than 220 million people in India. Its reach is unparalleled and people tend to implicitly trust messages that are sent or forwarded by known contacts.
The Indian government threatened WhatsApp with legal action after hoaxes on the app led to lynchings. “When rumours and fake news get propagated by mischief-mongers, the medium used for such propagation cannot evade responsibility and accountability,” it said. “… If [WhatsApp] remains a mute spectator, [it is] liable to be treated as an abettor and thereafter face consequent legal action.”
WhatsApp responded by limiting the number of chats to which a message could be forwarded, and by flagging such messages as “forwarded”. But it expressed its inability to monitor content, since chats are end-to-end encrypted and not readable even by WhatsApp itself.
The government then demanded, in the name of national security, that the encryption of WhatsApp be broken so that content could be shared with state agencies. The company refused, but stated that it plans to run “long-term public safety ad campaigns” and “news literacy workshops” to stem the flow of misinformation.
But how effective are these measures in the face of hate-crazed partisans who forward sensational, bias-affirming content without a second thought? Is it realistic to ascribe to such persons any desire to undertake fact-checking?
Even harder to deal with are entities that spread fake news and hateful material purposefully, to polarise the masses for political gain. While the State demands that encryption be broken, political parties that are part of the government blatantly use the same encryption as a shield, behind which opponents and watchdogs cannot see what misinformation is being spread. So even if messages became more traceable on WhatsApp, the lack of political will to apprehend the culprits would render all of it redundant.
The WhatsApp problem is compounded by two other factors in India:
1. The issue of scale: There are tens of thousands of WhatsApp groups and a severe shortage of trained police officials who can follow up on complaints. It is not easy to locate the origin of messages. Even when fake information spreading on WhatsApp groups leaks into the larger public domain, fact-checkers run out of steam because of the sheer quantum.
2. Micro-targeting: Many groups have been created using careful data analytics, resulting in micro-targeting and hyper-segmentation (e.g. people of a certain caste or community in a specific region). Such a group is likely to share certain propensities across its members – it is essentially an echo chamber, responsive to specific types of messages, which can then be customised by, say, political parties to evoke the “right” emotions.
Such content is usually hyper-local, in local languages, and circulated within a “closed” group, so it is not accessible for scrutiny. Unlike national-level campaigns, whose content may leak out, local campaigns – especially in rural areas and small towns – focus on rumours and on religious, caste and local disputes: often petty matters, but with a huge potential to catalyse emotions.
WhatsApp is not evil. But in a particular socio-economic context, it can be deadly.
In many European countries, it remains just a tool for communication – a reliable SMS extension over WiFi. In Germany, however, where the Nazi virus still lingers, it is used effectively to spread Nazi propaganda. In Brazil and India too, it serves as a toxic tool. Even if we got rid of it, something else, like Telegram, would appear as a substitute.
Whatever further measures are taken to make WhatsApp more “accountable” – even rapid access to fact-checks – those with a stake in fake news will find the means to beat them. Many political parties have already equipped themselves to bypass the restrictions imposed by WhatsApp.
They have started sending messages by copy-pasting them, and have appointed more people to distribute messages manually. And while all this carries on, WhatsApp puts out ads on radio, on television and in print about how the medium should be used for spreading happiness, not rumours!
Facebook, Twitter and YouTube
Now look at a different breed of digital technologies – advertisement-driven platforms like Facebook, Twitter and YouTube. Here, the problem of content moderation is easier because it is not hobbled by encryption, and the content is served publicly. Fake news or hate-speech videos can therefore be removed easily. But two issues emerge.
The first is the position, traditionally taken by these platforms, that they resemble a town square – anyone can come and deliver their speech; who comes and what they say is not something the platforms control. This “freedom” derives from Section 230 of the US Communications Decency Act, 1996, which gives websites broad legal immunity: platforms cannot be sued for what users post, even if they act like publishers by editing or moderating posts.
However, the US and the world have since moved away from this position, and platforms are now expected to do some policing to meet community standards – preventing child pornography, hate speech and other harms, including the “hacking” of elections by external agencies.
A platform’s “liability for third-party content under Indian law is limited to the post-facto removal of illegal content upon receipt of a government or judicial order specifying the illegal content.” This legal liability framework does not impose any other responsibilities on the platforms, but provides them the “freedom to police content through their private ordering mechanisms.”
This gives rise to the second issue – that private corporations now use their own “discretion” to make judgement calls on what is bad or undesirable. While it may be fine for a platform to delete a video that propagates fake news, deciding what constitutes hate speech or obscenity is far more problematic.
Typically, such judgements should be made either by courts or by “autonomous” regulatory bodies. When they fall to private entities instead, these decisions can be dictated by anything from state-enforced repression of free speech to the heckler’s veto, i.e. mob-enforced censorship.
The expectation that such platforms should moderate and check content arises from the simple fact that these companies rake in humongous amounts of revenue through advertising. Google tracks nearly every move we make on the web, including what we search for, and then shows us search results as well as embedded advertisements that match our “profile”.
Facebook analyses every single post for thousands of attributes and records likes and shares, all in a bid to determine your preferences and how you are likely to respond in any given situation. It then offers advertisers the ability to target advertisements with precision.
These business models create a natural tendency to host anything that increases traffic – clicks and views – without bothering about who is advertising what. So fake news, hate speech and violent videos all exist on these platforms and, indeed, pull in lots of internet traffic. It is these models that have brought about the threats to democracy.
The case for regulating micro-targeting for political gain
We need regulations and rules that would prevent reprehensible material from appearing on these platforms.
Facebook allowed dishonest advertisements from various Brexit lobbyists and sold advertising space to Russian agencies during the 2016 US presidential election. Twitter also facilitated the spread of fake news and misinformation during that election. But it was real people who created the content and paid for the advertising.
In their well-researched book Network Propaganda, Yochai Benkler, Robert Faris and Hal Roberts state:
Actors who want to get people to do things – usually to spend money, sometimes to vote or protest – value that service. Describing this business as ‘advertising’ or ‘behavioural marketing’ rather than ‘micro-targeted manipulation’ makes it seem less controversial.
But even if you think that micro-targeted behavioural marketing is fine for parting people with their money, the normative considerations are acutely different in the context of democratic elections. That same platform-based, micro-targeted manipulation used on voters threatens to undermine the very possibility of a democratic polity.
That is true whether it is used by the incumbent government to manipulate its population or by committed outsiders bent on subverting democracy.
These micro-targeting-based mobilisation strategies – whether on WhatsApp or on Facebook – reveal a profound change in how political appeals are made to potential voters. Political campaigning is being transformed from a public activity into a personalised, custom-made, private conversation between a political party (or leader) and its constituents. It will only become harder to ascertain what untruths these interactions contain.
UK parliamentary reports have called for comprehensive regulations to keep political advertising fully transparent and public, and to forbid micro-targeting altogether. They say there cannot be situations where “Only the voter, the campaigner and the platform know who has been targeted with which messages” and “Only the company and the campaigner know why a voter was targeted and how much was spent on a particular campaign.”
The famous MIT study titled ‘The spread of true and false news online’, published in Science in March 2018, revealed something worrisome:
“Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories. The effects were more pronounced for false political news than for news about terrorism, natural disasters, science, urban legends, or financial information…
Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.”
Meanwhile, we in India can pat ourselves on the back and be proud that we finally top a global index – number one in disseminating fake news.
Anurag Mehra teaches engineering and policy at IIT Bombay. His policy focus is the interface between technology, culture and politics.