One Step Forwards, Two Steps Back: WhatsApp’s Use in Indian Elections

WhatsApp and other closed messaging platforms have proven to be a popular channel to circulate disinformation and hate speech with a view to gaining electoral advantage.

The Indian General Elections are a massive enterprise. A projected 950 million people will be eligible to vote in the 2024 elections, across 543 electoral constituencies[1], featuring dozens of national political parties and tens of thousands of election workers and party operatives. On this massive stage – elections to the world’s biggest democracy – what voters hear matters, and the landscape of political communication and media has been radically altered over the last decade.

Some more numbers – India has an estimated 760 million ‘active’ internet users[2], accessing the internet more than once a month. 400 million of those are active on WhatsApp – the messaging platform’s largest user base.[3] Several million others use alternative platforms like Facebook Messenger, Telegram and Signal. According to a study by the Reuters Institute, WhatsApp is the second largest, and Telegram is the fifth largest online platform for Indians to access news.[4] Flying under the radar of election authorities, media regulators and policymakers, these messaging platforms have now become a core feature of electoral communications and media in India.

Given its reach and popularity, it’s no surprise that political parties, candidates, campaign management firms and the plethora of other actors involved in understanding and winning over the Indian electorate have adapted their strategies to exploit WhatsApp’s potential in elections. This has meant that several familiar concerns around electoral media are now reflected in the use of messaging platforms – disinformation and hate speech are rampant, while a grey market in personal information fuels targeted propaganda.

However, even as its importance has grown, there is surprisingly little study or analysis of how WhatsApp and other instant messaging tools are influencing elections, or of the implications of the platform’s rising use. Moreover, there is little academic or political consensus on how legal or technological measures might address these issues. Despite the increasing influence of messaging systems, regulation and analysis of electoral influence through online platforms has focused on social media platforms typically characterised by their ‘open’, public or broadcast nature, as opposed to the ‘closed’ systems that messaging platforms represent.

If we want to make sense of how contemporary digital platforms are impacting electoral integrity and political communication, particularly in the Global South, we need to pay close attention to how messaging platforms are fundamentally altering media ecosystems, the dynamics of their use, and the challenges they pose for election authorities and media regulators. In this case study, I examine how WhatsApp’s nature as a closed, extensible platform, along with its rise as a core communications infrastructure, is shaping electoral communication practices in India; why existing regulations have failed to contend with closed messaging platforms; and how platform governance practices can begin to comprehend and tackle these issues.

From messenger to platform to super app? WhatsApp’s evolution in India

WhatsApp is a platform owned by Meta Platforms Inc., the US-based technology giant that also owns social media services like Facebook and Instagram. While initially released in 2009, the platform grew rapidly from the mid-2010s, owing in no small part to the growing internet infrastructure in countries like India, Nigeria, Indonesia and Brazil, which remain its largest user bases. As smartphone internet connectivity saw massive growth, so did WhatsApp, and it quickly became the most widely used communications platform in countries across the Global South.

This period of growth coincided with a major change in WhatsApp’s security infrastructure – in 2016, WhatsApp enabled end-to-end encryption by default on its platform[5], rendering it practically impossible to intercept communications shared between WhatsApp users – a move widely considered best practice for increasing the security and safety of online communications. Yet even as encryption improved communications security and trust, rolling out encrypted communications infrastructure at the flick of a switch (or in this case, through a software update) raised the hackles of law enforcement and national security agencies around the world. Encrypted communications lead to what some term the ‘going dark’ problem – an inability to monitor and intercept communications for surveillance purposes, and consequent challenges in investigating criminal activity or other unlawful conduct through the platform.[6] WhatsApp’s switch to end-to-end encryption meant, in effect, that law enforcement (or any other third party, including ISPs or threat actors) could not access private communications without access to a person’s device. End-to-end encryption also meant that communications on WhatsApp could not be moderated in the same way as on other platforms, as the platform itself has no way to monitor message content. As discussed later, this particular bugbear is repeatedly raised in discussions around regulating WhatsApp and other encrypted messaging apps, and presents unique challenges for regulating online disinformation and hate speech.
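To make that property concrete, here is a minimal sketch of end-to-end encryption using the open-source PyNaCl library. It is purely illustrative – WhatsApp’s actual system uses the far more elaborate Signal protocol – but it shows why any intermediary relaying messages sees only unreadable ciphertext:

```python
# A minimal sketch of end-to-end encryption using PyNaCl
# (pip install pynacl). Illustrative only: WhatsApp's actual
# implementation uses the more elaborate Signal protocol.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device;
# private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 5pm")

# A relaying server, ISP or interceptor sees only this ciphertext.
# Only Bob, holding his private key, can decrypt it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at 5pm"
```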

Another event, from 2014, is crucial to understanding WhatsApp’s evolution. That year, the social media firm Facebook (now Meta Platforms Inc.) purchased WhatsApp in one of the biggest technology acquisitions in history.[7] Since the acquisition, WhatsApp has transformed from a one-to-one communications application into a broader ‘platform ecosystem’. WhatsApp today is best conceptualised as a ‘platform’ – a system that allows for a range of activities between a diverse set of users, but whose ‘rules’ are established centrally, usually by a private body, and enforced through the technical and organisational architecture of the platform.[8] In the case of WhatsApp, decision-making rests with Meta, a corporate firm, which can unilaterally change policy, extend new technological features on the app, and determine the extent and means of its usage. Even as it facilitates interactions between its various users (who are often also differentiated by WhatsApp into business users and ‘regular’ users), it places itself as the intermediary between these interactions, primarily, as a for-profit firm, to extract rent from these activities. These ‘rents’ have taken different forms – including monetising commercial use of the platform through its APIs, but also controlling and monetising user data.[9]

Conceptualised as a platform, WhatsApp can also be studied through its ‘extensibility’ – the manner in which new features and services are made part of its core data infrastructure. For example, WhatsApp has gradually moved beyond its role as a one-to-one communications system by incorporating public communication and broadcasting features, including narrowcasting facilities like ‘private’ groups, where messages can be sent to up to 1024 individual accounts – a ceiling that has consistently risen.[10] Similarly, WhatsApp regularly changes its privacy policy and the nature of the information it shares with its parent company, Meta, and affiliated services like Facebook and Instagram.[11] As a platform, and as part of a broader data ecosystem within Meta, WhatsApp is able to leverage its position as a popular (in some cases, ubiquitous) messaging service to expand its network into newer markets and features – for example, through its recent foray into digital payments, where a payments infrastructure was rolled out to millions of users in India who had primarily been using WhatsApp as a messaging application.[12]

One step forwards, two steps back: WhatsApp’s use in Indian elections

If you were to open a database of fact-checked political misinformation circulating on WhatsApp during the 2019 Indian General Elections, you would find not only laudatory claims of the achievements of political parties and politicians, but also hateful, often communally divisive rhetoric – using violent imagery and language to denigrate members of different castes and religions.[13] Such rhetoric is familiar to anyone following electoral politics in India, where hateful and violent speech is increasingly relied upon to rile up the electorate. Unsurprisingly, WhatsApp and other closed messaging platforms have proven to be a popular channel for circulating disinformation and hate speech with a view to gaining electoral advantage.[14]

WhatsApp’s use in election-related political communication in India first came to widespread media attention during the 2019 General Elections. Reports noted that voters were turning to WhatsApp as a primary source of political news and information, and that political parties and their campaign teams were reaching potential voters by enrolling them in WhatsApp ‘groups’ and sending them a constant stream of election-related messages.[15] As per these reports, the messages consisted of a mix of regular campaign information and messages clearly intended to incite communal division, alongside disinformation targeting the leaders of opposing political parties. Reports also highlighted the circulation of specifically election-related disinformation – about particular candidates, or fake polls projecting victories for particular parties.[16]

First-hand accounts from former party officials reveal the strategies involved in electoral communications through WhatsApp. Singh’s account of working with the electoral communications team at the Bharatiya Janata Party (BJP), the party leading the current government in India, is particularly telling.[17] He describes how the BJP’s electoral propaganda machine functions as a professionalised, streamlined operation, channelling data from various sources to profile and target potential voters through its massive network of party ‘volunteers’. Data analysis and the use of personal data play a central role in collating lists of voters, classified according to caste, religion or other attributes that allow for easier targeting through WhatsApp groups. Personal information – including the names, phone numbers, National ID (Aadhaar) numbers and addresses of voters – is easily available online, often published by electoral bodies themselves as voter lists, or otherwise gathered and sold to parties by ‘data brokers’. This information is used to disaggregate lists of voters in a particular constituency into specific categories, targeted according to campaigners’ beliefs about the information most likely to appeal to them.[18]

Similar strategies have reportedly been used by other major parties, including the Indian National Congress, which aimed to create 300,000 WhatsApp groups to reach its base in the 2019 elections.[19]

These techniques of micro-targeting rely on data analytics capabilities often provided by private data analytics firms. A 2018 report on ‘big data’ analytics indicated how certain firms created electoral data repositories that can help in generating high-level electoral strategies as well as in targeting political communications to specific constituencies or demographics.[20] Indeed, reports indicate that the SCL Group – parent company of Cambridge Analytica, the firm at the heart of a major scandal involving the manipulation of voters on Facebook – may have been involved in building political parties’ capabilities to target voters in India as far back as the 2014 General Elections.[21]

The operation of voter targeting and the generation of propaganda, while relying on data analytics and mass messaging platforms like WhatsApp, also depends on large amounts of volunteer labour. A party volunteer – called a WhatsApp Pramukh, or WhatsApp ‘leader’ – is assigned to one or more lists to oversee the work of collating people into WhatsApp groups and ensuring a constant stream of pro-party messages. According to the BJP, around 900,000 such pramukhs were assigned to these tasks in the 2019 General Elections – a number that will surely increase in 2024. The generation of campaign information is also streamlined, through social media ‘war rooms’ and IT cells tasked specifically with monitoring social media, generating propaganda and creating dissemination strategies.[22]

The above examples indicate that electoral propaganda – hate speech and misinformation – including through WhatsApp, has become an increasingly professionalised activity within political parties, existing within a broader ecosystem of widely available personal data for behavioural targeting and enrolling a whole set of technologies – data analytics capabilities, social media and personal communication services.[23]

Given the centrality of WhatsApp to the media ecosystem in India, a few studies have attempted to understand the social and political implications of its use, including its impact on electoral politics. A study by Narayanan et al., based on information circulating on public WhatsApp groups (i.e., groups that are open to join via publicly available links), indicated significant amounts of ‘junk news’, as well as communally polarising messages, circulating on groups with links to major political parties, including the BJP and the Indian National Congress.[24] Research by Garimella and Eckles has shown how multimedia (text and video) content is more likely to achieve ‘virality’ and contribute to disinformation, providing further insight into which kinds of messages are more easily ‘platformed’ and how these contribute to campaign strategies.[25]

Some studies have also shown the limitations of current interventions against disinformation. For example, Reis et al. have shown the limited influence of fact-checking on the spread of certain kinds of political misinformation.[26] Badrinathan’s study of WhatsApp use in Indian state elections examines ‘ground-up’ interventions to educate individuals about disinformation received on WhatsApp, and finds that counter-information strategies can often be unproductive in countering propaganda.[27] Detailed studies of WhatsApp use in India show both how prevalent misinformation is and how difficult it is to reduce its spread. Beyond these few studies, however, researching the dynamics of WhatsApp usage in India can be particularly difficult owing to the platform’s closed nature, particularly when seeking to understand the scale and nature of the distribution of disinformation and similar viral communications.

Left on read: electoral integrity and the failure of platform regulation in India

Political and election-related online media in India, of the kind described above, is governed through overlapping regimes of private content moderation practices and legal rules. It is important to unpack how these rules interact with the practices of private messaging platforms, and what implications these governance regimes can have.

In general, the practices of online platforms, including messaging platforms, are governed through India’s Information Technology Act, 2000 (IT Act). Section 79 of the IT Act specifies that online ‘intermediaries’ that facilitate third-party communications are not generally liable for the content of those communications. However, this ‘safe harbour’ from liability is contingent on intermediaries following specific rules laid down by the executive through delegated legislation. In 2021, these rules were updated to specifically regulate the activities of social media platforms as well as messaging platforms like WhatsApp.[28]

These updated rules – the ‘IT Rules’ – pose substantial concerns for civil liberties and the rule of law. Two aspects are particularly concerning. First, Rule 4(2) specifies that messaging platforms must, upon receipt of a court order, “enable the identification of the first originator of the information” circulated on their services. This requires messaging platforms to build traceability features into their services – features that are incompatible with current standards for end-to-end encryption. According to WhatsApp, implementing traceability in this manner would compromise its ability to provide end-to-end encrypted communications.[29]
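To see the technical tension concretely: under end-to-end encryption, the platform never observes message content, so it cannot match copies of a forwarded message to find a ‘first originator’ unless clients attach extra identifying metadata. The sketch below (again using the PyNaCl library, and purely illustrative – it does not model any specific traceability proposal) shows that the same message, encrypted for two different recipients, yields unrelated ciphertexts that a server cannot link:

```python
# Why server-side tracing clashes with end-to-end encryption:
# identical plaintext, encrypted for different recipients with
# fresh random nonces, produces unrelated ciphertexts, so the
# platform cannot tell that two messages carry the same 'viral'
# content. Illustrative only (pip install pynacl).
from nacl.public import PrivateKey, Box

sender = PrivateKey.generate()
recipient_a = PrivateKey.generate()
recipient_b = PrivateKey.generate()

msg = b"the same viral forward"
ct_a = Box(sender, recipient_a.public_key).encrypt(msg)
ct_b = Box(sender, recipient_b.public_key).encrypt(msg)

# All the platform can compare is ciphertext, which never matches;
# tracing an originator would require clients to attach identifying
# metadata, weakening the guarantees encryption otherwise provides.
assert bytes(ct_a) != bytes(ct_b)
```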

Second, Rule 3(b) states that social media intermediaries – a definition that encompasses platforms like WhatsApp – must also comply with a host of content moderation rules, including notice-and-takedown rules for ‘fake’, ‘false’ or ‘misleading’ information identified by government agencies known as Fact Check Units. Rule 3(b) gives these executive bodies wide discretion to determine the truthfulness of content, and to force platforms to remove such information on pain of losing their safe harbour.

The Government of India’s response to these criticisms is that such regulations are necessary to prevent illegal and harmful speech on platforms. The government has claimed, for example, that the traceability requirement balances privacy interests in end-to-end encryption with law enforcement’s legitimate interest in accessing information about illegal activity – asserting that traceability can be technically implemented without undermining the encryption of messages themselves. Similarly, the government has claimed that Fact Check Units are necessary to take on the problem of online misinformation. The constitutionality and legality of these provisions is currently being adjudicated before various constitutional courts around the country, and the arguments put forward on either side indicate the difficulty of regulating online speech while maintaining the balance between freedom of expression, privacy, and safe and responsible communications online.[30]

That said, aspects of the IT Rules have affected WhatsApp’s content moderation practices. WhatsApp has established a tiered grievance redressal mechanism, which includes giving users the option to ‘report’ other users by forwarding WhatsApp the content of their messages, on which WhatsApp can then act. Since the release of the new Rules, it has also published transparency reports on its moderation practices, which indicate that it bans millions of users every month based on user complaints as well as ‘proactive’ measures to identify problematic content and accounts.[31]

Apart from media regulation, political-electoral messaging increasingly depends on targeting individuals based on personal information such as caste, gender and religion, combined with phone numbers to reach people through messaging platforms.[32] Personal information of this nature is a readily available commodity for data brokers and party agents to collate and combine into lists enabling targeted propaganda and electoral messaging – owing to a mix of lax security standards and the absence of data protection and privacy regulations that would give individuals control over the use of their personal information. While the Government of India has now adopted a law – the Digital Personal Data Protection Act, 2023 – its utility is untested, and its various exemptions (such as for ‘publicly’ available data) leave several loopholes that can be exploited by data brokers who make personal data available for targeted use in elections.[33]

Another relevant body of law relates to the regulation of communications specifically during elections. Elections in India are overseen by a constitutionally established and formally independent institution, the Election Commission of India (“ECI”), which, among other things, enforces the period of ‘electoral silence’ during which campaigning is not permitted, administers a ‘model code of conduct’ – a voluntary agreement to be followed by participating political parties – and monitors and sets limits on election expenditure, including so-called ‘paid news’, i.e. media coverage paid for by a candidate or party.[34]

Despite being the constitutional authority overseeing elections, the ECI has not been able to effectively regulate the use of social media or messaging platforms during elections. Shortly before the 2019 General Elections, the ECI established a ‘voluntary’ code of ethics for social media platforms[35], which, according to reports, was established in lieu of stricter legal regulation after lobbying by social media firms including Facebook.[36] Under this code, social media firms voluntarily agreed to take down content privately flagged by the ECI as violating legal norms. There were no regulatory mechanisms to monitor or ensure compliance with the code, nor any consequences for failing to adhere to it. The ECI’s approach towards online platforms also suffers from a lack of clarity about the scope of its powers over social media, particularly in the case of platforms like WhatsApp. As recently as the 2023 state elections in Karnataka, for example, the ECI was unclear on whether its powers to monitor the electoral silence period extended to campaigning over social media platforms.[37]

Apart from the legal and regulatory regimes, an important vector of governance of messaging platforms is the set of policies established and overseen by the platform itself. Indeed, platform policies and practices may be the most influential form of governance, particularly in the absence of clear regulation. WhatsApp, for example, has repeatedly claimed that it is cognisant of the problems of hate speech and disinformation on its platform, and has announced steps to deter such behaviour.

For example, WhatsApp has implemented limits on ‘forwarding’ content – including labelling certain kinds of content and preventing simultaneous broadcasts across groups.[38] It also deploys spam filters to block ‘bots’, or accounts that might be responsible for mass automated broadcasts. In the context of elections specifically, senior WhatsApp employees have previously said that they are aware of political parties ‘abusing’ WhatsApp to send automated messages, and that they would take steps to ban such abuse.[39] WhatsApp also claims to ban political parties or candidates that send WhatsApp messages to users ‘without permission’.[40] WhatsApp has also ‘partnered’ with accredited fact-checking organisations in India to make it easier for individuals to verify information they receive on the platform, by forwarding suspect content to specific fact-checker accounts.[41]

Closing the accountability gap for closed messaging platforms

Closed platform ecosystems like WhatsApp and Telegram have led to new patterns of media consumption and sharing. The available evidence – from India, Brazil[42], Indonesia[43] and Nigeria[44], as well as from diaspora communities[45] – clearly indicates that WhatsApp and other messaging services, particularly Telegram, increasingly provide the infrastructure for electoral propaganda and politically motivated hate speech to circulate. Even though it may not be possible to clearly ascribe specific developments in electoral politics to the rise of platform-mediated communications, it is clear these platforms are becoming prominent features of contemporary political and electoral media landscapes.

What lessons can we learn from the recent history of messaging platform usage during elections in India and elsewhere? What should policymakers, civil society and platforms keep in mind for the upcoming spate of elections around the world?

Policymakers must take action on several fronts. For one, all relevant stakeholders need to firmly commit to the right to privacy, including the right to private communications, and abstain from undermining encryption. A number of recent proposals – from policymakers, researchers, civil society and platforms – have suggested that platforms can offer ‘workarounds’ to encryption through mechanisms like client-side scanning, which would scan messages before they are encrypted in order to filter out unlawful or harmful speech. These proposals rehash age-old debates about allowing only ‘good actors’ access to private communications or (unchecked) power over content governance. Yet the counter-arguments remain the same – implementing such proposals can severely undermine communications privacy, the integrity of communications and the safe use of the internet, and open up very real possibilities of abuse. The Government of India must commit to not undermining encryption, and to protecting the constitutionally recognised fundamental right to privacy.

At the same time, we must acknowledge that closed messaging platforms are particularly appealing to bad actors spreading harmful and illegal communications, owing to the absence of meaningful content governance in such systems – whether legal oversight or internal governance mechanisms. Tackling the issue requires new ways of approaching closed messaging platforms as media infrastructure for different kinds of communication. In particular, messaging platforms like WhatsApp must take steps that acknowledge how their features provide the infrastructure for propaganda, disinformation and hate speech, particularly during elections, when trust in democratic institutions is vital. In doing so, WhatsApp and other closed messaging systems could develop distinct rules for communications intended to be widely broadcast and those intended for limited circulation. This is particularly important given how messaging platforms are increasingly used for broadcast purposes, blurring the lines between ‘social media’ and private messaging. WhatsApp, for example, could and should consider what effective limits on ‘viral’ forwards look like – limiting how many forwards a group can receive, limiting group size, or changing how (and how many) individuals are added to groups.

Platform interventions should also be guided by a legal framework, instead of operating entirely of their own accord. Voluntary arrangements are generally insufficient to ensure compliance; platforms must be bound by clear legal frameworks that allow election authorities to monitor platform compliance with election rules, including political ad spending and communication through features like the WhatsApp Business API. Platform regulation for closed messaging platforms could evolve to specifically empower counter-propaganda and fact-checking through independent bodies meeting specified criteria (instead of vesting the power to fact-check in government executive agencies). The Government of India should consider legislative mechanisms requiring platforms to share certain forms of data about their content moderation practices with regulators, researchers or the public – similar to the DSA Transparency Database recently implemented in the EU.

More broadly, regulation must also address the wider ecosystem that enables voter targeting, including how personal information is collated by campaigners – for example, through clearer rules on the collection, sharing and use of personal data, including information that is ostensibly ‘publicly’ available through voter lists. The Government of India must commit to implementing, enforcing and strengthening the privacy mechanisms in the Digital Personal Data Protection Act, 2023, as well as ensuring the privacy and security of public government databases, which have been the subject of several data breaches.

Apart from focussing on platforms themselves, election authorities are well placed to act against the broader ecosystem of electoral communications that uses messaging platforms as vectors of disinformation and hate speech. Election authorities must be empowered to act against parties that breach election rules on ‘paid news’ and electoral silence, including by monitoring electoral spending on campaigns that target voters through closed messaging platforms. Independent election authorities like the ECI must be empowered to act against disinformation and practices that undermine electoral integrity. In India, the ECI’s powers to seek information from, and monitor action taken by, closed messaging platforms during elections should be clarified, and the scope of its powers under the Representation of the People Act should be amended to strengthen its independence and allow it to take effective action against ‘paid news’ and other forms of electoral malpractice conducted through online platforms.

Finally, greater research into the nature of communication and media practices on closed messaging platforms needs to be encouraged. While some quantitative methodologies are evolving to study closed platforms at scale, and qualitative researchers are studying the issue through ethnographic work and policy analysis, there remains a large vacuum of research on communicative practices on WhatsApp that can feed into policies on electoral media, particularly from the Global South. Platforms themselves should do more to open up metadata and other information that may be useful to researchers – for example, about internal moderation practices or design interventions – in ways that preserve the privacy of their users.

This article was originally published by the Mozilla Foundation.

What Could the Future of Indian Data Protection Law Look Like?

Much of the new Bill will likely be based on the recommendations of the JPC, which, most agree, failed to respond to progressive critiques of the proposed legislation.

Almost three years since the introduction of the Personal Data Protection Bill, Parliament has decided to withdraw the legislation and start anew the process of drafting a law for data protection.

What can we expect from this ‘renewal’ of the data protection law? The rationale for scrapping the nearly five-year-old process of drafting the current Bill, according to Union IT minister Ashwini Vaishnaw, is to redraft the legislation in line with the recommendations of the Joint Parliamentary Committee (JPC), which were submitted in December 2021. Meanwhile, minister of state for IT Rajeev Chandrasekhar has suggested it was to ease the burden of compliance on small businesses.

One might even speculate that the government is stalling for time, given its own dismal record on data protection and expanding surveillance architecture. In any event, we can expect a redrafted law to look substantially different from its previous iterations.

Much of the new bill will likely be based on the recommendations of the JPC. The committee, set up to closely scrutinise the draft Bill, failed to respond to progressive critiques of the proposed legislation – including how it might safeguard against unchecked government surveillance and the unaccountable use of personal information, and how to bolster data protection regulation and digital rights in a manner cognisant of the threats posed by new forms of commercial surveillance and profiling.

Instead, the JPC’s recommendations do more to confuse and confound than to clarify the direction of a future privacy and data protection law in India.

First, the JPC’s recommendations substantially expand the scope of unchecked government surveillance, without responding to the concerns about privacy and human rights raised by the increasing use of data-based technologies in governance projects. In particular, the expansion of the government’s power to expropriate and regulate ‘non-personal’ data opens up new concerns about government surveillance which are not accounted for in the Bill.

It is increasingly clear that certain kinds of aggregate data can have privacy implications that are not grounded in personally identifiable information. Consider, for example, the ability to use demographic data (including gender, caste or religion) to discriminatorily target particular communities based on common traits, without identifying the individuals themselves.

Such targeting based on aggregate data already takes place in certain systems, like the Delhi Police’s so-called ‘predictive policing’ system, which disproportionately targets informal settlements and economically vulnerable groups. However, instead of examining the implications of using such data about vulnerable populations, the JPC seems keen to expand the Union government’s powers over the realm of such data, including the power to demand the sharing of non-personal data by companies or other data controllers (under Clause 91), under a law drafted for very different purposes.

Such access to ‘non-personal data’, without appropriate safeguards for its use, offers another avenue for expanding state surveillance, which can particularly affect marginalised populations.

Second, the JPC’s recommendations appear to privilege extractive business models based on profiling and surveillance over rights and democratic control over data. The period between the 2019 Bill and its eventual withdrawal showed that the large digital platforms which dominate our online environment are further consolidating market power in India, with newer data-based business models presenting greater threats to privacy.

However, the JPC report echoes the line promoted by the Union government for some years now, which characterises data generated about people as an ‘asset’ or ‘resource’ to be used productively for economic benefit. In establishing data access for economic growth as a policy priority for data regulation in India, the JPC’s recommendations strike a foreboding note for how structural data protection issues might be dealt with by the regulator (in this case, the Data Protection Authority, or DPA) – particularly where enforcing or maintaining strong privacy standards challenges dominant business models and the model of economic growth that the Bill promotes.

Indeed, lessons from the implementation of the General Data Protection Regulation (GDPR) in the EU indicate precisely that regulatory agencies need to be structurally protected from political influence. The proposed Data Protection Authority, however, lacks clear independence from the Union government, meaning policy choices in vogue might guide its hand, rather than a commitment to data protection and privacy.

Ultimately, despite the government’s rhetoric against ‘big tech’, privileging the economic value of data over structural, rights-based protections will end up entrenching the extractive business models widely prevalent in India today.

The JPC’s recommendations also fail to contend with the nature of privacy harms arising from emerging technologies characterised as ‘big data’ and ‘artificial intelligence’. Data collected online is increasingly the basis upon which important decisions about individuals and groups are made, in ways which are often intentionally obscured from the people they affect.

Corporations and governments now use data about people in incredibly complex ways, including for modelling and predicting attributes and individual or group behaviours, and for making statistical correlations between individuals. Machine learning and contemporary ‘artificial intelligence’ technologies compute over vast sets of data about people in order to profile them – to serve them advertisements and online content, or to calculate interest on loans, the risk of insurance fraud, the probability of health risks, the suitability of an employee… The list goes on.

However, individuals have little control over how this data is processed and what its implications could be, particularly once they have ‘consented’ to being tracked online or having data collected. As these technologies grow in influence, other jurisdictions, including the US, EU and China, are developing laws to mitigate their harmful effects.

Even while the JPC appears to recognise these concerns as an aspect of privacy regulation, its recommendations fail to respond to them appropriately – the only recommendation being that data controllers must be transparent about the ‘fairness of algorithms’, without specifying what such fairness implies, or how data subjects can respond to unfair, discriminatory or harmful processing of data by such technologies.

While the government dithers on introducing privacy legislation, the need for a robust regulatory regime has never been more apparent. Our social lives are increasingly enmeshed with data-processing technologies used by both private and public actors in ways which are neither transparent nor accountable to individuals – from the expanding police use of facial recognition to the use (and abuse) of worker data by private platforms.

The largely closed-door process of the JPC provided little assurance that the government values individual freedoms and rights over the claims of the state or large businesses to data. A revised Bill that follows its recommendations may end up privileging surveillance- and profiling-based business models, rather than providing the improved structural protections required for equitable participation in the digital economy.

Yet the renewal of the drafting process still keeps the door open for privacy- and rights-protective legislation. If it allows an opportunity to democratically shape the future of the digital economy in India, there may yet be a silver lining to the Bill’s unceremonious withdrawal.

Divij Joshi is a doctoral researcher at University College London.

India’s Digital Response to COVID-19 Risks Creating a Crisis of Trust

Building and deploying technologies without transparency, no matter how well-meaning, is a recipe for potential misuse and abuse.

A smartphone app used by over 60 million people. Drones in the sky tracking people’s movement and checking their temperature. Facial recognition cameras reporting to the police on whether someone has broken quarantine.

These are some of the ways in which Central and state governments have put technology at the forefront of the efforts against the COVID-19 pandemic in India. It may appear intuitive and appealing, at this time of crisis, to turn to technologies like the internet and artificial intelligence, which have been widely adopted and seen such tremendous social and financial investment in recent years. However, in the haste to deploy these digital solutions, there has been little introspection on implementing the legal and technical frameworks which can ensure that these technologies help, rather than hinder, public health and social trust.

The manner in which the Aarogya Setu app is being deployed is symptomatic of this lack of introspection. Aarogya Setu was designed as a ‘digital contact tracing’ app to inform users whether they are at risk of COVID-19 infection, help people self-quarantine and allow them to approach public health authorities. However, reports are emerging daily of how this app, which was intended to be ‘consensual’ and voluntary, is now being mandated by the Central government for everyone from government employees to delivery workers and construction workers.

Reports have also emerged of the arbitrary arrest and quarantine of a woman in Mumbai, allegedly based on information gathered from Aarogya Setu. The Central government also plans to use the application to control people’s mobility by issuing ‘e-passes’ on the app, and the CISF has suggested making the app mandatory for travel on public transport like the Delhi Metro.

This use of digital technology disproportionately affects poor and marginalised communities. In a country where an estimated 65% of the population does not have internet access – let alone a smartphone and constant power supply – making a smartphone app the focal point for determining people’s livelihoods will leave out the millions who cannot rely on internet connectivity or power access. Moreover, ‘social distancing’ is an impossibility for the millions dependent on daily wages for their livelihoods, and enforcing it through surveillance and punitive measures like forced quarantine will likely compound their difficulties.

In the absence of safeguards, these technologies often make decisions which are both incorrect and difficult to challenge or override. For example, if a person’s ‘health status’ is determined by Aarogya Setu instead of by clinical testing, an erroneous result can mistakenly subject them to limitations on their movement – possibly depriving them of daily wages – while leaving them with little prospect of understanding or overriding such a decision. This is apart from the fact that the technological claims of ‘digital contact tracing’ have been widely disputed the world over – countries with widespread smartphone adoption, like Singapore and Taiwan, have cautioned against relying heavily on digital contact tracing without backing it up with widespread testing and human contact tracing.

The same technologies now being encouraged to aid humanitarian efforts have historically been used by governments and corporations alike to enable undemocratic surveillance and control, in a manner which has left people with little control over their data and their lives in an increasingly ‘digital’ world. Incidents like the misuse of Aadhaar data by governments, or of Facebook data to influence voters, have created a serious crisis of trust in digital technologies.

This crisis of trust pervades and hampers our current ‘technology-first’ efforts to mitigate the pandemic. Lifting a nationwide lockdown and resuming a semblance of social life requires widespread trust and cooperation between and among individuals, communities and the government, particularly the public health system. If, instead, technologies are used to punish and stigmatise individuals, there can be no expectation of such cooperation.

Building and deploying technologies without transparency, or without involving communities in understanding their functioning and limitations, will deepen the crisis of trust between citizens and the government. Similarly, using these technologies to increase policing, surveillance and stigmatisation will mean that individuals may choose to hide their health status or travel history from health authorities, putting themselves and others at risk, and ultimately hampering the collective effort against the pandemic.

Mitigating this crisis of trust requires designing our legal and our technical systems in ways which prioritise democratic control and individual autonomy. Various legal systems are attempting to develop norms around the deployment of these technologies, focussing on building privacy and trust.

The European Union has encouraged transparent, voluntary, decentralised and privacy-preserving mechanisms like the open DP3T protocol, which ensure that the only data gathered from such apps is that which is strictly necessary for individuals to identify whether they have potentially come into contact with a COVID-positive individual, and which allow individuals to determine how to use such information, including whether to share it with public health authorities. The Government of Australia and independent lawyers in the UK have proposed temporary legislation to enhance transparency and trust in the use of COVID surveillance technologies – legal mandates which ensure the independent oversight of these technologies, that data is used only for public health purposes, and that the surveillance tools are dismantled once the pandemic is over.
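To illustrate what ‘decentralised and privacy-preserving’ means here: under DP3T, phones broadcast short-lived, random-looking identifiers derived from a secret key that never leaves the device, rather than reporting identities or locations to a server. The sketch below is loosely modelled on the ‘low-cost’ design in the DP3T whitepaper; the primitives and constants are simplified stand-ins, not the specification:

```python
# A simplified sketch of DP3T-style rotating identifiers.
# The real protocol expands the day seed with AES in counter mode;
# HMAC-SHA256 is used here only to keep the sketch dependency-free.
import hashlib
import hmac
import os

def next_day_key(sk: bytes) -> bytes:
    # Day keys form a hash chain: SK_t = H(SK_{t-1}).
    return hashlib.sha256(sk).digest()

def ephemeral_ids(sk_day: bytes, n: int = 96, id_len: int = 16) -> list:
    # Derive a day seed with a PRF, then expand it into n short
    # broadcast identifiers (roughly one per 15-minute window).
    seed = hmac.new(sk_day, b"broadcast key", hashlib.sha256).digest()
    ids = []
    for counter in range(n):
        block = hmac.new(seed, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        ids.append(block[:id_len])
    return ids

sk = os.urandom(32)            # SK_0 is generated and kept on the phone
ids_today = ephemeral_ids(sk)  # broadcast over Bluetooth through the day
sk = next_day_key(sk)          # yesterday's key can then be discarded
```

If a user tests positive, they upload only their old day keys; other phones re-derive the corresponding identifiers locally and check them against what they overheard, so no central server learns who met whom.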

In India, at present, there is no framework controlling the use of surveillance or decision-making technologies in this context, particularly when deployed by the police or within government systems. Instead, apps like Aarogya Setu rely on privacy policies which, in the absence of a legal framework, have little legal authority and are difficult for a common citizen to enforce. It is therefore imperative that governments at the state and Central level enact temporary legislation which governs and limits the deployment of these technologies. At the outset, any intervention based on digital surveillance must take into account the limitations of such technologies, and must be deployed strictly within public health systems rather than the security and policing apparatus. Legal frameworks establishing independent and routine audits of these technologies can help ensure their transparency and efficacy. A legal framework must also incorporate norms of non-exclusion, by ensuring that viable non-digital alternatives exist for any essential and pervasive digital intervention – including the identification of affected individuals for testing or medical intervention, or the control of movement and access to government services.

Legal frameworks must prioritise transparency by specifying what information about individuals may be collected, and by establishing a legal obligation to use such information only within the public health system, safeguarding against function creep. Finally, the law must establish the temporality of these measures through ongoing parliamentary oversight or a ‘sunset’ provision, ensuring that surveillance measures do not continue beyond the period of the pandemic.

Digital technologies have been a tremendous resource for society in this time of crisis – allowing communities to build solidarity and offer mutual aid, and allowing us to continue social ties in the midst of a pandemic and a lockdown. However, we must be vigilant against a misguided reliance on technologies which exclude and punish – a reliance that will imperil not only our responses to the pandemic, but the democratic values we cherish.

Divij Joshi is an independent legal researcher.

The UIDAI Has No Authority to Verify Indian Citizenship

Even if you take into account an off-hand and dangerously vague direction from the Supreme Court, it still has absolutely no legal authority to do so.

Over the last two days, media reports have emerged that the Unique Identification Authority of India (UIDAI), the body responsible for the implementation of the Aadhaar project, has issued notices to 127 people to “prove their claims to citizenship”.

The reports forced the UIDAI to release one of its classic denials, in which it claimed that “Aadhaar is not a citizenship document”, that “Aadhaar has got nothing to do with citizenship issues as such”, and that the notices were part of a routine inquiry to determine whether the Aadhaar numbers were “fraudulently obtained”.

The denial sparked legitimate outrage against UIDAI’s conduct, with politicians like Asaduddin Owaisi taking to Twitter to chastise the Aadhaar agency for its irresponsibility.

The central question at issue is the UIDAI’s legal authority to verify citizenship.

The powers of the UIDAI are drawn from, and limited to, the terms of the Aadhaar Act, 2016. Section 9 of the Act states that “The Aadhaar number or the authentication thereof shall not, by itself, confer any right of, or be proof of, citizenship or domicile in respect of an Aadhaar number holder.” Conversely, this implies that citizenship does not, by itself, establish that an Aadhaar number has been validly obtained.

Under the schema of the Aadhaar Act, the question of claims of citizenship does not arise. The issuance of an Aadhaar number is instead tied to claims to residency, and proof of residency is sufficient for enrollment in Aadhaar under Section 3 of the Aadhaar Act. Any of various documentary and non-documentary proofs may be sufficient for proof of residence (detailed under Schedule IV of the Aadhaar (Enrollment and Update) Regulations, 2016). Citizenship is a wholly distinct legal matter, which is to be determined under the terms of the Citizenship Act, 1955.

What, then, is the UIDAI’s justification in asking for proof of claims to citizenship?

The answer, as the agency’s statement briefly touches upon, may lie in an offhand direction issued by the Supreme Court of India.

In 2013, the apex court, in the course of the hearings challenging the Aadhaar project, ordered that “where a person applies to get the Aadhaar Card (sic) voluntarily, it may be checked whether that person is entitled to it under the law and it should not be given to any illegal immigrant.” The order was confirmed by the majority judgement in KS Puttaswamy v Union of India, which directed the UIDAI to take suitable measures to ensure that “illegal immigrants are not able to take such benefits.”

There are no documents to indicate that the UIDAI has taken any steps to implement this direction of the Supreme Court, which is in itself dangerously vague and capriciously framed. Admittedly, the UIDAI does have the power (under Sections 28-30 of the Aadhaar (Enrollment and Update) Regulations, 2016) to cancel or deactivate any Aadhaar number, including where the enrollment ‘appears to be fraudulent’ or where valid supporting documents have not been obtained. The procedure involves a ‘field inquiry’ in any case where cancellation or deactivation may be required, after which the Authority may issue an order of cancellation or deactivation. However, even if this exercise stems from the Supreme Court’s direction, and even interpreting that direction in its broadest manner, the UIDAI still has absolutely no authority to verify claims of citizenship or illegal migration.

The UIDAI is only empowered to check the validity of the documents on the basis of which an Aadhaar number has been claimed, and cannot by itself go into the question of citizenship, or of the legality of a person’s entry into or stay in India. The former is a question under the Citizenship Act; the latter is to be determined under the Foreigners Act, the Foreigners Order and related legislation. The UIDAI should not aspire to usurp the role of either the NRC or the Foreigners’ Registration Office in attempting to determine questions of citizenship or the validity of entry and stay in India.

The UIDAI’s exercise is plainly dangerous, arbitrary and illegal. In falling back on the vague Supreme Court direction, the UIDAI has entered the contentious fray of determining ‘citizenship’ and claims to Indian nationality. As Owaisi points out, the arbitrary power conferred by such determinations will likely have a disproportionate impact on the poor, Muslims and Dalits, who are already the targets of excessive violence arising from the frenzy around exercises for the determination of citizenship.

At the end of this road lies the denial of essential government services linked to an Aadhaar number on the basis of this arbitrary exercise – realising the worst fears about the Aadhaar project and the debate around citizenship.

Divij Joshi is an independent legal researcher.

The ‘Special Status’ of Kashmir’s Internet Must Go

Orders for online censorship should have to demonstrate both necessity and proportionality before an independent judicial authority.

Jammu and Kashmir is currently witnessing one of the most comprehensive ‘information blackouts’ in India’s modern history.

Among the essential services the Valley has been cut off from, one of the most crucial is communication: almost all lines to and from Kashmir, including telephony and one-way broadcasts like cable TV, have been blocked. This is the 51st internet shutdown in Jammu and Kashmir since January 2019 and, according to one estimate, the 179th since 2012 – accounting for more than half the shutdowns ordered across the country.

Even for Kashmir, the latest sweeping communications blackout is cause for concern. As noted by David Kaye, the United Nations special rapporteur, the blockage of even one-way communication sets a worrying precedent for state responses to censorship. Nor is censorship limited to the blackout.

On August 12, it was reported that the government issued orders to Twitter to block certain users, including verified handles, who were reportedly spreading ‘false information’ about the situation in the Valley. The arbitrary manner in which communication is curtailed in (and about) Kashmir is a microcosm of the administration’s chokehold on freedoms within the Valley, and the implications of such restrictions should worry us all.

Denying democratic debate

The latest communication blockade in Kashmir began on August 5, the day on which the Union government read down Article 370 of the constitution, which granted Jammu and Kashmir special status, via a presidential order – a move which has now been challenged before the Supreme Court.

The communications shutdown, imposed hours before the decision was announced, was ostensibly meant to prevent civil unrest in the Valley in the aftermath of the unexpected move. Its effect, however, has been chaos and confusion both within and outside the Valley, with Indian citizens unable to exercise the most basic fundamental rights of freedom of speech and information, even in the face of one of the most significant political decisions affecting their lives.

Communications blockages have become a mainstay in the administration of Kashmir, most often under the guise of maintaining law and order in the face of civil unrest. Yet the evidence does not support claims that communication blockages are, in themselves, necessary or sufficient for restoring order.

A comprehensive study of the impact of internet shutdowns in India indicates the contrary – arresting communication in times of unrest can fuel uncertainty and panic. It prevents reliable and authentic information from reaching a population which is already at risk, and can prevent hitherto non-violent protestors from coordinating political activities, pushing them towards potentially violent tactics.

The absence of authentic sources of information, including reliable local journalism, also provides a fertile breeding ground for propaganda and disinformation outside the blockaded area. This is fatal to any democratic discussion about the immediate effects of the government’s decision – citizens are not only unaware of the sentiments of Kashmir’s residents, but are increasingly being actively misled by propaganda machinery, both internal and from across national borders. How, then, are we to accept the government’s claims of normalcy in the region, particularly when the little information that manages to slip out indicates, contrary to these claims, an atmosphere of fear and unrest?

As noted in a report by the Berkman Klein Centre at Harvard University, broad communication shutdowns could be a response to the government’s inability to control offshore platforms or encrypted messaging platforms which are increasingly responsible for hosting online speech. Yet, online platforms like Facebook have also been complicit in arbitrarily censoring ‘controversial’ (read, dissenting) speech from or about Kashmir.

The New York Times reported last year that Facebook’s content moderators were told to apply greater scrutiny to content which contained the phrase ‘free Kashmir’. Twitter was also recently pulled up for acceding to Indian government requests for censoring content by Kashmiri citizens and journalists without adequate due process or transparency. 

As the internet becomes increasingly enmeshed in the everyday lives and livelihoods of Kashmiris – necessary for everything from political mobilisation to cross-cultural consumption and commerce – the severely disruptive effect of internet blackouts renders other efforts towards restoring normalcy futile, and needs to be reckoned with. Our legal framework must account for and respond to this other, subtler abrogation of the constitution taking place in Kashmir.

A legal black box

The institutional opacity surrounding Kashmir’s internet shutdowns is so comprehensive that the legal routes resorted to in each instance can only be speculated upon, and even the Right to Information Act has not managed to breach the government’s resistance to transparency.

Some reports indicate a complete lack of due process and a failure to follow any procedural norm in the issuance of blocking or censorship orders – with government officials resorting to merely telephoning internet service providers and ordering shutdowns, even before an official order is issued.

The lack of transparency and due process is also a failure of the legal regimes under which shutdowns ostensibly take place, and which operate concurrently – Section 144 of the Code of Criminal Procedure, Section 69A of the Information Technology Act and Section 5(2) of the Telegraph Act, along with their associated rules. Of these, Section 144, the most commonly used, has no statutory procedural guidelines for its application, even though the Supreme Court has stated that its use must be limited to cases where “there is an actual and prominent threat endangering public order and tranquility.”

Even though a modicum of procedural safeguards exists under the rules made under Section 69A and the recently notified Temporary Suspension of Telecom Services Rules under the Telegraph Act, the processes through which blocking takes place are also shrouded in secrecy, and their flaws are well documented.

In any event, while these censorship regimes have been upheld by courts in different instances, the brazen manner in which they are regularly deployed is unlikely to meet the constitutional standard of necessity and proportionality. This is particularly so in the frequent cases of shutdowns imposed as ‘precautionary measures’ – censorship before the fact. Such restrictions require the government to meet a significantly higher threshold: demonstrating an imminent danger of public disorder. It is unlikely that broad internet shutdowns applied over vast populations would meet this threshold.

It is imperative that the laws that have placed Kashmir’s, and indeed India’s, online freedoms in such jeopardy be revisited. At the very least, orders for online censorship must be required to demonstrate their necessity and proportionality, including temporal and geographical limitations on their applicability, before an independent judicial authority prior to taking effect. These laws must also ensure the proactive transparency of any censorship order. Online platforms, for their part, should ensure that their systems for complying with government censorship requests are transparent and follow due process, including by pushing back against arbitrary and overbroad requests.

The government’s authoritarian tendencies towards the internet have been most apparent in its callous approach to online freedoms in Kashmir, contributing further to the region’s isolation. Remedying this situation must be at the forefront, not only to preserve the constitutional rights of Indian citizens, but also to continue to hold out hope for the internet as a vibrant, equal and democratic space.

Divij Joshi is a research fellow at the Vidhi Centre for Legal Policy, Bengaluru.

What Does Facebook’s ‘Supreme Court’ Mean for the Future of Online Speech?

A single ‘board’ will likely be insufficient to incorporate more diverse opinions and contexts into the company’s content moderation practices.

In January 2019, Facebook made public its blueprint for an independent ‘content oversight board’ – a high-level committee tasked with the oversight of the social media giant’s content moderation decisions.

The board, framed as a court of appeal against decisions of the company’s operational side, will have the unenviable task of adjudicating on, and informing, how Facebook regulates the online habits of 2.3 billion users.

This decision is significant for a number of reasons – not only because it will affect a user base of roughly a third of the world’s population, but also because it could bring about a paradigm shift in the governance of online speech, a notoriously secretive and unaccountable process.

It has even been referred to as Facebook’s ‘constitutional moment’. It is crucial to examine what this decision portends for the future of online speech in India, and globally.

How FB sets global rules for speech

Facebook presently takes decisions regarding third-party content through a patchwork of policy formulations, including both its public-facing ‘community standards’, which vaguely define the contours of permissible speech, and more confidential and intricate rules which deal with content in greater specificity, including within national and local contexts.

At a more practical level, the daily task of applying the rules – identifying disputed posts and deciding whether to retain or remove them – is delegated to thousands of human workers or, in some cases, to automated systems. The human moderators are given minimal training and low pay and, owing to vaguely formulated rules, must take decisions on content based on subjective considerations.

Both the process of formulating its content rules and that of enforcing them are opaque, error-prone and lacking in meaningful accountability. Facebook’s ‘global’ rules and policies on content indicate a distinct bent towards American free speech traditions and an ignorance of social, political and cultural realities elsewhere, including in India, its second-largest market.

A recent study by Equality Labs documents Facebook’s failure to consistently or effectively moderate hate speech targeted against minorities in India, indicating the company’s unwillingness or inability to grapple with harmful content which is outside of its own cultural and legal context. Leaked documents, for example, suggest that Facebook asks its moderators to vet content from India on grounds of ‘hurting religious sentiment’ – a vague standard which can pave the way for censorship. 

The arbitrariness and opacity are further compounded by the vagaries of the Indian legal system, where few practical legal avenues are available for individuals, either to request that social media companies remove content or to challenge their actions in taking down, blocking or censoring it. In a situation where the online habits of millions of Indians are in effect governed largely by the private rules and practices of Facebook, its decision to potentially overhaul its governance practices assumes even greater significance.

Facebook’s proposal will devolve limited power over content moderation decisions to a proposed Oversight Board – a 40-member panel (initially chosen by Facebook) which will scrutinise the company’s adherence to its internal rules and adjudicate whether they were correctly applied. The announcement comes in the wake of increasing criticism of this opacity, and discomfort with the amount of power exercised by the company over conversations often involving the world’s most politically sensitive issues.

The social media giant has also faced criticism for its handling of content moderation decisions, ranging from the takedown of violent live-streamed videos such as the Christchurch shooting to the removal of content deemed ‘coordinated inauthentic behaviour’ (‘fake news’ in general parlance) during the Indian elections. In this atmosphere of distrust, the last two years have also seen an exponential increase in political efforts to exercise greater control over the governance of online content – from Singapore’s law for curbing ‘fake news’ to imminent efforts in India to automatically filter ‘unlawful’ speech.

Under attack on all fronts, the proposal can be seen as an attempt to allay fears that Facebook is acting in a motivated manner, detrimental to the interests of its users, as well as an acknowledgement that freedom of expression on such a large public forum should not be governed by a monolithic, for-profit, US-based corporation.

Will Facebook’s ‘Supreme Court’ solve its crisis? 

The acknowledgement by Facebook that it exercises too much undemocratic control over the online expression of billions of users is in itself unprecedented, let alone its decision to outsource some of this power to an ‘independent’ authority. Facebook, and similar social media platforms, have long shunned responsibility for third-party content, a status that has entitled them to significant legal protection as well as freedom from public scrutiny of their role in governing online speech.

A departure from this position is an acceptance of what has been known for some time now – that social media companies are the primary actors in structuring and moderating online speech, and consequently responsible for privately shaping public and private discourse at an unprecedented scale. We must be wary of how this power is exercised and what it means for the free expression of societies and individuals to be subject to the whims of unaccountable private corporations.

As a recent corporate accountability index studying online freedom indicates, most major online platforms continue to be non-transparent and unaccountable towards their users, and shun responsibility for fostering free and equitable online communities.

Devolving this enormous power to an independent board has been described as a ‘constitutional moment’ for Facebook, in which it attempts to create a political structure distinct from its commercial and operational motives. Indeed, the Oversight Board has been likened to a Supreme Court within a constitutional system, separating the executive from the power of its own oversight.

Yet, from the limited information released about the content oversight board so far, there remain some important and uncomfortable questions regarding the true impact of this board.

First, while the proposals repeatedly insist upon the Oversight Board’s independence from Facebook, the body will necessarily be nested within the company’s corporate structure and beholden to it. In the event of a conflict between the board’s independence and the company’s primary obligation to its shareholders, the latter would necessarily prevail as a matter of law – casting doubt on claims that the body’s decisions will remain independent of Facebook’s commercial motives.

Second, the proposal does not go far enough to remedy the problems Facebook has itself identified. For one, the structure of the Board does not account for the incomprehensible scale of Facebook’s speech regulation – a 40-member panel (one member per 57 million users) can hardly keep track of the tens of thousands of decisions made daily, let alone parse the varied contexts in which the expression takes place. Moreover, the company has indicated that, while the Board may bind itself to precedent, it will not directly influence company policy such as the ‘community standards’ – a serious restriction on the Board’s scope.

Finally, a single board will likely be insufficient to incorporate more diverse opinions and contexts into the company’s content moderation practices. While Facebook has indicated that board members will represent the ‘entire Facebook community’ and not specific constituencies, it is unclear how meaningful representation will work in practice, and on what basis the board’s membership will be constituted.

Focusing on the concerns of the ‘entire community’ could once again betray an ignorance of community- and context-specific concerns, and could weaken the board’s commitment to diversity, particularly when the bulk of Facebook’s user base and revenue is likely to come from countries like India and Bangladesh.

Establishing and enforcing global standards for speech is an enormously complicated and difficult undertaking, and we should not expect Facebook to bear the entire burden of maintaining a free and equitable online community. Facebook’s efforts towards greater independent stewardship of speech regulation should be lauded to the extent that they reckon with Silicon Valley’s enormous and undemocratic political power, and other large firms would do well to follow this example and abandon their hubris on matters of speech regulation.

Meaningful reform, however, will have to stem from democratic political communities. Their institutions must be up to the task of framing the appropriate rules and conditions to temper the power of private platforms and introduce transparency and accountability for the future of online speech.

Divij Joshi is a research fellow at the Vidhi Centre for Legal Policy, Bengaluru.

India’s Quest for Data Sovereignty Needs to Go Beyond Grandstanding Gestures

The draft of the national e-commerce policy instils little faith in the government’s progressiveness in dealing with subtle and complex issues of data governance.

The Indian government wants all of the data, but doesn’t seem to know what to do with it.

This, at least, is what the latest draft of the national e-commerce policy seems to imply. As a document to chart the government’s approach towards ‘e-commerce’ for the future, the draft policy instils little faith in the government’s preparedness and progressiveness in dealing with subtle and complex issues of data governance, and comes off as little more than grandstanding.

The fact that the proposed policy is a non-starter is apparent from its vague scope and its lack of a clear definition of what it seeks to govern or regulate.

Most of the definitions referred to in the policy point towards ‘e-commerce’ covering the market for online goods and services, à la marketplaces like Amazon or Flipkart, or digital platforms like Netflix or Spotify. The document also seeks to govern ‘the digital economy’, a vague term which seems to encompass nearly all networked communications and relationships – from social media to private communication applications, and from user-generated content hosts to news blogs.

Indeed, the policy often veers entirely outside the realm of ‘e-commerce’ – into issues of law enforcement access to information, and the regulation of content on social media by placing responsibility on platforms to ensure the ‘genuineness’ of content, an ill-defined and potentially dangerous proposal.

In the absence of a clear definition, the draft policy is ineffective in achieving its ambition of charting “a policy framework that will enable the country to benefit from rapid digitalization of the domestic, as well as global economy.”

Even where it identifies issues of concern, the policy contains loopholes and contradictions, and fails to provide comprehensive and practical solutions. For example, where it speaks of regulating online marketplaces, it ignores legal conventions on intermediary liability and proposes heavy-handed measures for trademark and copyright protection.

This includes making marketplaces directly liable for counterfeit goods, and creating a private, industry-led mechanism for the removal of content deemed to infringe copyright, without any public oversight.

Can protectionism alone achieve data sovereignty?

Perhaps the most obvious objective of the policy is its attempt to redefine the global balance of data politics.

Admittedly, developing countries, including India, have legitimate grievances about the manner in which the global digital economy has evolved. The lack of governmental or societal control over key players in the digital environment – from the practices of Silicon Valley companies that control public speech and exercise functional sovereignty over data, to the international agreements which prevent fair taxation of digital commerce – is a legitimate concern for India’s government and its people.

The draft e-commerce policy does well to note these concerns and to approach the issue of data with its political economy in mind. The policy notes that the economic value of non-personal data should be thought of as a community asset and utilised for the public benefit, preferring an approach where such data is held in ‘public trust’ by the government, rather than by ‘non-Indians’. Commendable as this approach is, there are two major problems with the manner in which it is articulated in the draft policy.

First, the policy fails to address how such data would be governed or utilised for the public benefit, and whether this entails a form of nationalisation, which it seems vaguely to suggest. The government’s approach to achieving ‘data sovereignty’ remains fixated on data localisation and restrictions on cross-border data flows – essentially, requiring ‘data’ to be physically stored on servers located in India. Yet, apart from the claim that building server farms will create jobs, it is unclear whether localisation is a necessary or sufficient condition for genuine data sovereignty.

The government needs to give more thought before encouraging a tactic which raises concerns of ‘Balkanising’ the internet, along with issues of cyber security and data protection – at least without first considering alternative approaches. Moreover, the policy fails to address other, more fundamental flaws in the manner in which data governance takes place in India – from methodological flaws in data collection at all levels of government to problems with the government’s use and promotion of open data – which are equally critical to ensuring that data can be utilised as a public good.

Second, the policy equates the public interest in ‘community data’ with domestic commercial interests, without outlining genuinely beneficial use cases for community data. While the draft policy sends a clear message of prioritising the use of data for domestic companies over foreign ones, it fails to probe whether this is the most beneficial manner in which community data can be utilised.

It proceeds on the assumption that Reliance Jio, say, is more likely to empower Indians through the use of data than Google. This is amply clear in the way the policy privileges domestic companies’ access to data held by foreign companies through compulsory data sharing, or in the use by domestic companies of ‘community data’ held in IoT devices.

In doing so, it fails to articulate any practical applications of community data, by domestic companies or otherwise, while distancing the government from its own obligation to use data for improved governance – apart from invoking speciously defined buzzwords like ‘Artificial Intelligence’. The protectionist bent is apparent even in cases where similar regulatory conditions should logically apply to domestic companies, for example in transparency and fairness requirements for ‘e-commerce marketplaces’.

In the absence of strong reasoning, the e-commerce policy comes off as grandstanding and strong-arming of global technology giants to protect the interests of domestic capital, rather than a genuine measure for the forward-looking governance of data or e-commerce.

Overall, the policy is not too dissimilar from its previous iteration, which was leaked and subsequently withdrawn after significant criticism. There is still time to go back to the drawing board and meaningfully articulate how e-commerce and the data economy can be made genuinely beneficial to the Indian public.

Divij Joshi is a research fellow at the Vidhi Centre for Legal Policy, Bengaluru.

India’s Curbs on Amazon and Flipkart Address Concerns, But Still Lack Clarity

The Centre’s FDI policy changes may be a case of all the right moves in all the wrong places.

Amazon and Flipkart have become the latest casualties of the Indian government’s adventurism in regulating e-commerce.

Just a few days after a crucial rule came into effect, Morgan Stanley predicted that Walmart may divest its stake in Flipkart, while The New York Times has forecast “less consumer choice and higher costs” for Indian consumers.

Admittedly, the new rules are vague and imprecise, and the regulatory process through which they were enacted leaves much to be desired – leaving businesses uncertain about their obligations, and their consumers in the lurch.

However, the rules are also an acknowledgement of the growing need to reckon with the regulatory concerns posed by economically powerful platforms. The principles informing these rules are worth calm introspection rather than hurried reprisal.

Platforms, power and politics

The offending piece of regulation, innocently named ‘Press Note 2’, lays down the obligations to be followed by e-commerce marketplaces like Amazon and Flipkart, which are defined as “platforms” that “facilitate interactions between buyers and sellers”.

Broadly, the new rules attempt to address the concerns posed by platform power, discussed below, by requiring two structural changes to how e-commerce marketplaces operate.

First, the rules seek to ensure a structural separation between the marketplace and the goods or services sold on it. Platforms can therefore no longer sell products produced by companies they are related to or control. This separation has existed since 2016 under the extant FDI policy, but business models had been suitably adapted to find loopholes which continued to allow marketplaces to sell their own inventories. These loopholes have now been closed, reducing the chance of a conflict of interest in a platform prioritising its own goods or services over those of a competitor.

Second, marketplaces are obliged to operate in a ‘fair and non-discriminatory manner’ with respect to their treatment of third-party vendors, as well as in the offering of discounts – defined as a requirement to provide similar terms of service to all vendors.

This is a novel and radical requirement, similar to the obligation on network service providers to ensure neutrality with regard to the data that flows through their infrastructure – a crucial requirement for an open internet, presently in force in India. The new rules treat platforms as essential infrastructural utilities (like water or electricity), and recognise non-discriminatory access to such infrastructure as a systemic condition upon which a fair digital market must operate.

In and of themselves, these requirements are not unprecedented, nor, contrary to reports, do they present radical threats to India’s e-commerce market. Instead, they extend existing concepts of market regulation to the economic power of online platforms. Properly implemented, such requirements are crucial for the future of a fair and innovative digital market.

From ‘network’ to ‘platform’ neutrality?

Economic power tends to naturally concentrate in digital platforms because they exhibit ‘network effects’, where the platform’s value to its users increases in proportion to its use by others – more sellers attract more buyers, and vice versa.

Once such networks are established – often through heavy discounts on products in the early stages – they become difficult for users on either side of the market to exit, particularly where the platforms control crucial market information regarding consumer choices and purchasing patterns which cannot be transferred by the seller or buyer to competing platforms.
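
As a stylised illustration – the quadratic ‘matching’ model below is a common simplification from the economics literature, not something claimed in the rules themselves – the value of a two-sided marketplace grows with the number of possible buyer-seller pairings, far faster than the user count itself:

    # A stylised sketch of network effects in a two-sided marketplace
    # (a common simplification, not an empirical model): value is
    # proxied by the number of possible buyer-seller pairings.
    def potential_matches(buyers: int, sellers: int) -> int:
        # Each buyer-seller pair is one potential transaction.
        return buyers * sellers

    for users in (100, 1_000, 10_000):
        # Assume, purely for illustration, an even buyer/seller split.
        matches = potential_matches(users // 2, users // 2)
        print(f"{users:>6} users -> {matches:>12,} potential matches")

    # A 100x increase in users yields a 10,000x increase in potential
    # matches - one reason why an established two-sided network is so
    # difficult for buyers or sellers to leave.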

The ability of online platforms to exercise a high degree of control over information and networks poses a significant concern to the structural integrity of the economy, and, in a broader sense, to the polity.

While Amazon and Flipkart may be able to exploit efficiencies in their business models to offer lower costs to consumers, this could come at the cost of granting significant political control over public facilities, like crucial market information and access to networks, to such private entities.

In the context of e-commerce, it could allow dominant entities to determine the success or failure of a product or company, stifle innovations which threaten to disrupt an incumbent business model and monopolise the gains and insights from information flows over the platforms.

These concerns are not merely theoretical – e-commerce marketplaces prioritise entities which they control or have a stake in. In India, Cloudtail and WS Retail are affiliated with Amazon and Flipkart respectively.

Such entities routinely use information gathered from users and sellers to sell similar competing products in high-performing categories, and leverage their gatekeeping power to impose unfair terms of service on sellers and users, such as arbitrary changes in return or exchange policies.

Unclear drafting, improper scope – the pitfalls of the new FDI rules

Regardless of its intentions, the process, scope and framing of these rules are highly problematic. To begin with, while the concerns posed by dominant platforms have been noted in academic literature and by regulators in other countries, the present FDI policy does not appear to be informed by any evidence-based study.

Ex-ante market regulation, particularly in an area of rapid innovation, must have a sound empirical base and satisfy a clear (and high) threshold for intervention. Unfortunately, the policy does not offer any justification for the use of such blunt tools, nor does it explore alternatives for achieving competitive outcomes.

Further, while the present scope of the FDI rules maps snugly onto the major online e-commerce platforms, the rules do not extend to similar platforms backed only by domestic capital, like Paytm Mall or Reliance’s upcoming e-commerce venture – which sits uncomfortably with the rules’ pro-competitive rationale.

Finally, the conditions for the continued operation of e-commerce marketplaces, as well as the operational requirements to implement fairness and non-discrimination, are incredibly broad and difficult to implement in practice. A similar proposal for platform neutrality in France, for example, was tempered by the condition that any discrimination must be justified by the ‘need to protect rights, ensure service quality or for other legitimate business reasons’ – a reasonable standard around which platforms can continue to model their business.

While such concerns could be addressed within the sphere of competition law, the Competition Commission of India appears unprepared to deal with these concerns.

In November last year, it washed its hands of a complaint by the All India Online Vendors Association against Flipkart, noting (without a detailed examination) that neither Flipkart nor Amazon met the required threshold of holding a ‘dominant position’ in e-commerce to come under the scrutiny of India’s competition law.

A previous finding of the commission, after a seven-year investigation, held Google accountable for preferential treatment of its own services on its search engine platform, but failed to provide useful precedent for engaging with the competitive concerns of online platforms. A thorough re-examination of the application of competition law to digital platforms will be necessary if the principles-based, ex-post regulatory option of competition regulation is to be considered viable and future-proof.

With the government looking to revamp India’s ‘e-commerce’ policy, it is high time that regulators and policymakers stop shooting in the dark and draw up a coherent plan for India’s digital market – one which tackles the complex issue of platform regulation with evidence and nuance.

Divij Joshi is a research fellow at the Vidhi Centre for Legal Policy, Bengaluru.

Accountability, Not Curbs on Free Speech, is the Answer to Harmful Content Online

The government’s ham-fisted rules will only entrench private censorship of public discourse, not promote the greater due process that is needed.

The draft amendments to India’s regulations covering ‘intermediaries’, suggested by the IT ministry following ‘secret consultations’ with internet companies, are likely to introduce further ambiguity into the already broad and vague legal regime governing online intermediaries.

If passed, they will ease the private censorship and surveillance of speech by powerful companies, while doing little to address the actual problem of undemocratic and unsafe online spaces.

There is an urgent need not only to recall the vague and harmful draft rules, but also to ask the government to ensure greater transparency and accountability from online platforms.

Safe harbour and the government’s dilemmas

Section 79 of the Information Technology Act – the so-called ‘safe harbour’ for intermediaries, a broad term encompassing (largely private) internet service providers like telecom companies as well as online platforms like Facebook and Twitter – is the backbone of one of the most crucial enablers of online freedoms. The provision protects intermediaries from being directly liable for the words and actions of third parties using their services. Without it, any illegal content posted on or through such a service could potentially invite civil or criminal legal action against the intermediary.

The outcome would be a severely restricted internet – online platforms and services would heavily censor content to avoid liability, which would also require them to monitor all content posted by their users, creating a private surveillance regime.

As per Section 79, the safe harbour is available to intermediaries only if they remove illegal content upon obtaining ‘actual knowledge’ of it, observe ‘due diligence’ and comply with the rules made by the executive. These rules were notified in 2011 as the Intermediary Guidelines Rules and, as notified, contained vague provisions, including a requirement to take down content which was

‘grossly harmful, harassing, blasphemous, defamatory, obscene, pornographic, paedophilic, libellous, invasive of another’s privacy, hateful, or racially, ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise unlawful in any manner.’

The vague drafting of Section 79 and the 2011 Rules created a regime where intermediaries were unsure of when they could be held liable, prompting them to over-censor and take down any content notified by any private person or government authority, for fear of criminal sanction. However, this regime was overturned by the Supreme Court in Shreya Singhal v Union of India, in which the court read down the ‘actual knowledge’ requirement under Section 79 to mean a judicial order or a notification by the ‘appropriate government’. The court noted the difficulty and danger in private parties like Facebook and Google being required to adjudge the legality of content and filter the content on their platforms. Today, this is the law of the land.

Recent events appear to have prompted the Indian government to rethink its intermediary liability regime. The government’s interests appear to be tied particularly to concerns over electoral interference by Cambridge Analytica on Facebook and disinformation on WhatsApp.

With its hands tied by the Supreme Court’s directions, the government now appears to be attempting to curtail ‘unlawful speech’ by requiring proactive censorship by intermediaries. The most concerning aspect is contained in Rule 3(9), which reads:

“the Intermediary shall deploy technology-based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content.”

Additionally, the draft rules have been amended to require intermediaries to provide ‘traceability’ of the origin of messages – an obvious reference to the end-to-end encryption provided by WhatsApp, which does not permit message traceability.

While the traceability requirement is concerning and ought to be opposed, it must be noted that a similar obligation is already imposed on intermediaries under Section 69 of the IT Act and the rules made under that section. The multiplicity of legal provisions, however, creates uncertainty about the precise regime which applies to content decryption. Given their vagueness, and the absence of even the minimal safeguards present under Section 69, the draft rules are almost certainly unconstitutional and fall foul of the Supreme Court’s position on the fundamental right to privacy.
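
To see why traceability sits uneasily with end-to-end encryption, consider a toy model of encrypted messaging. WhatsApp’s actual protocol (the Signal protocol) is far more sophisticated, but the following minimal sketch, using the PyNaCl library purely for illustration, shows the basic tension: the relaying server only ever sees opaque ciphertext, so it cannot match copies of a forwarded message in order to trace their origin without weakening the encryption itself.

    # A toy sketch of end-to-end encryption (not WhatsApp's actual
    # protocol): the relaying server handles only ciphertext.
    from nacl.public import PrivateKey, Box

    sender = PrivateKey.generate()
    alice, bob = PrivateKey.generate(), PrivateKey.generate()

    message = b"a forwarded political rumour"

    # The sender encrypts separately for each recipient; a fresh random
    # nonce is used each time, so the two ciphertexts look unrelated.
    ct_for_alice = Box(sender, alice.public_key).encrypt(message)
    ct_for_bob = Box(sender, bob.public_key).encrypt(message)

    # What the platform observes: two opaque, distinct blobs. It cannot
    # tell they carry the same message, let alone who first sent it.
    assert ct_for_alice != ct_for_bob

    # Only the intended recipient can decrypt.
    assert Box(alice, sender.public_key).decrypt(ct_for_alice) == message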

Also read: Attempt to Curb ‘Unlawful Content’ Online is Chilling to Free Speech

The more concerning aspect is the requirement of ‘proactive’ censorship of ‘unlawful content’. First, it assumes that intermediaries are, or should be, in a position to determine the legality of content – which must be a judicial determination – and to censor speech without any standards for doing so. Such broad ‘prior restraint’ of speech, without a judicial determination of its legality, is also likely to be unconstitutional for its tendency towards mass private censorship.

Moreover, the advocacy of ‘automated tools’ assumes that such tools are capable of filtering only unlawful speech, whereas the reality is very different – even the most sophisticated filtering technologies are liable both to censor legal content and to miss illegal content.
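
A deliberately naive sketch makes the point concrete. The blocklist, posts and matching rule below are all hypothetical, and real moderation systems are far more sophisticated, but the same two failure modes – false positives on lawful speech and false negatives on obfuscated unlawful speech – persist in statistical classifiers as well.

    # A deliberately naive keyword filter (the blocklist and posts are
    # hypothetical) showing how automated tools over- and under-censor.
    import re

    BLOCKLIST = {"riot", "attack"}

    def is_flagged(post: str) -> bool:
        # Tokenise into lowercase alphanumeric words, check for overlap.
        tokens = set(re.findall(r"[a-z0-9]+", post.lower()))
        return bool(tokens & BLOCKLIST)

    posts = [
        "Police act swiftly to prevent a riot, says court",  # lawful news
        "Heart attack awareness camp this Sunday",           # lawful notice
        "Let's r1ot outside the office tonight",             # obfuscated incitement
    ]

    for post in posts:
        print(is_flagged(post), "->", post)

    # The two lawful posts are flagged (false positives), while the
    # obfuscated third slips through (a false negative).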

Rights without responsibilities?

The draft rules bring to the fore certain tensions within the Indian intermediary liability regime which both courts and governments have been grappling with. In the aftermath of Shreya Singhal, intermediaries need only comply with judicial orders for content takedown. However, in the absence of an expedited judicial process for takedowns (as exists in Chile, for example), this requirement is onerous for users at the receiving end of unlawful speech online, and allows intermediaries to neglect unlawful content on their platforms.

Against this backdrop, the Supreme Court has already bypassed the requirements of the IT Act to evolve the doctrine of ‘auto-blocking’ in specific cases. This doctrine is, in essence, the same as that under the new draft rules, and dangerous for the same reasons.

However, while online platforms like Facebook, Google or Twitter project themselves as impartial ‘intermediaries’, the reality is that their primary role lies in filtering, censoring, prioritising or disabling certain forms of content.

In this manner, even in the absence of legal requirements, intermediaries are responsible for the private ordering of public speech – a dangerous trend which enables platforms to exercise enormous power over our online lives without any corresponding responsibility. Facebook’s prioritisation of misleading political advertisements, and Twitter’s failure to act upon violence against marginalised communities, are evidence of the need to democratise online platforms and make them more accountable.

The draft IT Rules offer no solution to this lack of transparency and accountability – rather, they entrench the power of online platforms and make them more culpable in the private censorship of public discourse, along with the government of the day. A smarter legal intervention would promote greater due process in the private practices of intermediaries: requiring them to make their content moderation practices open and transparent, for example by releasing transparency reports, and holding them accountable to due process in moderating content – such as making it easier for users to report abusive or illegal behaviour, and notifying them of the steps taken to address it.

This would promote safer and more democratic online spaces where online communities, not the government or the executives in online companies, can be at the forefront of battling harmful and illegal speech.

You can make your voice heard by writing to the ministry of information technology until January 15. India must use this opportunity to oppose censorship and advocate for safer and more democratic online communities.

Divij Joshi is a research fellow at the Vidhi Centre for Legal Policy, Bengaluru.

India’s Electoral Laws are Ill-Equipped to Deal with Digital Propaganda

From preventing the immoral micro-targeting of voters to tracking social media expenditure, India is woefully unprepared.

‘Fake news’, online abuse and weaponised algorithms are now routine hallmarks of elections all over the world.

With four major state elections and India’s general elections around the corner, it is an apt time to examine whether our laws and institutions are equipped to deal with the fallout of such digital propaganda.

Uncovering empirical evidence of deliberate ‘electoral manipulation’ as a result of online propaganda may be difficult, particularly in comparison to the effects of traditional forms of propaganda and mass media. Indeed, various studies have reached conflicting conclusions on the impact of online disinformation on civic participation and elections. 

Nevertheless, its effects are palpable in how it shapes online communities and political discourse.

Firstly, online propaganda relies upon the use of personal information and behavioural targeting to filter the political information received by recipients, thereby amplifying information likely to appeal to voters and suppressing that which would not. Data brokers like Cambridge Analytica exploit this personal information to offer targeted propaganda to electorates around the globe.

Secondly, propaganda on social media can be distinguished from traditional media in its ability to rapidly spread information without the publisher being held to account. Unlike traditional media platforms, online platforms rarely bear legal responsibility for their content. This combination of virality and lack of accountability has made platforms like WhatsApp important sources of political propaganda during elections in India and elsewhere, simultaneously offering anonymity (for the source) and visibility (for the information).

Thirdly, online propaganda weaponises the computational elements of the platforms themselves. ‘Bots’ are software programs that manipulate the visibility of propaganda by gaming the algorithms platforms use to prioritise content (for example, by flooding Twitter with hashtags), or drown out dissenting voices through targeted online abuse at scale (a strategy widely known to be employed by the ruling BJP).

We must be cautious in attributing major political upheavals or electoral decisions to a singular cause like disinformation or online propaganda, but their influence in deepening political prejudices, skewing political opinion and diluting informed political discourse should not be underestimated. Our laws and institutions must be up to the challenge of countering this phenomenon.

Weak legal framework

A number of problems with the present legal framework hinder effective regulation of digital propaganda in India.

Firstly, the lack of a clear data protection framework allows unrestricted access to personal data, which can be exploited by political agents to spread targeted disinformation and propaganda. The present data protection law has limited applicability to political parties or to data brokers that market personal data at massive scale. This enables the creation, for example, of WhatsApp groups based on voter lists, coupled with phone numbers and caste, gender and other sensitive information, to target voters with propaganda without their consent.

Secondly, election law in India is not equipped to deal with digital propaganda. The Election Commission, the constitutional authority for regulating state and national elections, is relying on online platforms to self-regulate and censor ‘illegal’ content. For example, it has ‘partnered’ with Facebook to censor all ‘election-related’ content during the no-campaigning period of 48 hours prior to elections, and to monitor ‘fake news’ and ‘inflammatory content’. However, the scope of the regulated content is vague, and in the absence of a clear legal basis, these platforms can censor or amplify certain information without accountability or transparency. This in turn exacerbates the power imbalance between platforms and their communities, and further entrenches their hold over the democratic process.

Moreover, there is a legal vacuum when it comes to dealing with paid political propaganda. The EC’s attempt to tackle this issue using existing electoral finance laws, by disqualifying a BJP MLA for using ‘paid news’, was struck down by the Delhi High Court, which categorically stated that the EC lacks the power to regulate the content of any media in the absence of a specific law.

Online propaganda often blurs the distinction between harmful and legitimate or legal expression, making regulation a fraught process. The backlash against the Union government’s recent attempts to curtail ‘fake news’ aptly demonstrates the need for a more nuanced undertaking to tackle online propaganda through legal reform.

How to tackle digital propaganda

To begin with, the Central government must bring in a robust data protection framework and empower an independent body to look into the misuse of personal information by political parties and candidates during elections. The recent inputs by the Justice Srikrishna Committee provide a valuable starting point for such reform.

Secondly, election-specific reforms must be introduced to deal with disinformation and paid advertisements. These approaches are an extension of existing election finance regulations, which aim to increase transparency in the electoral process. Social media platforms should be made to disclose to their community the source of funding for advertisements and actively promoted political content, along with information on why an advertisement is being targeted at a specific user.

Although Facebook has indicated that it would implement such measures in India, it is unclear when and whether this will take place. Further, even as platforms should be encouraged to remove disinformation and harmful content like online abuse related to elections, the terms and processes must be transparent, tailored to local conditions, and accountable to an independent public agency.

Similarly, measures to prevent ‘paid for’ online propaganda can be introduced into the Representation of the People Act, as mooted by the Law Commission of India. This includes making it a disqualifying electoral offence to pay for, or be paid for, the circulation of political information relating to an election without providing the necessary disclosures. The provision must be narrowly drawn and must include exemptions for political information coming from a disclosed source owned or controlled by the party or the candidate.

These measures can form a small part of the course correction for social media and democracy. The Internet, once expected to create a utopian space for realising democratic ideals, is being perverted to undermine these very values. Democracies all over the world must act now to reclaim it.

Divij Joshi is a research fellow at the Vidhi Centre for Legal Policy, Bengaluru.