Liability, Not Encryption, Is What India’s New Intermediary Regulations Are Trying to Fix

It’s time to rethink the legal immunity we hand out to massive internet companies.

After several months of speculation over possible amendments to India’s intermediary laws, the Ministry of Electronics and Information Technology has published draft amendments called the Information Technology Intermediaries Guidelines (Amendment) Rules, 2018.

These have been drafted under Section 79 of the Information Technology Act, the provision that deals with intermediary liability.

The government’s proposal – especially the requirement to proactively identify and disable content – has come under heavy criticism on the grounds that it creates a pre-censorship regime like the one found in China. The issue, however, is more complicated than the government’s critics make it out to be.

The duty to keep unlawful content off their pages and airwaves has always been imposed on traditional publishers like newspapers and broadcasters such as news channels, and a failure to do so attracts legal liability. In the early days of the internet, the Silicon Valley giants, who were then startups, argued that it was impossible to proactively monitor content because of the sheer volume of information on the internet.

Proactive monitoring of all that information would mean hiring a large number of moderators, which was financially unviable for startups. Instead, these startups asked for a subsidy in the form of legal immunity, which created the legal fiction that intermediaries would be deemed to have no knowledge of illegal content (despite it being on publicly viewable platforms) until they were informed by users or the government.

The US Congress responded by creating a ‘safe harbour’ for these intermediaries, which was then adapted to differing degrees by the rest of the world. These safe harbour provisions saved technology companies the cost of monitoring their own platforms and led to the world’s greatest experiment with mass communication unmoderated by editors. The results have not been good. Troll armies and the toxic hate targeted at women have shown us the consequences of handing the internet over to the mob.

Also read: If WhatsApp Doesn’t Regulate Itself, Parliament May Have to Step In

Even Silicon Valley was smart enough to know that, despite the immunity bestowed on it by the US Congress and other legislatures, it would still have to proactively filter the most outrageous content, such as child pornography, in order to maintain its goodwill. Google is also known to have developed software such as Content ID to proactively monitor copyright-infringing content. The EU recently enacted a new law requiring large internet platforms like YouTube, Facebook and Twitter to proactively monitor the content they host for copyright infringement and take down infringing content without waiting for legal notices from copyright owners. Similar pressure is being exerted by European countries on the issue of hate speech and child pornography.

Silicon Valley’s response has been to invest in more artificial intelligence. Two years ago, Reuters reported on a far-reaching censorship programme launched by Silicon Valley companies to target extremist propaganda using AI tools.

Viewed against this backdrop, the Indian government’s proposal for proactive filtering is not as absurd as its critics make it out to be. Along with the requirement for large social media companies with more than five million users to establish offices and subsidiaries in India, the new policy will hopefully push Silicon Valley to invest in better policing of fake news and hate speech, and make the internet a better place.

That said, the government needs to consider retaining certain immunities for startups that are too small, or that lack the mountains of data required to build AI systems to proactively monitor the internet. If the government does not retain a higher level of immunity for startups, it will only end up cementing the position of the Silicon Valley monopolies.

The contours of ‘unlawful content’

There has been a fair degree of criticism in the press of the requirement to proactively identify ‘unlawful information or content’. Concerns have been voiced that the phrase is vague and may lead to excessive censorship. This criticism may be misplaced, because various provisions of the Indian Penal Code (IPC) and other laws are quite clear about the kinds of speech that qualify as unlawful.

Again, it should be remembered that these draft rules do not create new offences; they only set out the conditions for immunity from liability for offences defined in other laws, such as the IPC.

Encryption and the fake news epidemic

The second proposal in the draft rules that has caused outrage is the requirement for all intermediaries to ensure the traceability of the originator of content shared on their platforms.

In India, the spate of lynchings linked to fake news spread on WhatsApp led to multiple meetings between the government and WhatsApp executives on the issue of tracing the sources of these messages. The proposal contained in the rules is therefore not very surprising. It should be noted that the requirement to make content traceable is different from outlawing encryption or mandating that companies decrypt information. Section 69 of the IT Act allows the government to force decryption, but there is no public evidence to show that it has actually invoked this provision.
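To make that distinction concrete, consider a minimal, purely hypothetical sketch in Python – this is not WhatsApp’s or any platform’s actual protocol, and the key handling and function names are assumptions for illustration only. The idea is that a platform could seal the identity of a message’s originator to the encrypted payload passing through its servers, and later verify that seal, without ever reading the message itself.

# Hypothetical illustration only -- not WhatsApp's or any platform's actual design.
# The platform never sees the plaintext; it seals the sender's identity to the
# ciphertext as the message passes through its servers, and can verify that seal later.
import hashlib
import hmac
import os

PLATFORM_KEY = os.urandom(32)  # held only by the platform, used to seal originator tags

def seal_originator(originator_id: str, ciphertext: bytes) -> bytes:
    """Bind the originating account to the (still encrypted) message body."""
    return hmac.new(PLATFORM_KEY, originator_id.encode() + ciphertext, hashlib.sha256).digest()

def verify_originator(claimed_id: str, ciphertext: bytes, tag: bytes) -> bool:
    """On a lawful request, check whether a given account originated this ciphertext."""
    return hmac.compare_digest(seal_originator(claimed_id, ciphertext), tag)

ciphertext = b"<end-to-end encrypted payload, unreadable to the platform>"
tag = seal_originator("account:originator-123", ciphertext)
print(verify_originator("account:originator-123", ciphertext, tag))  # True

Whether any such scheme would work at scale, and whether it truly leaves end-to-end guarantees intact, is precisely what is contested.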

Multiple commentators have claimed that the proposals in the draft rules require messaging services like WhatsApp to break their end-to-end encryption, thereby compromising privacy. I am not sure that is the correct interpretation.

Also read: WhatsApp told India That Tracing Fake News Would Break Encryption. Is This True?

These draft rules require intermediaries to introduce traceability only if they want the legal immunity offered under Section 79 of the IT Act. I had mooted a similar proposal earlier in these pages. It is necessary to remember that the immunity under Section 79 is a ‘subsidy’, and as a society we are not bound to extend it to all internet companies.

If WhatsApp wants to retain the immunity offered by Section 79, it will have to give up its end-to-end encryption system in order to facilitate traceability. The lack of immunity under Section 79 will mean that WhatsApp’s management will be as liable as anybody else who facilitates the publication, transmission or broadcast of hate speech. Consequences could include civil lawsuits for damages and criminal prosecutions.

If the draft rule is notified into law, the choice of retaining encryption lies with WhatsApp. The law does not make encryption illegal or force WhatsApp to introduce a traceability requirement.

If WhatsApp feels that the risk of prosecution and civil liability is too high, it will seek intermediary immunity under Section 79 and will have to introduce traceability, which may involve breaking end-to-end encryption. Doing so will also mean recruiting a rather large staff to process requests, because the messenger service is going to be inundated with traceability requests from investigating agencies.

Prashant Reddy T. is a Senior Resident Fellow at the Vidhi Centre for Legal Policy, New Delhi.