The New York Times last week called them “Chinese-style censorship” rules. While not everyone agrees the rules go that far, Indian commentators have also raised concerns about the proposed changes, with Nikhil Pahwa of Medianama describing them as “a serious and imminent threat to the open internet in India”.
Indeed, they have been controversial from the get-go. The Indian Express reported in December that officials from the Ministry of Electronics and Information Technology met privately with social media companies and members of the tech industry seeking their views on the proposed rules. It was only after the report led to questions about the way in which the government was moving forward that the ministry announced a public consultation process, inviting comments and then counter-comments on the proposed rules. The last date for submitting those was February 14.
But what are these new rules? And why do they matter?
What are intermediaries?
Almost anyone will agree that Facebook is not the same as a newspaper. It does not have an editor and, other than having community guidelines, does not control what people say on its platform. So should Facebook be liable, for example, if someone uses it to spread hate speech or defame someone else?
Lawmakers the world over, including eventually in India, concluded that it did not make sense to have companies such as Facebook face direct liability for what turns up on their platforms. These entities were classified as intermediaries because they facilitated the exchange of information between people, without exercising any control over what is said.
By this definition, the term “intermediary” covers a large number of organisations. There are social media firms such as Facebook and WhatsApp but also e-commerce marketplaces where anyone can list a product, domain registrars that allow you to buy the name of a website, internet service providers that enable you to access the internet, and even physical cyber cafes.
Crucially, the rules, framed under the Information Technology Act of 2000 (as amended in 2008), provide legal immunity – generally called “safe harbour” – to intermediaries for any material travelling through their pipes.
What are the proposed rules and why are they controversial?
The IT Act granted intermediaries safe harbour as long as they complied with a certain number of rules. For example, Section 79 stated that if such entities were informed – by anyone – that certain content on their platforms was “grossly harmful” or “obscene”, they had to take it down. In 2015, while also striking down the infamous Section 66A, the Supreme Court read down a part of Section 79, ruling that intermediaries would have to take down material only if they were told to do so by a court or a government agency.
The proposed rules go much further, putting a far greater onus on intermediaries to crack down on material that may be considered illegal. Here are the major proposed changes:
Monthly warning: The proposal would require intermediaries to tell their users every month that they need to comply with the rules and regulations. Critics have argued that a monthly notice would lead to fatigue and people would simply start ignoring the warning.
Tracing the originator: Any government agency could ask for assistance from an intermediary, and the entity would have to respond within 72 hours. One rule would require intermediaries to enable the “tracing out of originator” – a provision that seems aimed directly at WhatsApp and other services that provide end-to-end encryption and claim their users’ conversations are fully private. Platforms would essentially have to ensure the government can find out who first sent a message, which would effectively make end-to-end encryption impossible.
Keeping your data: Companies are already required to keep data for a period of 90 days if a government agency asks for it. The new rules state the data has to be held for 180 days, and that any government agency – which could mean any entity from the home ministry to a panchayat – can ask a platform to hold it indefinitely. And users do not have to be informed.
Proactive censorship: The proposed rules want intermediaries to “deploy technology-based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”. In other words, the government wants these platforms to use artificial intelligence to crack down on what it considers illegal speech, with no clear checks on how this process would actually work.
Indian or foreign: The proposal requires any foreign intermediary with over 50 lakh (5 million) users to incorporate a local company and set up a registered office in India, with all the legal expenses that would entail. It does not explain how the 50 lakh user threshold was arrived at or how it would be evaluated.
What necessitated the proposed changes?
For some time now, the Indian government has been wondering what to do about WhatsApp, a messaging service it blames for spreading fake news that has in the past led to violence, even though researchers have suggested that focusing on the platform may be a misguided approach. In many ways, the proposed rules solve all the problems India has with WhatsApp. If they become official, they would force the messaging service to end encryption and allow the government to seek information about anyone’s messages.
But, naturally, tinkering with the Intermediary Rules will have an impact across the spectrum. So the government seems to have taken the opportunity to make wholesale changes to how India’s internet works. The thrust of the move comes from the growing impulse, across India’s internet policies and regulations, that the country needs to be more assertive in forcing data to be kept within its borders and made easily accessible to law enforcement agencies.