
#FOEIndiaSeries | 13. Keeping Track of Others' Content

Free Speech in India

This is one of 14 articles (available via this page) through which I hope to share a sense of free speech and content law in India. Part I of this series considers the socio-legal basis of free speech law in India; Part II explores what regulation, both legal and social, says and, in some cases, what it should perhaps say; and Part III, finally, looks at the processes through which free speech regulation is implemented in India.

Wherever possible, I've tried to avoid mention of matters I've been involved in myself. I've also tried to ensure that the series is accessible to non-lawyers.

The terms ‘child pornography’ and ‘revenge porn’ have been used simply because of how common they are, both in popular discourse and occasionally at law, even though neither term is accurate. ‘Child porn’ refers to indecent images of children and, where real children feature, is evidence of child abuse in and of itself. ‘Revenge porn’ generally refers to the non-consensual release of explicit imagery of a woman by a former partner of hers. It, too, is a manifestation of abuse, and is far more an expression of power than an expression of pornography.

Of course, none of the content of these articles is professional advice and it should not be relied on for any purpose. It is tinged with personal opinion, may not be accurate, and is incomplete.  

Posts in the Series

Part I.    The Foundations of the Law

1.    The Parameters of Indian Discourse    
2.    The Backbone of the Law    
3.    Legislative and Other Input

Part II. Regulating the Substance of Speech    

4.    Creative Content and Trade    
5.    Reputation and Honour    
6.    Keeping the State Functional    
7.    Maintaining Law in a Plural State    
8.    Women’s Existence in Patriarchy
9.    Sexual Abuse and Reportage
10.    Privacy and Rights-Based Legislation    
11.    Explicit Content: Choice, Consent and Coercion    
12.    State Paternalism and Public Interest

Part III.  The Processes of the Law    

13.    Keeping Track of Others’ Content
14.    The Mechanics of Regulation



Part III. The Processes of the Law


13. Keeping Track of Others’ Content


The law may not only hold one responsible for one’s own speech but also for others’ speech, particularly if one has a role in publishing it. There is case law which indicates that, for certain speech-related offences, it would be no defence to say, “But I didn’t write it!” if one were accused of a legal wrong. For example, if a book contained defamatory content, or, for that matter, were merely accused of containing such content, its publisher would not necessarily be able to escape legal liability simply because the text had been written by the author, a separate person. Publisher-author agreements routinely contain clauses which speak of how authors will ensure that publishers do not suffer adverse legal consequences as a result of publishing their works. However, it isn’t always clear that these clauses are meaningful in any sense of the word, not least because one cannot contract one’s way out of criminal liability. And when it comes to defamation and many of the other grounds on which content may be assailed, it is criminal law which may be invoked, and not necessarily civil law.

A number of criminal laws contain explicit and almost identical provisions to tackle cases where corporate bodies, firms, and associations of individuals are accused of having committed offences. These provisions tend to state that when an offence is committed, in addition to the organisation itself, the persons who are in charge of, and responsible to, the organisation for the conduct of its business would be deemed to be guilty of the offence, and would consequently be liable to be punished. Such persons would only be able to escape criminal liability if they were able to demonstrate that the offence had been committed without their knowledge or that they had done what they could to prevent its commission. That said, if it were proved that the offence had been committed with the consent or connivance of any director, manager, secretary, partner, or any other officer of the organisation, or that its commission was attributable to the officer’s neglect, the officer in question would be liable to be proceeded against and punished.

These are rather stringent provisions, and they have the potential to make life extremely difficult for individuals who work with organisations and who are responsible for how the organisation conducts its business, even if they are not necessarily involved in the nitty-gritty of its day-to-day functioning. For example, the named publisher of a newspaper could perhaps face legal proceedings for what appeared in the paper even though he may never have seen a particular image, and the decision to place it in the newspaper may have been taken by a colleague without reference to him. He may ultimately escape liability, but even dealing with legal proceedings takes a toll.

When it comes to content which is placed online by users, the intermediaries who run platforms, whether they are online marketplaces or social media sites, have some amount of leeway which is granted to them by the 2011 Information Technology (Intermediaries Guidelines) Rules issued by the Ministry of Communications and Information Technology under the aegis of the 2000 Information Technology Act. These Rules place a number of obligations on intermediaries and require them to observe ‘due diligence’ — if they meet their obligations, intermediaries are entitled to be shielded from the full force of the law should users upload unlawful content on to their platforms.

In relevant part, under these Rules, intermediaries must publish their terms of use, access, and service, as well as a privacy policy. In the former document, they must inform users not to ‘host, display, upload, modify, publish, transmit, update or share’ prohibited information. Abbreviating what the Rules have to say, they prohibit content which users have no right to post, which harms minors, infringes proprietary rights, impersonates or misleads others as to the identity of users (who may also be the senders of messages), contains viruses or malware, or threatens the safety or integrity of the state or the individuals in it, possibly by inciting crimes. Additionally, prohibited information includes content which is ‘grossly harmful, harassing, blasphemous defamatory, obscene, pornographic, paedophilic, libellous, invasive of another's privacy, hateful, or racially, ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise unlawful’ in its nature. Intermediaries must not knowingly host or publish such information, or determine the communication of such information. If they remove content which contains prohibited information once they become aware of it, either by themselves or through someone else who informs them of it in the manner contemplated by law, they would not generally be liable to be punished for the content having been uploaded or having been accessible through their platform.
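To make the mechanics concrete, here is a minimal sketch, in Python, of how an intermediary might log complaints and decide whether to take content down in order to keep the shield the Rules offer. The category names, field names, and the response window used as a default parameter are illustrative assumptions rather than anything the Rules prescribe in this form; the point of the sketch is simply that the cheapest safe decision for an intermediary is nearly always removal.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical categories loosely echoing the kinds of content the Rules describe;
# the Rules themselves do not define a neat machine-readable taxonomy like this.
PROHIBITED_CATEGORIES = {
    "defamatory", "obscene", "paedophilic", "invasive_of_privacy",
    "hateful", "disparaging", "infringing", "otherwise_unlawful",
}

@dataclass
class Complaint:
    """A takedown complaint as an intermediary might record it."""
    content_id: str
    category: str
    received_at: datetime
    notes: str = ""

@dataclass
class TakedownDecision:
    content_id: str
    remove: bool
    act_by: datetime  # deadline by which the intermediary intends to act

def assess_complaint(complaint: Complaint,
                     response_window: timedelta = timedelta(hours=36)) -> TakedownDecision:
    """Decide whether to remove content once the intermediary is 'aware' of it.

    The incentive structure described in the article is visible here: anything
    that plausibly falls within a prohibited category is removed, because the
    cost of wrongly leaving content up (losing the shield) far exceeds the
    cost of wrongly taking it down.
    """
    plausibly_prohibited = complaint.category in PROHIBITED_CATEGORIES
    return TakedownDecision(
        content_id=complaint.content_id,
        remove=plausibly_prohibited,
        act_by=complaint.received_at + response_window,
    )

if __name__ == "__main__":
    c = Complaint("post-123", "disparaging", datetime.now(), "user complaint")
    print(assess_complaint(c))
```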

The speech which is prohibited under the 2011 Information Technology (Intermediaries Guidelines) Rules falls into two categories: that which is clearly illegal under allied statutory laws, and that whose illegality would not be supported by statute. For example, there is absolutely no doubt whatsoever that paedophilic content is illegal under a number of statutes, including one which specifically targets sexual offences against children, i.e. POCSO, and the core criminal law statute, the 1860 Indian Penal Code. Neither is there any remotely credible argument to be made to the effect that such content, particularly where it features real children, should be legal in light of the harm it does. As opposed to this, consider content which is merely ‘disparaging’ — a great deal of content which is otherwise illegal also happens to be disparaging. It is difficult to imagine that defamatory speech would not usually be considered disparaging, for instance, particularly by those in relation to whom it was made. However, there are also cases where speech may be disparaging without otherwise being illegal.

Consider terms such as ‘wine and cheese liberal’, ‘Lutyens leftist’, or ‘Khan Market socialist’, with their references to ‘posh’ areas of Delhi. The terms essentially mock upper-class people for what may be perceived as their pretence of being ‘one with the poor’. They may, amongst the various reasons for which they are employed, be used by upper-class people to silence others of the same class who seem to evince a social conscience, and who purport to speak out about issues which primarily affect poor people. Alternatively, they may be used by disadvantaged or marginalised people to mock upper-claste people whom they see as simply being discriminatory or abusive, or, more insidiously, whom they perceive as appropriating their voices and their experiences for their own benefit, possibly by doing such things as making money by writing about ‘The Subaltern Experience’ or something analogous to it without necessarily having the faintest idea of what they’re talking about.

It isn’t entirely inaccurate to refer to those who engage in such conduct as ‘cocktail activists’, and the like. To anyone familiar with Delhi, it is, after all, easy to picture the invariably upper-claste authors of such pieces scrolling through what a few marginalised people say online, picking up a few keywords, adding some likely extraneous material to those words, and pretending to have — or, perhaps even worse, genuinely believing that they have — written empathetic and insightful text about what they believe are the trials and tribulations faced by marginalised persons. In fact, they are almost certain to have failed to go beyond AtrocityLite™, since their writing would invariably be entirely divorced from both their own experiences and those of marginalised people. Obviously, such an exercise is likely to be conducted at an upscale eatery whilst occasionally glancing up to look thoughtfully out at ‘the world’ (which could easily mean ‘a well-maintained attached garden’), sipping on ridiculously diluted coconut water on the rocks served to look like a cocktail, and typing away on one of the most expensive devices available in the market. It’s not a flattering picture, and it may well be a comical one, but what is almost unarguable is that there’s more than a sliver of realism embedded in terms such as ‘cocktail activist’, which is what inspires their adoption even though they are disparaging.

The point here, of course, is that speech which is disparaging isn’t necessarily content which should be considered inappropriate or unacceptable. Prohibiting it forces on to everyone an aesthetic of civility which is all too often the mode through which the powerful communicate: in a manner which may be seen to be temperate but which can easily, and often does, effect abuse. Civility is not a marker of propriety or of rightness. It is not a marker of respect in and of itself, and there is very little reason to promote the idea that non-abusive disparaging commentary is necessarily unacceptable, especially when it is used by marginalised people who tend to be completely ignored when they are soft-spoken and polite. Unfortunately, this is not a view which is in perfect consonance with the law — the 2011 Information Technology (Intermediaries Guidelines) Rules do see disparaging content as problematic speech which deserves to be silenced. Of course, the term is not defined by the law, so intermediaries have a considerable amount of leeway to decide whether content is actually disparaging and whether it should be removed.

The Rules do not require intermediaries to remove content which is not prohibited. However, if they become aware of prohibited content and fail to remove it, they lose the protection of the shield which the Rules offer them. Because of this, even though the initial determination of whether or not content should be removed is in the hands of intermediaries, there is no guarantee that intermediaries would step in to protect free speech. They have a huge incentive to remove any speech or content in regard to which they receive a complaint, simply to ensure that they are accorded some protection should a criminal complaint be made in relation to the same content. The punishments levied by criminal law, of course, cannot be taken lightly, and it is hardly fair to expect persons employed by various intermediaries to risk having to bear them. As such, the environment created by the structure of the Rules is hardly conducive to free speech.

The obligations of intermediaries have been weakened through clarifications issued after the Rules were first notified which, amongst other things, lengthen the time within which intermediaries must respond to complaints and clarify what constitutes an intermediary's knowledge of the upload of unlawful content by users. To change the environment, however, that is not enough: it is the basic structure of the Rules, which turns intermediaries into arbiters of the legitimacy of speech in the first instance, that is problematic.

As with all concerns relating to free speech, though, this is not a black and white issue. There are forms of speech made online which, amongst other things, are hateful, which incite violence against specific persons and communities, and which should be removed. It isn’t at all clear what a workable and accessible alternative to requiring intermediaries to remove content falling within certain prohibited categories would be. Neither is it clear whether it would be possible to detail, with a great deal of specificity, what would fall within the scope of prohibited content. After all, the law cannot predict and prepare for every possible expression of thought it may encounter. Laws, by their very nature, tend to deal in generalities and hope for the best. Sometimes, that isn’t enough.

The idea of holding one person accountable for the actions or speech of another isn’t new to the law. It is closely associated with what the law refers to as ‘vicarious liability’, which usually arises where a person commits a wrong for which another person, being responsible for the wrongdoer’s conduct, may also be held responsible. In addition to this, statutes sometimes recognise committing a crime as a primary offence, and facilitating the commission of that crime as a secondary offence. For example, the 1957 Copyright Act has, for decades, treated committing copyright infringement as one wrong, commonly described as ‘primary infringement’, and providing a place at which to commit the wrong as another, commonly described as ‘secondary infringement’, although the statute doesn’t explicitly label the acts in those terms. Nonetheless, it has stipulated punishments for both primary and secondary infringement.

Those who ostensibly manage content are not always realistically in a position to manage what is said in groups they administer or on platforms they control. There have been attempts to hold the administrators of groups on messaging services responsible for what is said in their groups, although it isn’t at all obvious that they would have much control over what members of the group post. They could conceivably take preventive measures, such as laying down clear ground rules, or respond by removing members who post inappropriate content from groups. However, unless they actually moderate each post before it is published — and that is not how most groups work — it is difficult to see how it is fair to expect a group administrator to be able to keep potentially illegal content from being published.

As technology has developed, so too have concerns relating to speech and how best to regulate it, particularly when that speech is generated by users of specific platforms who may number in the millions. The answers have rarely been clear, and applying standards without thinking them through, or without any consideration of context, has led to results which are less than ideal. A platform could easily say, ‘No nudity permitted,’ in a bid to curb pornographic content, and then go on to censor images of breastfeeding women even if specific images are not explicit, whilst simultaneously allowing images of shirtless men to remain visible.

There have also been attempts at implementing ham-handed measures to restrain problematic speech online. For example, suggestions have been made to censor advertisements for clinics which conduct sex-selective abortion by simply blocking the term ‘sex selection’ along with other terms allied to it. What this inadvertently does is restrict access not only to the advertisements which are problematic but also to studies relating to the problem, to reportage of the issue, and, possibly worst of all, to information about how such practices may be reported to the authorities if members of the public suspect that they are being carried out at a clinic. The idea of relegating an essentially social problem to ‘technology’ to solve without human input is attractive, but it is very rarely workable except — in theory, at any rate — in the rare case where the solution is extremely fine-tuned to address the specific problem it seeks to solve.
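The over-blocking problem is easy to demonstrate. The sketch below applies, in Python, a naive blocklist of the kind suggested (the blocked terms and the sample pages are entirely hypothetical) and shows that a filter keyed only to phrases such as ‘sex selection’ cannot distinguish an advertisement from a research summary or from guidance on reporting a suspect clinic.

```python
# A naive keyword filter of the kind sometimes proposed for blocking
# advertisements for sex-selective abortion. The terms and sample pages
# below are hypothetical and exist only to illustrate over-blocking.
BLOCKED_TERMS = ["sex selection", "sex-selective"]

SAMPLE_PAGES = {
    "advert": "Clinic offering discreet sex selection services.",
    "research": "A study of the demographic effects of sex-selective abortion.",
    "reporting_guide": "How to report suspected sex selection at a clinic to the authorities.",
}

def is_blocked(text: str) -> bool:
    """Return True if any blocked term appears anywhere in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

for name, text in SAMPLE_PAGES.items():
    print(f"{name}: {'BLOCKED' if is_blocked(text) else 'allowed'}")

# All three pages are blocked, even though only the first is the intended target.
```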

Even with human input, it is rarely easy to regulate speech. Copyright infringement, for example, is routinely addressed by disabling access to links to webpages and websites (which make available infringing copies of works), usually at the ISP level. The problem with this in practice is that it is invariably effected by having a list of ‘infringing links’ made by investigators, who pass it on to lawyers, who may simply (quite literally) ‘cut, copy, paste’ it into a plaint or other document asking for the links to be blocked on the ground that infringing content is available via them. If a mistake is made at any stage of this process, right from the time when the list is made, there is a good chance that it will not be corrected by anyone dealing with the issue due to a paucity of the time and resources needed to keep double-checking a list which may contain several hundred items. And so it could well be that it is only when URLs begin to be blocked that infuriated internet users find that non-infringing websites which they rely on have been blocked. The simplest solution is obviously to check the list carefully before issuing orders to block sites, but doing so takes resources which the authorities are unlikely to have at their disposal. And it is just as unlikely that anyone to whom an order was issued would refuse to block websites, and thereby expose themselves to legal risk.
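None of the stages described above need be sophisticated for basic errors to be caught. The sketch below, in Python, runs the kind of mechanical sanity check that could be applied to a proposed blocklist before it is pasted into a plaint: it flags duplicates, malformed entries, and entries which would block an entire domain rather than a specific page. The sample list, the function name, and the categories of problem it looks for are all hypothetical.

```python
from urllib.parse import urlparse

# A hypothetical blocklist of the kind investigators might compile.
candidate_links = [
    "http://example-pirate.site/movie-123",
    "http://example-pirate.site/movie-123",      # duplicate
    "https://fileshare.example.com/",             # blocks an entire domain root
    "not a url at all",                           # malformed entry
    "https://news.example.org/story-on-piracy",   # plausibly non-infringing
]

def review_blocklist(links: list[str]) -> dict[str, list[str]]:
    """Flag obvious problems in a proposed list of URLs to be blocked."""
    seen: set[str] = set()
    issues: dict[str, list[str]] = {"duplicate": [], "malformed": [], "domain_wide": []}
    for link in links:
        if link in seen:
            issues["duplicate"].append(link)
            continue
        seen.add(link)
        parsed = urlparse(link)
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            issues["malformed"].append(link)
        elif parsed.path in ("", "/"):
            # Blocking a bare domain affects every page hosted there,
            # not just the allegedly infringing one.
            issues["domain_wide"].append(link)
    return issues

for category, flagged in review_blocklist(candidate_links).items():
    print(category, flagged)
```

A check of this sort catches mechanical slips; it cannot tell an infringing page from a legitimate one, which is exactly why careful human review of the list remains necessary.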

The law demands that some people bear responsibility for what others say, but actually trying to determine who would be responsible, and to what extent, involves treading on a rickety structure in a minefield. It is not easy for anyone who is involved, and it is not yet clear what the best way to regulate speech is, especially when it is online. While issues like copyright infringement are primarily about the enforcement of proprietary rights, other issues, such as being able to speak freely (without being attacked by armies of trolls or being unfairly governed by the terms of use of private entities) about subjects ranging from illnesses we may suffer from to human rights abuses we may be subjected to, affect all our lives. What we have to say may not pass the tests of civility or otherwise be aesthetically pleasing, but our speech — and our silences — influence the way our society is shaped and the manner in which we live our lives. Free speech is an issue in which all of us have a stake, although the law does not yet seem to have decided exactly what its role is or how to implement what it seeks to achieve.

(This post is by Nandita Saikia and was first published at IN Content Law.)