Why It Is a Bad Idea for the MIB to Moderate Online Content
The Central Government has, with effect from 9 November 2020, brought news and current affairs content on online platforms under the jurisdiction and control of the Ministry of Information and Broadcasting (MIB). This means that content on intermediaries such as social media platforms (Facebook, Twitter), blogging platforms, and video hosting platforms (YouTube) will be moderated by the MIB.
The MIB has conventionally been responsible for licensing and censoring content on radio, cinema, and television broadcasting through laws such as the Cinematograph Act and the Cable Television Networks (Regulation) Act. It is concerning that new-age digital platforms may now be subjected to this older style of regulation, especially given how differently these platforms operate.
Moreover, the definitions in the notification are ambiguous. 'News' and 'current affairs' are not terms with clearly laid out definitions, so it is unclear whether only statements of fact will be covered or whether opinion and analysis will also fall within its scope. Subjecting websites like Medianama to the same regulation as YouTubers such as Dhruv Rathee or Faye D'Souza may be problematic.
This raises the question of why the notification was issued in the first place. An argument could be made that the MIB has been brought in to curb the spread of information disorder (mis/disinformation) on platforms. However, this exact goal is already being addressed through the impending update to the existing Information Technology (Intermediaries Guidelines) Rules, 2011 ("Intermediaries Guidelines"). The proposed amendments are not perfect, and we have disagreements about the second-order effects that might arise; even so, they are better suited to tackling information disorder and inflammatory content. Shifting this task to the MIB's ambit may end up having unintended effects on the kind of content people create and curate on the internet.
Secondly, it is a recognized principle of law that a special law takes precedence over a general law or any previous law on a subject. This creates some apprehension about why new regulation is needed at all: online platforms are already 'intermediaries' under the Information Technology Act, 2000 (the "IT Act"), and enjoy safe harbour immunity subject to compliance with certain conditions. Content made available on the internet in India is also subject to defamation law, the Indian Penal Code, and copyright law, so the MIB stepping in to address information disorder and inflammatory content was not necessarily required.
Multiplicity of regulation of intermediaries creates confusion and fear within the media, entertainment, and social media industries. The content regulation powers these platforms may be subjected to can curb the fundamental right to freedom of speech and expression enshrined in Article 19(1)(a) of the Indian Constitution. As a precaution, platforms such as YouTube and Facebook might end up over-regulating content to avoid fines and liability.
Instead of placing new-age content platforms under the MIB's ambit, a better approach may be to regulate them directly to tackle information disorder and place liability on platforms. There is precedent around the world to learn from. The EU, typically active in leading the discourse on how platforms and tech companies should be regulated, has proposed some interesting reforms in the new Digital Services Act package ("DSA"). The DSA is a reassessment of the E-Commerce Directive (the legal framework for online services), made necessary by the rapid evolution of digital technologies and business models.
Some interesting suggestions made in a study conducted by the Policy Department for Economic, Scientific and Quality of Life Policies, for possible incorporation in the DSA, are as follows:
Shifting from an intermediary liability approach to a 'responsibility' approach. Distinguishing between liability and responsibility helps define a category of service providers who are not liable but are responsible. In effect, they can be granted immunity for content in the same way as hosting providers, but be made responsible for the activities they actually perform, such as facilitation, dissemination, or profiling.
Introduction of a Good Samaritan clause, similar to Section 230 of the Communications Decency Act in the US, which encourages platforms to take voluntary, active measures to moderate harmful content.
Working towards platform neutrality, so that algorithms are programmed not to systematically favour political, ideological, or religious opinions.
Moving towards a more vertical, harm-specific approach to notice and action: for example, notice-wait-and-takedown for defamation, and notice-and-takedown or notice-and-suspension for hate speech. Procedural rules for notice and action must be established.
There is also a suggestion to establish a Social Media Council to provide open, transparent, participatory, and independent moderation practices, gathering platforms, media, journalists, bloggers, academics, civil society, and other stakeholders. This mechanism would be based on international human rights standards without creating legal obligations.
There is also merit in considering a 'counter-notice' procedure to prevent over-blocking. Such a mechanism would enable content providers to present their views to the hosting service when their content is proposed to be blocked.
Recognizing that, given the massive explosion of online content, public authorities may not be well equipped to enforce content moderation and may need to be complemented by private bodies. These could be the platforms themselves (though this should not lead to a privatisation of the public interest), and co-regulation could prove an effective tool.
All things considered, the move to place online platforms under the regulatory ambit of the MIB has more cons than pros. If the goal is to tackle information disorder and moderate harmful content, the MIB may not be the right instrument. A combination of updated intermediary guidelines and more thoughtful regulation is likely to prove more useful, with fewer implications for free speech going forward.