Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies and Economic Policy.

High-Tech Geopolitics Prateek Waghre

The wrong way to regulate disinformation

This article originally appeared in Deccan Herald.

When the Kerala Governor signed a controversial Ordinance, now withdrawn, proposing amendments to the Kerala Police Act, there was understandably a significant amount of criticism and ire directed at the state government for a provision that warranted a three-year jail term for intentionally and falsely defaming a person or a group of people. After the backlash, the state’s Chief Minister announced his intention not to implement the fresh amendment.

How not to regulate information disorder

For anyone tracking the information ecosystem and how different levels of state administration are responding to information disorder (misinformation, disinformation and malinformation), this attempted overreach is not surprising. In Kerala alone, over the last few months, we have witnessed accusations from the opposition of ‘Trump-ian’ behaviour on the part of the state administration in decrying any unflattering information as ‘fake news’. As recently as September, the Chief Minister had to assure people that measures to curb information disorder would not affect media freedom, after pushback against decisions to expand fact-checking initiatives beyond Covid-19-related news. In October, it was reported that over 200 cases had been filed for ‘fake news’ in the preceding five months.

Of course, this is by no means limited to one state, or to a particular part of the political spectrum. Across the country, there have been measures such as banning social media news platforms, notifications/warnings to WhatsApp admins, a PIL seeking Aadhaar linking to social media accounts, as well as recommendations to the Union Home Minister for ‘real-time social media monitoring’. Arrests/FIRs against journalists and private citizens for ‘fake news’ and ‘rumour-mongering’ have taken place in several states.

How to regulate information disorder?

Before proceeding to ‘the how’, it is important to consider two fundamental questions when it comes to regulating disinformation. First, should we? Four or five years ago, many people would have said no. Yet, today, many people will probably say yes. What will we say four or five years from now? We don’t know. ...

For the complete article, go here.

Read More
High-Tech Geopolitics Prateek Waghre

Tackling Information Disorder, the malaise of our times

This article was originally published in Deccan Herald.

The term ‘fake news’ – popularised by a certain world leader – is today used as a catch-all term for any situation in which there is a perceived or genuine falsification of facts irrespective of the intent. But the term itself lacks the nuance to differentiate between the many kinds of information operations that are common, especially on the internet.

Broadly, these can be categorised as disinformation (false content propagated with the intent to cause harm), misinformation (false content propagated without the knowledge that it is false or misleading, or the intention to cause harm), and malinformation (genuine content shared with a false context and an intention to harm). Collectively, this trinity is referred to as ‘information disorder’.

Over the last four weeks, Facebook and Twitter have made some important announcements regarding their content moderation strategies. In January, Facebook said it was banning ‘deepfakes’ (videos in which a person is artificially inserted by an algorithm based on photos) on its platform. It also released additional plans for its proposed ‘Oversight Board’, which it sees as a ‘Supreme Court’ for content moderation disputes. Meanwhile, in early February, Twitter announced its new policy for dealing with manipulated media. But the question really is whether these solutions can address the problem.

Custodians of the internet

Before dissecting the finer aspects of these policies to see if they could work, it is important to unequivocally state that content moderation is hard. The conversation typically veers towards extremes: Platforms are seen to be either too lenient with harmful content or too eager when it comes to censoring ‘free expression’. The job at hand involves striking a difficult balance and it’s important to acknowledge there will always be tradeoffs.

Yet, as Tarleton Gillespie says in Custodians of the Internet, moderation is the very essence of what platforms offer. This is based on the twin pillars of personalisation and the ‘safe harbour’ that they enjoy. The former implies that they will always tailor content for an individual user; the latter essentially grants them the discretion to choose whether a piece of content can stay up on the platform or not, without legal ramifications (except in a narrow set of special circumstances such as child sex abuse material, court orders, etc.). This, of course, reveals the concept of a ‘neutral’ platform for what it is: a myth. Which is why it is important to look at these policies with as critical an eye as possible.

Deepfakes and Synthetic/Manipulated Media

Let’s look at Facebook’s decision to ban ‘deepfakes’ using algorithmic detection. The move is laudable; however, it will not address the lightly edited videos that also plague the platform. Additionally, disinformation agents have modified their modus operandi to use malinformation, since it is much harder for algorithms to detect. This form of information disorder is also very common in India.

Twitter’s policy goes further and aims to label or obfuscate not only deepfakes but any synthetic/manipulated media after March 5. It will also highlight and notify users when they are sharing information that has been debunked by fact-checkers. In theory, this sounds promising, but determining context across geographies with varying norms will be challenging. Twitter should consider opening up flagged tweets to researchers.

The ‘Supreme Court’ of content moderation

The genesis of Facebook’s Oversight Board was a November 2018 Facebook post by Mark Zuckerberg ostensibly in response to the growing pressure on the company in the aftermath of Cambridge Analytica, the 2016 election interference revelations, and the social network’s role in aiding the spread of disinformation in Myanmar in the run-up to the Rohingya genocide. The Board will be operated by a Trust to which the company has made an irrevocable pledge of $130 million.

For now, cases will be limited to individual pieces of content that have already been taken down, and can be referred in one of two ways: by Facebook itself, or by individuals who have exhausted all appeals within its ecosystem (including Instagram). And while geographical balance has been considered, for a platform that has approximately 2.5 billion monthly active users and removes nearly 12 billion pieces of content a quarter, it is hard to imagine the group being able to keep up with the barrage of cases it is likely to face.

There is also no guarantee that geographical diversity will translate into the genuine diversity required to deal with the kind of nuanced cases that may come up. Nor is there any commitment as to when the Board will be able to look into instances where controversial content has been left online. Combined with the potential failure of its deepfakes policy to address malinformation, this will result in a tradeoff where harmful, misleading content will likely stay online.

Another area of concern is the requirement to have an account in the Facebook ecosystem in order to refer a case. When the Board’s ambit expands beyond content takedown cases, this requirement will exclude individuals and groups not on Facebook/Instagram from seeking recourse, even if they are impacted.

The elephant in the room is, of course, WhatsApp. With over 400 million users in India and support for end-to-end encryption, it is the main vehicle for information disorder operations in the country. The oft-repeated demands for weakening encryption and providing backdoors are not the solution either.

Information disorder itself is not new. Rumours, propaganda, and lies are as old as humanity, and surveillance will not stop them. Social media platforms, however, dramatically increase the velocity at which such information flows, and with it the impact of information disorder. Treating this solely as a problem for platforms to solve is equivalent to addressing a demand-side problem through exclusive supply-side measures. Until individuals start viewing new information with a healthy dose of scepticism, and media organisations stop being incentivised to amplify information disorder, there is little hope of addressing this issue in the short to medium term.

(Prateek Waghre is a research analyst at The Takshashila Institution)

Read More