Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies & Economic Policy.

High-Tech Geopolitics | Prateek Waghre

(Re)Defining Social Media as Digital Communication Networks

This article originally appeared in The Quint with the headline ‘We Need a Better Definition for Social Media To Solve Its Problems.’ An excerpt is reproduced here.

The Need For a New Term

Conversations around ‘social media platforms’ also tend to fixate on specific companies, the prevalence of certain types of information on their platforms (misleading information, hate speech, etc.) and their actions in response (enforcement of community standards, application of labels, compliance with government orders, etc.). While this is certainly relevant, it is out of step with the nascent but growing understanding that most users, and especially motivated actors (whether good or bad), operate across a range of social media platforms. In the current information ecosystem, any effects, adverse or positive, are rarely limited to one particular network but ripple outwards across different networks, as well as off them.

There’s nothing wrong with an evolving term, but it must be consistent and account for future use cases. Does ‘social media platform’ translate well to the currently buzzwordy ‘metaverse’ use case, which, with communication at its core, shares some of the fundamental characteristics identified earlier? Paradoxically, the term ‘social media platform’ is simultaneously evolving and stagnant, expansive yet limiting. This is one of the reasons my colleagues at The Takshashila Institution and I proposed the frame of ‘Digital Communication Networks’ (DCNs), which have three components: capability, operators and networks.

Read More
High-Tech Geopolitics | Prateek Waghre

It’s Not Just About 50 Tweets and One Platform

This article originally appeared in The Wire. An excerpt is reproduced here.

Transparency and a voluntary act

This latest attempt came to light because Twitter disclosed the action in the Lumen Database, a project that houses voluntary submissions. And while Twitter is being criticised for complying, reports suggest that the company wasn’t the only one that received such a request. It just happened to be the only one that chose to disclose it proactively.

Expanding on legal scholar Jack Balkin’s model for speech regulation, companies have ‘3Cs’ available (cooperation, cooption and confrontation) in their interactions with state power. Apart from Twitter’s seemingly short-lived dalliance with confrontation in February 2021, technology platforms have mostly chosen the cooperation and cooption options in India (in contrast to their posturing in the West). This is particularly evident in their reaction to the recent Intermediary Guidelines and Digital Media Ethics Code. We’ll ask for transparency, but what we’re likely to get is ‘transparency theatre’, ranging from inscrutable reports to a deluge of information which, as communications scholar Sun-ha Hong argues, ‘won’t save us’.

Reports allege that the most recent tweets were flagged because they were misleading. But, at the time of writing, it isn’t clear exactly which law(s) were allegedly violated. We can demand that social media platforms be more transparent, but the current legal regime dealing with ‘blocking’ (Section 69A of the IT Act) places no such obligations on the government. On the contrary, as lawyers Gurshabad Grover and Torsha Sorkar point out, it enables the government to issue ‘secret blocking’ orders. Civil society groups have advocated against these provisions, but the political class (whether in government or opposition) is yet to make any serious attempt to change the status quo.

Read More
Prateek Waghre

Are Tech Platforms Doing Enough to Combat ‘Super Disinformers’?

This is an excerpt from an op-ed published on The Quint.

On 2 December, Twitter labelled multiple tweets, including one by the head of the Bharatiya Janata Party’s IT Cell, Amit Malviya, that included an edited video clip from the ongoing farm law protests, under its Synthetic and Manipulated Media policy. At the time, many wondered whether this marked the start of a more interventionist role for the platform in the Indian context, or whether the application was a one-off. Since then, there have been at least two more instances of this policy being applied.

First, a now-deleted tweet dated 30 August by Vivek Agnihotri was labelled (archive link) for sharing an edited clip of Joe Biden. It can certainly be debated whether this action was taken in the Indian context, because of the user, or in the context of the US, because of the topic. Second, since 10 December, a number of tweets (examples of which can be seen here and here) misrepresenting sloganeering from a 2019 gathering in America as being linked to the current protests against the farm laws have been labelled as well. This group included a tweet by Tarek Fateh. The reactions to these actions by Twitter have themselves been polarised, ranging from the celebratory (‘it is about time’, ‘too little, too late’) to accusations of interference in Indian politics by a ‘foreign company’.

The Repeat Super-Disinformer

Some of the accounts affected have large follower bases and high interaction rates, giving them the ability to amplify content and narratives, thus becoming ‘superspreaders.’ A Reuters Institute study on COVID-19 misinformation found that while ‘politicians, celebrities and public figures’ made up approximately 20 per cent of the false claims covered by it, these claims accounted for 80 per cent of the interactions.
They are also not first-time offenders, making them ‘repeat disinformers.’ It should be noted that these are not the only accounts that routinely spread disinformation. Such behaviour can be attributed, in varying degrees, to most parts of the political spectrum, and it is therefore also helpful to situate such content using the framework of ‘Dangerous Speech.’ This combination creates a category of repeat super-disinformers who play an outsized role in vitiating the information ecosystem at many levels...

Read More
High-Tech Geopolitics | Prateek Waghre

Tackling Information Disorder, the malaise of our times

This article was originally published in Deccan Herald.

The term ‘fake news’ – popularised by a certain world leader – is today used as a catch-all term for any situation in which there is a perceived or genuine falsification of facts irrespective of the intent. But the term itself lacks the nuance to differentiate between the many kinds of information operations that are common, especially on the internet.

Broadly, these can be categorised as disinformation (false content propagated with the intent to cause harm), misinformation (false content propagated without the knowledge that it is false or misleading, or the intention to cause harm), and malinformation (genuine content shared with a false context and an intention to harm). Collectively, this trinity is referred to as ‘information disorder’.

Over the last four weeks, Facebook and Twitter have made some important announcements regarding their content moderation strategies. In January, Facebook said it was banning ‘deepfakes’ (videos in which a person is artificially inserted by an algorithm based on photos) on its platform. It also released additional plans for its proposed ‘Oversight Board’, which it sees as a ‘Supreme Court’ for content moderation disputes. Meanwhile, in early February, Twitter announced its new policy for dealing with manipulated media. But the question really is whether these solutions can address the problem.

Custodians of the internet

Before dissecting the finer aspects of these policies to see if they could work, it is important to unequivocally state that content moderation is hard. The conversation typically veers towards extremes: Platforms are seen to be either too lenient with harmful content or too eager when it comes to censoring ‘free expression’. The job at hand involves striking a difficult balance and it’s important to acknowledge there will always be tradeoffs.

Yet, as Tarleton Gillespie says in Custodians of the Internet, moderation is the very essence of what platforms offer. This is based on the twin pillars of personalisation and the ‘safe harbour’ that they enjoy. The former implies that they will always tailor content for an individual user; the latter essentially grants them the discretion to choose whether a piece of content can stay up on the platform, without legal ramifications (except in a narrow set of special circumstances like child sex abuse material, court orders, etc.). This, of course, reveals the concept of a ‘neutral’ platform for what it is: a myth. That is why it is important to look at these policies with as critical an eye as possible.

Deepfakes and Synthetic/Manipulated Media

Let’s look at Facebook’s decision to ban ‘deepfakes’ using algorithmic detection. The move is laudable; however, it will not address the lightly edited videos that also plague the platform. Additionally, disinformation agents have modified their modus operandi to use malinformation, since it is much harder for algorithms to detect. This form of information disorder is also very common in India.

Twitter’s policy goes further and aims to label or obfuscate not only deepfakes but any synthetic/manipulated media after March 5. It will also highlight and notify users that they are sharing information that has been debunked by fact-checkers. In theory, this sounds promising, but determining context across geographies with varying norms will be challenging. Twitter should consider opening up flagged tweets to researchers.

The ‘Supreme Court’ of content moderation

The genesis of Facebook’s Oversight Board was a November 2018 Facebook post by Mark Zuckerberg ostensibly in response to the growing pressure on the company in the aftermath of Cambridge Analytica, the 2016 election interference revelations, and the social network’s role in aiding the spread of disinformation in Myanmar in the run-up to the Rohingya genocide. The Board will be operated by a Trust to which the company has made an irrevocable pledge of $130 million.

For now, cases will be limited to individual pieces of content that have already been taken down, and can be referred in one of two ways: by Facebook itself, or by individuals who have exhausted all appeals within its ecosystem (including Instagram). And while geographical balance has been considered, for a platform that has approximately 2.5 billion monthly active users and removes nearly 12 billion pieces of content a quarter, it is hard to imagine the group being able to keep up with the barrage of cases it is likely to face.

There is also no guarantee that geographical diversity will translate into the genuine diversity required to deal with the kind of nuanced cases that may come up. Nor is there any commitment as to when the Board will be able to look into instances where controversial content has been left online. Combined with the potential failure of the deepfakes policy to address malinformation, this will result in a tradeoff where harmful, misleading content will likely stay online.

Another area of concern is the requirement to have an account in the Facebook ecosystem to be able to refer a case. Whenever the Board’s ambit expands beyond content takedown cases, this requirement will exclude individuals and groups not on Facebook/Instagram from seeking recourse, even if they are impacted.

The elephant in the room is, of course, WhatsApp. With over 400 million users in India and support for end-to-end encryption, it is the main vehicle for information disorder operations in the country. The oft-repeated demands for weakening encryption and providing backdoors are not the solution either.

Information disorder itself is not new. Rumours, propaganda and lies are as old as humanity itself, and surveillance will not stop them. What social media platforms do is significantly increase the velocity at which information flows, thereby amplifying the impact of information disorder. Treating this solely as a problem for platforms to solve is equivalent to addressing a demand-side problem through exclusively supply-side measures. Until individuals start viewing new information with a healthy dose of skepticism, and media organisations stop being incentivised to amplify information disorder, there is little hope of addressing this issue in the short to medium term.

(Prateek Waghre is a research analyst at The Takshashila Institution)

Read More

Joining a New Social Media Platform Does Not Make Sense

Mastodon is what’s happening in India right now. Indian Twitter users are moving to the platform and have taken to using hashtags such as #CasteistTwitter and #cancelallBlueTicksinIndia. A key reason is that Twitter has been, to put it mildly, less than perfect in moderating content in India. There is the incident with lawyer Sanjay Hegde that caused this to blow up, along with accusations that Twitter had been blocking hundreds of thousands of tweets in India since 2017, with a focus on accounts from Kashmir.

Enter Mastodon. The platform, developed by Eugen Rochko, is open-source, so no one entity gets to decide what content belongs on the communities there. Also, the data on Mastodon is not owned by one single corporation, so you know that your behaviour there is not being quantified and sold to people who would use it to profile and target you. Plus, each server (community) is relatively small, with a separate admin, moderator and, by extension, code of conduct. All of this sounds wonderful. The character limit is also 500 as opposed to 280 (if that is the sort of thing you consider an advantage).

Mastodon is moving the needle forward by a significant increment when it comes to social networking. The idea is for us to move towards a future where user data isn’t monetised and people can host their own servers instead. As a tech enthusiast, I find that wonderful, and I honestly wish this is what Twitter had been.

Keeping all of that in mind, I don’t think I will be joining Mastodon. Hear me out. A large part of it is not because Mastodon has its own problems; let’s set those aside for now and move on to the attention economy. Much like how goods and services compete for a share of your wallet, social media has for the longest time been competing for your attention and mind-space, because the more time you spend on a platform, the more ads you will see and the more money it will make. No wonder it is so hard to quit Instagram and Facebook.

Joining a new social media platform today is an investment that does not make sense unless the other one shuts down. There is a high chance of people initially quitting Twitter, only to come back to it while now being addicted to another platform as well. The more platforms you are on, the thinner your attention is stretched. That is objectively bad for anyone who thinks they spend a lot of time on their phone. If you are lucky enough to be one of the few people who do not suffer from that and are indifferent to the dopamine that notifications induce in your brain, this one doesn’t apply to you.

Then there are network effects and inertia. I, for one, am for moving the needle forward little by little. But here, there is little to gain right now, with more to lose. Network effects are when products (in this case, platforms) gain value as more people use them. So it makes sense for you to use WhatsApp and not Signal, as all your friends are on WhatsApp. Similarly, it makes sense for you to be on Twitter, as your favourite celebs and news outlets are on there. Mastodon does not have the network-effect advantage, so most people who do not specifically have their network on Mastodon do not get a lot of value out of using it. (A rough numerical sketch of this effect follows after this piece.)

In addition, there is inertia. Remember when we set aside Mastodon’s problems earlier? Here is where they fit in. Mastodon is not as intuitive as Twitter or Facebook. That makes it a deal-breaker for people of certain ages, and a significant con for people who don’t want to spend a non-trivial chunk of their time learning about servers, instances, toots and so on. There also isn’t an official Mastodon app; there are, however, a number of client apps that can be used instead. The most popular among them is Tusky, but reviews will tell you that it is fairly buggy, which is to be expected.

There is so much right with Mastodon. It is a great working example of the democratisation of social media, and it happens to exist in an age where it would be near impossible to get funding for, or to start, a new social media platform. The problem is that people who don’t explicitly feel the need for, or see the value in, Mastodon are unlikely to split their attention further by joining a new platform. The switching costs, network effects and inertia are simply too high.

Rohan is a policy analyst at The Takshashila Institution and the co-author of Data Localization in a Globalized World: An Indian Perspective.

This article was first published in Deccan Chronicle.
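An illustrative aside, not from the original article: the network-effect intuition above is often summarised by Metcalfe’s law, a rough heuristic (not an exact result) that values a network by the number of pairwise connections it can support.

```latex
% Metcalfe's law (heuristic): a network's value scales with the number
% of distinct user pairs it can connect, not with the number of users.
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}
% Worked example: a platform with 300 million users supports roughly
% (3\times10^8)^2 / (10^6)^2 = 90{,}000 times as many potential
% connections as one with 1 million users, which is one way to see why
% switching costs can dwarf any per-user gains from a better platform.
```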

Read More