Commentary
Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.
How much hope for peace from the US-Taliban deal, and what are India's concerns?
https://audioboom.com/posts/7519553
Trump’s India visit tightens defense ties
Donald Trump concluded his 36-hour India tour on Tuesday evening. This was his first visit to India since being elected the 45th president of the United States in November 2016. His tour was much anticipated by both countries, which share a common strategic objective of balancing China's rise.

This objective was reflected immediately in Trump's first speech after landing in India, where he took a jab at China's undemocratic rise. India's rise "is all the more inspiring because you have done it as a democratic country, you have done it as a peaceful country, you have done it as a tolerant country, and you have done it as a great free country," he said in his speech at Motera Stadium in Ahmedabad, Gujarat.

The article was originally published in Asia Times.
Climate change and geopolitics converge to yield locust swarms
The butterfly effect occurs when a trivial cause, such as a butterfly fluttering its wings somewhere in an Amazon rainforest, triggers a series of events that end up having a massive impact elsewhere—a tornado ravaging the state of Texas in the US, for example. Edward Lorenz, the American meteorologist who coined the phrase in the early 1960s, came up with it while building a mathematical model to predict weather patterns. It is a fitting metaphor to explain a "plague" that is currently destroying vegetation and livelihoods in East Africa, the Arabian peninsula, Iran, Pakistan, and India. Read more
Hotstar blocked John Oliver show even before Modi govt could ask. It’s a dangerous new trend
This article was originally published in ThePrint. Censorship in response to moral panic and outrage was the norm, but now in India, we’re even cutting out the middlemen.
When riots were taking place in northeast Delhi and US President Donald Trump was set to land in India, HBO's popular Sunday night show Last Week Tonight, hosted by John Oliver, aired an episode on Prime Minister Narendra Modi. This episode, however, did not appear on Hotstar's listings for the show, which are normally updated on Tuesday mornings in India (the episode had still not been added at the time of writing). International publications like Time magazine and The Economist have been the subject of outrage for carrying stories critical of PM Modi in the past. Netflix, too, has faced criticism for producing and housing shows like Leila and Sacred Games. Perhaps the desire to avoid similar public anger prompted Disney-owned Star India to take this step.
It is important to look at the implications of this intervention.

All the world's an outrage

A moral panic is a situation in which public fear and the level of intervention (typically by the state) are disproportionate to the objective threat posed by a person, group, or event.

In India, one of the most infamous cases of a technology company bowing to moral panic occurred in January 2017. The Narendra Modi government threatened Amazon with the revocation of visas when it became aware that the online retailer's Canadian website listed doormats bearing the likeness of the Indian flag. It was fitting that this threat was issued on Twitter by then External Affairs Minister Sushma Swaraj, since the social networking platform was also where the anger built up. It should come as no surprise that Amazon acquiesced, even though it was bound by no law to do so. While such depictions of national symbols are punishable under Indian law, it is debatable whether that law should apply to the Canadian website of an American company, not intended for India-based users.
This wasn't the first instance of sensitivities being enforced extraterritorially on internet companies, and it certainly won't be the last. It is very much a global phenomenon. The decision by the Chinese state channel CCTV and several other companies to effectively boycott the NBA team Houston Rockets, and Blizzard Entertainment's censorship of content supporting the Hong Kong protests, garnered worldwide attention. But these were only the latest in a long list of companies that have had to apologise to China and 'correct' themselves for issues like listing or depicting Taiwan as a separate country, or quoting the Dalai Lama on social media websites that were not even accessible in the country.
In Saudi Arabia, Netflix had to remove an episode of Hasan Minhaj's Patriot Act that was heavily critical of Crown Prince Mohammed bin Salman. In the United States as well, the content delivery network Cloudflare has twice stopped offering services to websites (Daily Stormer in 2017 and 8chan in 2019) when faced with heavy criticism over the nature of their content. In both cases, CEO Matthew Prince expressed his dismay that a service provider had the ability to make this decision.

Of Streisand and censorship

The key difference in the current scenario is that Hotstar appears to have made a proactive intervention. There was no mass outrage or moral panic that it was forced to respond to. By choosing not to make this John Oliver episode available on its platform, it effectively cut out the middlemen and skipped straight to the censorship step. The move was ultimately self-defeating: the main segment of the episode is available in India through YouTube anyway and has already garnered more than 60 lakh views, while the app was subjected to one-star ratings on Google's Play Store. The attempt has only drawn more attention to both the episode and the company itself. This is commonly known as the Streisand Effect. A more cynical assessment could be that this step has earned Star India some 'brownie points' from the Modi government.

Earlier this month, the Internet and Mobile Association of India (IAMAI) announced a new 'Self-Regulation for Online Curated Content Providers' code with four signatories (Hotstar, Jio, SonyLiv, and Voot). Notably, an earlier version of the code released in February 2019 had additional signatories that chose to opt out of this version.
It was also reported that some of the underlying causes for discontent were a lack of transparency, a lack of due process, and the limited scope of consultations in the lead-up to the new code.

The new code broadens the criteria for restricted content from disrespecting national symbols to threatening the sovereignty and integrity of India. It also empowers the body responsible for grievance redressal to impose financial penalties. In addition, signatories of the code and the grievance redressal body are obliged to receive any complaints forwarded or filed by the government.

A letter by the Internet Freedom Foundation to Justice A.P. Shah cited as concerns the code's prioritisation of liability reduction over creativity, and the risk of industry capture by large media houses. The pre-emptive action taken in the case of Last Week Tonight's Modi episode perfectly encapsulates the risks of such a self-regulatory regime. It signals both the intent and, potentially, the establishment of processes to readily restrict content deemed inimical to corporate interests. Such self-censorship, once operationalised, is a slippery slope and can result in much more censorship down the road.

The general trend of responding to outrage by falling in line was problematic in itself. But in India's current context, the eagerness to self-censor is significantly more harmful, especially when you consider that other forms of mass media are already beholden to a paternalistic state with severely weakened institutions.

The author is a research analyst at The Takshashila Institution, an independent centre for research and education in public policy. Views are personal.
One thing India can teach the West is this — you can be a liberal and a nationalist
The origin, development, and consequences of the politics of nationalism in western Europe and the United States have led many in the West, and indeed most of the world, to see nationalism as a bad thing. It is not surprising, therefore, that an RSS functionary in the United Kingdom advised its chief Mohan Bhagwat (in his words) "not to use the word nationalism as English is not our language and it could have a different meaning in England. It's okay to say nation, national, and nationality but not nationalism. Because it alludes to Hitler, Nazism, and fascism in England." Read more
Privacy Is Not Dead, It Is Terminally Ill
This article was first published in Deccan Chronicle.

Earlier last week, The Verge ran a story about how health apps had permissions to change their terms of service without the user's knowledge. If you are a recovering alcoholic tracking how many days it has been since your last drink, or a depressed professional keeping a record of how your days are progressing, that is horrible news. It sets the precedent that the conditions you agreed to when you signed up for the app do not matter: your information can, and likely will, be sold to companies that may want to sell you alcohol or medication.

The news comes as a shock to most people who read it, especially considering the personal and sensitive nature of health data. But that is the nature of the terms and conditions technology companies set out in their agreements today. A significant source of revenue for tech products and applications is the data they sell to their clients based on your usage. And it does not make sense to keep asking you for new kinds of permissions every time they want to track or access something. Instead, it works better to have a long-form document that is widely encompassing and grants them all the permissions they might ever need, including the permission to change the terms of the agreement after you have signed. After all, no one reads the privacy policy before clicking 'I Agree'.

This was on display last year when Chaayos started using facial recognition, and Nikhil Pahwa went through its privacy policy to unearth this line: "Customer should not expect, that customer's personal information should always remain private". The rest of the privacy policy essentially conveyed that Chaayos collects customer data but does not guarantee privacy. And Chaayos is not the cause of an extremely exploitative attitude towards data; it is a symptom.
The history of the internet, and the revenue model it gave birth to, has led to this point, where access to user information is a paramount need. For a better understanding, the New York Times did an excellent job tracing the history of Google's privacy policy, which doubles as a history of the internet. Because of how little regulation existed in the internet space when it was a sunrise industry, today's frontrunners ran with our data on their terms.

Through all of this, consent has been virtually non-existent. I use the word 'virtually' consciously: consent has largely been a placeholder during the internet's history. There are two reasons why. First, terms and conditions lead to consent fatigue; even the best lawyers do not go through the conditions for every app before they click accept. Second, suppose you press the decline button when asked for additional permissions. Apps have been known to bypass the operating system's permission system and gather that data without consent.

But let's say we live in an ideal world and apps don't do that. You manage to read a few agreements and make a conscious decision to accept. You are happy to give your consent for access to the microphone but not the location, and thus deny that permission. There is a chance it still doesn't matter, because consents tend to be interlinked due to the nature of the internet and smartphone apps. For instance, consider the automation app If This Then That (IFTTT), a platform for automating functions across multiple services. It can, for example, log every trip you take on Uber to a Google Sheet. Sounds like a helpful way to keep track of and claim work reimbursements, doesn't it? But if you use that service, you are subject to three interlinked policies: Uber's, GSuite's, and IFTTT's. At this point, any data you generate from that automation will likely be sold for profit. How do we tackle something like this?
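To make the interlinking concrete, here is a minimal sketch in Python. The service names and data-scope strings are illustrative assumptions (not IFTTT's real API): the point is that once services are chained, the data you effectively expose is the union of every linked service's scopes, so declining a single permission in one app may not limit what the chain as a whole collects.

```python
def effective_exposure(*service_scopes: set) -> set:
    """Union of the data scopes of all services linked in an automation.

    Illustrative model only: each linked service is represented as a set
    of the data categories its policy lets it collect.
    """
    exposed = set()
    for scopes in service_scopes:
        exposed |= scopes  # chaining services accumulates their scopes
    return exposed


# Hypothetical scopes for the Uber -> IFTTT -> Google Sheets automation
uber = {"trip_history", "pickup_location"}
ifttt = {"trip_history"}
gsheets = {"spreadsheet_contents"}

# Linking the three exposes every category at once, even though no single
# service asked for all of them.
print(sorted(effective_exposure(uber, ifttt, gsheets)))
```

Declining one scope in one app only shrinks that app's set; the union over the rest of the chain is unaffected, which is the consent problem the paragraph above describes.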
How do we make sure that privacy is respected more, and that companies cannot change their agreements once you click accept? Google took a small step towards it by introducing in-context permissions in Android 10. The idea is that if an app wants additional permissions, say access to your microphone or your location, it asks you when it needs them, rather than front-loading all requests. We are yet to see how effective this will be over time. At their best, in-context permissions will tell you why Paytm needs access to your location (likely in case of fraud), or reveal that your SMS app has been recording your location in the background for no apparent reason. At their worst, they make consent fatigue worse. In-context permissions are likely not the whole answer, but they are a start. That Google implemented them is a definite sign that privacy is not dead, just terminally ill. Given time, and combined with measures such as simplified permissions, our generation might see a day when we completely control our data.

Views are personal.
Intermediary guidelines might infringe on privacy
This article was first published in Deccan Chronicle.
If you try to keep up to date with the tech policy debates in India, intermediary liability is one of those few topics you cannot escape. In very oversimplified terms, the debate here is whether companies like Facebook should be held accountable for the content that is posted on them.
The Ministry of Electronics and Information Technology (MeitY) came up with proposed changes to the intermediary guidelines back in December 2018. Since then, discourse around the topic has been rife and the new, finalised guidelines are speculated to come out in the next few weeks.
So when Bloomberg reported that MeitY is expected to put out the new rules later this month without ‘any major changes’, speculation around the guidelines was replaced by concern.
One of the most contentious clauses of the intermediary guidelines was to make messages and posts traceable to their origins. That would mean WhatsApp would need to use its resources to track where a message was originating from and then report that to the government.
As The Verge puts it, tech companies could essentially be required to serve as deputies of the state, conducting investigations on behalf of law enforcement, without so much as a court order.
That is deeply troubling. In contemporary India, we have either begun to take secure messaging for granted or just do not think about how secure our communications are today.
Here, context matters. More often than not, when I talk about privacy and end-to-end encryption, I get glazed eyes. That is understandable: people find it hard to see how encryption affects their lives. But humour me in a thought experiment. As you read this, look around you. Take a good look at the person physically closest to you at this moment and ask yourself whether you would be okay with disabling the security on your phone and handing it to them for three days. If the thought makes you even slightly uncomfortable, you now understand why privacy matters.
Under the new rules, privacy will be chipped away for anyone in India who uses WhatsApp (or any other end-to-end encrypted service). Add to that the fact that India today does not have the strongest of institutions; the question of whether NaMo TV was a governance tool or a political one taught us that. If there is anything the political climate tells us today, it is that there is a very real chance these guidelines can and will be used for political gain.
The other side of the story is that these are intermediary guidelines, and they do not apply only to platforms. 'Intermediary' is a broad term that encompasses not just platforms such as Facebook, Telegram, or Signal, but also cloud service providers, ISPs, and even cybercafés.
Not all of these players have equal access to information when ordered by law enforcement agencies to disclose it. A consultation report released by Medianama listed instances of harassment of intermediaries.
According to the report, ISPs claimed to live under constant threat and to be made to feel like criminals for running their businesses. During raids, people and their families were often asked to part with their phones and electronics, along with their passwords. In fact, according to the report, when the cloud service provider for an app in Andhra Pradesh was approached by the police with a request for information, it went out of business because it was unable to comply. Not all intermediaries are created equal, and these guidelines do not acknowledge that.
But the broader problem is that there is no clear problem statement: what these guidelines are trying to address has never been spelled out. If the agenda is for law enforcement agencies to gain access to information on digital communications (and that is essential to maintain law and order), it does not make sense to do it through these means. There are international instruments that India can and should use instead (the CLOUD Act in particular).
Once we go down this route, there is a non-zero chance that intermediaries such as WhatsApp might stop providing their services in India, especially since compliance would set a precedent for other countries to follow India's approach to breaking encryption, turning these intermediaries into government lieutenants. Regardless of what platform we choose to communicate on, we need to value privacy going forward. If you disagree, now might be a great time to unlock your phone and hand it over to the person physically closest to you.
The writer is a research analyst at The Takshashila Institution. All views are the author's own.
India's Troops in Afghanistan: An Old Request in a New Context
Boots on the ground are secondary; India's key objective in Afghanistan should be to help the Islamic Republic of Afghanistan claim a monopoly over the legitimate use of physical force. India can contribute a lot towards the capacity building of the Afghan National Defence and Security Forces (ANDSF). The biggest challenges it currently faces relate to the decline in the quality of human resources at hand, rather than a shortage of financial resources. Read the full article on The Telegraph here.
What India really needs: A mass uprising to ensure inclusive economic growth
The focus on the country’s middle class ignores the problems of the millions in the informal sector.
In 2001, Jim O'Neill, a British economist then at Goldman Sachs, coined the acronym BRIC to identify the four rapidly growing economies at the heart of the shift in global economic power: Brazil, Russia, India, and China. Pivotal to the growth of these economies was their growing middle-class population, a segment of people with upward economic mobility, increasing spending power, and growing aspirations.

In India, the rise of the middle class has caught the fancy of economists, educationists, developmental organisations, industrialists, and politicians alike. The most common narratives about the middle class are that many of its members are young (providing a large talent pool and workforce), have growing incomes (and significant spending power), and can influence the outcome of economic and political strategies.
The full article is available on Scroll.in.
On US President Trump's India Visit
The Print's daily roundtable TalkPoint posed a question connected to the US President's upcoming India visit: Will the spectacle of Trump's visit without a trade deal boost India-US ties?

The US-India relationship over the last four years has been a case of one step forward, two steps backward. The convergence of the threat posed by China has led to a deepening of military ties between the countries, with the operationalisation of the Logistics Exchange Memorandum of Agreement (LEMOA) and the signing of the Communications Compatibility and Security Agreement (COMCASA).

At the same time, the US and India have been unable to move forward on the issue of trade. While the Donald Trump administration has hung on to notions of 'reciprocity', the Narendra Modi government has raised import tariffs and pushed itself into a corner. Trump's position on Pakistan has also changed; the plan to reduce and eventually withdraw US troops from Afghanistan is contingent on an understanding with Pakistan. Finally, India's falling economic growth trajectory has restricted our ability to negotiate with both the US and China.

Trump's visit is unlikely to change any of these structural factors. Apart from a few defence purchase agreements, there is little to look forward to in the US president's tour.

Read the entire discussion on ThePrint here.
NRC website imbroglio highlights need for govt accountability
This article was first published in Deccan Chronicle.
Last week, multiple news outlets reported that the website housing NRC data had gone offline. Reportedly, a cloud services contract procured by Wipro on behalf of the state government of Assam was not renewed, and the service was switched off for non-payment. For now, officials have given assurances that the data itself is safe. Some aspersions have also been cast on former state officials who worked on the NRC project. This is still a developing story, and multiple conspiracy theories about the root cause are being floated, on a spectrum from malintent to negligence to good old-fashioned incompetence.
From a public policy perspective, multiple questions come up: should the state be contracting with private enterprise? How accountable should the state be when there is a loss of data, or harm caused to people by accumulating this data? How much data should the state gather about its citizens, and what is the potential for misuse? Let's look at them starting from the narrowest question and expanding outwards.
AWS VS MEGHRAJ

One of the reasons for outrage has been the use of Amazon Web Services to host this site, especially when the National Informatics Centre (NIC) itself offers a cloud service called 'MeghRaj'. The concern cited is that the data may leave the country, or that private contractors will potentially be able to access sensitive data. It is almost clichéd to say that the Internet has no borders, but this distinction is important: data is not any safer merely by virtue of being in India at a state-operated facility. On the contrary, it is probably better for a website and its data to be hosted with industry-leading operators that follow best practices and have the expertise to efficiently manage both operations and security. One must consider both the capacity and the role of the state in this context. What is the market failure the state is addressing by offering cloud hosting services in a market where the likes of Amazon, Google, and Microsoft operate?
The objection regarding contractor access to sensitive information is important and merits further consideration. To a large extent, it can be addressed by a contractual requirement to restrict access to individuals with security clearances. Yes, this introduces a principal-agent problem, compounded by lax enforcement of contract law in India. But it is important to contrast it with the alternative, an individual representing the state, where the principal-agent problem is even more acute. As things stand, there are still options to hold a private entity accountable for violation of contract, but there is a lower probability of punitive action against an individual representing the state for harm arising out of action or inaction on their part. As far as causes for outrage go, the fact that the data was stored with AWS should not be one. There are larger aspects at play here.
STATE ACCOUNTABILITY

This incident raises a much larger question about the accountability the government should have towards data. The Indian government keeps a substantial amount of personal and sensitive data on its citizens: how much gas you consume, your physical address, the make, model, and registration number of your car, and how many times you travelled out of the country in the last 10 years. That is more sensitive information than most companies in the private sector hold.
Keeping this (and the social contract) in mind, how accountable should the government be? According to the draft of the Personal Data Protection Bill, not very. Section 35 of the bill allows the Government to exempt whole departments from the bill, removing checks and balances that should exist when the Government acts as a collector or processor of your data.
How does that make sense? Why should the state be any less accountable than a private enterprise? In fact, the government has sold the data of its citizens to the private sector for revenue, without their consent (~25 crore vehicle registrations and 15 crore driving licences). As of now, it is hard to conclude whether the incident occurred due to malintent, negligence, or incompetence. But regardless of the cause, it carries a lesson: the government and all its departments need to be more responsible, and held more accountable, when it comes to the data they store and process.
IMPLICATIONS OF A DATA-HUNGRY STATE

A case can be made that the state is not a monolith, and that certain barriers and redundancies exist because of which government databases do not talk to each other… yet. Chapter 4 of the 2018-19 Economic Survey of India envisioned data as a public good and advocated "combining … disparate datasets." The combination of limited state capacity, lack of accountability, and a hunger for data can be a dangerous one. While capacity can be supplemented by private enterprise, there is no substitute for accountability. In such a scenario it is extremely important to consider, understand, and debate the chronology, implications, and potential for misuse before going ahead with large-scale activities that could end up severely disrupting millions of lives.
(The writers are research analysts at The Takshashila Institution. All views are the authors' own.)
No need to panic. ‘Westlessness’ just means world without West’s dominance, not its ideas
If Westlessness is to mean the shifting of the global balance of power away from the victors of a world war fought 75 years ago, then yes, it is both objectively true and, from India's perspective, quite desirable. The composition of the United Nations Security Council is not only outdated but is the single biggest reason why the UN is increasingly marginal to managing international security. To the extent that the change in the global balance is acknowledged and the contemporary balance reflected in the UN, this interpretation of 'Westlessness' is useful. Read more
Growing Defence Pensions a Problem. But CDS Rawat’s Retirement Age Proposal not the Solution.
We propose a model to operationalise lateral induction in the Indian armed forces to tackle the burden of growing pensions. Read the full article in The Print here.
Supreme Court can’t care for Shaheen Bagh children more than the parents
Except in the most exceptional cases, children's involvement in politics is supported, encouraged, managed, or instigated by the adults in their family. There is a long history of children waving flags, dressing up as political leaders, holding ideological banners, or singing songs in support of political causes. Our reaction to these images depends on our politics: we tend to approve of these actions when we support the cause and are horrified when we don't. We are not horrified if little children celebrate violence or martyrdom, provided they are doing it for the 'right' cause. Most of the time, therefore, adults' opinion on children in politics is unconsciously self-serving. Read more
Tackling Information Disorder, the malaise of our times
This article was originally published in Deccan Herald.
The term ‘fake news’ – popularised by a certain world leader – is today used as a catch-all term for any situation in which there is a perceived or genuine falsification of facts irrespective of the intent. But the term itself lacks the nuance to differentiate between the many kinds of information operations that are common, especially on the internet.
Broadly, these can be categorised as disinformation (false content propagated with the intent to cause harm), misinformation (false content propagated without knowledge that it is false or misleading, and without intent to cause harm), and malinformation (genuine content shared with a false context and an intention to harm). Collectively, this trinity is referred to as 'information disorder'.
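The three-way distinction hinges on just two attributes of a piece of content: whether it is false, and whether harm is intended. As a minimal illustration (a toy sketch, not a real moderation system; the function and labels are my own, following the definitions above):

```python
def classify_information_disorder(is_false: bool, intends_harm: bool) -> str:
    """Map the two attributes of a piece of content onto the
    information-disorder taxonomy described above."""
    if is_false and intends_harm:
        return "disinformation"   # false content, spread to cause harm
    if is_false:
        return "misinformation"   # false content, spread without harmful intent
    if intends_harm:
        return "malinformation"   # genuine content, shared in a false context to harm
    return "ordinary content"
```

The sketch also shows why 'fake news' is a poor catch-all: malinformation is not fake at all, yet it is often the most harmful of the three, and the hardest for automated systems to flag.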
Over the last four weeks, Facebook and Twitter have made some important announcements regarding their content moderation strategies. In January, Facebook said it was banning 'deepfakes' (videos in which a person is artificially inserted by an algorithm trained on photos) on its platform. It also released additional plans for its proposed 'Oversight Board', which it sees as a 'Supreme Court' for content moderation disputes. Meanwhile, in early February, Twitter announced its new policy for dealing with manipulated media. But the question really is whether these solutions can address the problem.
Custodians of the internet
Before dissecting the finer aspects of these policies to see if they could work, it is important to unequivocally state that content moderation is hard. The conversation typically veers towards extremes: Platforms are seen to be either too lenient with harmful content or too eager when it comes to censoring ‘free expression’. The job at hand involves striking a difficult balance and it’s important to acknowledge there will always be tradeoffs.
Yet, as Tarleton Gillespie says in Custodians of the Internet, moderation is the very essence of what platforms offer. This rests on the twin pillars of personalisation and the 'safe harbour' that platforms enjoy. The former implies that they will always tailor content for an individual user; the latter essentially grants them the discretion to choose whether a piece of content can stay up on the platform, without legal ramifications (except in a narrow set of special circumstances such as child sex abuse material, court orders, etc.). This, of course, reveals the concept of a 'neutral' platform for what it is, a myth, which is why it is important to look at these policies with as critical an eye as possible.
Deepfakes and Synthetic/Manipulated Media
Let's look at Facebook's decision to ban 'deepfakes' using algorithmic detection. The move is laudable; however, it will not address the lightly edited videos that also plague the platform. Additionally, disinformation agents have modified their modus operandi to use malinformation, since it is much harder for algorithms to detect. This form of information disorder is also very common in India.
Twitter’s policy goes further and aims to label/obfuscate not only deepfakes but any synthetic/manipulated media after March 5. It will also highlight and notify users that they are sharing information that has been debunked by fact-checkers. In theory, this sounds promising but determining context across geographies with varying norms will be challenging. Twitter should consider opening up flagged tweets to researchers.
The ‘Supreme Court’ of content moderation
The genesis of Facebook’s Oversight Board was a November 2018 Facebook post by Mark Zuckerberg ostensibly in response to the growing pressure on the company in the aftermath of Cambridge Analytica, the 2016 election interference revelations, and the social network’s role in aiding the spread of disinformation in Myanmar in the run-up to the Rohingya genocide. The Board will be operated by a Trust to which the company has made an irrevocable pledge of $130 million.
For now, cases will be limited to individual pieces of content that have already been taken down, and can be referred in one of two ways: by Facebook itself, or by individuals who have exhausted all appeals within its ecosystem (including Instagram). And while geographical balance has been considered, for a platform that has approximately 2.5 billion monthly active users and removes nearly 12 billion pieces of content a quarter, it is hard to imagine the group keeping up with the barrage of cases it is likely to face.
There is also no guarantee that geographical diversity will translate into the genuine diversity required to deal with the kind of nuanced cases that may come up. Nor is there any commitment as to when the Board will be able to look into instances where controversial content has been left online. Combined with the potential failure of the deepfakes policy to address malinformation, this results in a tradeoff where harmful, misleading content will likely stay online.
Another area of concern is the requirement to have an account in the Facebook ecosystem in order to refer a case. Whenever the Board’s ambit expands beyond content takedown cases, this requirement will exclude individuals and groups not on Facebook or Instagram from seeking recourse, even if they are impacted.
The elephant in the room is, of course, WhatsApp. With over 400 million users in India and support for end-to-end encryption, it is the main vehicle for information disorder operations in the country. The oft-repeated demands for weakening encryption and providing backdoors are not the solution either.
Information disorder itself is not new. Rumours, propaganda, and lies are as old as humanity, and surveillance will not stop them. What social media platforms do is dramatically increase the velocity at which information flows, thereby amplifying the impact of information disorder. Treating this solely as a problem for platforms to solve is equivalent to addressing a demand-side problem through exclusively supply-side measures. Until individuals start viewing new information with a healthy dose of skepticism, and media organisations stop being incentivised to amplify information disorder, there is little hope of addressing this issue in the short to medium term.
(Prateek Waghre is a research analyst at The Takshashila Institution)
Fact-checking alone won't be enough in fight against fake news
Google has recently announced a $1 million grant to help fight misinformation in India. This could not have come at a better time. Misinformation is a reality and a by-product of the information age, in India and globally. It could be Kiran Bedi on Twitter claiming that the sun chants Om, or WhatsApp forwards saying that Indira Gandhi entered JNU with force and made the leader of the students’ union, Sitaram Yechury, apologise and resign. As someone who was subjected to both these pieces of misinformation, I admit I ended up believing both of them at first, without a second thought.

While both of those stories are relatively harmless, misinformation does have an unfortunate history of causing fatalities. For instance, in Tamil Nadu, a mob mistook a 65-year-old woman for a child trafficker. When they saw her handing out chocolates to children, they put two and two together and proceeded to lynch her. Because of instances like these, and because misinformation has the power to shape the narrative, there is an urgent need to combat it.

Countries have already begun to take notice and devise measures. For instance, at a time when ISIS was a greater force and Russia was emerging as a misinformation threat, the US acknowledged that it was engaged in a war against misinformation. To that end, the Obama administration appointed Richard Stengel, former editor of TIME magazine, as the undersecretary of Public Diplomacy in the State Department to deal with the threat. Stengel later wrote a book called Information Wars, in which he acknowledged the limitations of the state in countering misinformation through fact-checking.

When we try to tackle misinformation, we reason through it based on fundamentally incorrect assumptions. Typically, when we think of misinformation, we picture it as a pollutant that hits a population and spreads.
Here we imagine that the population misinformation affects is largely passive and homogeneous. This theory does not take into account how people interact with the information they receive, or how their contexts shape it. It is a simple theory of communication and does not appreciate the complexities within which the world operates. Amber Sinha elaborates on this in his book, The Networked Public.

Paul Lazarsfeld and Joseph Klapper debunked this theory of a passive population in the 1950s. Their argument was that contexts matter. Mass communication and information combined do have the potential to reinforce beliefs, but that reinforcement largely depends on perception, selective exposure, and the retention of information. Lazarsfeld and Klapper’s work is a more sobering look at how misinformation spreads. Most importantly, it tells us why fact-checking doesn’t work.

People are not always passive consumers of information. Multiple factors significantly affect how information is consumed, such as perception, selective exposure, and confirmation bias. Two people can interpret the same piece of information differently. This is why we see that the media does not change beliefs and opinions but, instead, almost always ends up reinforcing them. So just because people are exposed to facts does not mean the problem is fixed.

I tried to test this myself. To the person who had sent me the story about Indira Gandhi making Sitaram Yechury apologise and resign, I forwarded a link and a screenshot debunking the forward. To my complete lack of surprise, they did not respond. Similarly, when Kiran Bedi was told that NASA had not confirmed that the Sun sounded like Om, she responded by tweeting, “We may agree or not agree. Both choices are 🙏”. That makes sense. Remember the last time someone fact-checked you, or blurted out a statement that went against your worldview. No one likes cognitive dissonance.
When our beliefs are questioned, we feel uneasy; our brain tries to reconcile the conflicting ideas to make sense of the world again. It is no fun having your belief system shaken.

This brings us back to square one. Misinformation is bad, and it has the potential to conjure divisive narratives and kill people. If fact-checking does not work, how do we counter it? I do not know the answer, but I would argue that it lies in patience and reason. We often think that leading with facts directly wins us an argument. In recent times, I have been guilty of that more often than I would like. But doing that just leads to cognitive dissonance, a reconciliation of facts and beliefs, and a regression to older values. We need to fundamentally rethink how we tackle misinformation.

This is why Google’s grant comes at an opportune time. We are yet to see how it will contribute to combating misinformation. While fact-checking is good and should continue, it is not nearly enough to win the information wars.
The scientific argument for marrying outside your caste
Bengaluru: As India becomes more globalised, there are intense deliberations in orthodox families on the merits of getting married within their own community.
The obvious go-to-market strategy is to ask for family recommendations and visit websites tailored for the community. This is typically followed by patrika and gotra matching, and some family meetings.
But as our scientific understanding of diseases and other heritable attributes increases, we have to question whether continued insistence on community-based marriages is relevant. Endogamy is the practice of marrying within the same community, and genetic diseases arising out of a limited gene pool are a major consequence of it. There is a growing need to reflect on these practices and determine what’s the best way to choose a life partner. (Read more)
Coronavirus outbreak, N95 masks, traditional medicine and other burning questions
The coronavirus outbreak that originated in China has made its way across the globe, with nearly 12,000 confirmed cases. There are three confirmed cases in India as of 5 February 2020. There is considerable panic as countries race to contain its spread through screening and quarantine measures. Companies and universities have swung into action, trying to figure out better diagnostics and possible treatments for the new virus. The ambiguity of the virus’s origin, along with the fear created by its rapid spread and the lack of a cure, has incited a deluge of misinformation. So here are four questions about this outbreak that you really need answered.
How deadly is the viral outbreak?
The virus is the latest member of the coronavirus family to jump from animals to humans. The scientific community has already determined the genetic sequence of the virus. We now understand that every infected person can infect up to four others. The virus spreads through the air, enveloped in the tiny droplets released when a person sneezes, coughs or talks. Spitting in public places or coughing without covering one’s mouth are basic hygiene failures that need to be avoided. (Read more)
We need to revise our approach to anonymised data
Data is a complex, dynamic subject. We often like to sort it into large buckets for classification. The Personal Data Protection Bill does this by creating five broad categories: personal data, sensitive personal data, critical personal data, non-personal data, and anonymised data. While it is nice to have classifications that help us make sense of how data operates, it is important to remember that the real world does not work this way.
For instance, think about surnames. If you had a list of Indian surnames in a dataset, they alone would not be enough to identify individuals, so you would put that dataset under the ambit of personal data. But since this is India, and context matters, surnames can tell you a lot more about a person, such as their caste. As a result, surnames alone might not identify individuals, but they can identify whole communities. That makes surnames more sensitive than ordinary personal data, so you could make a case for including them in the sensitive personal data category.
And that is the larger point here, data is dynamic, as a result of how it can be combined or used alone in varying contexts. As a result, it is not always easy to pin it down to broad buckets of categories.
This is something that is often not appreciated enough in policy-making, especially in the case of anonymised or non-personal data. Before I go on, let me explain the difference between the two, as there is a tendency to use them interchangeably.
Anonymised data refers to a dataset from which the immediate identifiers (such as names or phone numbers) have been stripped. Non-personal data, on the other hand, is a broader, negative term: anything that is not personal data can technically come under this umbrella, from traffic signal data to a company's growth projections for the next decade.
Not only is there a tendency to use the terms interchangeably, but there is also a false underlying belief that data, once anonymised, cannot be deanonymised. The assumption is false because data is essentially like puzzle pieces: combining enough anonymised data can lead to deanonymisation and the identification of individuals, or even whole communities. For instance, if a malicious hacker has access to a history of your location through Google Maps and can combine it with a history of your payments from your bank account (or Google Pay), they do not need your name to identify you.
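The location-plus-payments scenario above is an instance of what researchers call a linkage attack. Here is a minimal sketch of the idea, with entirely invented records and field names chosen for illustration: an "anonymised" payments dataset is joined to a named location history on shared quasi-identifiers (place and time), and the names fall out.

```python
# Hypothetical illustration of a linkage attack: re-identifying people in an
# anonymised dataset by joining it to an auxiliary, named dataset on shared
# quasi-identifiers. All records below are invented for this sketch.

# "Anonymised" payments data: names stripped, but place and hour retained.
anonymised_payments = [
    {"user_id": "u1", "location": "Indiranagar", "hour": 9},
    {"user_id": "u2", "location": "Koramangala", "hour": 13},
]

# Auxiliary location history (say, leaked or scraped) that does carry names.
location_history = [
    {"name": "Asha", "location": "Indiranagar", "hour": 9},
    {"name": "Ravi", "location": "Koramangala", "hour": 13},
]

def reidentify(anonymised, auxiliary):
    """Map anonymised user IDs to names by matching on (location, hour)."""
    matches = {}
    for record in anonymised:
        for aux in auxiliary:
            if (record["location"], record["hour"]) == (aux["location"], aux["hour"]):
                matches[record["user_id"]] = aux["name"]
    return matches

print(reidentify(anonymised_payments, location_history))
# {'u1': 'Asha', 'u2': 'Ravi'}
```

The point of the sketch is that no names were present in the anonymised dataset; the re-identification comes entirely from combining it with a second dataset, which is exactly why stripping identifiers alone is not a durable protection.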
In the Indian policy-making context, there does not seem to be a realisation that anonymisation can be reversed once you have enough data. The recently introduced Personal Data Protection Bill seems to rest on this flawed assumption.
Through Section 91, it allows “the central government to direct any data fiduciary or data processor to provide any personal data anonymised or other non-personal data to enable better targeting of delivery of services or formulation of evidence-based policies by the Central government”.
There are two major concerns here. First, Section 91 gives the Government the power to gather and process non-personal data, and multiple other sections ensure that this power is largely unchecked. For instance, Section 35 gives the Government the power to exempt itself from the constraints of the bill, and Section 42 ensures that, instead of being independent, the Data Protection Authority is constituted by members selected by the Government. Such unchecked power to collect and process data is problematic, especially since it could give the Government the ability to use this data to identify minorities.
Second, it simply does not make sense to address non-personal data under a personal data protection bill. Even before this version of the bill came out, there had been multiple calls to appoint a separate committee to make recommendations in this space. It would have been ideal to have a separate bill for non-personal data. Because the subject is so vast, it does not make sense for it to be governed by a few lines in Section 91 for the foreseeable future.
The bottom line is that anonymised data and non-personal data can be used to identify people, and a government with unchecked powers to collect and process such data could cause severely negative consequences. It would be better, instead, to rethink the approach to non-personal and anonymised data and set up a separate committee and regulation for this space.
This article was first published in Deccan Chronicle.
(The writer is a technology policy analyst at the Takshashila Institution. Views are personal)
Wuhan and the Need for Improved Global Biosecurity
The Wuhan coronavirus, or nCoV-2019, is likely to become a pandemic in the coming weeks, having already infected at least 17,000 people and killed some 400. The World Health Organisation has belatedly declared a public health emergency, while at least 45 million Chinese citizens remain under lockdown. Despite botching the initial response in Wuhan, authorities in China have since been quick to share information on the outbreak and have even invited overseas experts to help. A draft sequence of the genome has also been published online, and scientists from across the world have shared their analyses. Despite wild speculation about the origins of the Wuhan virus, there is absolutely no evidence that it is anything other than a naturally mutated pathogen; indeed, it would make no sense for a state to produce a bioweapon with both high communicability and low lethality.

However, future threats to global health will come not only from natural viruses like nCoV-2019 but also from man-made pathogens. Mechanisms that aid early detection and encourage transparency need to be institutionalised quickly, as a combination of breakthrough technologies and human malice raises the threat from bioweapons. Major states like China and India are well positioned to champion this institutionalisation, given their high vulnerability to bioweapon attacks and their shared desire to shape global institutions. (Read more)