Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.

Personal Data Protection Bill has its flaws

Data Protection Authority can potentially deal with brokers and the negative externality

Indian tech policy is shifting from formative to decisive. Arguably the biggest increment in this shift comes this week, as the Personal Data Protection Bill will (hopefully) be debated and passed by Parliament. The bill itself has gone through public (and private) consultation. But it is still anyone's guess what the final version will look like.

Based on the publicly available draft, there is a lot right with the bill. The definitions of different kinds of data are clear, and there is a lot of focus on consent. However, there is not enough focus on regulating data brokers. And that can be a problem. Data brokers are intermediaries who aggregate information from a range of sources. They clean, process, and/or sell the data they hold. They generally source this data from what is publicly available on the internet or from companies that collect it first-hand.

Because the bill does not explicitly discuss brokers, problems lie ahead. Broadly, you could argue that brokers come under either the fiduciary or the processor category. But imagine a case where brokers in India sell lists of people who have been convicted of rape and the list ends up becoming public information.

Similarly, think about cases where databases of shops selling beef, of alcoholics, or of people with erectile dysfunction are released into the wild. The latter two are instances the US is somewhat familiar with. A data broker can ask its clients not to re-sell the data, or expect certain standards of security to be maintained. But there is no way to logistically ensure that the client is going to adhere to this in a responsible manner. The draft bill talks about how to deal with breaches and who should be notified. But breaches are, by definition, unauthorised. A data broker’s whole business model is selling or processing data. All of which is legal. So, how should the Indian government go about keeping data brokers accountable?

Some would argue that the answer may lie in data localisation. But localisation will only ensure that data is stored/processed domestically. Even if the broker is located domestically, it doesn’t matter unless there is a provision in law mandating accountability.

The issue around brokers is also unlikely to be addressed in the final version of the bill. Even though it is important and urgent, it does not take precedence over more fundamental issues. What is likely to happen instead is that data brokers and their activities will fall under the mandate of the Data Protection Authority (DPA) due to be formed after the bill is passed.

Once the DPA is formed, there are a few ways in which it can potentially deal with brokers and the negative externality their role brings.

One option could be to hold data brokers accountable once a breach has occurred and a broker has been identified as culpable. The problem here is that data moves fast. By the time there is a punitive measure in response to a breach, the damage may have already been done. In addition, such a measure would also encourage brokers to hide traces of the breaches that lead to them.

Another alternative could be to ask every data broker to register themselves. But that would incentivise more data brokers to move out of the country while maintaining operations in India.

Rohan is a technology policy analyst at The Takshashila Institution.

This article was first published in Deccan Chronicle.

Read More
Strategic Studies

How India can build its own SpaceX

While the Indian Space Research Organisation is doing a commendable job, the participation of private space companies from India at the global stage is still very limited. India must adopt an enabling policy framework and incorporate independent and fair institutional design mechanisms to promote NewSpace startups in the country. The establishment of an independent space regulatory authority, a disputes-settlement appellate tribunal, and a commercial entity to carry out operations built on legacy ISRO technology are essential. For more, please read here.

Read More

A small step for data protection, big leap awaited

It is an exciting time to be in the Indian tech policy space right now. The government has listed the Personal Data Protection Bill in Parliament for the winter session. The Union Cabinet has approved the Bill, and it is likely to be introduced for discussion before the ongoing winter session of Parliament ends on December 13.

Going forward, this Bill will update the currently non-existent standards for privacy and consent. The law will (as stated in the draft Bill prepared by a high-level committee headed by former Supreme Court judge B N Srikrishna) also set up a data protection authority. As these developments occur, and India begins to set its own standards in the space, it is important to keep in mind that this milestone is the beginning of stronger data protection, and not the end.

One of the most important aspects of the Bill is the setting up of the data protection authority (DPA). While the draft Bill sets up broad principles for privacy, a huge chunk of the work has been left for the DPA to carry forward. There are big-ticket items that need to be resolved while keeping in mind the larger vision for data protection in India. For instance, the authority will need to establish and enforce the conditions under which personal data can be collected, accessed, and processed without consent. The DPA will need to be the policy formulator as well as the enforcer. Given the pace of progress in technology, the DPA will also need to be proactive in its approach rather than reactive. All of this means that the authority is always going to be strapped for capacity and will need appointees whose values align with the law’s larger vision. It is a thankless task to manage trade-offs between privacy and innovation in a country like India. That is what the Bill is formally setting in motion by establishing the DPA.

Momentous as the Bill’s passage will be, it is crucial to note that this will not automatically mean that personal data is safeguarded going forward. There is potentially a 12-month period between the date it is signed off by the President and when it is finally notified by the Central Government. This can be followed by a three-month period to establish the Data Protection Authority and another nine to fifteen months for all provisions to come into effect. Cumulatively, this could mean that it may be more than two years after it receives Presidential assent before there is a fully functional data protection regime in place. The process could conclude earlier, but given the complexity of the tasks at hand, it is not unreasonable to expect that most of the allowed timelines will have to be utilised.

As with any policy, the outcomes will depend on how effectively it can be implemented. Much has already been written about the drawbacks of a consent-based model resulting in consent fatigue. The Bill calls for privacy by design, but ensuring accountability will be difficult since most design decisions are opaque. A recent study on violations of the EU’s General Data Protection Regulation (GDPR) and ePrivacy Directive revealed that 54 per cent of websites tested were non-compliant. Also, considering the number of data fiduciaries (not limited to the online world) one can interact with on a daily basis, a person may never find out if their personal data has been misused, or which entity is responsible. The Bill proposes mechanisms for addressing grievances. It also requires entities that handle large volumes of user data to undergo audits and assessments. How responsive and transparent these processes turn out to be will be indicators of how efficient the policy is. There have only been limited studies on privacy in the Indian context, but most existing literature points to the collectivist nature of society to explain the low levels of privacy consciousness. While awareness is growing, if people display a high level of apathy towards ensuring the protection of their personal data, it may push data fiduciaries down the path of non-compliance.

The government should table the Bill at the earliest to allow sufficient time for discussing the finer aspects of the Bill on the floor of the house. The number of questions posed to MEITY on the topic of privacy and data protection indicates a high degree of interest in Parliament on the subject. The government should also endeavour to remain as transparent as possible when framing the remaining provisions. Simultaneously, society should not slide into complacency after the passage of the Bill. Instead, it must continue to stay engaged to ensure that we have a strong data protection regime that succeeds in safeguarding Indians’ fundamental right to privacy.

(Rohan Seth and Prateek Waghre are technology policy analysts at The Takshashila Institution)

This article was originally published in Deccan Herald.

Read More
Economic Policy Anupam Manur

Bengaluru needs more high-tech companies, not fewer 

The Karnataka government is set to release a new industrial policy next month with the goal of encouraging investment in tier-II cities. As it has been in the past, this goal is likely to be framed in zero-sum terms, i.e., achieved by pushing IT companies to move away from Bengaluru and invest in other cities instead. We will limit this article’s focus to what such a policy direction would mean for high-tech sectors such as biotech, aerospace, and IT. At first glance, this push towards creating an alternative centre of gravity for the high-tech industry seems to be an intuitive answer for achieving balanced regional growth. And yet, this view is wrong because it doesn’t square with the empirical experience of high-tech clusters elsewhere in the world. Read more at: https://www.deccanherald.com/opinion/bengaluru-needs-more-high-tech-companies-not-fewer-780314.html

Read More

4 Lessons for India From China’s October 2019 Military Parade

With the People’s Republic of China (PRC) marking its 70th founding anniversary on October 1, the grand military parade at Tiananmen Square was the highlight of the celebrations. It showcased China’s newer arms, ammunition, and technology. Over 15,000 personnel, 160 aircraft, and 580 pieces of military equipment participated in the military parade, including sophisticated weaponry such as hypersonic missiles, intercontinental-range land- and submarine-launched ballistic missiles, stealth combat and high-speed reconnaissance drones, and fifth-generation fighter jets.

China intended to address both domestic and international audiences through this parade. At home, the leadership hoped that the parade would stir up feelings of nationalism. Internationally, the display of force was intended as a warning to the United States and China’s neighbors. Further, the parade reflected the People’s Liberation Army’s (PLA) progress toward becoming a “world-class military” by 2050.

Although policymakers and military leaders across the world were keeping a close eye on China’s military display, perhaps those in India should have been paying the most attention. The parade was not directed at India, but New Delhi can learn a lot from China’s use of military modernization and its ongoing defense reforms. Here are four key lessons New Delhi can take from China’s 2019 military parade. Read more...

Read More

Should India be bolder with China?

India’s response to China’s diplomatic offensive of recent years has been inconsistent and sporadic. Using diplomatic tools in an institutionalised way to highlight China’s vulnerabilities is something India refrains from. This, despite China’s increased diplomatic activism against India. For instance, China raised the dilution of Article 370 in the United Nations Security Council on behalf of Pakistan. It has repeatedly blocked India’s entry into the 48-member Nuclear Suppliers Group. It also held up the blacklisting of Masood Azhar as a UN-designated global terrorist for over 10 years.

India should not refrain from developing diplomatic leverages and using them against China whenever required. It should issue statements on China’s “re-education camps” in Xinjiang, its activities in the South China Sea that impact India, and the Hong Kong protests. It could also occasionally use Tibet as an irritant, as China uses Kashmir. All of this presumes that India has improved its border infrastructure enough to at least maintain the status quo in case of an escalation of tensions. Read more...

Read More
Strategic Studies Pranay Kotasthane

Subcontinent is not ‘India’s own backyard’. Neighbours will continue to pursue foreign policies independently

The Print’s daily roundtable TalkPoint posed a question connected to the new Sri Lankan President Gotabaya Rajapaksa’s India visit: With strong leaders like Rajapaksa, Hasina, and Oli, is India losing dominance in South Asia?

My response: Strong leaders or not, these sovereign South Asian states will continue to pursue their independent foreign policies based on their strategic priorities. The subcontinent is not ‘India’s own backyard’. There’s no need to judge every political change in these countries based on how it will affect India’s ‘dominance’ in South Asia.

Structurally, it is natural for these states to play India off against the other powerful economy, China. In fact, smaller states across the world tend to balance their relationships with bigger powers. As long as these states are mindful of India’s security concerns and economic well-being, India shouldn’t be overly concerned with China’s presence. Given China’s overbearing foreign policy approach, it is likely to establish itself as a primary object of hate among India’s South Asian neighbours soon. India must instead do enough to be the second-best option for every smaller nation.

From the perspective of these states, both India and China have their comparative advantages. China has more economic wherewithal, whereas geographical proximity makes India irreplaceable for them. Therefore, the emergence of strong leaders in Sri Lanka, Bangladesh, and Nepal should not be seen as a zero-sum game in India.

Read the entire discussion on the ThePrint.in website here.

Read More

Joining a New Social Media Platform Does Not Make Sense

Mastodon is what’s happening in India right now. Indian Twitter users are moving to the platform and have taken to using hashtags such as #CasteistTwitter and #cancelallBlueTicksinIndia. A key reason for this is that Twitter has been, to put it mildly, less than perfect in moderating content in India. There is the incident with lawyer Sanjay Hegde that caused this to blow up, along with accusations that Twitter had been blocking hundreds of thousands of tweets in India since 2017, with a focus on accounts from Kashmir.

Enter Mastodon. The platform, developed by Eugen Rochko, is open-sourced, so no one entity gets to decide what content belongs on the communities there. Also, the data on Mastodon is not owned by one single corporation, so you know that your behaviour on there is not being quantified and sold to people who would use it to profile and target you. Plus, each server (community) has a relatively small size with a separate admin, moderator, and, by extension, code of conduct. All of this sounds wonderful. The character limit is also 500 as opposed to 280 (if that is the sort of thing you consider to be an advantage).

Mastodon is moving the needle forward by a significant increment when it comes to social networking. The idea is for us to move towards a future where user data isn’t monetised and people can host their own servers instead. As a tech enthusiast, that sounds wonderful, and I honestly wish that this is what Twitter had been.

Keeping all of that in mind, I don’t think I will be joining Mastodon. Hear me out. A large part of it is not because Mastodon does not have its own problems; let’s set those aside for now and move on to the attention economy. Much like how goods and services compete for a share of your wallet, social media has for the longest time been competing for attention and mind-space. Because the more time you spend on the platform, the more ads you will see and the more money they will make. No wonder it is so hard to quit Instagram and Facebook.

Joining a new social media platform today is an investment that does not make sense unless the other one shuts down. There is a high chance of people initially quitting Twitter, only to come back to it while being addicted to another platform. The more platforms you are on, the thinner your attention is stretched. That is objectively bad for anyone who thinks they spend a lot of time on their phone. If you’re lucky enough to be one of the few people who do not suffer from that and are indifferent to the dopamine that notifications induce in your brain, this one doesn’t apply to you. Then there are the network effect and inertia. I, for one, am for moving the needle forward little by little. But here, there is little to gain right now, with more to lose.

Network effects are when products (in this case, platforms) gain value as more people use them. So, it makes sense for you to use WhatsApp and not Signal, as all your friends are on WhatsApp. Similarly, it makes sense for you to be on Twitter as your favourite celebs and news outlets are on there. Mastodon does not have the network effect advantage, so most people who do not specifically have their network on Mastodon do not get a lot of value out of using it.

In addition, there is inertia. Remember when we set aside Mastodon’s problems earlier? Here is where they fit in. Mastodon is not as intuitive as Twitter or Facebook. That makes it a deal-breaker for people of certain ages, and it also happens to be a significant con for people who don’t want to spend a non-trivial chunk of their time learning about servers, instances, toots, and so on. There also isn’t an official Mastodon app. There are, however, a bunch of client apps that can be used instead; the most popular among them is Tusky, but reviews will tell you that it is fairly buggy, and that is to be expected.

There is so much right with Mastodon. It is a great working example of the democratisation of social media. It also happens to exist in an age where it would be near impossible to get funding for or to start a new social media platform. The problem is that people who don’t explicitly feel the need or see the value in joining Mastodon are unlikely to split their attention further by joining a new platform. The switching costs, network effects, and inertia are simply too high.

Rohan is a policy analyst at The Takshashila Institution and the co-author of Data Localization in a Globalized World: An Indian Perspective.

This article was first published in Deccan Chronicle.

Read More
Advanced Biology Shambhavi Naik

IndiGen project — how mapping of genomes could transform India’s healthcare

The Council of Scientific and Industrial Research (CSIR) has launched an ambitious project, IndiGen, to sequence the whole genomes of a diverse ethnic Indian population in order to develop public health technology applications.

The CSIR last month announced the sequencing of 1,008 Indian genomes as part of the project. It aims to complete the sequencing of at least 10,000 Indian genomes over the next three years.

A genome is an organism’s complete set of DNA, including all of its genes and chromosomes. The genome contains all the information needed to describe the organism completely — acting essentially as a blueprint. The genome can be read through a process called sequencing. (Read more)

Read More

How to respond to an 'intelligent' PLA

Advancements in Artificial Intelligence (AI) technologies over the next decade will have a profound impact on the nature of warfare. Increasing use of precision weapons, training simulations and unmanned vehicles are merely the tip of the iceberg. AI technologies, going forward, will not only have a direct battlefield impact in terms of weapons and equipment but will also impact planning, logistics and decision-making, requiring new ethical and doctrinal thinking. From an Indian perspective, China’s strategic focus on leveraging AI has serious national security implications. Read the full article on the Deccan Herald website.

Read More
High-Tech Geopolitics Prateek Waghre

Lessons from Facebook and Twitter's Political Ads Policies

Over the course of the last few weeks, we have seen Facebook and Twitter take opposing views on the issue of political ads. While the issue itself does not have an immediate implication for Indian politics, the decisions of the two companies, their actions throughout the episode, and the reactions to them are emblematic of the larger set of problems surrounding their policies. They serve as a reminder that we should not expect these platforms to become neutral spaces for public discourse through self-regulation alone.

In late October, Facebook infamously announced that it would not fact-check political ads. Shortly after that, Twitter’s CEO Jack Dorsey announced via Twitter that the company would not allow any political ads after November 22. And though Twitter is not alone in this approach, its role in public discourse differs from that of other companies like LinkedIn and TikTok that already have similar policies. At face value, it may seem that one of these approaches is far better than the other, but a deeper look brings forth the challenges both will find hard to overcome. Google, meanwhile, announced a new political ads policy on November 20. Its policy aims to limit micro-targeting across search, display, and YouTube ads. Crucially, it reiterated that no advertisers (political or otherwise) are allowed to make misleading claims.

Potential for misuse

To demonstrate the drawbacks of Facebook’s policy, US lawmaker Elizabeth Warren’s Presidential campaign deliberately published an ad with a false claim about Facebook CEO Mark Zuckerberg. In another instance, Adriel Hampton, an activist, signed up as a candidate for California’s 2022 gubernatorial election so that he could publish ads with misleading claims (he was ultimately not allowed to do so).

While Twitter’s policy disallows ads from candidates, parties and political groups/ political action committees (PACs), Facebook claims it will still fact-check ads from PACs. For malicious actors determined to spread misinformation/disinformation through ads, these distinctions will not be much of an impediment. They will find workarounds.

While most of the conversation has been US-centric, both companies have a presence in over 100 countries. A significant amount of local context and human effort is required to consistently enforce policies across all of them. The ongoing trend of substituting human oversight with machine learning could limit the acquisition of local knowledge. For example, does Facebook's policy of not naming whistle-blowers work in every country it has a presence in?

Notably, both companies stressed how little an impact political ads had on their respective bottom lines. Considering the skewed revenues per user for North America and Europe compared with Asia-Pacific and the rest of the world, the financial incentive to enforce such resource-intensive policies equitably is limited. Both companies also have a history of inconsistent responses to moral panics, resulting in uneven implementation of their policies.

A self-imposed ban on political ads by Facebook and Twitter in Washington to avoid dealing with complex campaign finance rules has resulted in uneven enforcement and a complicated set of rules that have proven advantageous to incumbents. In response to criticism that these rules will adversely impact civil society and advocacy groups, Twitter initially said ‘cause-based ads’ won’t be banned and ultimately settled on limiting them by preventing micro-targeting. Ultimately, both approaches are likely to favour incumbents or those with deeper pockets.

Fixing Accountability

The real problems for Social Media networks go far beyond micro-targeted political advertising, and the shortcomings across capacity, misuse, and consequences apply there as well. The flow of misinformation/disinformation is rampant. A study by the Poynter Institute highlighted that misinformation/disinformation outperformed fact-checks by several orders of magnitude. Research by the Oxford Internet Institute and Freedom House has revealed the use of disinformation campaigns online and the co-option of social media to power the shift towards illiberalism by various governments. Conflict and toxicity now seem to be features meant to drive engagement. Rules are implemented arbitrarily, and suspension policies are not consistently enforced. The increased use of machine learning algorithms (which can be gamed by mass reporting) in content moderation is coinciding with a reduction in human oversight.

Social Media networks are classified as intermediaries, which grants them safe harbour, implying that they cannot be held accountable for content posted on them by users. ‘Intermediary’ is a very broad term covering everything from ISPs and cloud services to end-user-facing websites and applications across various sectors. Stratechery, a website which analyses technology strategy, proposes a framework for content moderation in which both discretion and responsibility are higher the closer a company is to the end-user. Under this framework, platforms like Facebook, Twitter, and YouTube should bear more responsibility and exercise more discretion than ISPs or cloud service providers. The framework does not explicitly call for fixing accountability, however, and accountability cannot be taken for granted.

Unfortunately, self-regulation has not worked in this context, and these platforms’ status as intermediaries may require additional consideration. Presently, India’s proposed revised Intermediary Guidelines already tend towards over-regulation in solving the challenges posed by Social Media companies, adversely impacting many other companies. The real challenge for policy-makers and society in countries like India is to strike a balance between holding large Social Media networks accountable and not creating rules so onerous that they can be weaponised into limiting freedom of speech.

(Prateek Waghre is a Technology-Policy researcher at Takshashila Institution. He focuses on the governance of Big Tech in Democracies)

This article was originally published on 21st November 2019, in Deccan Herald.

Read More

The PLA Insight: Issue no 29

I. The Big Story: PLA in Hong Kong

People’s Liberation Army soldiers were spotted cleaning up Hong Kong’s streets last week. Their presence raised concerns in the special administrative region. Social media feeds showed men in green and black uniforms with Chinese flags on their shoulders, “voluntarily clearing the streets.” Although several thousand PLA soldiers are stationed in Hong Kong’s PLA Garrison, they are rarely seen outside their barracks. The Hong Kong government stated that it had not requested the Garrison’s assistance; the Chinese soldiers’ efforts to clear the roadblocks were “purely a voluntary community activity initiated by themselves.” The clean-up came after one of the most intense weeks of the anti-government protests. Read more...

Read More

We Need Our Own Honest Ads Act

Recent developments in online advertising have been uplifting. Facebook (and by extension, Instagram) has been running a policy that is meant to block predatory ads that target people who are overweight or have skin conditions, pushing unusual and often medically dangerous miracle cures. Google, which makes over $100 billion in online ad revenue, has also released a statement declaring a ban on ads selling treatments that have no established biomedical or scientific basis. Twitter, too, declared that it won’t be accepting ads from state-controlled media entities.

This is not to say that the advertising policies of these companies are perfect, as incidents reported by The Verge and CNBC will tell you. However, things have been improving at a steady pace as far as advertising policies are concerned. A major catalyst for this change was the 2016 US election, which saw the potential of online advertising abused for targeting voters. Since then, there has been bipartisan support in the US for achieving greater transparency in online advertising. This includes disclosing who paid for public ads, how many people saw those ads, and how the purchaser can be contacted.

There are two problems with the support for greater transparency in advertising. Firstly, the bipartisan push never ended up becoming law. Secondly, even if it had become law, its impact would have been limited to the US.

It is an interesting story why we still lack a law that enforces greater transparency in advertising, and much of it revolves around Facebook, with its conclusion set to impact other players in online advertising. The bill, called the Honest Ads Act, was introduced in the Senate in 2017. Had it become law, its success or failure would have given other countries a template to work with to achieve greater transparency in advertising. As of now, that will need to continue without precedent.

Days after the bill was introduced, Facebook announced that it would be updating its Advertising Transparency and Authenticity Efforts. Mark Zuckerberg declared his support for the Honest Ads Act through a separate Facebook post, stating, “Election interference is a problem that’s bigger than any one platform, and that’s why we support the Honest Ads Act”. (Important side note: Twitter also announced its decision to back the Act, but the focus here is on Facebook because of its size, position, and role in the 2016 US election.)

Once Facebook expressed its support for the Act and declared its intent to self-regulate according to the bill, the issue lost momentum. At the time, Zuckerberg’s testimony at Capitol Hill was impending, and the news cycle shifted its attention. Senate Majority Leader Mitch McConnell brought the First Amendment into the argument, saying he was sceptical of proposals (like the Honest Ads Act) that would penalise American citizens trying to use the internet to advertise. At this point, you could make the argument that, in retrospect, Facebook could have supported the Honest Ads Act by not declaring its support.

Regardless, the implications of these events affected players across a wide spectrum. Because there was no legal requirement to do so, other avenues of online ads (read: Twitter, Google) did not need to comply with a set standard that could be used as a yardstick to judge them against. In addition, the problem with the freedom-of-speech argument is that transparency in ads does not directly impact free speech. You could extend the same argument to revoke the laws that mandate transparency in TV and radio ads in the US. So where is the crackdown on transparency in TV and radio?

The Honest Ads Act is relevant as it had the potential to set the tone for how transparent ad regulation should be in other countries. The US is not the most significant user base for these platforms. And as you might expect, having transparency in political ads could be useful for other countries that also hold elections. For example, India has over 270 million Facebook users, a significant percentage of whom participated in the general elections. Understandably, advertising on social media sites such as Facebook was an integral part of most campaign strategies. So, it would help to have a law that helps voters identify who is paying for what political ad, and conversely, which ads might be facts and which might be false propaganda.

Asking online ad companies such as Facebook to regulate themselves will have exactly the effect that it is having now. They will move towards better ad and transparency policies at their own pace, influenced by the prevailing narrative. And for most countries, that is not enough. Having a law in the countries where these platforms operate is more efficient. It is not just the United States that needs its ads to be honest.

The writer is a Research Analyst with the Takshashila Institution, Bengaluru.

This article was first published in Deccan Herald.

Read More

India will be watching the new Rajapaksa regime closely

Sri Lanka’s geo-strategic location will continue to attract foreign powers like China and the United States (US), and the tussle between them is therefore likely to play out in Sri Lanka. In this context, India’s ability to shape Sri Lankan policies will be tested. Moreover, how Gotabaya Rajapaksa engages with China will also be watched. Sri Lanka is probably the most significant state in the evolving Indian Ocean geopolitics, and the political trajectory of this island nation under the new dispensation will determine the course of security competition in the Indian Ocean. Read Sankalp Gurjar's article here.

Read More
High-Tech Geopolitics, Economic Policy Prateek Waghre

Why we must be vigilant about mass facial surveillance

The recent revelations that the NSO Group’s Pegasus was used to target an estimated two dozen Indian lawyers and activists via vulnerabilities in WhatsApp have once again brought the issue of targeted surveillance of citizens into focus. As the saying goes, no good crisis should go to waste. This is an opportunity to raise public awareness about trends in mass surveillance involving Facial Recognition systems and CCTV cameras, trends that impact every citizen irrespective of whether or not they have a digital presence today.

The Panopticon, conceptualised by philosopher Jeremy Bentham, was a prison designed so that prisoners could be observed from a central tower, except they wouldn’t know when they were being watched, forcing them to self-regulate their behaviour. Michel Foucault later extended this idea, arguing that modern states could no longer resort to violent and public forms of discipline and needed a more sophisticated form of control, using observation and surveillance as a deterrent.

Live Facial Recognition combined with an ever expanding constellation of CCTV cameras has the potential to make this even more powerful. Therefore, it suits governments around the world, irrespective of ideology, to expand their mass surveillance programs with stated objectives like national security, identification of missing persons etc. and in the worst cases, continue maximizing these capabilities to enable the establishment of an Orwellian state.

Global trends
China’s use of such systems is well documented. As per a study in the Journal of Democracy, there will be almost 626 million CCTV cameras deployed around the country by the end of 2020. It was widely reported in May that its Facial Recognition database includes nearly all citizens. Facial Recognition systems are used in public spaces for purposes ranging from access to services (hotels/flights/public transport, etc.) to public shaming of individuals for transgressions such as jaywalking, by displaying their faces and identification information on large screens installed at traffic intersections, and even to monitoring whether students are paying attention in class.

The former was highlighted by an almost comedic case in September, where a young woman found that her access to payment gateways, ability to check in to hotels/trains etc. was affected after she underwent plastic surgery. In addition, there is also a fear that Facial Recognition technology is being used to surveil and target minorities in Xinjiang province.

In Russia, Moscow mayor Sergei Sobyanin has claimed that the city had nearly 200,000 surveillance cameras. There have also been reports that the city plans to build AI-based Facial Recognition into this large network with an eye on the growing number of demonstrations against the Putin government.

Even more concerning is the shift by countries with a ‘democratic ethos’ towards deploying and expanding their usage of such systems. Australia was recently in the news for advocating face scans as a condition for accessing adult content. Some schools in the country are also running a trial of the technology to track attendance. France is testing a Facial Recognition-based national ID system. In the UK, the High Court dismissed an application for judicial review of automated facial recognition. The challenge itself was a response to pilot programs run by the police and the installation of such systems by various councils, carried out, as per the petitioners, without the consent of citizens and without a legal basis.

There was also heavy criticism of Facial Recognition being used at football games and music concerts. Its use in personal spaces, too, continues to expand as companies explore potential uses to measure employee productivity or candidate suitability by analysing facial expressions.

There are opposing currents as well. Multiple cities in the US have banned, or are contemplating bans on, the deployment of the technology by law enforcement and government agencies. Sweden’s Data Protection Authority fined a municipality after a school conducted a pilot to track attendance, on the grounds that it violated the EU’s General Data Protection Regulation (GDPR).

Advocacy groups like the Ada Lovelace Institute have called for a moratorium on all use of the technology until society can come to terms with its potential impact. Concerns have been raised, first, on the grounds that the accuracy of such systems is currently low, severely increasing the risk of misidentification when used by law enforcement agencies. Secondly, since the technology will learn from existing databases (e.g. a criminal database), any bias reflected in such a database, such as disproportionate representation of minorities, will creep into the system.

Also, in many cases there is limited information about where and how such systems are being used. Protestors in Hong Kong and, recently, Chile have shown the awareness to counter law enforcement’s use of Facial Recognition by targeting cameras. The means have varied from face-masks and clothing imprinted with multiple faces to pointing numerous lasers at the cameras, and even physically removing visible cameras.

India’s direction
In mid-2019, the National Crime Records Bureau of India put out a tender inviting bids for an Automated Facial Recognition System (AFRS) without any prior public consultation. Meeting minutes of a pre-bid seminar accessed by the Internet Freedom Foundation indicated that there were 80 vendor representatives present. 

Convenience is touted as the main benefit of various pilot programs to use ‘faces’ as boarding cards at airports in New Delhi, Bengaluru and Hyderabad as part of the Civil Aviation Ministry’s Digi Yatra program. Officials have sought to allay privacy concerns by stating that no information is stored. City police in New Delhi and Chennai have run trials in the past. The Hyderabad police had, until recently, routinely updated their Twitter accounts with photos of officers scanning people’s faces with cameras. Many of these posts were deleted after independent researcher Srinivas Kodali repeatedly questioned the legality of such actions.

Many of the aforementioned trials reported accuracy rates for Facial Recognition in the low single digits. The State of Policing in India (2019) report by Lokniti and Common Cause indicated that roughly 50 per cent of police personnel believe that minorities and migrants are ‘very likely’ or ‘somewhat’ naturally prone to committing crimes. These aspects are concerning when considering capability/capacity and the potential for misuse of the technology. False positives resulting from low accuracy rates, combined with potentially biased law enforcement and a lack of transparency, could make it a tool for harassment of citizens.

Schools have attempted to use such systems to track attendance. Gated communities and offices already deploy a large number of CCTV cameras, and a transition to live Facial Recognition is an obvious next step. However, given that trust in tech companies is at a low, and given the existence of Facial Recognition training datasets such as Megaface (a large dataset used to train Facial Recognition algorithms on images uploaded to the internet, some as far back as the mid-2000s, without consent), privacy advocates are concerned.

Opposition and future considerations for society
Necessary and Proportionate, a coalition of civil society organisations and privacy advocates around the world, proposes thirteen principles on the application of human rights to communications surveillance, many of which are applicable here as well. To state some of them: legality, necessary and legitimate aims, proportionality, due process along with judicial and public oversight, prevention of misuse, and a right to appeal. Indeed, most opposition from civil society groups and activists against government use of mass surveillance rests on these principles. When looked at through the lenses of intent (stated or otherwise), capacity, and potential for misuse, these are valid grounds to question mass surveillance by governments.

It is also important for society to ask and seek to answer some of the following questions: Is the state the only entity that can misuse this technology? What kind of norms should society work towards when it comes to private surveillance? Is it likely that the state will act to limit its own power especially if there is a propensity to both accept and conduct indiscriminate surveillance of private spaces, as is the case today? What will be the unseen effects of normalising mass public and private surveillance on future generations and how can they be empowered to make a choice?

This article was first published in Deccan Herald on 11th November, 2019. 

Read More
Strategic Studies Prakash Menon

The nuclear cloud hanging over the human race

India and China are the only nuclear powers which adhere to a No First Use policy, based on the rationale that the only role of nuclear weapons is to deter their own kind. With overwhelming evidence now available regarding nuclear explosions and climate change, it is time that India and China jointly take the lead for a Global No First Use (GNFU) Treaty and retard the dangers that stem from expanding geopolitical tensions between nuclear powers. Continue reading this article here.

Read More

Govt needs to be wary of facial recognition misuse

India is creating a national facial recognition system. If you live in India, you should be concerned about what this could lead to. It is easy to draw parallels with 1984 and say that we are moving towards Big Brother at pace, and perhaps we are. But a statement like that, for better or worse, would accentuate the dystopia and may not be fair to the rationale behind the move. Instead, let us sidestep conversations about the resistance, doublethink, and thoughtcrime, and look at why the government wants to do this and the possible risks of a national facial recognition system.

WHY DOES THE GOVERNMENT WANT THIS?

Let us first look at it from the government’s side of the aisle. A national facial recognition database can have a lot of pros. Instead of looking at this as Big Brother, the best-case scenario is that the Indian government is pursuing better security, safety, and crime prevention, and the system would aid law enforcement. In fact, the request for proposal by the National Crime Records Bureau (NCRB) says as much: ‘It (the national facial recognition system) is an effort in the direction of modernizing the police force, information gathering, criminal identification, verification and its dissemination among various police organizations and units across the country’.

Take it one step further: later down the line, the same database could also be used to achieve gains in efficiency and productivity. For example, schools could take attendance with FaceID-like software, and checking train tickets would be more efficient (discounting the occasional case of plastic surgery that alters your appearance significantly enough).

POTENTIAL FOR MISUSE

The underlying assumption for this facial recognition system is that people implicitly trust the government with their faces, which is wrong. Not least because even if you trust this government, you may not trust the one that comes after it. This is especially true when you consider the power that facial recognition databases provide administrations.

For instance, China has successfully used AI and facial recognition to profile and suppress minorities. Who is to guarantee that the current or a future government will not use this technology to keep out or suppress minorities domestically? The current government has already taken measures to ramp up mass surveillance. In December last year, the Ministry of Home Affairs issued a notification that authorised 10 agencies to intercept calls and data on any computer.

WHERE IS THE CONSENT?

Apart from the fact that people cannot trust all governments across time with data of their faces, there is also the hugely important issue of consent and the absence of a legal basis. Facial data is personal and sensitive. Not giving people the choice to opt out is objectively wrong.

Consider the fact that once such a database exists, it will be shared with state police forces across the country; the proposal excerpt quoted above says as much. There is every chance that we are looking at increased discrimination in profiling, with AI algorithms repeating existing biases.

Why should the people not have a say in whether they want their facial data to be a part of this system, let alone whether such a system should exist in the first place?

Moreover, because of how personal facial data is, even law enforcement agencies should have to go through some form of legal checks and safeguards to clarify why they want access to data and whether their claim is legitimate.

DATA BREACHES WOULD HAVE WORSE CONSEQUENCES

Policy, in technology and elsewhere, is often viewed through what outcomes are intended and anticipated. Data breaches are anticipated and unintended. Surely the government does not plan to share or sell personal and sensitive data for revenue. However, going by past trends with Aadhaar and the performance of the State Resident Data Hubs, leaks and breaches are to be expected. Even if you trust the government not to misuse your facial data, you shouldn’t be comfortable trusting third parties who went through the trouble of stealing your information from a government database.

Once the data is leaked and being used for nefarious purposes, what would remedial measures even look like? And how would you ensure that the data is not shared or misused again? It is a can of worms which, once opened, cannot be closed.

Regardless of where on the aisle you stand, you are likely to agree that facial data is personal and sensitive. The technology itself is extremely powerful and, in the wrong hands, ripe for misuse. If the government builds this system today, without consent or genuine public consultation, it all but ensures that this or a future administration will misuse it for discriminatory profiling or for suppressing minorities. So if you do live in India today, you should be very concerned about what a national facial recognition system can lead to.

This article was first published in The Deccan Chronicle. Views are personal.

The writer is a Policy Analyst at The Takshashila Institution.

Read More
Economic Policy Nitin Pai

Hold government accountable for Delhi air pollution but also punish selfish behaviour

If you are among the millions personally suffering from the acute air pollution in Delhi and many other parts of north India, now is not an appropriate time for a deeper reflection on the underlying causes of this human disaster. This is not to absolve the state and union governments involved. Nor is it to absolve businesses, industries and markets. They too have acted irresponsibly, even when they’ve complied with the law. But in the heat and passion of the public discourse, we forget to also point fingers at ourselves.Read more

Read More