Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies & Economic Policy.

Govt needs to be wary of facial recognition misuse

India is creating a national facial recognition system. If you live in India, you should be concerned about what this could lead to. It is easy to draw parallels with 1984 and say that we are moving towards Big Brother at pace, and perhaps we are. But a statement like that, for better or worse, would accentuate the dystopia and may not be fair to the rationale behind the move. Instead, let us sidestep conversations about the resistance, doublethink, and thoughtcrime, and look at why the government wants to do this and the possible risks of a national facial recognition system.

WHY DOES THE GOVERNMENT WANT THIS?

Let us first look at it from the government’s side of the aisle. A national facial recognition database has plenty of potential upsides. Rather than Big Brother, the best-case scenario is that the Indian government is looking at better security, safety, and crime prevention. It would aid law enforcement. In fact, the request for proposal by the National Crime Records Bureau (NCRB) says as much: ‘It (the national facial recognition system) is an effort in the direction of modernizing the police force, information gathering, criminal identification, verification and its dissemination among various police organizations and units across the country’.

Take it one step further: later down the line, the same database could also be used to achieve gains in efficiency and productivity. For example, schools could take attendance using FaceID-like software, or train tickets could be checked more efficiently (discounting the occasional case of plastic surgery that alters your appearance significantly).

POTENTIAL FOR MISUSE

The underlying assumption behind this facial recognition system is that people implicitly trust the government with their faces, which is wrong. Not least because even if you trust this government, you may not trust the one that comes after it. This is especially true when you consider the power that facial recognition databases give administrations.

For instance, China has successfully used AI and facial recognition to profile and suppress minorities. Who is to guarantee that the current or a future government will not use this technology to keep out or suppress minorities domestically? The current government has already taken measures to ramp up mass surveillance. In December last year, the Ministry of Home Affairs issued a notification authorizing 10 agencies to intercept calls and data on any computer.

WHERE IS THE CONSENT?

Apart from the fact that people cannot trust all governments across time with data of their faces, there is also the hugely important issue of consent and the absence of a legal basis. Facial data is personal and sensitive. Not giving people the choice to opt out is objectively wrong.

Consider the fact that once such a database exists, it will be shared with state police forces across the country; the proposal excerpt quoted above says as much. There is every chance that we are looking at increased discriminatory profiling, with AI algorithms reproducing existing biases.

Why should the people not have a say in whether they want their facial data to be a part of this system, let alone whether such a system should exist in the first place?

Moreover, because of how personal facial data is, even law enforcement agencies should have to go through some form of legal checks and safeguards to establish why they want access to the data and whether their claim is legitimate.

DATA BREACHES WOULD HAVE WORSE CONSEQUENCES

Policy, in technology and elsewhere, is often viewed through the lens of which outcomes are intended and anticipated. Data breaches are anticipated and unintended. Surely the government does not plan to share or sell personal and sensitive data for revenue. However, going by past trends with Aadhaar and the performance of State Resident Data Hubs, leaks and breaches are to be expected. Even if you trust the government not to misuse your facial data, you should not be comfortable trusting third parties who went to the trouble of stealing your information from a government database.

Once the data is leaked and being used for nefarious purposes, what would remedial measures even look like? And how would you ensure that the data is not shared or misused again? It is a can of worms that, once opened, cannot be closed.

Regardless of which side of the aisle you stand on, you are likely to agree that facial data is personal and sensitive. The technology itself is extremely powerful and can thus be misused in the wrong hands. If the government builds this system today, without consent or genuine public consultation, it would all but ensure that it or future administrations misuse it for discriminatory profiling or for suppressing minorities. So if you do live in India today, you should be very concerned about what a national facial recognition system can lead to.

This article was first published in The Deccan Chronicle. Views are personal.

The writer is a Policy Analyst at The Takshashila Institution.

Indo-Pacific Studies, Strategic Studies | Manoj Kewalramani

China’s big plan for AI domination is dazzling the world, but it has dangers built in. Here’s what India needs to watch out for.

China has been one of the early movers in the AI space, and evaluating its approach to AI development can help identify important lessons and pitfalls that Indian policy makers and entrepreneurs must keep in mind.

In June, Niti Aayog published a discussion paper arguing that India has a significant stake in the artificial intelligence (AI) revolution and therefore needs to evolve a national AI strategy. The document examined policies and strategies issued by a number of countries that could inform the Indian approach.
In July 2017, China’s State Council published its AI plan, outlining the goal of becoming the world’s primary AI innovation centre by 2030. This is a comprehensive vision document, unlike “strategies” or “policies” put out by other key global players.
Indo-Pacific Studies | Manoj Kewalramani

Breaking down China’s AI ambitions

The Social Credit System is about much more than surveillance and loyalty, as popularly understood. Nudging people to adopt desirable behaviour and enhancing social control are part of the story. But there are larger drivers of this policy. It is fundamentally linked to the Chinese economy and its transformation into a more market-driven one.

China unveiled a plan to develop the country into the world’s primary innovation centre for artificial intelligence in 2017. It identified AI as a strategic industry, crucial for enhancing economic development, national security, and governance. The Chinese government’s command innovation approach towards AI development is crafting a political economy that tolerates sub-optimal and even wasteful outcomes in the quest for expanding the scale of the industry. Consequently, the industry is likely to be plagued by concerns about overinvestment, overcapacity, quality of products, and global competitiveness. In addition, increasing friction over trade with other states and President Xi Jinping’s turn towards techno-nationalism, along with tightening political control, could further undermine China’s AI industry. Before we dive into the challenges, here’s some background.
