Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.

High-Tech Geopolitics Shrikrishna Upadhyaya

Missing in India’s AI growth plan is private investment

By Shailesh Chitnis

On artificial intelligence (AI), the government appears to be moving at a frenetic pace. This month, plans were announced to make large public datasets available to Indian businesses. The government also wants to embed AI in different parts of India Stack, and fund three centres of excellence for AI, housed within leading academic institutions.

Read More
High-Tech Geopolitics Shrikrishna Upadhyaya

Let’s insist on full disclosure and consent for AI and algorithm use

By Nitin Pai

One of the many recent mysteries is why hundreds of extremely intelligent and rich people think that a moratorium on further development of artificial intelligence is feasible, or that a six-month hiatus is sufficient for us to figure out what to do about it. Technology development is a ‘prisoner’s dilemma’ with millions of competing participants, making it impossible to get everyone to cooperate. Top-tier competitors are more likely to cheat on the moratorium in the expectation that others will do so, which will render such a moratorium useless and, worse, drive the industry underground.

Read More
High-Tech Geopolitics Shrikrishna Upadhyaya

For India’s AI ambitions, the time to act is now

By Shailesh Chitnis

In May 2017, AlphaGo, an Artificial Intelligence (AI) system built by Google’s DeepMind, defeated Ke Jie, China’s leading player of the board game Go. In his book AI Superpowers, Kai-Fu Lee cites this as the seminal moment in China’s AI awakening. Go is considered the hardest board game to master, and its conquest by a computer roused the government into action. Within a few months, Beijing announced plans to dominate AI by 2030.

Read More
High-Tech Geopolitics Shrikrishna Upadhyaya

Technological power in today’s world is much too concentrated

By Nitin Pai

You don’t have to be a Luddite to have serious misgivings about brain implants. There certainly are beneficial uses, but once brain-computer interfaces become commercially available, we can neither predict nor control what they will end up being used for. There is a risk we will rapidly and thoughtlessly end up changing what it means to be human. With only an indirect interface to the human brain, social media networks have profoundly transformed human society. We are still discovering how pervasive information networks influence human cognition, but we already know enough to be concerned about the impact on rational thinking and collective opinion formation.

Read More

Behind Beijing’s proposal to regulate military applications of AI

By Megha Pardhi

China recently submitted a position paper on regulating the military applications of artificial intelligence to the sixth review conference of the United Nations Convention on Certain Conventional Weapons (CCW). The takeaway from this position paper is that countries should debate, discuss, and perhaps eschew the weaponization of AI. By initiating a discussion on regulating military applications of AI, Beijing wants to project itself as a responsible international player.

Read More

Needed: Intelligent act to regulate AI

The 41st General Conference of the United Nations Educational, Scientific and Cultural Organization (UNESCO) concluded on 24 November 2021 with a major step in the global development of norms on the use and regulation of Artificial Intelligence (AI). 193 member states of UNESCO signed and adopted the draft AI Ethics Recommendation, which can be touted as the first globally accepted normative standard-setting instrument in the realm of AI. The voluntary, non-binding commitment is a major point of cooperation between states and leaders in identifying principles of ethics for the regulation of AI systems that find wide application in today’s world.

Read More
High-Tech Geopolitics Guest User

The Race for the Domination of AI Chips

By Arjun Gargeyas

With AI and advanced semiconductor technology integral to Industry 4.0, the impact of AI chips on the global technology landscape will grow over the coming decade. New applications of semiconductors are emerging, and running artificial intelligence (AI) algorithms on high-end chipsets has opened an entirely new market for these devices, also known as AI chips.

Read More

Does the arrival of AI mean the end of data privacy?

In recent years, there has been a great buzz around the development of Artificial Intelligence (AI) and what it might mean for the Indian economy. On the government’s side, Niti Aayog has come up with a national strategy on AI, the Ministry of Commerce has set up an AI task force, and a ‘National Centre of AI’ is also planned. All of these initiatives aim to define where AI can contribute to Indian industry and how best to achieve adoption at scale. But there is a flip side to AI, and it impacts data privacy.

The relation between AI and data privacy is a complex one. Broadly speaking, the growth of AI may spell the end of data privacy if we don’t proactively try to embed privacy by design.
Algorithms in AI learn from big datasets. For example, take a huge dataset, say India’s Aadhaar database. To the human eye and mind, it would be almost impossible to discern any insight from such an enormous database or spreadsheet. To an AI algorithm, however, it serves as fuel: AI learns from big data and identifies patterns in the numbers that may reveal unlikely correlations.
The catch is that the more data an AI programme is fed, the harder it becomes to de-identify people. Because the programme can compare two or more datasets, it may not need your name to identify you. Data containing ‘location stamps’ (information with geographical coordinates and time stamps) could be used to easily track mobility trajectories: where and how people live and work. Supplement this with datasets about your UPI payments, and it might also know where and what you spend your money on.
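To make the linkage risk concrete, here is a minimal, hypothetical sketch in Python: two ‘anonymised’ tables, one with location stamps and one with payment records, joined on place and time. The column names and records are invented purely for illustration; no real Aadhaar or UPI data is involved.

```python
# Hypothetical illustration of re-identification by linking datasets.
# Neither table contains a name, yet joining on (place, time) links a
# device to a payment account. All values below are made up.
import pandas as pd

# Dataset 1: location stamps keyed only by a device ID
locations = pd.DataFrame({
    "device_id": ["d1", "d2", "d3"],
    "lat": [12.97, 28.61, 12.97],
    "lon": [77.59, 77.21, 77.59],
    "timestamp": ["2023-04-01 09:05", "2023-04-01 09:05", "2023-04-01 18:40"],
})

# Dataset 2: payment records keyed only by an account ID
payments = pd.DataFrame({
    "account_id": ["a9", "a7"],
    "lat": [12.97, 28.61],
    "lon": [77.59, 77.21],
    "timestamp": ["2023-04-01 09:05", "2023-04-01 09:05"],
    "merchant": ["coffee shop", "metro station"],
})

# Place and time together act as a quasi-identifier
linked = locations.merge(payments, on=["lat", "lon", "timestamp"])
print(linked[["device_id", "account_id", "merchant"]])
```

The more such tables are joined, the narrower the set of people who could match each row, which is why de-identification weakens as data accumulates.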
So, the more data AI is fed, the better it may get to know you. If AI is the future, privacy may then be a thing of the past. Still, can AI instead be leveraged to enhance privacy for individuals and companies?
There is a bit of a silver lining. Applications of AI have immense potential when it comes to enhancing security and privacy. AI can help users better understand how much of their data is being collected and how it may be used. A good use case is ‘Polisis’, an AI tool whose name stands for Privacy Policy Analysis.
The algorithm uses deep learning to read privacy policy documents and develop insights such as an executive summary and a flow chart of what kind of data is collected and who it will be shared with. It also outlines whether or not the consumer can opt out of the collection or sharing of that data.
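As a rough illustration of the idea, and not of Polisis’s actual deep-learning pipeline, a toy keyword-based analyser might look like the sketch below; the categories, patterns, and sample policy text are all invented for the example.

```python
# Toy sketch of automated privacy-policy analysis: flag what data a policy
# says it collects and whether an opt-out is mentioned. Real tools such as
# Polisis use deep learning; this keyword approach is only illustrative.
import re

CATEGORIES = {
    "location data": r"location|gps|geolocation",
    "contact details": r"email|phone|address",
    "financial data": r"payment|card|bank",
}

def summarise_policy(text: str) -> dict:
    text = text.lower()
    collected = [name for name, pattern in CATEGORIES.items()
                 if re.search(pattern, text)]
    opt_out = bool(re.search(r"opt[- ]?out", text))
    return {"data_collected": collected, "opt_out_mentioned": opt_out}

policy = ("We collect your email address and GPS location to improve the "
          "service. You may opt out of location tracking in settings.")
print(summarise_policy(policy))
# {'data_collected': ['location data', 'contact details'], 'opt_out_mentioned': True}
```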
As rosy as leveraging AI for privacy might sound, data is going to drive the economies of the future, and in a data-driven regime the idea of privacy takes centre stage to protect the interests of consumers and citizens alike.
This brings us to another question: if AI is fundamentally opposed to privacy, is there a way around the problem? There are two ways privacy can be maintained without sacrificing progress in AI. The first is consumer action, which requires rethinking how users engage with data protection.
Terms and conditions
With rising data collection and storage, doctrinal notions around ‘consent’ and ‘privacy notices’ should be reconsidered. For instance, we may need to revisit the model of ‘clickwrap’ contracts (which allows the user to click on the “I accept” button without reading long, verbose, and unintelligible privacy terms and conditions).
What consumers are not aware of is that often, they can decline the contract and still get unfettered access to the content. While this is a practice that should not be encouraged, it is still a step better than accepting terms and conditions without reading them.
The best practice would be to find out whether the following T&Cs are a part of the agreement: (1) Can the website use your content? (2) Does everything you upload become open source? (3) Can your name and likeness appear in ads? (4) Do you pay the company’s legal costs to cover late payments? (5) Is the company responsible for your data loss?
Of course, you shouldn’t have to read legalities before you want to read an article. To that extent, a possible workaround could be using tools such as ‘Polisis’.
The second solution is to change the nature of AI development by including privacy by design in AI algorithms. While no strict set of rules or policy guidelines can bind an algorithm designer, best practices aligned with each jurisdiction’s constitutional standards can be developed as a benchmark.
A few techniques that could be deployed to enhance privacy while data is being processed by an AI algorithm are differential privacy, homomorphic encryption, and generative adversarial networks. Alongside these, certification schemes and privacy seals can help organisations demonstrate compliance with data protection requirements.
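As a concrete illustration of the first of these techniques, here is a minimal differential-privacy sketch using the Laplace mechanism; the dataset, query, and epsilon value are purely illustrative.

```python
# Minimal sketch of differential privacy via the Laplace mechanism:
# calibrated noise is added to an aggregate query so that any single
# person's record has only a bounded effect on the published result.
import numpy as np

def private_count(values, threshold, epsilon=0.5):
    """Differentially private count of values above `threshold`.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so noise is drawn from Laplace(0, 1/epsilon).
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

salaries = [32000, 45000, 51000, 78000, 120000]  # toy data
print(private_count(salaries, threshold=50000))  # noisy count near the true value of 3
```

Smaller epsilon values add more noise and give stronger privacy guarantees, at the cost of accuracy in the released statistic.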
The development of AI might spell the end of privacy as we know it. There are examples of AI enhancing privacy, but they are the exception, not the norm. As AI proliferates, it is necessary to embed privacy, along with appropriate technical and organisational measures, into the development process so that it leads to positive outcomes. The opportunity for AI today, therefore, is not just to solve problems for corporations and nations, but to do so in a manner that is sustainable in terms of user privacy.
This article was first published in Deccan Herald. Views expressed are personal.
Read More
Indo-Pacific Studies Manoj Kewalramani

Breaking down China’s AI ambitions

The Social Credit System is about much more than surveillance and loyalty, as popularly understood. Nudging persons to adopt desirable behaviour and enhancing social control are part of the story. But there are larger drivers of this policy. It is fundamentally linked to the Chinese economy and its transformation to being more market driven.

In 2017, China unveiled a plan to develop the country into the world’s primary innovation centre for artificial intelligence. It identified AI as a strategic industry, crucial for enhancing economic development, national security, and governance. The Chinese government’s command-innovation approach to AI development is crafting a political economy that tolerates sub-optimal and even wasteful outcomes in the quest to expand the scale of the industry. Consequently, the industry is likely to be plagued by concerns about overinvestment, overcapacity, product quality, and global competitiveness. In addition, increasing trade friction with other states and President Xi Jinping’s turn towards techno-nationalism, along with tightening political control, could further undermine China’s AI industry. Before we dive into the challenges, here’s some background.

Read More