Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies & Economic Policy.

We Need Our Own Honest Ads Act

Recent developments in online advertising have been uplifting. Facebook (and by extension, Instagram) has been running a policy meant to block predatory ads that target people who are overweight or have skin conditions, pushing unusual and often medically dangerous miracle cures. Google, which makes over $100 billion in online ad revenue, has released a statement declaring a ban on ads selling treatments that have no established biomedical or scientific basis. Twitter, too, has declared that it won't accept ads from state-controlled media entities.

This is not to say that the advertising policies of these companies are perfect, as incidents reported by The Verge and CNBC will tell you. However, as far as advertising policies are concerned, things have been improving at a steady pace.

A major catalyst for this change was the 2016 US election, which saw the potential of online advertising abused to target voters. Since then, there has been bipartisan support in the US for greater transparency in online advertising: disclosing who paid for public ads, how many people saw those ads, and how the purchaser can be contacted.

There are two problems with this push for greater transparency. Firstly, the bipartisan effort never became law. Secondly, even if it had, its impact would have been limited to the US.

Why we still lack a law that enforces greater transparency in advertising is an interesting story. Much of it revolves around Facebook, and its conclusion is set to affect the other players in online advertising. The bill, called the Honest Ads Act, was introduced in the Senate in 2017. Had it become law, its success or failure would have given other countries a template to work with for achieving greater transparency in advertising. As of now, they will have to proceed without precedent.

Days after the bill was introduced, Facebook announced that it would be updating its Advertising Transparency and Authenticity Efforts. Mark Zuckerberg declared his support for the Honest Ads Act in a separate Facebook post, stating, "Election interference is a problem that's bigger than any one platform, and that's why we support the Honest Ads Act". As an important side note, Twitter also announced its decision to back the Act, but the focus here is on Facebook because of its size, position, and role in the 2016 US election.

Once Facebook expressed its support for the Act and declared its intent to self-regulate according to the bill, the issue lost momentum. At the time, Zuckerberg's testimony on Capitol Hill was impending, and the news cycle shifted its attention. Senate Majority Leader Mitch McConnell brought the First Amendment into the argument, saying he was sceptical of proposals (like the Honest Ads Act) that would penalise American citizens trying to use the internet to advertise. In retrospect, you could argue that Facebook would have supported the Honest Ads Act better by not declaring its support.

Regardless, the implications of these events affected players across a wide spectrum. With no legal requirement in place, other avenues of online advertising (read: Twitter, Google) did not need to comply with a set standard that could be used as a yardstick to judge them. As for the freedom-of-speech argument, the problem is that transparency in ads does not directly impact free speech.
You could extend the same argument to revoke the laws that mandate transparency in TV and radio ads in the US. So where is the crackdown on transparency in TV and radio?

The Honest Ads Act is relevant because it had the potential to set the tone for transparency regulation in other countries. The US is not the most significant user base for these platforms, and as you might expect, transparency in political ads would be useful for other countries that also hold elections. For example, India has over 270 million Facebook users, a significant percentage of whom participated in the general elections. Understandably, advertising on social media sites such as Facebook was an integral part of most campaign strategies. It would help to have a law that lets voters identify who is paying for which political ad and, by extension, which ads might be fact and which might be false propaganda.

Asking online ad companies such as Facebook to regulate themselves will have exactly the effect it is having now: they will move towards better advertising and transparency policies at their own pace, influenced by the prevailing narrative. For most countries, that is not enough. Having a law in the countries where these platforms operate is more efficient. It is not just the United States that needs its ads to be honest.

The writer is a Research Analyst with Takshashila Institution, Bengaluru. This article was first published in Deccan Herald.


Why we must be vigilant about mass facial surveillance

The recent revelations that NSO Group's Pegasus was used to target an estimated two dozen Indian lawyers and activists through a vulnerability in WhatsApp have once again brought the issue of targeted surveillance of citizens into focus. As the saying goes, no good crisis should go to waste. This is an opportunity to raise public awareness about trends in mass surveillance involving facial recognition systems and CCTV cameras, trends that affect every citizen irrespective of whether or not they have a digital presence today.

The Panopticon, conceptualised by philosopher Jeremy Bentham, was a prison designed so that prisoners could be observed from a central tower without knowing when they were being watched, forcing them to self-regulate their behaviour. Michel Foucault later extended this idea, stating that modern states could no longer resort to violent and public forms of discipline and needed a more sophisticated form of control that used observation and surveillance as a deterrent.

Live facial recognition, combined with an ever-expanding constellation of CCTV cameras, has the potential to make this even more powerful. It therefore suits governments around the world, irrespective of ideology, to expand their mass surveillance programmes with stated objectives like national security and the identification of missing persons, and, in the worst cases, to keep maximising these capabilities towards the establishment of an Orwellian state.

Global trends
China's use of such systems is well documented. According to a study in the Journal of Democracy, almost 626 million CCTV cameras will be deployed around the country by the end of 2020. It was widely reported in May that China's facial recognition database includes nearly all its citizens. Facial recognition systems are used in public spaces for purposes ranging from access to services (hotels, flights, public transport, etc.) to public shaming of individuals for transgressions such as jaywalking, by displaying their faces and identification information on large screens installed at traffic intersections, and even to monitoring whether students are paying attention in class.

The former was highlighted by an almost comedic case in September, when a young woman found that her access to payment gateways and her ability to check in to hotels and trains were affected after she underwent plastic surgery. In addition, there is the fear that facial recognition technology is being used to surveil and target minorities in Xinjiang province.

In Russia, Moscow mayor Sergei Sobyanin has claimed that the city has nearly 200,000 surveillance cameras. There have also been reports that the city plans to build AI-based facial recognition into this large network, with an eye on the growing number of demonstrations against the Putin government.

Even more concerning is the shift by countries with a 'democratic ethos' towards deploying and expanding such systems. Australia was recently in the news for advocating face scans as a condition for accessing adult content. Some schools in the country are also trialling the technology to track attendance. France is testing a facial recognition-based national ID system. In the UK, the High Court dismissed an application for judicial review of automated facial recognition; the challenge itself was a response to pilot programmes run by the police and installations by various councils, undertaken, as per the petitioners, without the consent of citizens and without a legal basis.

There has also been heavy criticism of facial recognition being used at football games and music concerts. Its use in personal spaces, too, continues to expand as companies explore potential uses that measure employee productivity or candidate suitability by analysing facial expressions.

There are opposing currents as well: multiple cities in the US have banned, or are contemplating banning, the deployment of the technology by law enforcement and government agencies. Sweden's Data Protection Authority fined a municipality after a school conducted an attendance-tracking pilot, on the grounds that it violated the EU's General Data Protection Regulation (GDPR).

Advocacy groups like the Ada Lovelace Institute have called for a moratorium on all use of the technology until society can come to terms with its potential impact. Concerns have been raised on two grounds. Firstly, the accuracy of such systems is currently low, which severely increases the risk of misidentification when they are used by law enforcement agencies. Secondly, since the technology learns from existing databases (e.g. a criminal database), any bias reflected in such a database, such as the disproportionate representation of minorities, will creep into the system.

Also, in many cases there is limited information about where and how such systems are being used. Protestors in Hong Kong and, recently, Chile have shown the awareness to counter law enforcement's use of facial recognition by targeting the cameras themselves. The means have varied from face-masks and clothing imprinted with multiple faces to pointing numerous lasers at the cameras, and even physically removing visible cameras.

India’s direction
In mid-2019, the National Crime Records Bureau of India put out a tender inviting bids for an Automated Facial Recognition System (AFRS) without any prior public consultation. Minutes of a pre-bid meeting accessed by the Internet Freedom Foundation indicated that 80 vendor representatives were present.

Convenience is touted as the main benefit of various pilot programmes that use 'faces' as boarding cards at airports in New Delhi, Bengaluru and Hyderabad as part of the Civil Aviation Ministry's Digi Yatra programme. Officials have sought to allay privacy concerns by stating that no information is stored. City police in New Delhi and Chennai have run trials in the past. Hyderabad police, until recently, routinely updated their Twitter accounts with photos of officers scanning people's faces with cameras. Many of these posts were deleted after independent researcher Srinivas Kodali repeatedly questioned the legality of such actions.

Many of the aforementioned trials reported accuracy rates in the low single digits for facial recognition. The State of Policing in India (2019) report by Lokniti and Common Cause indicated that roughly 50 per cent of police personnel believe that minorities and migrants are 'very likely' or 'somewhat likely' to be naturally prone to committing crimes. These aspects are concerning when considering capability, capacity and the potential for misuse of the technology. False positives resulting from a low accuracy rate, combined with potentially biased law enforcement and a lack of transparency, could make it a tool for the harassment of citizens, as the back-of-envelope sketch below illustrates.
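To see why a low accuracy rate matters at this scale, consider a short sketch in Python. Every number in it is an illustrative assumption, not a figure from the trials or the Lokniti report:

# Why even a small false-positive rate swamps genuine matches when
# millions of faces are scanned. All figures below are assumed.
hit_rate = 0.70              # assumed chance a genuine suspect is flagged
false_positive_rate = 0.01   # assumed share of innocent faces wrongly flagged
faces_scanned = 1_000_000    # faces passing the cameras
real_suspects = 100          # actual watchlisted individuals in the crowd

true_hits = real_suspects * hit_rate                                # ~70
false_hits = (faces_scanned - real_suspects) * false_positive_rate  # ~10,000
precision = true_hits / (true_hits + false_hits)
print(f"Share of flagged people who are genuine matches: {precision:.1%}")
# Prints roughly 0.7% -- almost everyone stopped on a 'match' is innocent.

In other words, even before questions of bias enter the picture, the arithmetic of base rates alone can turn such a system into one that overwhelmingly flags innocent people.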

Schools have attempted to use the technology to track attendance. Gated communities and offices already deploy large numbers of CCTV cameras, and a transition to live facial recognition is an obvious next step. However, given that trust in tech companies is at a low, and given the existence of facial recognition training datasets such as MegaFace (a large dataset used to train facial recognition algorithms on images uploaded to the Internet as far back as the mid-2000s, without consent), privacy advocates are concerned.

Opposition and future considerations for society
Necessary and Proportionate, a coalition of civil society organisations and privacy advocates around the world, proposes thirteen principles on the application of human rights to communications surveillance, many of which are applicable here as well. To state some of them: legality, necessary and legitimate aims, proportionality, due process along with judicial and public oversight, prevention of misuse, and a right to appeal. Indeed, most opposition from civil society groups and activists to government use of mass surveillance rests on these principles. When looked at through the lenses of intent (stated or otherwise), capacity and potential for misuse, these are valid grounds on which to question mass surveillance by governments.

It is also important for society to ask, and seek to answer, some of the following questions: Is the state the only entity that can misuse this technology? What kind of norms should society work towards when it comes to private surveillance? Is it likely that the state will act to limit its own power, especially when there is a propensity both to accept and to conduct indiscriminate surveillance of private spaces, as is the case today? What will be the unseen effects of normalising mass public and private surveillance on future generations, and how can they be empowered to make a choice?

This article was first published in Deccan Herald on 11th November, 2019. 


Govt needs to be wary of facial recognition misuse

India is creating a national facial recognition system. If you live in India, you should be concerned about what this could lead to. It is easy to draw parallels with 1984 and say that we are moving towards Big Brother at pace, and perhaps we are. But a statement like that, for better or worse, would accentuate the dystopia and may not be fair to the rationale behind the move. Instead, let us sidestep conversations about the resistance, doublethink, and thoughtcrime, and look at why the government wants to do this and the possible risks of a national facial recognition system.

WHY DOES THE GOVERNMENT WANT THIS?

Let us first look at it from the government's side of the aisle. A national facial recognition database can have a lot of pros. Instead of looking at this as Big Brother, the best-case scenario is that the Indian government is looking at better security, safety, and crime prevention. It would aid law enforcement. In fact, the request for proposal by the National Crime Records Bureau (NCRB) says as much: 'It (the national facial recognition system) is an effort in the direction of modernizing the police force, information gathering, criminal identification, verification and its dissemination among various police organizations and units across the country'.

Take it one step further: down the line, the same database could also be used to achieve gains in efficiency and productivity. For example, schools could take attendance with FaceID-like software, and checking train tickets would be more efficient (discounting the occasional case of plastic surgery that alters your appearance significantly enough).

POTENTIAL FOR MISUSE

The underlying assumption of this facial recognition system is that people implicitly trust the government with their faces. That assumption is wrong, not least because even if you trust this government, you may not trust the one that comes after it. This is especially true when you consider the power that facial recognition databases give administrations.

For instance, China has successfully used AI and facial recognition to profile and suppress minorities. Who is to guarantee that the current or a future government will not use this technology to keep out or suppress minorities domestically? The current government has already taken measures to ramp up mass surveillance. In December last year, the Ministry of Home Affairs issued a notification that authorised 10 agencies to intercept calls and data on any computer.

WHERE IS THE CONSENT?

Apart from the fact that people cannot trust all governments across time with data of their faces, there is also the hugely important issue of consent and the absence of a legal basis. Facial data is personal and sensitive. Not giving people the choice to opt out is objectively wrong.

Consider the fact that once such a database exists, it will be shared with state police forces across the country; the proposal excerpt quoted above says as much. There is every chance that we are looking at increased discrimination in profiling, with AI algorithms repeating existing biases.

Why should the people not have a say in whether they want their facial data to be a part of this system, let alone whether such a system should exist in the first place?

Moreover, because of how personal facial data is, even law enforcement agencies should have to go through some form of legal checks and safeguards to clarify why they want access to data and whether their claim is legitimate.

DATA BREACHES WOULD HAVE WORSE CONSEQUENCES

Policy, in technology and elsewhere, is often viewed through the lens of which outcomes are intended and anticipated. Data breaches are anticipated and unintended. Surely the government does not plan to share or sell personal and sensitive data for revenue. However, going by past trends with Aadhaar and the performance of the State Resident Data Hubs, leaks and breaches are to be expected. Even if you trust the government not to misuse your facial data, you should not be comfortable trusting the third parties who went through the trouble of stealing your information from a government database.

Once the data is leaked and being used for nefarious purposes, what would remedial measures even look like? And how would you ensure that the data is not shared or misused again? It is a can of worms which, once opened, cannot be closed.

Regardless of where on the aisle you stand, you are likely to agree that facial data is personal and sensitive. The technology itself is extremely powerful and can thus be misused in the wrong hands. If the government builds this system today, without consent or genuine public consultation, it would all but ensure that it, or future administrations, misuse it for discriminatory profiling or for suppressing minorities. So if you do live in India today, you should be very concerned about what a national facial recognition system can lead to.

This article was first published in The Deccan Chronicle. Views are personal.

The writer is a Policy Analyst at The Takshashila Institution.


There’s more to India’s woes than data localisation

The personal data protection bill is yet to become law, and debate is still rife about the costs and benefits of data localisation. It remains to be seen whether the government will mandate localisation in the data protection bill, and to whom it will apply. Regardless of whether or not data localisation ends up enshrined in the law, it is worth taking a step back and asking why the government is pushing for it in the first place.

For context, localisation is the practice of storing domestic data on domestic soil. One of the most credible arguments for making it the norm is that it would help law enforcement. Most platforms that facilitate messaging are based in the US (think WhatsApp and Messenger). Because of the popularity of these 'free services', a significant amount of the world's communication takes place on these platforms. This includes communication about crimes and violations of the law.

This is turning out to be a problem because, in cases of law violations, communications on these platforms might end up becoming evidence that Indian law enforcement agencies want to access. The government has already made multiple efforts to make this process easier for law enforcement. In December 2018, the ministry of home affairs issued an order granting powers of "interception, monitoring, and decryption of any information generated, transmitted, received or stored in any computer" to ten central agencies, to protect the security and sovereignty of India.

But this does not help in cases where the information may be stored outside the agencies’ jurisdiction. So, in cases where Indian law enforcement agencies want to access data held by US companies, they are obliged to abide by lawful procedures in both the US and India.

The bottleneck here is that there is no mechanism that can keep up with this phenomenon (not counting the CLOUD Act, as India has not entered into an executive agreement under it).

Indian requests for access to data form a fair share of the global total, owing to India's large population and growing internet penetration. Had there been a mechanism that provided for these requests in a timely manner, it would have aided enforcement through the provision of data. Instead, such requests are routed through mutual legal assistance treaties (MLATs). Most requests are US-bound, thanks to the dominance of US messaging, search, and social media apps. Each request has to justify 'probable cause by US standards'. This, combined with the number of requests from around the world, weighs down on the system and makes it inefficient. People have called the MLAT system broken, and there have been several calls for its reform.

A comprehensive report by the Observer Research Foundation (ORF) found that the MLAT process takes, on global average, 10 months for law enforcement requests to receive electronic evidence. Ten months of waiting for evidence is simply too long, for two reasons. Firstly, in law enforcement, time tends to be of the essence. Secondly, countries such as India have judicial systems with huge backlogs of cases; 10-month-long timelines for access to electronic evidence make things worse.

Access to data is an international bottleneck for law enforcement. The byproduct of the mass adoption of social media and messaging is that electronic criminal evidence for all countries is now concentrated in the US.

The inefficiency of MLATs is one of the key reasons why data-sharing agreements are rising in demand and in supply, and why the CLOUD Act was so well-received as a solution that reduced the burden on MLATs.

Countries need standards that can speed up access to data for law enforcement, an understanding of what kinds of data may permissibly be shared across borders, and common standards for security.

India's idea is that localising data will help law enforcement access it, at least eventually. It may also compensate for India not being a signatory to the Budapest Convention. But it is unclear how effective localisation will be: Facebook's data stored in India is still Facebook's data.

Facebook is still an American company and would still be subject to US standards of data-sharing, which are among the toughest in the world and include an independent judge assessing probable cause and refusing bulk collection or overreach. This is before we take encryption into account.

For Indian law enforcement, the problem in this whole mess is not where the data is physically stored. It is the process that makes access to it inefficient. Localisation is not a direct fix, if it proves to be one at all. The answer lies in better data-sharing arrangements, on plurilateral terms. The sooner this is realised, the faster the problems can be resolved.

Rohan is a policy analyst at the technology and policy programme at The Takshashila Institution. Views are personal.

This article was first published in the Deccan Chronicle.


How Pegasus works, strengths & weaknesses of E2E encryption & how secure apps like WhatsApp really are

Pegasus, the software that infamously hacked WhatsApp earlier this year, is a tool developed to help government intelligence and law enforcement agencies battle cybercrime and terror. Once installed on a mobile device, it can collect contacts, files, and passwords. It can also 'overcome' encryption and use GPS to pinpoint targets. More importantly, it is notoriously easy to install: it can be transmitted to your phone through a WhatsApp call from an unknown number (which does not even need to be picked up), and it does not require user permissions to access the phone's camera or microphone. All of that makes it a near complete tool for snooping.

While Pegasus is able to hack most of your phone's capabilities, the big news here is that it can 'compromise' end-to-end (E2E) encryption. The news comes at a testing time for encryption in India, as the government deliberates a crackdown on E2E encryption, a decision that we will all learn more about on January 15, 2020. Before we look at how Pegasus was able to compromise E2E encryption, let's look at how E2E encryption works and how it has developed a place for itself in human rights.

E2E encryption is an example of how a bit of math, applied well, can secure communications better than all the guns in the world. The way it works on platforms such as WhatsApp is that once the user (sender) opens the app, the app generates two keys on the device, one public and one private. The private key remains with the sender, and the public key is transmitted to the receiver via the company's server. The important thing to note here is that the message is already encrypted by the public key before it reaches the server. The server only relays the secure message, and the receiver's private key then decrypts it. End-to-end encryption differs from standard encryption because in services with standard encryption (think Gmail), the service provider generally holds keys along with the receiver, and thus can also access the contents of the message.
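To make the mechanics concrete, here is a minimal sketch in Python of the public-key exchange pattern described above, written against the third-party cryptography package. It illustrates the general Diffie-Hellman-style approach, not WhatsApp's actual Signal-protocol implementation, and every name in it is invented for the example:

from base64 import urlsafe_b64encode
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each device generates its own key pair; private keys never leave the device.
sender_private = X25519PrivateKey.generate()
receiver_private = X25519PrivateKey.generate()

# Only public keys travel via the company's server, which cannot derive
# the shared secret from them.
sender_secret = sender_private.exchange(receiver_private.public_key())
receiver_secret = receiver_private.exchange(sender_private.public_key())
assert sender_secret == receiver_secret  # both ends agree on the same secret

# Derive a symmetric key from the shared secret and encrypt the message
# before it ever reaches the server.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"e2e-demo").derive(sender_secret)
cipher = Fernet(urlsafe_b64encode(key))
ciphertext = cipher.encrypt(b"the server only ever relays this ciphertext")
print(cipher.decrypt(ciphertext))  # only a device holding the key can read it

The server in such a scheme is just a relay for public keys and ciphertext; at no point does it hold anything that would let it read the message.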
Some encryption is stronger than others. The strength of an encryption scheme is measured by the size of its key. Traditionally, WhatsApp uses a 128-bit key, which is standard. Current standards of encryption have developed over the years; the thing to keep in mind is that cracking a secure encryption key can take billions of years, depending on the key size (not taking quantum computing into account):

Key Size     Time to Crack
56-bit       399 seconds
128-bit      1.02 x 10^18 years
192-bit      1.872 x 10^37 years
256-bit      3.31 x 10^56 years

E2E encryption has had a complex history with human rights. On the one hand, governments and law enforcement agencies see E2E encryption as a barrier to ensuring the human rights of their citizens; examples of mob lynchings coordinated through WhatsApp exist around the world. On the other hand, the security and anonymity it brings to communications have been a boon for people who might suffer harm if their conversations were not private. Think of peaceful activists who use it to fight for democracy around the world, most recently in Hong Kong. The same goes for LGBTQ activists and whistleblowers. Even diplomats and government officials operate through the seamless secure connectivity offered by E2E encryption. The general consensus in civil society is that E2E encryption is worth having, as an increasing share of human communication moves online to platforms such as WhatsApp.

How does Pegasus fit in?

End-to-end encryption ensures that your messages are encrypted in transit and can only be decrypted by the devices involved in the conversation. However, once a device decrypts a message it has received, Pegasus can access that data at rest. So it is not the end-to-end encryption that is compromised, but your device's security. Once a phone is infected, Pegasus can mirror the device, literally recording the keystrokes typed by the user, browser history, contacts, files and so on.

The strength of end-to-end encryption lies in the fact that it encrypts data in transit well: unless you have the decryption key, it is impossible to trace the origin of messages or the content being transmitted. Its weakness here, as mentioned above, is that it does not apply to data at rest. If messages were still encrypted at rest, users would not be able to read them.

At this point, how secure apps such as WhatsApp, Signal, and Telegram really are is widely debatable. While the encryption is not compromised, the larger system is, and that has the potential to make the encryption a moot point. WhatsApp came out with an update earlier this year that supposedly fixed the vulnerability, seemingly protecting communications on the platform from Pegasus.

What does this mean for regulation against WhatsApp?

The Pegasus story comes at a critical time for the future of encryption on WhatsApp and on platforms in general. The fact that WhatsApp waited roughly six months to file its lawsuit against NSO will not help the platform's credibility in the traceability and encryption debate. This also brings into question the standards of data protection that Indian citizens and users should be subject to. The data protection bill is yet to become law. With the Pegasus hack putting privacy front and centre, the onus should ideally be on making sure that Indian communications are secure against foreign and domestic surveillance efforts.


Cons of breaking encryption outweigh pros

A bit of math can better secure your communications than all the guns in the world combined. That is the beauty of end-to-end encryption, which currently runs on WhatsApp. It makes messages shared between people private, so that only the sender and the recipient can view what is being said. On a related note, the notification of the intermediary guidelines is likely to be completed by 15 January 2020. These updated guidelines are going to determine the future of end-to-end encryption.

The major trade-off here is privacy versus security. The government's argument is that it needs access to communications between its citizens for the purposes of security. The spread of false news on WhatsApp has instigated lynch mobs and resulted in 27 reported deaths in 2017. That is exactly why, in December 2018, the Ministry of Home Affairs issued an order granting powers of "interception, monitoring, and decryption of any information generated, transmitted, received or stored in any computer" to ten central agencies. But on platforms using end-to-end encryption, the interception of information might not be of much use if the government does not have a key for the encryption. The amendments to the intermediary guidelines call for platforms such as Telegram and WhatsApp to "..enable tracing out of such originator of information on its platform as may be required by government agencies who are legally authorised".

The other side of the coin is privacy. There is no way for platforms to take away encryption from criminals but leave it intact for everyone else. If intermediaries enabled traceability and compromised end-to-end encryption, the sender of each message would be identifiable to WhatsApp and, by extension, the government. And while encryption provides a shield of anonymity to trolls and spreaders of misinformation, it also gives assurance to people who would otherwise be silenced or suppressed. Think of whistleblowers and political protesters: end-to-end encryption allows them to avoid the fear of being targeted. Encryption also extends into more routine aspects of life. For instance, WhatsApp is a platform where people can talk about personal and sensitive parts of their lives, such as a disease or mental health issues, and rest assured that Facebook, the internet, and the government won't target them using that information. At a personal level, the fact that end-to-end encryption keeps communications private between the participants is reason enough not to break it. In the age of the contemporary internet, privacy is a luxury that is being provided at scale.

In addition, there are a host of questions on the implementation side. For instance, the guidelines apply to all intermediaries that have more than 50 lakh users. There is no clarity on whether that means registered users, daily active users or monthly active users. Moreover, how will the government know when platforms have met that threshold, and how will it keep track of all the intermediaries that pop up on the App Store and Play Store? More fundamentally, who is an intermediary? Does Google Docs count as a platform, since it also has a chat feature?
Are online games also subject to this? Even if all of these questions are resolved, the 50 lakh threshold might mean that criminals simply move to smaller, lesser-known platforms that offer end-to-end encryption, taking away significantly from the effectiveness of the exercise.

Adjusting the trade-off between privacy and security is a thankless task that, more often than not, is decided by the values and interests of the people in power. The job at hand is to make sure that a robust set of processes is put in place if end-to-end encryption is to be broken. We need transparency, and the highest standards of due process, to ensure that should traceability be enabled, it is not abused (the NSA has already set a precedent for such abuse). There needs to be transparency around the process that lets people know who is seeking the data. Standards need to exist around the specificity of which accounts and what data can be targeted, to prevent requests for bulk data. Requests for access should be backed by a justification of credible facts, all of which should be subject to review by an independent entity or a judge.

None of these provisions currently exists in the intermediary guidelines, and neither is there an indication that they are being considered. Subjectively, the cons of enabling traceability and breaking end-to-end encryption outweigh the pros. However, if the government is going to go ahead with this and include the clause in the January 2020 notification, then it should do it right by placing adequate oversight and safeguards in the amendments.

This article was first published in Asian Age. (Rohan is a policy analyst at the technology and policy programme at The Takshashila Institution. Views are personal.)


Telecom revolution took India to 21st century. The state is taking it backwards

The manner in which the Indian state has treated telecom is indicative of the disdain it has for a sector that has underpinned the country's rise to an aspiring global power over the last 25 years. If we are to fix the problems we have created, it is important to enumerate the big policy mistakes we have made. Between a rapacious bureaucracy, corrupt politicians, rent-seeking crony businesses and an economics-agnostic judiciary, we have created the conditions for a telecom crisis.


Where is the debate on data privacy headed?

Even as India pushes for data decryption access from Big Tech for better law enforcement, there is a larger issue of how Big Tech is not quite the paragon of virtue when it comes to upholding user privacy.

If the Indian government does get social media platforms to part with user data, it should remember that with great power over the citizens comes a greater responsibility towards the citizens.

A lot has happened in privacy in recent memory. Perhaps most importantly, Attorney General K.K. Venugopal has argued in the Supreme Court that "They [internet platforms] can't come into the country and say we will establish a non-decryptable system" and that "terrorists cannot claim privacy". On the side of Big Tech, corporate products and policies keep moving towards privacy even as instances of privacy violations continue.

Google released the next generation of Android earlier this month and included some important privacy protections; it was then fined $170 million for violating children's privacy on YouTube. Google also open-sourced a differential privacy tool on GitHub to help protect private information. Facebook made privacy news too, announcing a feature called 'Off-Facebook Activity' that gives users access to a summary of the activity Facebook holds about them; Facebook then suffered a leak of the phone numbers of 419 million users. Similarly, Amazon took a significant step in allowing users to delete their data (voice recordings and transcripts) from Alexa.

What we are seeing is that India's idea of a digital world is beginning to diverge significantly from that of the global platforms. The Indian government increasingly wants access to data on its citizens for purposes of law enforcement. When communications on platforms are encrypted (end-to-end or otherwise), it is impossible to track what information is being shared unless one has a decryption key. The failure to track fake news on WhatsApp has instigated lynch mobs and resulted in 27 reported deaths in 2017.

The Attorney General is right in asserting that terrorists cannot claim privacy. However, tech does not bend selectively to reflect values: platforms cannot decrypt messages for the bad guys while keeping encryption available for everyone else. Most companies around the world (not including China) have sided with the idea of privacy in communications. It helps build trust with the user and complements the power of the network effect. If, tomorrow, Facebook and the internet began to target you with ads on the basis of conversations you were having on WhatsApp, you would rightly be concerned. Platforms like WhatsApp (and Telegram, and Signal) are home to some of our most sensitive information. No one would like to, say, discuss their mental health issues with a friend on WhatsApp, only to have medicines recommended to them on every website they visit. Similarly, political protestors who express themselves through peaceful dissent would not like to have their messages read and used against them. Anonymity through encryption can be a shield for terror, but it is also an essential tool for people who may not be able to express themselves freely otherwise.

What happens to end-to-end encryption is going to be decided by the beliefs and values held by the people in power. The government has been pushing its agenda for months. Before asking Facebook to help decrypt data, the government had asked intermediaries to enable the traceability of messages, through amendments to the intermediary guidelines proposed by the Ministry of Electronics and Information Technology (MEITY) in December 2018. The final notification is due to be issued on January 15, 2020.
During the same month, the Ministry of Home Affairs issued an order granting powers of "interception, monitoring, and decryption of any information generated, transmitted, received or stored in any computer" to ten Central agencies.

None of this implies that the government is looking to actively spy on you, but with great power over the citizens comes a greater responsibility towards the citizens. For instance, when Edward Snowden broke the news of the NSA's surveillance capabilities, he also stated that employees at the NSA intercepted personal nude photos and shared them with colleagues, almost as a kind of currency. The NSA, in a response to Forbes, neither confirmed nor denied the practice.

Just as governments around the world have not been perfect in their conduct towards privacy, neither has Big Tech. While companies such as Facebook have supported and implemented end-to-end encryption, they have been repeatedly penalised for privacy violations. Big Tech is moving towards privacy at its own pace. Companies such as Google, Facebook, Amazon, and Apple have very different attitudes towards privacy, mostly because, around the world, they have not been mandated to comply with a standard set of rules. Countries with huge user bases are yet to come up with data protection laws (read: India). And on occasions where laws on data protection and privacy have existed, companies have felt perfectly at liberty to violate them and pay the fines, treating them as a cost of doing business.

Two things work in favour of Big Tech here. Firstly, the fines that these corporations have historically been charged have not been large enough to significantly dent them with respect to the revenues they are making. Secondly, none of these companies is transparent about its workings and practices around privacy. So, in all likelihood, not all violations of user privacy are being punished around the world; every violation that escapes punishment is essentially money saved for later fines.

Regardless of all of this, anonymity in communications is worth having and fighting for. In the current version of the internet, complete privacy in communications is a rare occurrence. Irrespective of where parties stand across the aisle, maintaining end-to-end encryption should be common ground.

This article was first published in The Hindu. Views are Personal.


What does refusal to sign the Osaka Track mean for India?

The big decision here is whether or not India wants to share its data with anyone under any circumstances.

India recently started sharing maritime data with countries in the Indian Ocean Region. The Information Fusion Centre is actively interacting with the maritime community and has already built linkages with 18 countries and 15 multinational/maritime security centres. On that note, it is worth relooking at India's approach to data sharing and cross-border data flows.

Technology is now a variable that defines relations between countries. Over the year, we have seen an increasing number of instances that reaffirm the existence of high-tech geopolitics. First, there was the US-imposed ban on Huawei Technologies. Then the Americans considered imposing caps on H-1B visas for countries that implemented data localisation. One of the most important recent developments came at this year's G20 summit, where Japan's Shinzo Abe presented the idea of a broad multilateral framework for the sharing of data. It is worth analysing India's response to it.

The agreement is called the Osaka Track. The idea is that member countries should be able to share and store data across borders without having to worry about security risks. The agreement has many notable signatories, such as the US, the EU, and China. It is India's response that is interesting. India, for better or worse, has not been big on data sharing. So much so that recent news claimed the government was considering a domestic messaging service for official communication. With this context in mind (as well as the draft e-commerce policy, the data protection bill, and the RBI data localisation notification), India refused to join the Osaka Track as a signatory. The questions for India here are: what does this mean for the future of Indian data, and how is India likely to conduct itself in this world of high-tech geopolitics?

India's reasons for not signing the pact are two-fold. Firstly, as the sentiment goes, data is national wealth. The idea is to keep all data possible within Indian borders, much as you would be inclined to do with actual wealth. Secondly, as an official stated, India needs to better understand what the free flow of data might mean. India wants to look at its domestic requirements first and would like to see the issue of cross-border data flows discussed on a WTO (World Trade Organisation) platform. What the foreign policy establishment is broadly saying here (to my understanding) is that it is not in India's best interests to share its data right now. However, once the government has a better understanding of the Osaka Track, it might reconsider.

In the broader global context, the Osaka Track is a step towards an emerging pattern: data flows are likely to be increasingly regulated through economic blocs, not nations. Europe's General Data Protection Regulation and Convention 108+ are the best examples of this. The Osaka Track was an opportunity for India to follow this trend and facilitate trans-border data flows. India's rejection of it does not mean that other opportunities will not present themselves. Should India decide that data sharing is in its best interests, there are other platforms on which to make it happen on its own terms. One option would be to establish a data sharing law and standards under the Bay of Bengal Initiative for Multi-Sectoral Technical and Economic Cooperation (BIMSTEC).
Sharing the costs of storage and following common processing standards would give India an edge in data geopolitics, if only because it would make powerhouses such as the US rethink applying sanctions to all of BIMSTEC rather than to India alone. BIMSTEC, of course, is also interchangeable: India could take the lead and establish a data sharing policy with SAARC (the South Asian Association for Regional Cooperation) or with a different combination of countries it might prefer. The big decision here is whether or not India wants to share its data with anyone under any circumstances.

If India is to treat data as wealth and not share it across borders, it may be time to consider what that might mean. An increasing number of government policies treat data as an asset that should not be shared. Doing so is likely to come at the cost of being ostracised by the US. And if India is to go ahead with this, it makes sense for citizens to ask the government how data is going to be used to achieve progress.

While there are a lot of policy proposals on how data should be regulated in India, there aren't many on how it is going to be used for economic development. Sharing data with countries and/or companies can often crowdsource the initiative for development, as it seems to be doing for security at the Information Fusion Centre, and as Microsoft's collaboration with the Telangana government proved by using data to optimise agricultural yields. However, if India decides to cut itself off, as evidenced by its refusal to sign the Osaka Track, it is best to ask how that crowdsourcing will be substituted. Options to do so domestically might exist (such as releasing community data for entrepreneurs and Indian companies), but there need to be indicators that they are being considered or carried out at a national level. Because if data is national wealth, then there needs to be a plan for how it will be used to achieve economic development and progress for the nation.

This article was first published in the Asian Age. Views are personal.


Performing well in the sandbox won't be enough

RBI recently came up with a Draft Enabling Framework for Regulatory Sandbox in financial technology. For context, a sandbox is a framework that allows private firms to test innovations in a controlled environment. As far as developments in fintech regulation go, this is a good one. A sandbox allows players to run pilot tests of new products and services at a smaller scale, and with less capital, than usually required.

According to the draft, RBI will consider testing innovative products and services in the following areas: retail payments, money transfer services, marketplace lending, digital KYC, financial advisory services, wealth management services, digital identification services, smart contracts, financial inclusion products, and cybersecurity products. Some of these, especially digital KYC and financial inclusion, are more front-facing than others. There is also a separate clause for innovative technologies, including mobile technology applications (payments, digital identity, etc.), data analytics, Application Programming Interface (API) services, applications under blockchain technologies, and artificial intelligence and machine learning applications.

A notable exemption from the sandbox is cryptocurrencies. This is in keeping with the report of the inter-ministerial group that recommended banning private cryptocurrencies and proposed a fine of up to ₹25 crore as well as up to 10 years' imprisonment. It also echoes the report's stance of encouraging developments in blockchain and distributed ledger technology in general. This is likely due to concerns that private cryptocurrencies can lead to macroeconomic instability and finance terror groups, both of which are fair concerns.

There have been claims that the sandbox will be available to only a limited set of participants, perhaps 10-12 companies. It is unclear whether that hypothesis is true. The eligibility criteria specified in the draft state that the focus of the sandbox will be to encourage innovations where there is an absence of governing regulations; where there is a need to temporarily ease regulations to enable the proposed innovation; and where the proposed innovation shows promise of easing or effecting the delivery of financial services in a significant way. This does not directly translate into having only 10-12 players.

This should also act as a win for Facebook and, in all probability, WhatsApp. It is an open secret that WhatsApp has been keen to launch a payments service in India, dubbed 'WhatsApp Pay'. However, in recent times, the regulatory climate has proved unfavourable to bringing those efforts to fruition. The regulatory sandbox may serve as the ideal testing ground for the service before its release. A successful stint in the sandbox is not a guarantee of regulatory approval, however, as the draft states: companies and their services can perform well and still be denied clearance to launch at a national level. That is something firms like WhatsApp and Facebook will have to deal with. Any financial service that passes the sandbox will still need to clear regulatory hurdles such as the data localisation guidelines laid out by RBI and the data protection bill (if and when it becomes law).

There are two key things to keep in mind here. Firstly, the sandbox is likely to help both newcomers to the market and incumbents who plan to try out new ideas in fintech. This includes innovative efforts to increase financial inclusion through services that rely on machine learning and AI.
Thus, it is a boost to the fintech landscape overall, and to India's AI front, the latter of which could use an injection of homegrown talent, applications, and infrastructure. Secondly, returns from the sandbox have the potential to pay dividends in the short to long term, depending on how long the programme lasts.

The RBI, to its credit, has also provided a set of risks and limitations in the draft. These include the possibility of innovators losing time and flexibility because of due process, the need for regulatory approvals after sandbox testing, and legal issues leading to consumer losses. None of these risks is an argument for not having the sandbox.

The proposed sandbox is an objectively good idea. Countries around the world, including Thailand, Singapore, and the US, have tried it. In a fintech space that is growing and needs new innovation to foster better development in areas such as financial inclusion and creditworthiness, this is a welcome step. It is too early to say whether the idea will be successful, or whether it will face implementation challenges or end up leading to unintended consequences, such as favouring incumbents over startups or making participation exclusive to a limited set of players. The idea is still at the draft stage, and it could be a while before it is successfully carried out. The bottom line is that, despite all these considerations, it is better to have a sandbox than not to have one.

This article was first published in Deccan Chronicle. Views are personal.


Wider debate needed on major changes in data protection law

With the developments in Kashmir and the economy dominating so much of the national discussion, it can be hard to keep track of what is happening in tech policy. One thing that might slip through the cracks is the set of changes to the Data Protection Bill. According to a recent report by Medianama, the ministry of electronics and information technology, or MEITY, is privately seeking responses to new questions from select stakeholders.

The Data Protection Bill is going to be profoundly important in India's tech policy landscape going forward. It will tackle issues around data privacy, data protection, and data processing, none of which have ever been discussed at length in Indian law. Based on inputs from the Srikrishna Committee report, it will also focus on data localisation. When MEITY initially asked for feedback on the draft data protection bill, it received over 600 comments from individuals and organisations. For this round of comments, the ministry has reached out to only 10-15 stakeholders.

This raises a host of questions and concerns about the process. First, as Medianama puts it, why the secrecy? With over 600 comments submitted, the bill is clearly a matter of significant public interest. MEITY and its bureaucracy could argue that 600 submissions are a lot to process, and that not all the comments are relevant. But for a piece of legislation this important, it is surely better to have too many inputs than to pick and choose which voices to elevate.

There is also not a lot of transparency in the process. How does one figure out the basis on which these 10-15 stakeholders were selected? This is not to imply that the participants asked for feedback are not a decent sample of stakeholders. But it could have been done better had a rationale for the selection been provided. Not knowing why some people were selected and others ruled out matters more when the bill contains sweeping changes.

The new version of the bill is reportedly going to address issues in e-commerce and community data. Neither topic was part of the Srikrishna Committee report or the October consultation process. It is unclear what the bill's stance might be on either matter. To make an educated guess on e-commerce, the bill might condense and borrow aspects of the draft e-commerce policy released earlier this year. It is anyone's guess which aspects those might be, but it does narrow down the list. As for community data, no such precedent exists.

It is frankly shocking that the consultation has been labelled as "clarifications" to the bill. If entire new industries are being addressed in the document, then surely it should be classified as additions or revisions, and thus call for comments from all stakeholders. Even for a round of mere clarifications on data protection, restricting consultation to a selection of stakeholders might not have been acceptable; it is even less so when picking and choosing stakeholders for legislation with such substantial changes.

This lack of transparency also makes one question the importance given to the comments submitted earlier. There is clear value in having documents that present the perspectives of stakeholders across industry, academia, and civil society. But given the recent turn of events, who can say whether those perspectives have been reflected in the new version being circulated for feedback.
The idea here is not to give credit to organisations or individuals who may have caused tangible changes in policy. Instead, it is to ensure that the process that goes into finalising the document is truly multilateral.

The final version of the policy will still have winners and losers. No legislation is objectively perfect. This is especially true in technology, where most laws find it hard to keep up with rapid advances in the industry. If the industry wins through lax laws on data privacy, civil society arguing for stronger privacy laws will de facto lose. What the consultations should aim for, then, is to show that different views on a subject were considered before trade-offs were made in favour of one over another.

What makes this situation even more bizarre is that, up to this point, the process had been fairly transparent. The Srikrishna Committee's recommendations were comprehensive and publicly available. So was the first round of comments in October 2018, even though the comments submitted were not made available to the public. Why, then, has the second round of inputs been restricted to just a few people, without any explanation? Not allowing public comments while adding e-commerce and community data to the mandate is going to have negative implications when the bill finally comes out. A large share of stakeholders in academia, industry, and civil society are going to have fundamental disagreements with the way this was carried out, as well as with its contents.

Indian policy towards all things technology is slowly catching up with advancements. As new legislation comes out on other emerging technologies, such as artificial intelligence, fintech, and the Internet of Things, it would make sense to involve multiple perspectives in designing it. Failure to do so is likely to have an adverse impact on the adoption of these technologies and, by extension, India's development.

This article was first published in Deccan Chronicle. Views expressed are personal.

Read More

The Evolution of Synthetic Thought

Download the Essay in PDF

The world has never been enough. At least for us, humans. The endeavour to become more than what we are lies at the heart of human civilisation. We have overcome challenges of nature, obstacles of time, and physical and mental impediments. Perhaps nothing reflects the culmination of this collective zeal to surpass our capabilities as much as Transhumanism.

Transhumanism is the belief that human beings can transcend their physical and mental limitations through technology. For some, the Transhumanist is an ideal to strive towards; for others, Transhumanism is both a source of and an answer to all of humanity's problems.

Borne out of a belief system that humankind should reach the pinnacle of its capabilities and beyond, Transhumanism comprises augmentations to overcome limitations. While technological augmentation may be a recent endeavour, even primitive humans used tools to augment their capabilities. From the wooden spears they hunted with, to prosthetic wooden and iron legs, all the way to lances in warfare, humans have employed augmentations throughout history. Eyeglasses, clothing, and ploughs signalled a rise in the use of tools to extend our capabilities.

The rise of medical technology, genetic science, and electronics from the 1990s onwards has opened new frontiers in human capability. We no longer merely use technology as an enabler but have started adopting it from within, in the form of cybernetics. Armbands, deep-brain stimulators, physical and neural augmentations, mechanical and cybernetic implants, and potentially gene editing are technologies humans can use to enhance themselves and achieve capabilities previously unheard of.

On one hand, science is driving innovation in augmentation; on the other, Transhumanism has given rise to a significant body of philosophical thought. Notions of challenging what it means to be human, the virtues and vices of post-humanism, and the dangers of uncontrolled immortality provoke deep questions that have no answers but encourage much debate and discourse. There is also an entire section of humanity that believes the very notion of Transhumanism is irrelevant, since any such technological advancement is several decades away.

Transhumanism has generated fear and enthusiasm in equal measure. While proponents extol the virtues of embracing technology to enhance our lives, detractors fear what it will mean for being human at all. The widespread availability of Transhumanist technologies could result in radical life extension and improved well-being, but their improper perpetuation could create class divides, encourage oppression, and even alter geopolitical landscapes.

For the first time in human history, we can radically alter our minds and bodies and take shortcuts to the various destinations of natural evolution. This essay looks at Transhumanism as an emerging technological paradigm and attempts to provide an objective view of where it is headed and what it means for the rest of the world.

Download the Essay in PDF: https://takshashila.org.in/wp-content/uploads/2019/08/TE-Evolution-of-Synthetic-Thought-CRG-2019-01.pdf

Read More

Facebook can’t be taken down, but Zuckerberg can be taken down a notch

It is hard to associate Facebook and most of Big Tech with anything positive right now. Privacy breaches and the Cambridge Analytica scandal led the American Federal Trade Commission (FTC) to fine Facebook $5bn. The fine was a joke. The FTC's decision was seen to be so weak that Facebook's stock actually rose in the wake of the levy. You know a corporation is fairly big when a multi-billion dollar fine barely qualifies as pocket change for it. The question most of the world seems to be asking is whether Facebook is perhaps too big.

You will hear arguments that bigness is not a crime, that no company should be punished for being successful. But that is not remotely the point. The only reason the world now thinks Facebook needs to be curbed is its horrendous conduct with the privacy of user data. Earlier this year, we found out that Facebook stored millions of passwords in plain text, visible to thousands of employees. When users signed up for two-factor authentication, Facebook used those phone numbers for targeted ads. What's worse is that the $5bn fine is for privacy violations the FTC had already fined Facebook for in 2011. Not only did Facebook's conduct fail to improve, it actually got worse (read: Cambridge Analytica).

There have been privacy-focused initiatives from within Facebook that help users take more control of their privacy. Facebook recently announced an upcoming feature called 'Off-Facebook Activity'. The idea is that since Facebook tracks what you do on the Internet even when you are not on Facebook, Off-Facebook Activity will give you an overview of the websites and apps that share your information with Facebook. You still can't delete the information that Facebook collects. However, you can choose to delink it from your digital profile. The feature is not perfect in concept, hasn't been released yet, and does not do nearly enough to calm concerns about Facebook's conduct towards user privacy.

This would not be such a huge problem if Facebook did not have a stranglehold on speech. Facebook (with its acquisition of Instagram and WhatsApp) has a monopoly on social media. Its closest competitors are Snapchat and LinkedIn. So even if people want to quit Facebook, they have nowhere else to go. Zuckerberg has in fact admitted to Facebook's power over speech itself, stating, "Lawmakers often tell me we have too much power over speech, and frankly, I agree."

Facebook's monopoly means that no matter how horrible its conduct with user data, it tends to get away with it. This brings us to the question of whether Facebook can be broken up. The idea is that if Facebook could be unmerged from Instagram and WhatsApp, it would spark competition in privacy practices, and that competition would be better for everyone.

There have been calls to do exactly that. The idea of dismantling Big Tech is a key message of United States Senator Elizabeth Warren's presidential campaign. Facebook's own co-founder, Chris Hughes, has argued for breaking up Facebook too, calling Zuckerberg's power "unAmerican". The problem is that this is a grey area for antitrust law. It is unclear whether existing antitrust law is equipped to force a split of Facebook, Instagram, and WhatsApp. It is up to the Justice Department and the FTC to determine whether a case can be made for it. Even so, you can rest assured that if the U.S. government wanted to break up Facebook, it would be a lengthy process that might ultimately be unsuccessful. The government tried to break up Microsoft in the 1990s, and failed.

The other option is stricter regulation when it comes to privacy. It is certainly the one Facebook prefers. In an op-ed for the New York Times published earlier this year, Nick Clegg, a vice president-level staffer at Facebook, called for better accountability through regulation. He emphasised the need for "significant resources and strong new rules" and added that breaking the company up would not resolve the problems of election interference or user privacy. Of course, Nick Clegg would say that. The problem is that even if better privacy laws did exist, they might not mean much given Facebook's size and dominance. It could simply choose to ignore them, as it has in the past, and assuage hurt feelings by paying a fine on occasion. Besides, U.S. privacy laws would not apply overseas. People in India would still suffer privacy violations at the hands of Facebook.

Facebook's size is not a reason to punish it. Its conduct toward user privacy is. It might be impossible to break up Facebook, but it is reasonable to demand accountability of it. And if Facebook is to be made truly accountable, Mark Zuckerberg needs to be reined in. You will hear people say that Facebook's current situation is a failure of capitalism, and that Big Tech needs big structural changes. They wouldn't be wrong. Capitalism, and the attention economy in particular, is not perfect. But, as of now, these are broad, sweeping arguments, not solutions. If Facebook is to be made more accountable, we need to begin by making Zuckerberg more accountable. Zuckerberg currently controls around 60% of Facebook's voting shares. This means the board has no power to hold him accountable; it is advisory at best.

The fix is that Zuckerberg's power needs to be checked. Creating a privacy czar will achieve little if he or she has no power to check Zuckerberg and his decisions. There are ways to accomplish this; none of them are easy. The most straightforward would be to loosen Zuckerberg's hold on the board by divesting him of a significant part of his shares. The legal precedent for this may not exist. However, it is only fair, as Zuckerberg's power as a single man controlling the speech of 2 billion individuals is unprecedented. It would help the board hold him accountable rather than simply advise him.
It would also steer clear of setting a precedent of companies being punished for being too successful.

This article was first published in The Hindu.

Read More

India's Upcoming Digital Tax: How Will Big Tech Cope?

Taxation and regulation are slowly catching up with technology: India is now preparing a framework to tax Big Tech companies. India's desire to do so is part of a global pattern. Earlier this year, France came up with a proposal to tax Google, Facebook, and Amazon. India following suit will have broader international consequences, with a combination of factors determining how Big Tech is taxed.

Digital taxation has been on the government's mind for some time. June 2016 saw India come up with a "Google Tax," an equalization levy on digital advertising. In 2018, the revenue from the tax surpassed 10 billion rupees ($139 million). Prime Minister Narendra Modi's government is also keeping an eye on the tech ecosystem. Modi himself has pushed for Digital India and Startup India, and called for digital payments post demonetization. As India grows as a market for digital technologies, so does the scope for the government to tax big tech firms such as Facebook and Google.

India is a huge market for these companies. As of April 2019, Facebook had 300 million users in India; expect an even higher number for Google. The point is that India is a big source of revenue for tech companies. However, because these companies do not have a significant economic presence (SEP) in India, they may not pay their fair share of taxes. This is where India's proposed framework comes in. Multinational tech companies achieve scale in a market without a commensurate physical presence there, and they are structured in a way that minimises the taxes they pay. The proposal currently in the works will change that and make such companies liable to be taxed.

The complication is that we live in a world where technology is a variable in international relations. Most giant tech companies are American and operate from Silicon Valley. The Trump administration might not like Big Tech, but it will retaliate against other countries taxing Silicon Valley. The United States has postured this way before: when faced with the prospect of data localization by India, Trump's White House considered capping H-1B visas. The United States has already announced an inquiry into France's proposal to tax Facebook, Google, Amazon, and Apple, and there is a good chance the matter ends in tariffs for France. This is something India has to consider as it goes ahead with taxing American Big Tech. Trump himself has been aggravated by the tariffs India places on the United States; he called out Modi during the G-20 summit, calling the tariffs "no longer acceptable." India's push for digital taxation will likely provoke a similar reaction from the White House.

The other stakeholder is Silicon Valley itself. It is unclear how firms will react to thinner profit margins in India. They could acquiesce, acknowledging India's market leverage, while lobbying through third parties. At the other end of the spectrum, they could threaten to pull out of the market, leaving India with no direct substitutes. One possible outcome of an Indian tax could also be the loss of future investment in India. If the government decided to tax Facebook and Google on revenue generated in India, they could see it as a sign to invest more in other markets. At the moment, India may be too big a market to ignore. However, the imposition of taxes going forward might take away the incentive to innovate for the Indian consumer.
It could also translate into slower deployment of new AI-based technologies by large tech corporations, potentially slowing India's advancement in the AI race.

All of the options above are highly unlikely, though. What is most likely to happen is the negotiation of a new framework with Big Tech at the table. Because both parties, the Modi government and Big Tech, have a lot at stake, a compromise seems like the rational outcome. This way, Google and Facebook can have a say in deciding how they are taxed and how much they should have to pay. A mutually agreed tax rate could be beneficial for all stakeholders. It would keep investment flowing while not forcing India to look for domestic substitutes. It would also ensure that India does not come to rely on Chinese substitutes, doubling the scale of Chinese digital companies. However, just because this seems like the rational way forward does not mean it is the one that will be taken. There are many ways an Indian digital tax could play out; we can only hope that policymakers will have carefully considered its impact on India's foreign relations.

Modi, Trump, China, and Silicon Valley: it is all a fascinating mess. It also goes to show how technology has inserted itself into foreign policy and geopolitics. As the Modi government considers taking France's lead, it is a step toward taxation coming to terms with the digital economy. How this ends globally will determine profit margins in Silicon Valley and development budgets in New Delhi.

This article was first published in The Diplomat.

Read More

Where will big data take platforms like Netflix?

Streaming services are cataloguing the entire world's audiovisual content onto their platforms. If you had told someone ten years ago that most of the world's movies and TV shows would be available on-demand in their pocket, they would have given you a patronising look of disbelief. If the present of the streaming industry is pathbreaking, the future promises to build even further on it.

Streaming is funded by subscriptions and guided by big data analytics. Knowledge of how consumers behave on platforms such as Netflix and Prime Video lets the services gauge what else they might be willing to pay for. It is hard to say which way streaming is likely to go over the next decade. However, it is possible to make an educated guess based on the frameworks of information economics. Three main areas are likely to be affected by the continued use of big data to improve streaming: user experience, security, and pricing.

When the news broke that Netflix customises individual thumbnails for each user, it was another endorsement of how the platform uses big data to keep customers hooked. Netflix doesn't just use a film or show's original art; it employs an algorithm to source high-quality images from the content. Then it does more testing to determine what individual subscribers are most likely to click on. Based on that, each user's Netflix homepage looks different, even if users have similar tastes. The idea is to have users spend as much time on Netflix as possible, and personalised thumbnails are a small cog in this big machine. Data on who binge-watches which shows, and how long each visit to the platform lasts, is also crucial when deciding what to invest in. This is not a new phenomenon; the TV show House of Cards is a case study in it.

Big data tells Netflix (and Prime Video and Hulu and Hotstar) what users want even before they themselves know it. The data-based knowledge that David Fincher's movies were in high demand — an insight based on the number of times people played and paused them and how long they watched — was a powerful resource for the TV decision-makers. Combining Fincher with a star-studded cast was not a shot in the dark. Netflix bet $100 million on two seasons (26 episodes) of the show at first, without watching a single episode. It even went as far as to make different trailers and filter their distribution according to user preferences. This shows that the information about our tastes and tendencies, as exhibited by big data, is empirically reliable.

Going forward, big data analytics will continue to tell companies what users want, with a significant impact on how funds are distributed across genres. For instance, the success of Narcos and Stranger Things will drive investment in more original content in their particular genres. This also means evolving content markets all over the world to keep users hooked and to get new users to subscribe (think of the success of Sacred Games in India).
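To make the thumbnail-testing idea concrete, here is a minimal sketch of one way such per-user testing could work, framed as an epsilon-greedy bandit. The image names, click counts, and exploration rate are all invented for illustration; Netflix's actual system is proprietary and certainly far more sophisticated.

```python
import random

# Hypothetical click statistics for candidate thumbnails of one title,
# tracked separately for each user (or user cluster).
stats = {
    "closeup.jpg":  {"shows": 120, "clicks": 18},
    "action.jpg":   {"shows": 95,  "clicks": 21},
    "ensemble.jpg": {"shows": 80,  "clicks": 9},
}

EPSILON = 0.1  # fraction of the time we explore a random thumbnail

def pick_thumbnail(stats: dict) -> str:
    """Epsilon-greedy: usually show the best-performing thumbnail,
    occasionally test an alternative to keep learning."""
    if random.random() < EPSILON:
        return random.choice(list(stats))
    return max(stats, key=lambda k: stats[k]["clicks"] / stats[k]["shows"])

def record_impression(stats: dict, thumb: str, clicked: bool) -> None:
    """Update the click statistics after each homepage impression."""
    stats[thumb]["shows"] += 1
    stats[thumb]["clicks"] += int(clicked)

choice = pick_thumbnail(stats)
record_impression(stats, choice, clicked=False)
print("Showing:", choice)
```

The design intuition is simple: the platform mostly exploits what it already knows works for you, while reserving a small fraction of impressions to keep testing alternatives.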

No free-loading

The increasing use of big data analytics will also mean tighter security for accounts. So, no chance of four people pooling their money together for one streaming account, and no mooching off your friend's account. The free-rider problem means streaming giants lose money on every individual who watches content without paying for it. Because Netflix has data on usage patterns — laptop model, user location over time — it can identify when someone other than the paying customer is watching. So it is no wonder that the company is now planning to use AI to keep off account-moochers. Though such algorithms have not yet been deployed at scale, there is reason to believe this might change soon.

Lastly, big data will be transformative when it comes to pricing streaming services. As companies compete for a higher share of users' e-wallets, data on how much consumers are willing to pay will be decisive in determining how a service is priced. The marginal cost of adding a user to a streaming service is negligible, which means the price that can be charged is relatively flexible compared to traditional industries such as cars. This is exactly what Netflix has been trying to leverage in India. In July, the company unveiled a mobile-only plan for the price-sensitive market. It is a novel move that might help Netflix compete with Prime Video, Jio TV, and Hotstar in India, all of which are cheaper options. The same logic could hold for markets where the consumer is willing to pay more for premium services. Data will decide.

User experience, security, and pricing are three key areas where big data analytics could be transformational for the streaming industry. This is by no means an exhaustive list. It would have taken a mental leap ten years ago to conceive of the current streaming scenario, and we might find the same a decade from now. New applications of insights from big data will continue to come to light. And the interesting thing is that, for those of us now aware of the speed at which data engineering and digital ecosystems can evolve, none of these developments is situated too far off in the future to be imagined. In big data analytics, the enabler of the present is also the driver of the future.

This article was first published in The Hindu. Views are personal.

Read More

Privacy is dead. So, it’s time to turn data into a bargaining chip.

Tech firms offer services in exchange for your data, but the government will argue it needs your data for national security. Why not trade it, then?

This year, Google bought Nest. Why was the world's biggest search engine acquiring a thermostat company? Because through Nest, Google gets to know what temperature you prefer in your home, and when you come and go on weekdays and weekends. Everyone wants data. It is why The Economist claimed that "(t)he world's most valuable resource is no longer oil, but data".

In today's digital world, fighting for privacy is fighting a losing battle. What we can instead fight for is making privacy a bargaining chip. Giving up your data to different people only makes sense if you know what you get in return.

Last year, when US Senator Orrin Hatch asked Mark Zuckerberg how Facebook remained free, a mildly amused Zuckerberg replied, "Senator, we run ads". The clip went viral and highlighted the need for regulators to get up to speed with technology. People who understand how Facebook and Google work know that they earn their revenue by selling ads. They monitor your clicks, how much time you spend on a website, and what webpages you visit, to decide what they should be showing or selling to you. So, if you spend some time viewing videos of cats or, say, an iPad, Facebook and Google will make sure that the content targeted at you is based on cats or iPads. The workings of targeted advertising mean that it makes sense to think about privacy as a bargaining chip rather than an absolute right.

Debates in technology change fast. Over the past year, different aspects of tech policy have been in the spotlight. There was a debate on intermediary liability and whether platforms should be treated the same as publishers of information. We also have the ongoing debate on data localisation and where data should be physically stored. Facebook's launch of Libra shifted conversations to cryptocurrency and whether Facebook needed to be broken up. In the middle of all this chaos, the argument for user privacy seems to have died down. The news attention cycle is partly to blame.

An equally big, if not bigger, part of the blame should be put on how big tech (Facebook, Google, and Amazon) operates. The business models of many platforms (including Facebook, Google, Reddit, and Twitter) are responsible for it. Take Google. The idea is to offer services in exchange for your data. You don't pay when signing up; instead, you give money to the platform's clients after the application has used your own habits against you. Remember searching for that specific shoe wishfully, only to desert it midway, and the ads popping up for weeks afterwards? The bottom line, when big tech lays claim to your data, is that it will provide you services in return (think Google Drive, Google Photos, or Google Search).

It is not just big tech that wants your data. The government wants it too. While big tech might offer you services in exchange for that data, the government is not obliged to make any such promises. Instead, the government's argument is that it needs access to data for law enforcement, national security, and supervisory purposes. This is a global trend that spans contexts. For instance, the Reserve Bank of India wants unfettered supervisory access to financial data. India's updated Information Technology Intermediaries Guidelines (Amendment) Rules want data on the originator of content on platforms. The Australian government has exclusive access to its citizens' healthcare data, which cannot be shared outside its borders.
There is also a strong sentiment among states, especially developing economies such as India and China, that data is a form of national wealth that can be used for development. This, in addition to the argument for law enforcement, makes the state a natural opponent of Facebook and Google when it comes to data access.

These conversations are bound to become more complex as technology advances and the state plays catch-up. We are already looking at years of discussion on Facebook's Libra project, which will also be a defining battle for the short-term future of cryptocurrencies. There is also Japanese PM Shinzo Abe's proposal for a multilateral data-sharing framework, called the Osaka Track, unveiled at the recent G20 summit. Over time, emerging technologies such as Artificial Intelligence and the Internet of Things are bound to raise the stakes as well, as both are closely linked to data.

With so much happening in and around data and technology, it can be dizzying to keep up. The privacy debate gets left behind. The only big company making any noise about privacy seems to be Apple. So, it is time for us to get the bargaining chip. I know this is a controversial opinion, especially for people who consider privacy to be an absolute right, and rightly so. I am not implying that the battle for privacy is lost; I am implying that it is a losing battle. As Shoshana Zuboff argues, the nature of the internet and 'surveillance capitalism' leaves us with little choice.

In such times, thinking of privacy as a bargaining chip is simply the by-product of a pragmatic assessment of the situation. You can use your privacy in a transaction to get goods and services. You pay with your privacy when you sign up for Google or Facebook and opt in to their services. Similarly, you lose your privacy when the government holds sensitive data on you (which may or may not be optional). Giving up data about yourself only makes sense when you know what you get in return. 'Privacy is a bargaining chip' is a framing that stays central to debates in tech policy and helps us make sense of the world around us.

The article was first published in The Print. Views are personal.

Read More

Does the arrival of AI mean the end of data privacy?

In recent years, there has been great buzz around the development of Artificial Intelligence (AI) and what it might mean for the Indian economy. On the government's side, Niti Aayog has come up with a national strategy on AI, and the Ministry of Commerce has set up an AI task force. A 'National Centre of AI' is also planned. All of these initiatives have scope to define where AI can contribute to Indian industry and how best to achieve adoption at scale. But there is a flip side to AI, and it impacts data privacy.

The relationship between AI and data privacy is a complex one. Broadly speaking, the growth of AI may spell the end of data privacy if we don't proactively try to embed privacy by design.
Algorithms in AI learn from big datasets. For example, take a huge dataset, say India's Aadhaar database. To the human eye and mind, it would be almost impossible to discern any insight from looking through this enormous database. To an AI algorithm, however, it could serve as fuel. AI learns from big data, identifies patterns in the numbers, and may draw unlikely correlations.
The catch is that the more data an AI programme is fed, the harder it becomes to de-identify people. Because the programme can compare two or more datasets, it may not need your name to identify you. Data containing 'location stamps' — information with geographical coordinates and time stamps — could be used to easily track the mobility trajectories of, say, where and how people live and work. Supplement this with datasets about your UPI payments, and it might also know where and what you spend your money on.
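A toy sketch may help make this linkage problem concrete. Both datasets below are entirely made up, and neither contains a name; yet joining them on location and time stamps is enough to attach spending behaviour to a pseudonymous person.

```python
# Two "anonymised" datasets: mobility traces and payment records.
# Neither carries a name, but both carry location and time stamps.
mobility = [
    {"person_id": "u17", "place": "Indiranagar", "hour": 9},
    {"person_id": "u17", "place": "MG Road", "hour": 13},
    {"person_id": "u42", "place": "Koramangala", "hour": 13},
]

payments = [
    {"merchant": "coffee shop", "place": "MG Road", "hour": 13, "amount": 350},
    {"merchant": "bookstore", "place": "Koramangala", "hour": 13, "amount": 900},
]

# Linking on (place, hour) attaches spending to a pseudonymous ID.
# One more auxiliary dataset tying u17 to a phone number or address
# would complete the re-identification.
for pay in payments:
    for trip in mobility:
        if (trip["place"], trip["hour"]) == (pay["place"], pay["hour"]):
            print(f'{trip["person_id"]} likely spent '
                  f'{pay["amount"]} at the {pay["merchant"]}')
```

Real linkage attacks work on millions of rows and fuzzier matches, but the principle is exactly this join.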
So, the more data AI is fed, the better it might get to know you. Because of this, if AI is the future, then privacy may be a thing of the past. Still, can AI be, instead, leveraged to enhance privacy for individuals and companies?
There is a bit of a silver lining. Applications of AI have immense potential when it comes to enhancing security and privacy. AI can help you better understand how much of your data is being collected and how it may be used. A good use case is 'Polisis', an AI whose name stands for Privacy Policy Analysis.
The algorithm uses deep learning and can read privacy policy documents to develop insights such as an executive summary and a flow chart of what kind of data is collected and who it will be sent to. In addition, it also outlines whether or not the consumer can opt-out of the collection or sharing of the data.
As rosy as leveraging AI for privacy might sound, such applications remain rare. Data is going to drive the economies of the future, and in a data-driven regime, the idea of privacy must take centre stage to protect the interests of consumers and citizens alike.
This brings us to another question: if AI is fundamentally opposed to privacy, is there a way around the problem? There are two aspects to how privacy can be maintained without sacrificing the development of AI. The first is consumer action, and with it, a need to rework the bridge between AI and data protection.
Terms and conditions
With rising data collection and storage, doctrinal notions around 'consent' and 'privacy notices' should be reconsidered. For instance, we may need to revisit the model of 'clickwrap' contracts (which allow the user to click the "I accept" button without reading long, verbose, and unintelligible privacy terms and conditions).
What consumers are not aware of is that often, they can decline the contract and still get unfettered access to the content. While this is a practice that should not be encouraged, it is still a step better than accepting terms and conditions without reading them.
The best practice would be to find out whether the following T&Cs are a part of the agreement: (1) Can the website use your content? (2) Does everything you upload become open source? (3) Can your name and likeness appear in ads? (4) Do you pay the company’s legal costs to cover late payments? (5) Is the company responsible for your data loss?
Of course, you shouldn’t have to read legalities before you want to read an article. To that extent, a possible workaround could be using tools such as ‘Polisis’.
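As a crude illustration of how such a tool might automate the checklist above, consider the sketch below. It is only a keyword heuristic with invented red-flag patterns, nothing like the deep-learning approach Polisis actually takes, but it shows the basic idea of scanning terms and conditions for worrying clauses.

```python
import re

# Hypothetical red-flag patterns, roughly one per checklist question above.
RED_FLAGS = {
    "licence to use your content": r"(royalty-free|worldwide licen[cs]e|use your content)",
    "uploads become open source": r"(open[- ]source|public domain)",
    "name/likeness in ads": r"(name|likeness).{0,40}(advertis|promot)",
    "you cover legal costs": r"(indemnif|legal (costs|fees))",
    "no liability for data loss": r"(not (be )?(liable|responsible)).{0,40}(data|loss)",
}

def scan_terms(text: str) -> list[str]:
    """Return the checklist items whose patterns appear in the T&C text."""
    lowered = text.lower()
    return [flag for flag, pattern in RED_FLAGS.items()
            if re.search(pattern, lowered)]

sample = ("You grant us a worldwide licence to use your content. "
          "We shall not be liable for any loss of data.")
print(scan_terms(sample))
```

A real system would need to handle legal phrasing far more robustly, which is precisely why research tools in this space lean on machine learning rather than keyword lists.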
The second solution is to change the nature of AI development itself. This means including privacy by design in AI algorithms. While there can be no strict set of rules or policy guidelines that binds an algorithm designer, best practices that follow the constitutional standards of each jurisdiction can be developed as a benchmark.
A few techniques that could be deployed to enhance privacy while data is being processed by an AI algorithm are differential privacy, homomorphic encryption, and generative adversarial networks. Alongside these, certification schemes and privacy seals can help organisations demonstrate compliance.
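Of these techniques, differential privacy is the easiest to illustrate. Below is a minimal sketch of the standard Laplace mechanism for a counting query; the dataset size and the privacy budget (epsilon) are arbitrary choices for the example.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of true_value.

    Adds Laplace noise scaled to sensitivity/epsilon, the standard
    mechanism for epsilon-differential privacy on numeric queries.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release the count of users in a dataset.
# A counting query changes by at most 1 when one person is added
# or removed, so its sensitivity is 1.
true_count = 12_437  # hypothetical figure
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Private count: {private_count:.0f}")
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee, at the cost of accuracy; tuning that trade-off is the core design decision.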
The development of AI might spell the end of privacy as we know it. There have been examples of AI enhancing privacy, but those are the exceptions, not the norm. As AI proliferates, it is necessary to embed privacy, along with appropriate technical and organisational measures, into the development process so that it leads to positive outcomes. The opportunity for AI today is therefore not just to solve problems for corporations and nations, but to do so in a manner that is sustainable in terms of user privacy.
This article was first published in Deccan Herald. Views expressed are personal.
Read More
High-Tech Geopolitics Manoj Kewalramani

Why India needs to leverage and not localise data

If one were to chart the arc of geopolitical competition over the past 100 years, one could identify four primary sources of contention – land, people, natural resources, and now the commanding heights of new technologies. At the heart of the current competition lies data – the fuel that will power future innovation.

We generate data just by existing. Every telephone call we make, every social media share, every journey from home to work, every financial transaction, even the beat of our hearts: all of it is data. It is valuable to corporations big and small looking to create new products and offer new solutions. The more data you have, and the better its quality, the greater your chances of tailoring products, out-innovating competitors, and achieving scale.
It's little wonder then that access to data has become a point of geopolitical contention. This was evident at the recent G20 summit in Osaka, where India boycotted a move by the world's leading economies to establish a new regime for global data governance. The Osaka Track, as it is called, is a plurilateral initiative to establish a framework for cross-border data sharing, essentially aimed at limiting a state's ability to hoard data generated within its borders.
This goes against India's stated policy; India boycotted the Osaka Track, preferring that the conversation be held at the WTO. Foreign Secretary Vijay Gokhale also underscored the significance of data as "national wealth." India's approach has been one of data localisation, which essentially means storing domestic data on domestic soil. India's rationale for pursuing localisation comes from this notion of data as a new form of wealth, and there is a strong sentiment across the government to internalise this wealth and use it for development. While the idea seems to make sense on the surface, a deeper look shows significant costs alongside the benefits.
Perhaps the greatest benefits of localisation lie in security and ease of access. The RBI emphasised the latter when it became the first government entity to call for localisation. The Justice BN Srikrishna Committee and the draft e-commerce policy have also called for localisation, citing similar grounds. However, there are significant costs to localising data in India, and incurring those costs might not make data more secure. Firstly, building and maintaining data centres is a capital-intensive business. It requires a significant amount of water, electricity, and bandwidth. Electricity and water are both commodities that India does not have in abundance. As of 2017, an estimated 240 million (24 crore) people in India did not have access to electricity. NITI Aayog estimates that 600 million (60 crore) people face severe water shortage in India, and the situation will only get worse, with water demand expected to be twice the supply by 2030. The recent Chennai water crisis underscores this. It would be neither fair nor ethical to allocate water reserves to cool data centres when they should be diverted to Chennai.
Secondly, as far as the benefits are concerned, having data centres in India is not likely to make data more secure. India currently ranks 23rd in the Global Cybersecurity Index. There have also been multiple leaks of the Aadhaar data India does store locally, so there is precedent for data stored on Indian servers not being adequately protected. Moreover, if the idea is that storing data here will impose Indian jurisdiction on it, things may not pan out that way. The physical location of data does not define who owns it or has access to it. If Facebook decided to store data in India, the data would still belong to Facebook.
Given this, it is important to shift the data policy conversation from storage location to access, and, in doing so, to adopt a strategic outlook. From this perspective, data is a tool of leverage, along with the size of the Indian market. India ranks 57th in the Global Innovation Index; our technology and innovation ecosystem has a lot of catching up to do compared to those of the US and China. Allowing foreign competitors free and easy access to Indian data could therefore stymie the growth of future Indian enterprises.
It would instead be far more prudent to pursue a policy of conditional access. There are a number of potential benefits to this approach.
Conditional access could take the form of requirements for localisation along with investments in and collaborations with Indian enterprises. A similar approach was recently taken by the Ministry of Road Transport and Highways, which shared vehicle and driving licence data with Indian companies for a fee of ₹3 crore. The Vahan and Sarathi databases brought in a total revenue of ₹65 crore and were made available to 87 domestic companies. Apart from expanding state revenues, such an approach with foreign firms could lead to greater capital investment in India, along with the diffusion of technology and of managerial and operational best practices. In the long run, this could aid India's start-up and tech ecosystems.
However, going down this road requires clear domestic legislation and regulation. There are three broad areas regulation should address. Firstly, the domestic jurisdiction of data. There is a need to define in law who owns data: the citizen or the state. Calling data national wealth sets a precedent in favour of state ownership of personal data. Once ownership is defined, procedures for judicial safeguards and parliamentary oversight should be put in place to determine who can access public data. That can be followed by discussions over the finer points of what kinds of data should be classified as sensitive and personal. The data protection bill addresses this to some extent. However, the bill is not law yet, and even if it were to become law, it is unclear which aspects of it will have changed.
Secondly, if foreign players are to collaborate with the Indian ecosystem to use data, there needs to be regulation providing for foreign access. This is likely to be a huge part of conditional access. Having a framework that facilitates domestic and foreign collaborations between companies as well as states could help in leveraging data for Indian development. Thirdly, regulation needs to address standards for public data. If data collected by states is to be made accessible to private parties, there need to be national standards for how data is collected, sorted, and opened for access. This would help in processing data and deriving insights from it. Standards would also make it easier to maintain clean datasets.
Considering the above, making the argument for localisation might be a sound short-term negotiating strategy. But to strengthen one's hand at the global table in the long run, it is important to focus on putting domestic rules and regulations in place and then negotiating conditional access.
The views expressed above are the authors' own. The article was first published in Deccan Herald.
Read More

Reading into India’s draft e-commerce policy

The bottom line is that it would now be misleading to say that the Indian government does not have a vision in tech policy. It is one step forward from our late-2017 position.

Up until 2018, one would have been hard-pressed to identify whether India had any coherent intent regarding its technology policy. There were question marks over where the government stood on a range of issues — data protection, cross-border data flows, AI, encryption, fintech, and e-commerce. With the coming of the draft e-commerce policy, the good news halfway through 2019 is that India does seem to have a definite plan for all of these pillars of technology. The broader question is: what does this mean for stakeholders across the ecosystem and for India's digital aspirations?

The draft policy has a lot to say about consumer and citizen data, much of it said in earlier policies. Data is a national asset, but does that mean it has to be controlled by the state? This question is more relevant now than ever. India does not currently have a data protection law in place, nor a due process of law for data disclosure. The policy says that data should be stored locally, in line with the personal data protection bill and the RBI's directive.

Maybe the plan is that localisation will help transform our digital infrastructure. Yet across the three policies, there is no roadmap to suggest how localisation will be achieved or why it is needed (apart from unfettered supervisory access, as stated in the RBI notification). Suppose the directive does help build digital infrastructure in India. It will still not make India a data centre hub, because of our electricity, bandwidth, and water deficiencies. If the plan is to overcome these deficiencies, no course of action is attached to it. Storage is also not the same as access. People own data; fiduciaries get access to it by consent. Just because a data centre is located in India does not mean the data belongs to the state, so the rights to insights generated will stay with fiduciaries as well. If the policy's plan is to take us to a Digital India, it is unclear how these directives shape the road to it.

Another condition carried over from previous policies is the requirement that operating e-commerce platforms must have a registered business entity in India. The amendments to the intermediary liability guidelines followed the same tone. There are two broad concerns stemming from this. The first is implementation. How do you verify that every e-commerce platform has a business entity in India? The policy suggests nothing in terms of implementation. And how do you punish platforms violating the rule? Do you have them removed from the app/play store? What if Google and Apple don't comply? In which case, why do it in the first place?

The second concern is: who are the winners and losers of this measure? The broad answer is foreign e-commerce platforms — specifically, medium and small platforms that might not be able to afford to set up registered business entities. At the same time, it is a win for small and medium enterprises at home. They now have less competition. They are also enabled through the simplification of export regulation and the raised ceiling on export goods, which makes it relatively easier to look for markets abroad by reducing costs. At the same time, the policy closes the gifting route through which foreign companies export to India. Bigger foreign companies have the means to comply with the directive, even if it takes them a while to adjust their organisational structures. This likely means less competition for domestic Indian firms. Making it harder for foreign e-commerce firms to compete is somewhat of a theme here.

Ultimately, the draft e-commerce policy leaves us with more questions than answers. Is there a direct link between localisation and the quest for access to data? Why localise in the first place? If there are objectives behind it, how is localisation part of the roadmap to getting there? How does the government plan to crack down on platforms that do not have the financial resources to have an office in India? Will the administration identify and penalise every foreign player on the app/play stores? While answers to these questions remain unclear, one thing the draft e-commerce policy does resolve is the perceived absence of cohesive intent. The list of questions and regulations discussed above is by no means exhaustive. There are other components — marketplace models, anti-counterfeiting measures, source code for advertisements, and so on. The bottom line, however, is that it would now be misleading to say that the Indian government does not have a vision in tech policy. It is one step forward from our late-2017 position.

This article was first published in The Hindu.
Views are personal. 

Read More

How Can India Combine Data and Regional Power?

The stakes concerning jurisdiction over data have never been higher. There is a global discourse today on the future of data, recently brought back into the news cycle by the U.S. government: the Trump administration was mulling capping H-1B visas to deter India's rules on data centers. Data, it seems, has become a variable in regional and global geopolitics. Owing to its immense population, India unsurprisingly generates a lot of data. The question for New Delhi is how to translate this into a geopolitical advantage.

China has a robust approach to its data — one that has been conducive to its digital goals. It requires companies to store their data locally. Companies might store their data in China but can resist sharing user information with the government; Apple has a data center in China but refuses to share encryption keys with the government. By closing off its data from the world, China lies at one extreme of the data geopolitics spectrum.

Other countries, such as the United States, Canada, and Australia, differentiate their data, some of which is deemed fit to be shared with the world. Critical data, however, is not allowed to cross borders. In middle-ground cases, data can be shared, but a working copy of it must be maintained at home.

This brings us to India. As data regulation moves from bills to laws, Indian data policy is still officially in flux. Looking at current global trends, India could broadly go either the American or the Chinese way, or New Delhi could pursue the best of both approaches. What these broad approaches might miss, however, is a geopolitical opportunity. India could use its data and that of its neighbors in BIMSTEC to their collective advantage.

BIMSTEC is a group comprising Bangladesh, India, Myanmar, Sri Lanka, Thailand, Nepal, and Bhutan. The group made headlines recently when India invited its members to Prime Minister Narendra Modi's swearing-in ceremony. This is a good indicator of the shifting relevance of regional groupings, as the SAARC leaders were invited to the same event last time around, in 2014. Combining BIMSTEC with data is a huge opportunity for India, and taking the lead here carries a set of technological and geopolitical advantages. It could mean a common set of data processing laws, common security standards, a common market for data, and a larger region and resource pool in which to build data centers.

Doing all or any of the above would add to BIMSTEC's importance in the region and the world. The only similar project in the data space is that of the European Union and the Council of Europe. A common standard of data processing laws and a shared space for localisation across seven countries would add significant bargaining power against warnings like caps on H-1B visas. Convention 108+ of the Council of Europe does something similar: with common adequacy standards, data can only flow across borders for processing when the receiver meets the standards set for it.

So not only can a combined BIMSTEC approach to data increase the region's bargaining power globally, it can also help raise security standards. It is also likely to bring down the costs of storing and maintaining data. Data centers are resource-intensive in terms of electricity, water, and bandwidth. Pooling resources to build and maintain them is likely to bring down costs.
Should the BIMSTEC area become a cheap option for data centers, it would give the region an increased say in the global technological debate.

Pooling national data can also speed up the development of AI in the region. More importantly, any advancement in AI based on regional data would develop the technology in the context of developing countries, and not just Silicon Valley. It can be hard to relate in South Asia to AI that opens garage doors for Teslas; it would be more useful to have self-driving cars that can deal with potholes, for instance. This would also be good for BIMSTEC enterprises as they use data to solve local problems. BIMSTEC pooling of data would be tailor-made for AI that solves regional problems across borders.

Developing new standards for data processing also presents a new opportunity. It would bring the privacy debate back into the discourse in these countries. A multinational approach to data jurisdiction is not something the world is familiar with, and certainly not South Asia. Developing laws that address these issues would be a remarkable achievement, considering the unique challenges each country faces.

The bottom line is that in a world of high-tech geopolitics, BIMSTEC might be a better approach than India alone. It would undoubtedly provide more power to India and the region. The icing on the cake is that it presents wonderful possibilities for the future of big data and AI in the region: a regional approach to local problems, shared costs of data centers, and the possibility of better data processing laws. All that remains for India to do is take the lead and make data a component of foreign policy.

This article was first published in The Diplomat. Views are personal.

Read More