Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.

Should You Download Aarogya Setu?

This week of the pandemic has focused significantly on Aarogya Setu and contact tracing. So much so that during his speech extending the lockdown, PM Modi urged people to download the app. The idea is for the Government to use the app to know where you are and who you have been in contact with, enabling contact tracing. The app's privacy policy has been under fire since its release and has recently been updated with improved protections.

Because the app is to be used for contact tracing as well as quarantine enforcement, it will collect huge amounts of personal and sensitive data. For instance, signing up for the app requires you to enter your name, age, gender, phone number, and profession. Once you have registered, the app will begin to use Bluetooth to check who you have been in contact with. If you test positive, that information can be used to notify people who may also have been infected. Bluetooth itself, however, does not give away your location. The way it works is that if the Bluetooth on your phone detects another phone in range, the pair will exchange keys and keep a record of the interaction.

The app will also use your phone's inbuilt GPS to monitor your location, enabling it to determine with significant accuracy whether you are adhering to quarantine. The phone will take note of your location every 15 minutes and share the information with the Government server only if you test positive.

In normal times, you would have found the data collected by the app extremely invasive. But then again, these are not normal times, and you could argue that the measures are necessary and proportional. Still, the concept has some technical shortcomings and slightly concerning macro trends. Firstly, the use of Bluetooth. Bluetooth is fairly trustworthy over 6 ft (the norm for physical distancing). However, the things that stop the coronavirus from spreading do not apply to Bluetooth.
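The exchange-and-record mechanism can be sketched roughly as follows. This is an illustrative model only, not Aarogya Setu's actual implementation; the class and method names here are hypothetical.

```python
import secrets
import time


class ContactLog:
    """Illustrative model of Bluetooth-based contact logging (hypothetical)."""

    def __init__(self):
        # Each device advertises a random identifier rather than personal
        # details; location is never part of the exchange itself.
        self.device_id = secrets.token_hex(8)
        self.encounters = []  # (peer_id, timestamp) records kept on-device

    def on_bluetooth_contact(self, peer_id: str) -> None:
        # When another phone comes into Bluetooth range, the pair exchange
        # identifiers and each phone stores a record of the encounter locally.
        self.encounters.append((peer_id, time.time()))

    def ids_to_notify(self) -> set:
        # If this user tests positive, the stored identifiers can be used
        # to notify people who may also have been exposed.
        return {peer for peer, _ in self.encounters}
```

Two phones in range would each call `on_bluetooth_contact` with the other's `device_id`; note that only opaque identifiers and timestamps are recorded, which is why Bluetooth proximity logging alone does not reveal location.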
As Casey Newton puts it, Bluetooth can connect two devices that are 10 ft and an apartment wall apart, even though the coronavirus may not transmit through walls. Situations like these are likely to lead to a lot of false positives.

Secondly, context matters here. India faces different challenges compared to the developed world. Earlier this month, when Apple and Google came up with the idea to enable contact tracing, it led to plenty of debate around wealth distribution being strongly correlated with OS distribution. The idea is that if you wanted to check where in the world wealth was concentrated, you could look at a map of iOS users around the world. Android, on the other hand, runs on a lot more smartphones than iOS, and not all of them have Bluetooth LE, which is needed to enable contact tracing. Here, it is the poor who lose out. In India's case, the poor currently lose out because they own feature phones and not smartphones. While Medianama reports that the Government is working on a feature phone version of the app, the poor will remain at a disadvantage until it is released.

Should you download the app? Yes. The updated privacy policy is a marked improvement upon the previous one. Most data collected by the app is stored locally on the device for 30 days, after which it is deleted. Data that is shared with the Government will be deleted after 45 days. However, if you are unfortunate enough to test positive, the information shared with the Government will be deleted two months after you are cured. Broadly speaking, this policy is a step towards better data management practices. While the protections could have been strengthened further by open-sourcing the app and disclosing the encryption used, the current version of the app is reasonable in its approach and mandate.

More importantly, perhaps, in last week's column, I made the point that the liberties we give up today may end up becoming the norm tomorrow. This very much applies to Aarogya Setu.
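The retention schedule the policy describes (30 days on the device, 45 days on the Government server) amounts to a simple pruning rule. The sketch below models the stated policy; it is not the app's actual code, and the function and constant names are hypothetical.

```python
from datetime import datetime, timedelta

# Retention windows as stated in the updated privacy policy.
ON_DEVICE_RETENTION = timedelta(days=30)   # data stored locally on the phone
ON_SERVER_RETENTION = timedelta(days=45)   # data shared with the server

def prune(records, retention, now=None):
    """Drop records older than the retention window.

    records: list of (payload, collected_at) tuples.
    """
    now = now or datetime.now()
    return [(p, t) for p, t in records if now - t <= retention]
```

Run periodically against both stores, this rule would implement the deletion deadlines the policy promises; the policy's separate two-months-after-cure rule for positive cases would need an additional trigger.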
Even with an updated privacy policy, in regular times, the app would have been considered invasive of personal privacy. The hallmark of a good policy or programme is that it ceases to exist once it has achieved its goal. What the app needs is an end date, so that it does not inadvertently set a new normal. This also applies to measures such as facial recognition being used to enforce quarantines, as well as any other means of collecting, storing, or processing data. The pandemic will, hopefully, come to an end sometime; so should the technology measures used to contain it. In that regard, Aarogya Setu should lead the way. A new, worse normal in privacy is the last thing the world needs.

This article was first published in Deccan Chronicle. Views are personal.

Read More

Overcorrection at the cost of privacy during coronavirus is problematic

This article was first published in Deccan Chronicle. Views are personal.

There are three pillars of crisis management, according to NYU professor Scott Galloway. First, the person at the top takes responsibility. Second, acknowledge the issue. Third, overcorrect. Read out loud, this seems reasonably straightforward, but it is a process that should not be taken for granted. It has taken governments and leaders around the world multiple attempts to take responsibility and acknowledge the issue. Finally, the time has come to overcorrect. Six months from now, when things are back to relative normality, the measures taken now may look drastic, but that is the point.

However, it is not going to be easy to overcorrect. Even governments have to follow social distancing and may already have limited capacity to deal with a pandemic of this size. Given the pervasive nature of modern technology, it is no wonder that government administrations around the world are going to try to use digital methods to aid their efforts. China has been ahead of the curve on this. The government has begun using the Alipay app to assign citizens QR codes based on their risk of exposure, in order to regulate movement. A green QR code means that you are free to move around; a yellow code means a one-week quarantine; a red code means a two-week quarantine. In Tamil Nadu, the police are now using facial recognition to track people in quarantine. At a larger scale, the Union Government of India is using the Aarogya Setu app to help connect Indian citizens to health services.

To most people, it might not make sense to talk about privacy in such times. And on one level, they would be right. It is hard to overstate the seriousness of the situation, and dealing with the pandemic comes first. In such emergencies, concerns about privacy come second. Moreover, as a fundamental right in India, privacy exists with reasonable restrictions. Erosions of privacy must be necessary, legal, and proportional.
Instead of being suspended, this standard should be upheld. As the Union Government (and the larger international community) uses facial recognition, or apps such as the one deployed in China, it is crucial to keep in mind that such techniques have the potential to set a new normal by resetting our expectations of personal privacy.

Rahul Matthan has an excellent analysis, backed by observations, on this. Before 26/11, hotels in India would let you drive to the entrance and hand over your keys to the valet. After the Mumbai attacks, vehicles are mandatorily screened, as are people and the contents of their baggage. The practice seemed important and urgent at the time, and has now become routine. More than a decade on, screening has become the expectation, and it is probably for the best. However, that was 2008, and the difference between then and now is that the liberties yielded today will be a lot more invasive than vehicle checks at hotels.

For instance, CoBuddy (the app being used in Tamil Nadu to track people) has constant access to the phone's GPS and continuously checks the phone's location. It automatically sends an alert to the police as soon as the person moves out of the geofence. The police also send users prompts two to three times a day to verify their faces.

Not all data is created equal. While both facial data and location data are personal and sensitive, the former tends to be more invasive, because facial information is permanent and cannot be easily changed.
While constant access to people's location can help determine where they live and their movement patterns, it is easier to change where you go than how you look. The Aarogya Setu app, while admittedly better on privacy than the (now discontinued) Corona Kavach, still collects your name, phone number, age, sex, profession, countries visited in the last 30 days, and whether or not you are a smoker, apart from having constant access to your location. Compare this with the Singapore government's app, TraceTogether, which stores only your mobile number and a randomly generated ID. Not only are we being subjected to invasive apps; lists of infected people, with their names and addresses, have been made publicly available without their consent.

Given the nature of the crisis and the tech response we have seen, two things are evidently happening here. Firstly, there is little regard for data minimisation. Governments in India and across the world are collecting more data and accessing more data points to get a better sense of people's movements during the crisis. Your name, age, and facial data, once shared with the government, are unlikely to change. And once this crisis is over, the data can be used for purposes it was not collected for. Secondly, violations of personal privacy are becoming the norm and not the exception. It is now somehow okay to post lists of infected people on WhatsApp groups and to provide facial data to the police two to three times a day. Much like the vehicle checks at hotels after the 26/11 attacks, our expectations of privacy are being reset. Only this time, it is being done at scale.

It is fair to say that we live in unprecedented times. But that does not mean the necessity and proportionality standards for eroding personal privacy should be suspended. If anything, they should be upheld. Because the liberties we give up today may end up becoming the norm tomorrow.

Read More

Zoom fiasco highlights need for data protection law

This article was first published in Deccan Chronicle. None of the protections afforded by a privacy law are in place yet, which leaves our data open to exploitation by tech companies.

There has been a lot going on at Zoom. The video conferencing app has been a major beneficiary of the lockdowns imposed due to the coronavirus, as humanity participates in its largest-ever work-from-home experiment. As a result, Zoom's shares have doubled in value in less than six months. All is not well, though: the company has been beset by privacy issues recently. For instance, the Electronic Frontier Foundation (EFF) pointed out that hosts of Zoom meetings can see whether participants are paying attention, based on whether or not the Zoom window is active on their screens. Zoom would likely argue that the ability to check whether people are active on a team call is a feature, not an instrument meant to cause harm. Which is one way to look at things. But that is not the only privacy slip-up the company has been embroiled in this past month. VICE reported that Zoom's iOS app sends user data to Facebook even if you do not have a Facebook account. Zoom notifies Facebook when the user opens the app and shares details about the user's device, such as the model, time zone, city, phone carrier, and the unique advertiser identifier (a number created by the user's device that is then used to target ads).

Zoom's privacy policy is not explicit about this data collection, and there is a blame game to be played here. Facebook can argue that it requires developers (like Zoom) using Facebook's SDKs and Pixels to be transparent about the data they are collecting, using, and sharing. Zoom can and has argued that Facebook was collecting unnecessary device data.
We need to talk about all of this because apps like Zoom and Houseparty are not going anywhere. Instead, this incident is an excellent lesson in how policy and protections work in the data protection space. Firstly, it highlights the need and urgency for India (and other countries) to have a data protection law. These are exactly the kind of offences a data protection law is supposed to penalise. In an ideal world, had there been a data protection law in place here, Zoom would likely have had to adhere to a standard of explicit consent. That way, the user would have been aware of what data was being shared. Had Zoom not adhered to the guidelines of consent, it would have had to pay a penalty. The data being shared with Facebook would have fallen under the ambits of personal data, sensitive personal data, and non-personal data, each requiring different levels of protection and liability.

The fact that none of the protections afforded by a privacy law are in place yet means the only protections users have are those given to them by companies whose objective is to maximise shareholder value. More often than not, maximising shareholder value comes at the cost of trampling on user rights. Most companies will be more than happy to make this trade-off, and would ideally want to do it while there isn't a data protection law in place.

At this point, it is hard to say whether a data protection regulation is going to be a definitive solution to incidents like these, largely because there isn't much precedent to learn from yet. Arguably the most significant existing legislation in this space is the General Data Protection Regulation (GDPR) in the EU.
The law came into force in May 2018, and an assessment by the Commission of how its implementation has fared is due sometime this year. There is every chance that the personal data protection regulation India ends up adopting will not fix everything when it comes to the abuses of power that come with a vacuum in the data protection space. It is going to be hard to implement clauses and penalties on every website on the internet and to track data flows at scale. However, as any policy analyst worth their salt will tell you, change happens at the margins.

In the larger picture, Zoom sharing data with Facebook without explicit notice reflects a deeper problem of accountability within the data protection space. There are no laws, and where laws do exist, they are near impossible to enforce and monitor. This should serve as a high-profile warning about practices that exist today and will continue until regulation exists.

The writer is a technology policy analyst at The Takshashila Institution. Views are personal.

Read More
High-Tech Geopolitics, Advanced Biology Prateek Waghre

As Chorus of 'Chinese Virus' Rings Loudly in India, Is the Stage Set For an Info-Ops Tussle?

This article was originally published on The Wire.

Users of Indian Twitter, for want of a better term, will not have been able to escape the term 'Chinese virus' trending on the platform in the form of different hashtags over the last 10 days. What seemingly started off as agitprop by the American right has transcended boundaries and resonated in India as well, echoing the sentiment that Beijing and the Chinese should be severely penalised for the COVID-19 pandemic. This sentiment was backed by what appeared to be some coordinated activity on Twitter from March 24 onward, around the time of India's lockdown, all with the purpose of taking aim at China. #ChineseVirus19, #ChineseBioterrorisn, #Chinaliedpeopledied and #ChineseVirusCorona were some of the hashtags being used in favour of this narrative around March 24 and 25.

Read more

Read More

What Zoom’s rise tells us about future of work culture

This article was first published in Deccan Herald. The shift to working from home would have happened in an organic manner, but the COVID-19 pandemic has accelerated the speed of change; it is important to think about the precautions we must take to make this ongoing experiment successful.

The coronavirus pandemic has become the reason for the largest work-from-home experiment in history. This phenomenon has meant increased use of video conferencing and collaboration platforms that allow many people to simultaneously interact and collaborate in a virtual setting. Not surprisingly, the company that has benefited most from this ongoing experiment is Zoom, a video conferencing platform now being used by millions of users. Zoom's share price has more than doubled since the new coronavirus began to spread in December 2019. There has also been a rise in trolling and graphic content on Zoom, an almost definitive sign that it is rising in popularity among teenagers and not just working professionals.

Zoom's rise (along with that of other video conferencing platforms like Skype and Slack) is indicative of a broader shift in work culture. This shift to working from home, or working remotely, would likely have happened in an organic manner anyway, but the COVID-19 pandemic has accelerated its speed. There isn't much value in arguing about whether this phase will lead to a permanent shift, with the bulk of jobs being performed remotely from now on; that question depends on too many variables, and it is impossible to predict. But the shift itself needs to be understood as part of an evolving trend. Workspaces, for the most part, have moved from cubicles to open-plan offices. As Chris Bailey notes in Hyperfocus, it is contentious to conclude that open-plan offices improved productivity across the board.
What open-plan offices did do was make employees think twice before interrupting their colleagues, and make them more respectful of each other's time. The future of the office space, moving on from open-plan offices, is the virtual office (widely anticipated and now catalysed by the COVID-19 threat), with people logging in and conducting meetings from home. This brings us to what the characteristics of this new work-from-home culture will be, and what broad precautions we must take to ensure that remote working is successful for us as a people.

Thinking through the idea of working remotely

The first and most important thing to look out for here is the impact this is going to have on the attention economy. With an increasing number of people working from home today, there is going to be a significant reduction in friction. Let me explain: the attention economy runs on the idea of manipulating people to spend more time on platforms. Companies do this by eliminating friction between the user and the content. This is why the feed on Instagram is endless, and why the default on Netflix is to keep watching rather than stop. Because everything is either free or so easy to access, attention becomes the currency. In office environments, by contrast, there is a certain amount of friction in accessing these apps; using Instagram while talking to a colleague carries a social cost to your relationship. When working from home, however, it is going to be significantly easier for employees to give in to their distractions instead of focusing on the tasks at hand. It is no wonder that Zoom has begun offering a feature that allows hosts to check whether participants are paying attention, based on whether or not the Zoom window is active on their screens.

In addition, this also opens a can of worms for privacy breaches and the issue of regulating non-personal data.
Because a huge number of people are shifting to working online for the foreseeable future, online meetings become more valuable in terms of the data shared on them. This gives video conferencing and collaboration platforms an incentive to collect and share more data with advertisers: for example, information on when users open the app, and details about the user's device, such as the model, time zone, city, phone carrier, and the unique advertiser identifier (a number created by the user's device that is then used to target ads). Increased workloads being transferred online will also generate growing volumes of non-personal data, making the debate on how that data should be regulated more relevant. For context, non-personal data is a negatively defined term that refers to all data that is not personal. This includes data like company financials, growth projections, and quite possibly most things discussed in office meetings.

It is unlikely that COVID-19 has transformed offices forever. Its role in history is likely to be that of a catalyst, accelerating the shift from offline offices to online offices. But as it does so, we need to take precautions: introducing friction in the attention economy, being conscious of the privacy trade-offs made to facilitate new features, and instituting regulation for the governance of non-personal data.

(Rohan Seth is a technology policy analyst at The Takshashila Institution)

Read More

Saving Our Own: COVID-19 Presents Challenges and Opportunities in Technology

With the pandemic existing at this scale, state capacity alone may not be enough to respond effectively. Struggling in the face of an invisible threat, states have to co-opt technology to augment their arsenal for a long-haul fight against the coronavirus.

The unprecedented spread of the COVID-19 outbreak is overwhelming healthcare systems across the world. In less than three months, the virus has reached 177 countries or territories (including cruise ships), infecting over 2,30,000 people and killing over 11,000. The rapidity of community spread has left policymakers and bureaucrats scrambling for ways to bolster an overworked healthcare system. Another impending concern is people who are struggling to adjust to a self-isolating way of life.

There are three phases in which technology can be effective: first, in detecting COVID-19-positive individuals; second, in enforcing quarantine conditions; and finally, in helping non-infected individuals stay at home.

Technological developments of the last few decades are aiding the testing regime currently applied to COVID-19. There are two stages of detection: the thermal screening we see at airports and public places, and the confirmatory test. Thermal screening is a crude process; it can only identify whether someone has a fever. The confirmatory test usually determines whether the virus is present in a sample by searching for the virus's RNA. However, this depends on an adequate amount of virus being present in the sample, which in turn depends on the way the sample was taken. An alternative way to test for COVID-19 is to identify the antibodies that the human body makes in response to the virus.
This is a better method because antibodies can stay in the body long after the infection is over, and can therefore be used to determine a history of infection. The first serological tests based on antibody testing are now being made available. These tests will make identifying susceptible people easier and hopefully reduce the ambiguity brought on by the other methods.

Quarantine and self-isolation seem to be a difficult way to contain viruses; even celebrities have chosen to defy quarantine, setting bad examples for the general public. China, one of the world's most technologically advanced societies, was quick to adopt self-isolation and asked its residents to stay indoors. Through Alipay (one of the most popular mobile apps in China), the Chinese state has attempted to regulate the movement of its people. Based on a number of factors, each person's app is assigned a QR code signifying their risk of exposure. The code has three categories: green, unrestricted movement; yellow, self-quarantine for a week; and red, self-quarantine for two weeks. The system is being rolled out nationwide and is going to make it extremely hard for people to go around town without a code.

There are concerns with these measures, such as privacy violations (though some of the data being collected, for instance location data, is not permanent in nature). The system has also had unintended but foreseeable consequences. In China, people were not informed about which variables their QR codes were calculated from. So people returning home from work whose codes turned red were not allowed entry into their apartment complexes and were exposed to a higher degree of risk. People departing high-risk areas whose codes turned red could not board flights or take exits on highways. Yet effective quarantine measures are required, and the compromise on privacy may be essential to ensure wider public health.
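As a rough illustration, the three-colour scheme maps each code to a quarantine duration. This sketch only models the categories described above; the actual risk-scoring inputs China used were not disclosed, so everything upstream of the colour is treated as given.

```python
# Quarantine duration in days for each QR colour, per the scheme
# described above (illustrative sketch; the real scoring inputs
# behind the colour assignment were not made public).
QUARANTINE_DAYS = {"green": 0, "yellow": 7, "red": 14}

def movement_allowed(colour: str) -> bool:
    # Only a green code permits unrestricted movement; yellow and red
    # both impose a self-quarantine period.
    return QUARANTINE_DAYS[colour] == 0
```

The opacity problem discussed above lives entirely outside this mapping: the mapping from colour to restriction was public, but the function assigning the colour was not.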
In India particularly, the need for such measures appears to have been brought on by public behaviour itself; endangering public health should be a crime, and safeguards against such lapses should not be undermined.

Finally, technological innovations need to help individuals working from home maintain social contacts. There hasn't been much discussion of how extended periods of isolation could impact mental health. With longer work-from-home durations and quarantine, there is a significant chance that we might be looking at increased cases of depression. Communities being isolated at this level is unprecedented, and if the situation continues like this for a long period, people will miss physical contact, social validation, and even the endorphins from a gym session. This is a gap that tech can, and may need to, address if things continue the way they are. Given the anticipated long-term effects of this viral outbreak, we are going to need technological interventions to curb it as soon as possible and to help us adjust to a new way of life in the long term.

Shambhavi Naik is a research fellow at the Takshashila Institution, Bengaluru. Rohan Seth is a policy analyst at the Takshashila Institution, Bengaluru. Views are personal. This article was first published in News18.

Read More

Antitrust is already working. Here’s why.

With the United States Congress taking aim at Google Search in antitrust hearings, antitrust is in the spotlight again. It is still unclear whether tech platforms privileging their own products is enough to build and win an antitrust case against Big Tech. We will find the answer to that question in time. Until then, let us remember that 2019 displayed two memorable sides of Facebook founder Mark Zuckerberg. The first was when he made the US Congress look stupid while explaining to them how Facebook made money. Remember, "Senator, we run ads". The other side was visible when he talked about Elizabeth Warren being an existential threat to Facebook.

Warren had not been the favourite to win the Democratic ticket for a while, let alone the presidency, but she was big enough to get herself heard. A big part of her message was opposition to Big Tech and her call to break up Amazon, Google, and Facebook. Even had Warren gone on to win the presidency (she has since dropped out of the race) and launched her anti-Big Tech campaign, it is unlikely that any of the Big Four (Facebook, Amazon, Apple, Google) would have been broken up so easily. There are a number of reasons why it is unlikely that Big Tech will be broken up anytime soon. For starters, all of them have the capital and expertise at hand to handle an antitrust challenge. You could also argue that antitrust (or at least the way it operates) wasn't built for this. And finally, a significant part of America views these companies as symbols of success, and at times engines of economic growth.

So how is antitrust working? Is it because it seems to have successfully threatened Facebook into merging its messaging services, making them harder to separate? No. Antitrust would still apply to a more interoperable messenger.
If it could break up AT&T, one of the largest and most complex networks known to mankind, then consider it capable of mandating the decoupling of a complex messaging service. Instead, antitrust works today as a deterrent, and a fairly effective one at that. Through the threat of breaking up Big Tech, it has ensured that these massive corporations think thrice before acquiring smaller firms. It has also, unknowingly, been responsible for establishing a new approach to dealing with competition in Silicon Valley: the slow burn. The idea is that bigger tech companies slowly eat into their competitors' market share, or refrain from entering new markets, for fear of antitrust.

Professor Scott Galloway explains this well through Star Wars. Here is a slightly paraphrased version of his analysis: the Death Star's multiple reactors can cause the total and rapid destruction of a planet. Firing a single reactor is overkill for a city or a base, but nowhere near enough to destroy an entire planet. That is essentially the strategy Big Tech has adopted.

Prime Microsoft (1990s) was absolutely ruthless as a company. During the first browser wars, it killed its closest competitor, Netscape, and ended up inviting antitrust action. Since then, Microsoft has learned. Now, for instance, it has decided to compete against Slack with its latest offering, Teams. Turn on the single reactor. Microsoft began by offering Teams at a marginally lower price point than Slack and has gradually upped the ante since. By bundling Teams with Office 365, it has passed Slack in terms of users (13 million v 10 million, according to the most recent reports). Had Microsoft wanted to turn on the Death Star, it could have begun by retailing Teams for free and adding support for G Suite and, say, Zoom.
Making Teams interoperable would have taken away much of Slack's USP. Once you begin thinking in terms of this analogy, a lot of Big Tech's actions (and non-actions) begin to make sense. Think Amazon and FedEx. Amazon sits upon arguably one of the world's best physical distribution networks. With some significant tweaking, it could also perform FedEx's function of transporting packages between cities, easily eating into FedEx's market share. And since it is Amazon, it would likely replace FedEx at break-even or at a loss, beating FedEx's price point. Amazon somewhat turned on the single reactor recently when it blocked its sellers from using FedEx Ground for Prime shipments during the holiday season, citing a dip in performance levels. Had it chosen to turn on the Death Star, it could have banned FedEx's express service from Prime shipments and both express and ground services from non-Prime shipments. Amazon has also perfected its software over the years, so when you order something on the platform, it feels frictionless. When was the last time you used the FedEx app? For Amazon, this market is low-hanging fruit, one that it won't go for (at least with its full push) in the foreseeable future.

And that in itself is how antitrust is working. Yes, it wasn't built with Big Tech in mind; who could have predicted such companies back then? Big Tech likely has the capacity and expertise to navigate an antitrust challenge. Antitrust may not have been built for this, but the current version of it seems to have been unintentionally repurposed: it is now akin to a tool of deterrence against Big Tech. To some extent, we may have Warren to thank for it.

Rohan Seth is a Policy Analyst at the Takshashila Institution. Views are personal. This article was first published in Deccan Chronicle.

Read More

SC Verdict: A positive step to realise VC's potential

On March 4, a historic day for the cryptocurrency industry in India, the Supreme Court of India quashed the Reserve Bank of India's (RBI) prohibition on the trade of virtual currency (VC). The road to the verdict has been long and arduous, and even this verdict is only a small win; how the cryptocurrency industry moves ahead remains to be seen. The Supreme Court concluded that the interdiction of VCs failed the four-pronged proportionality test and violated the fundamental right of the cryptocurrency exchanges to carry out any occupation, trade, or business. The Court held that there were less intrusive measures available to achieve the purposes the RBI intended. It also added that the RBI had not presented any empirical evidence to show that entities regulated by it suffered harm due to VC exchanges.

Nevertheless, the Court rejected almost all other arguments made by the petitioners. It upheld the right of the RBI to regulate VCs and observed that the RBI could regulate or prohibit anything that may impact the financial system of the country. On the petitioners' claim that the circular, being an executive action by the RBI, did not merit the same judicial deference as legislative action, the Court observed that the RBI was an autonomous institution responsible for maintaining the financial integrity of the country and enjoyed broad powers to govern activities that impact the monetary, credit, and payments systems in India.

The judgment came almost two years after the RBI, through a circular, had prohibited the use of virtual currency. The circular forbade entities regulated by the RBI from dealing with, or providing services to, individuals or business entities dealing in or settling virtual currencies. All entities already providing such services were asked to wind down within three months. Cryptocurrency tokens could undermine international policy frameworks such as anti-money laundering (AML) norms and the Financial Action Task Force (FATF) standards, designed to counter money laundering and terrorist financing, the RBI had posited. They could also adversely impact market integrity and capital controls, RBI deputy governor B P Kanungo had further explained at a press conference on April 5, 2018.

This led to the closure of many fledgling crypto exchanges within the country. Koinex, India's largest crypto exchange, shut down because of the circular. Unocoin, one of the early entrants in the bitcoin space in India, resumed fiat deposits on March 5, 2020, after suspending them in the summer of 2018; it had to lay off 50 percent of its employees after the ban. India has already lost valuable time, money, and talent in a promising industry.

It is believed that the government and the RBI hold similar opinions on cryptocurrencies. In February 2018, in his budget speech, then Finance Minister Arun Jaitley had categorically said that 'the government does not recognise cryptocurrency as legal tender or coin and will take all measures to eliminate the use of these crypto-assets in financing illegitimate activities or as part of the payments system'. An inter-ministerial committee submitted its recommendations on July 22, 2019, suggesting a ban on private cryptocurrencies and the criminalisation of activities related to VCs. The committee also submitted a draft bill, the Banning of Cryptocurrency & Regulation of Official Digital Currency Bill, 2019. The current Finance Minister, Nirmala Sitharaman, has said that 'countries will have to show extreme caution on cryptocurrencies'.

It will take much more to kickstart a decimated industry. Sensing the inclinations within the Finance Ministry and the RBI, and having been at the receiving end once before, the banks are likely to be circumspect in lending to the crypto exchanges.

A consistent policy framework needed

Coherent policy action by the government is required now. The government must identify the policy objectives it wishes to achieve. These would be some combination of checking money laundering, preventing terrorist financing, promoting greater financial inclusion, and ensuring financial stability. It must then evaluate all options, analyse empirical data, and choose the most effective and least invasive measure.

The RBI set up a sandbox for testing fintech products in April 2019. Opening this sandbox to cryptocurrencies would enable companies to live-test their new products in a controlled environment. This would not only promote innovation but also enhance knowledge and awareness of cryptocurrency projects among government officials, allowing them to take appropriate regulatory measures if and when global players such as Facebook launch their products in India.

The government should set up a specialised cryptocurrency advisory council to liaise with stakeholders across government, industry, and academia, and to suggest enabling regulation for the industry in India. The council would stay cognisant of legislation in different parts of the world, conduct India-focused studies, and recommend measures suited to the Indian landscape.

The Supreme Court judgment is a small step on the long road towards realising the immense potential of cryptocurrency, and the government should leverage this opportunity to inspire confidence and implement progressive legislation.

 (Utkarsh Narain is a technology policy researcher at Takshashila Institution, a centre for research and education in public policy in Bengaluru. This commentary was published in Deccan Chronicle on March 12. The views expressed are personal.)

Read More

Using Tech to Deal with Covid-19 Is Problematic

Covid-19 has taken the world by storm. With Covid-19 classified as a pandemic, recent predictions claim that within the coming year, 40-70% of people around the world will be infected (including mild or asymptomatic forms). In a sense, China, as the epicenter of the outbreak, has reluctantly taken the world through a learning curve on how technology intersects with policy in public health emergencies.

As the number of cases rose in China, the ruling party's response has been interesting. Since early February, China has been encouraging citizens to return to work. But while the government does that, it has also begun regulating people's movement through their smartphones. The system is currently present in 200 cities and is being rolled out nationwide. Users fill in a form on the Alipay app with their personal details and are presented with a QR code, which can be green, yellow, or red. If your code is green, you are free to move about unrestricted. A yellow code means staying at home for a week, whereas a red one means a two-week quarantine.

On the surface, this makes sense. People who are predicted to be at risk should take precautions to ensure they don't spread the virus, and software is a great medium to help achieve that. In a pandemic of this scale and seriousness, workers in public places like metro stations, subways, and residential societies should have the power to check who may be a contagion risk.

But once you take a closer look, it becomes evident that tech does not always mirror society. People do not always fall neatly into green, yellow, and red categories. Data that classifies people may be riddled with biases. Algorithms may come to unjustified and false conclusions that put people at risk. Data shared with law enforcement agencies infringes on people's privacy. All of this is evident now, making China an excellent case study to learn from. The New York Times has done exceptional reporting on this.
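To see how such a colour-code system can misfire, consider a toy version of its logic. This is purely a hypothetical sketch — Alipay's actual inputs and weights are undisclosed — but a crude rule such as "flag anyone from a region bordering the outbreak" illustrates how a blunt proxy sweeps up healthy people along with at-risk ones:

```python
# Hypothetical sketch of a colour-code classifier.
# Alipay's real parameters are undisclosed; these rules are invented
# to illustrate how blunt proxies misclassify people.

HOTSPOTS = {"Hubei"}                 # centre of the outbreak
BORDERING = {"Anqing"}               # regions adjacent to a hotspot

def assign_code(home_region: str, has_symptoms: bool, recent_hotspot_travel: bool) -> str:
    """Return 'green', 'yellow', or 'red' from crude, opaque rules."""
    if has_symptoms or home_region in HOTSPOTS or recent_hotspot_travel:
        return "red"        # two-week quarantine
    if home_region in BORDERING:
        return "yellow"     # a week at home, regardless of actual exposure
    return "green"          # free movement

# A healthy resident of Anqing is restricted purely by geography.
print(assign_code("Anqing", has_symptoms=False, recent_hotspot_travel=False))    # yellow
print(assign_code("Hangzhou", has_symptoms=False, recent_hotspot_travel=False))  # green
```

The point is not these specific rules but that any opaque proxy (geography, here) produces confident-looking outputs with no way for the affected person to see or contest the reasoning.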
In one particular case, Leon Lei, 29, was allotted a green code on Alipay before leaving his hometown, Anqing, to return to work in Hangzhou. A day before he departed, his code turned red for no apparent reason. It is hard to say why the code changed or what parameters the algorithm used to identify people at risk. A working theory is that Leon's hometown, while not itself a hotbed for the virus, borders Hubei Province, the center of the outbreak, and the software changed the colour as a result. But it is hard to know for sure.

Had location been the deciding factor, it is safe to assume that an increasing number of people in Anqing would get red codes even if they were not at risk, making it harder for them to move to safer areas. Vanessa Wong faced this situation: she had no symptoms, yet her code suddenly turned red. Her employer and housing complex required green codes for entry, leaving her stranded in Hubei. In addition, the app reportedly sends users' locations and an identifier to the police.

This brings us to a larger question: what is a responsible way to use tech in such emergencies? State capacity is limited, and technology is a handy tool that allows governments to bridge gaps. But as China teaches us, such solutions have very significant limitations. They do not mirror society accurately, can be biased, infringe on privacy, and have the potential to do considerable harm.

This is why monitoring apps such as Alipay need to be more transparent. It is better to disclose what data is being collected and how much weight each parameter carries. Citizens would then have a basic understanding of why their codes turned a particular colour and what they can do to be safer. It is precisely because there is little to no transparency in the Alipay process that uninfected citizens end up stranded and at risk.

People often tend to claim that technology is just a tool.
It is value-neutral, the argument goes, and does not discriminate between groups. That seems like a benign sentiment but is dangerously misleading. When it comes to outcomes, history, and China today, teach us that tech ends up choosing winners and losers, unintentionally so. Covid-19 is a crisis that should not be wasted; let it at least teach us that.

This article was first published in Deccan Chronicle. Views are personal.

Read More

Why Amazon CEO Jeff Bezos's $10 bn to fight climate change may not help

This article was first published in the Deccan Herald.

Amazon CEO Jeff Bezos recently announced through an Instagram post that he would donate $10 billion from his personal wealth to the newly created Bezos Earth Fund to fight climate change. The global initiative will fund scientists, activists, and NGOs, according to the post. However, questions remain unanswered: when will the money be disbursed, and will the fund be a private foundation, a limited liability corporation, or a donor-advised fund?

In recent years, we have seen increased instances of giving by mega-billionaires. Warren Buffett committed a majority of his wealth to the Bill & Melinda Gates Foundation. Mark Zuckerberg pledged 99 per cent of his Facebook shares to the Chan Zuckerberg Initiative soon after the birth of his daughter in 2015. Billionaires like Infosys' Nilekanis and Wipro's Azim Premji have signed the 'Giving Pledge', committing the majority of their wealth. Bezos, who hasn't signed the 'Giving Pledge', is the latest to jump onto the strategic philanthropy bandwagon.

While the individual grant by Bezos is laudable, fighting the adverse effects of climate change will require 'collective action from big companies, small companies, nation-states, global organisations, and individuals', as Bezos's post acknowledges. Thus, to understand the direction the fund may take, it makes sense to analyse Amazon's own policies and actions on climate change over the years.

On September 19, 2019, Amazon signed 'The Climate Pledge' and committed to meeting the requirements of the Paris Agreement by 2040, ten years ahead of the 2050 deadline. For the record, Amazon releases 128.9 grams of CO2 equivalent per dollar (USD) of Gross Merchandise Sales (GMS). It aims to fulfil 80 per cent of its energy requirements across all businesses through renewable energy by 2024 and raise that share to 100 per cent by 2030.
Investing $100 million in reforestation projects around the world and securing a fleet of 100,000 electric delivery vehicles also feature as goals in the Amazon Sustainability Report 2019. Approximately 80 per cent of Amazon's total emissions, which equal 44.40 million metric tons (mmt) of CO₂ equivalent, come from indirect sources, with corporate purchases, Amazon-branded product emissions, third-party transportation, packaging, and upstream energy-related emissions forming the majority.

Amazon's treatment of the climate action activists from within the company, who formed the Amazon Employees for Climate Justice group in April 2019, has been less than encouraging. An open letter to Jeff Bezos and the Board of Directors, signed by 8,702 employees, asked the company to 'adopt the climate plan shareholder resolution and release a company-wide climate plan' to tackle the climate crisis. Bezos used his influence and 16 per cent stake to vote down the proposal at the Annual General Meeting of Amazon's shareholders. However, the support the group garnered from other stakeholders in the company made Bezos rethink his position and led to the birth of the above-mentioned 'Climate Pledge'.

The climate group has also urged Amazon to shift to renewable sources for Amazon Web Services, its most profitable business. Amazon continues to award contracts to fossil fuel companies for powering its data centres for cloud services. Amazon is not alone in this regard: Big Tech companies, including Google and Microsoft, are building partnerships with fossil fuel companies to leverage Artificial Intelligence to extract oil from the earth more efficiently. It remains to be seen whether Amazon breaks the trend and puts its money where its mouth is. Amazon also sponsored a gala by the Competitive Enterprise Institute, a free-market think tank that engages in climate change denial.

Governments have a significant role when it comes to spending to fight climate change.
The Paris Climate Accord was signed between countries, not companies (even though Amazon did make a pledge). Governments are better placed to fight climate change because the trade-offs they face are inherently different from those of private companies. For example, when Amazon claims that it wants to be carbon neutral, it will have to revise its practices to achieve that goal. That could mean cutting corners and making compromises when the company's own interests are at stake. Governments, unlike private enterprises, do not face the threat of extinction, which gives ministries and departments the luxury of a longer-term vision.

When you take that into account, it makes sense to fund governments better by paying taxes rather than donating personal wealth through commitments made on Instagram. However, Amazon has not been a great taxpayer. From 2008 to 2018, Amazon paid $1.5 billion in corporate taxes. Its closest competitor, Walmart, paid $64 billion by comparison. Keep in mind that between September 2008 and September 2018, the value of Amazon's stock grew more than twenty-four times, from $78.3 to $1,915, while Walmart's stock price went from $59.73 to $94.59. Going by that growth, Amazon arguably should have paid far more than Walmart's $64 billion; instead, it paid $1.5 billion.

Putting Amazon's prior record into perspective, across its policies, its treatment of employees, its contracts with fossil fuel companies, and the low taxes it has paid, the $10 billion individual grant is not close to what Amazon could do to minimise its carbon footprint and fight climate change. It is a welcome gesture, but we need much more to confront this global challenge.

(Utkarsh Narain and Rohan Seth are technology policy analysts at the Takshashila Institution, an independent centre for research and education in public policy in Bengaluru.)

Read More

Privacy Is Not Dead, It Is Terminally Ill

This article was first published in Deccan Chronicle.

Earlier last week, The Verge ran a story about how health apps had permissions to change their terms of service without the user's knowledge. If you are a recovering alcoholic who tracks how many days it has been since your last drink, or a depressed professional keeping a record of how your days are progressing, that is horrible news. It sets the precedent that it does not matter what conditions you agreed to when you signed up for the app. Your information can, and likely will, be sold to companies that may want to sell you alcohol or medication.

The news comes as a shock to most people who read it, especially considering the personal and sensitive nature of health data. But that is the nature of the terms and conditions that technology companies set out in their agreements today. A significant source of revenue for tech products and applications is the data they sell to their clients based on your usage. And it does not make sense for them to keep asking you for new kinds of permissions every time they want to track or access something. Instead, it works better to have a long-form document that is widely encompassing and grants them all the permissions they might ever need, including the permission to change the terms of the agreement after you have signed. After all, no one reads the privacy policies before clicking 'I Agree'.

This was on display last year when Chaayos started using facial recognition and Nikhil Pahwa went through its privacy policy to unearth this line: "Customer should not expect, that customer's personal information should always remain private". The rest of the privacy policy essentially conveyed that Chaayos collects customer data but does not guarantee privacy. And Chaayos is not the cause of this extremely exploitative attitude towards data; it is a symptom.
The history of the internet, and the revenue model it gave birth to, has led to this point where access to information is a paramount need. If you want a better understanding, The New York Times did an excellent job tracing the history of Google's privacy policy, which doubles as a history of the internet. Because of how little regulation existed in the internet space when it was a sunrise industry, today's frontrunners ran with our data on their terms. Throughout all of this, consent has been virtually non-existent.

I use the word 'virtually' deliberately. Consent has largely been a placeholder during the internet's rich history, for two reasons. Firstly, terms and conditions lead to consent fatigue. Even the best of lawyers do not go through the conditions for every app before they click accept. Secondly, let's say you press the decline button when asked for additional permissions. Apps have been known to bypass the operating system's permission system and gather that data without consent.

But let's say we live in an ideal world and apps don't do that. You manage to read a few agreements and make a conscious decision: you are happy to give your consent for access to the microphone but not the location, and thus deny that permission. There is a chance it still doesn't matter. Consents tend to be interlinked because of the nature of the internet and smartphone apps. Consider the automation app If This Then That (IFTTT), a platform for automating functions across multiple services. It can, for instance, log every trip you take on Uber to a Google Sheet. Sounds like a helpful way to track and claim work reimbursements, doesn't it? But if you use that service, you are subject to three interlinked policies: Uber's, G Suite's, and IFTTT's. At this point, any data you generate from that automation will likely be sold for profit.

How do we tackle something like this?
How do we make sure that privacy is respected and that companies cannot change their agreements once you click accept? Google took a small step in that direction by introducing in-context permissions in Android 10. The idea is that if an app wants an additional permission, say access to your microphone or your location, it asks you when it needs it instead of front-loading all requests. We are yet to see how effective this will be over time. At their best, in-context permissions will tell you why Paytm needs access to your location (likely to help detect fraud), or reveal that your SMS app has been recording your location in the background for no apparent reason. At their worst, they make consent fatigue worse.

In-context permissions are likely not the only answer, but they are a start. Google implementing them is a definite sign that privacy is not dead, just terminally ill. Given time, and combined with measures such as simplified permissions, our generation might see a day when we completely control our data.

Views are personal.
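The shift from front-loaded to in-context permissions can be illustrated with a small sketch. This is a conceptual model in Python, not the actual Android API: the point is simply that access is requested at the moment a capability is first used, rather than all at once at install time.

```python
# Conceptual sketch of in-context permissions (not the real Android API).
# Instead of requesting every permission up front, the app asks the user
# at the moment a capability is first needed.

class PermissionDenied(Exception):
    pass

class InContextPermissions:
    def __init__(self, prompt):
        self.granted = set()
        self.prompt = prompt          # invoked only when a permission is first needed

    def require(self, permission: str):
        """Ask for a permission in context; raise if the user refuses."""
        if permission not in self.granted:
            if not self.prompt(permission):   # the in-context dialog
                raise PermissionDenied(permission)
            self.granted.add(permission)

# The user approves the microphone but refuses location.
perms = InContextPermissions(prompt=lambda p: p == "microphone")

perms.require("microphone")           # asked now, granted
try:
    perms.require("location")         # asked only when needed, and refused
except PermissionDenied:
    print("location access denied at point of use")
```

Nothing is requested until it is needed, which is the transparency gain; the cost, as noted above, is that each extra prompt is one more chance for consent fatigue.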

Read More

Intermediary guidelines might infringe on privacy

This article was first published in Deccan Chronicle.

If you try to keep up to date with the tech policy debates in India, intermediary liability is one of those few topics you cannot escape. In very oversimplified terms, the debate here is whether companies like Facebook should be held accountable for the content that is posted on them.

The Ministry of Electronics and Information Technology (MeitY) came up with proposed changes to the intermediary guidelines back in December 2018. Since then, debate around the topic has been intense, and the new, finalised guidelines are expected to come out in the next few weeks.

So when Bloomberg reported that MeitY is expected to put out the new rules later this month without ‘any major changes’, speculation around the guidelines was replaced by concern.

One of the most contentious clauses of the intermediary guidelines is the requirement to make messages and posts traceable to their origins. That would mean WhatsApp would need to use its resources to track where a message originated and then report that to the government.

As The Verge puts it, tech companies could essentially be required to serve as deputies of the state, conducting investigations on behalf of law enforcement, without so much as a court order.

That is deeply troubling. In contemporary India, we have either begun to take secure messaging for granted or just do not think about how secure our communications are today.

Here, context matters. More often than not, when I talk about privacy and end-to-end encryption, I am met with glazed eyes. That is understandable; people find it hard to see how encryption affects their lives. But humor me in a thought experiment. As you read this, take a look around you. Take a good look at the person physically closest to you at this moment and ask yourself whether you would be okay with disabling the security on your phone and giving it to them for three days. If the thought of doing that makes you even slightly uncomfortable, you now understand why privacy matters.

Under the new rules, privacy is going to be chipped away for anyone in India who uses WhatsApp (or any other end-to-end encrypted service). Add to that the fact that India today does not have the strongest of institutions; the controversy over whether NaMo TV was a governance tool or a political one taught us that. So if there is anything the political climate tells us today, it is that there is a very real chance that these guidelines can and will be used for political gain.

The other side of the story is that these are intermediary guidelines and do not apply only to platforms. 'Intermediary' is a broader term that encompasses not just platforms such as Facebook, Telegram, or Signal, but also cloud service providers, ISPs, and even cybercafés.

Not all of these players have equal access to information when ordered by law enforcement agencies to disclose it. A consultation report released by MediaNama listed instances of harassment of intermediaries.

According to the report, ISPs claimed to live under constant threat and to be made to feel like criminals for running their businesses. During raids, people and their families were often asked to part with their phones and electronics, along with their passwords. In fact, according to the report, when the cloud service provider for an app in Andhra Pradesh was approached by the police with a request for information, it went out of business because it was unable to comply. Not all intermediaries are created equal, and these guidelines do not acknowledge that.

But the broader problem I see is that there is no clear problem statement: what these guidelines are trying to address is not spelled out. If the agenda is for law enforcement agencies to access information on digital communications (and that is essential to maintain law and order), it does not make sense to do it through these means. There are international provisions that India can and should use instead (the CLOUD Act in particular).

Once we go down this route, there is a non-zero chance that intermediaries such as WhatsApp might stop providing their services in India, especially since, if they comply, it will set a precedent for other countries to follow India's approach to breaking encryption. This could end up turning these intermediaries into government lieutenants. Regardless of what platform we choose to communicate on, we need to value privacy going forward. If you disagree, now might be a great time to unlock your phone and hand it over to the person physically closest to you.


The writer is a research analyst at The Takshashila Institution. All views are the author's own.

Read More

NRC website imbroglio highlights need for govt accountability

This article was first published in Deccan Chronicle.

Last week, multiple news outlets reported that the website housing NRC data had gone offline. Reportedly, this happened because a cloud services contract, procured by Wipro on behalf of the state government of Assam, was not renewed, and the site was turned off due to non-payment. For now, officials have given assurances that the data itself is safe. Some aspersions have also been cast on former state officials who worked on the NRC project. This is still a developing story, with multiple conspiracy theories being floated about the root cause, on a spectrum from malintent to negligence to good old-fashioned incompetence.

From a public policy perspective, multiple questions come up: should the state be contracting with private enterprise? How accountable should the state be when data is lost, or when people are harmed by its accumulation? How much data should the state gather about its citizens, and what is the potential for misuse? Let's look at these, starting from the narrowest question and expanding outwards.

AWS V/S MEGHRAJ

One of the reasons for outrage has been the use of Amazon Web Services to host the site, especially when the National Informatics Centre (NIC) itself offers a cloud service called 'MeghRaj'. The concern cited is that the data may leave the country, or that private contractors will be able to access sensitive data. It is almost cliched to say that the internet has no borders, but this distinction is important: data is not any safer just by virtue of being in India at a state-operated facility. On the contrary, it is probably better for a website and its data to be hosted with industry-leading operators that follow best practices and have the expertise to efficiently manage both operations and security. One must consider both the capacity and the role of the state in this context. What is the market failure the state is addressing by offering cloud hosting services in a market where the likes of Amazon, Google, and Microsoft operate?

The objection regarding contractor access to sensitive information is important and merits further consideration. To a large extent, it can be addressed by a contractual requirement restricting access to individuals with security clearances. Yes, this brings in a principal-agent problem and the lax enforcement of contract law in India. But contrast it with the alternative, an individual representing the state, where the principal-agent problem is even more acute. As things stand, there are options to hold a private entity accountable for violating a contract, but a lower probability of punitive action against an individual representing the state for harm arising from their action or inaction. As causes for outrage go, the fact that the data was stored with AWS should not be one. There are larger aspects at play here.

STATE ACCOUNTABILITY

This incident brings with it a much larger question about the accountability the government should have for data. The Indian government keeps a substantial amount of personal and sensitive data on its citizens: how much gas you consume, your physical address, the make, model, and registration number of your car, and how many times you traveled out of the country in the last 10 years. That is more sensitive information than most companies in the private sector hold.

Keeping this (and the social contract) in mind, how accountable should the government be? According to the draft Personal Data Protection Bill, not very. Section 35 of the Bill allows the government to exempt whole departments from its provisions, removing checks and balances that should exist when the government acts as a collector or processor of your data.

How does that make sense? Why should the state be any less accountable than a private enterprise? In fact, the government has already sold citizens' data without their consent (~25 crore vehicle registrations and 15 crore driving licenses) to the private sector for revenue. As of now, it is hard to conclude whether the incident occurred due to malintent, negligence, or incompetence. But regardless of the cause, the lesson is the same: the government and all its departments need to be more responsible, and be held more accountable, for the data they store and process.

IMPLICATIONS OF A DATA-HUNGRY STATE

A case can be made that the state is not a monolith, and that certain barriers and redundancies mean government databases do not talk to each other… yet. Chapter 4 of the 2018-19 Economic Survey of India envisioned data as a public good and advocated "combining … disparate datasets." The combination of limited state capacity, lack of accountability, and a hunger for data can be a dangerous one. While capacity can be supplemented by private enterprise, there is no substitute for accountability. It is therefore extremely important to consider, understand, and debate the chronology, implications, and potential for misuse before going ahead with large-scale activities that could end up severely disrupting many millions of lives.

(The writers are research analysts at The Takshashila Institution. All views are the authors' own and are personal.)

Read More
High-Tech Geopolitics Prateek Waghre

Tackling Information Disorder, the malaise of our times

This article was originally published in Deccan Herald.

The term 'fake news', popularised by a certain world leader, is today used as a catch-all for any situation in which there is a perceived or genuine falsification of facts, irrespective of intent. But the term itself lacks the nuance to differentiate between the many kinds of information operations that are common, especially on the internet.

Broadly, these can be categorized as disinformation (false content propagated with the intent to cause harm), misinformation (false content propagated without the knowledge that it is false/misleading or the intention to cause harm), and malinformation (genuine content shared with a false context and an intention to harm). Collectively, this trinity is referred to as ‘information disorder’.

Over the last four weeks, Facebook and Twitter have made some important announcements regarding their content moderation strategies. In January, Facebook said it was banning ‘deepfakes’ (videos in which a person is artificially inserted by an algorithm based on photos) on its platform. It also released additional plans for its proposed ‘Oversight Board’, which it sees as a ‘Supreme Court’ for content moderation disputes. Meanwhile, in early February, Twitter announced its new policy for dealing with manipulated media. But the question really is whether these solutions can address the problem.

Custodians of the internet

Before dissecting the finer aspects of these policies to see if they could work, it is important to unequivocally state that content moderation is hard. The conversation typically veers towards extremes: Platforms are seen to be either too lenient with harmful content or too eager when it comes to censoring ‘free expression’. The job at hand involves striking a difficult balance and it’s important to acknowledge there will always be tradeoffs.

Yet, as Tarleton Gillespie says in Custodians of the Internet, moderation is the very essence of what platforms offer. This is based on the twin-pillars of personalisation and the ‘safe harbour’ that they enjoy. The former implies that they will always tailor content for an individual user and the latter essentially grants them the discretion to choose whether a piece of content can stay up on the platform or not, without legal ramifications (except in a narrow set of special circumstances like child sex abuse material, court-orders, etc.) This of course reveals the concept of a ‘neutral’ platform for what it is, a myth. Which is why it is important to look at these policies with as critical an eye as possible.

Deepfakes and Synthetic/Manipulated Media

Let’s look at Facebook’s decision to ban ‘deepfakes’ using algorithmic detection. The move is laudable; however, it will not address the lightly edited videos that also plague the platform. Additionally, disinformation agents have modified their modus operandi to use malinformation, since it is much harder for algorithms to detect. This form of information disorder is also very common in India.

Twitter’s policy goes further and aims to label/obfuscate not only deepfakes but any synthetic/manipulated media after March 5. It will also highlight and notify users that they are sharing information that has been debunked by fact-checkers. In theory, this sounds promising but determining context across geographies with varying norms will be challenging. Twitter should consider opening up flagged tweets to researchers.

The ‘Supreme Court’ of content moderation

The genesis of Facebook’s Oversight Board was a November 2018 Facebook post by Mark Zuckerberg ostensibly in response to the growing pressure on the company in the aftermath of Cambridge Analytica, the 2016 election interference revelations, and the social network’s role in aiding the spread of disinformation in Myanmar in the run-up to the Rohingya genocide. The Board will be operated by a Trust to which the company has made an irrevocable pledge of $130 million.

For now, cases will be limited to individual pieces of content that have already been taken down and can be referred in one of two ways: By Facebook itself or by individuals who have exhausted all appeals within its ecosystem (including Instagram). And while the geographical balance has been considered, for a platform that has approximately 2.5 billion monthly active users and removes nearly 12 billion pieces of content a quarter, it is hard to imagine the group being able to keep up with the barrage of cases it is likely to face.

There is also no guarantee that geographical diversity will translate into the genuine diversity required to deal with the kind of nuanced cases that may come up. Nor is there any commitment as to when the Board will be able to look into instances where controversial content has been left online. Combined with the potential failure of the deepfakes policy to address malinformation, this results in a tradeoff where harmful, misleading content will likely stay online.

Another area of concern is the requirement to have an account in the Facebook ecosystem to be able to refer a case. Whenever the Board’s ambit expands beyond content takedown cases, this requirement will exclude individuals and groups not on Facebook/Instagram from seeking recourse, even if they are impacted.

The elephant in the room is, of course, WhatsApp. With over 400 million users in India and support for end-to-end encryption, it is the main vehicle for information disorder operations in the country. The oft-repeated demands for weakening encryption and providing backdoors are not the solution either.

Information disorder, itself, is not new. Rumours, propaganda, and lies are as old as humanity itself and surveillance will not stop them. Social media platforms significantly increase the velocity at which this information flows thereby increasing the impact of information disorder significantly. Treating this solely as a problem for platforms to solve is equivalent to addressing a demand-side problem through exclusive supply-side measures. Until individuals start viewing new information with a healthy dose of skepticism and media organisations stop being incentivised to amplify information disorder there is little hope of addressing this issue in the short to medium term.

(Prateek Waghre is a research analyst at The Takshashila Institution)

Read More

Fact-checking alone won't be enough in fight against fake news

Google has recently announced a $1 million grant to help fight misinformation in India. This could not have come at a better time. Misinformation is a reality and a by-product of the Indian and global information age. It could be Kiran Bedi on Twitter claiming that the sun chants Om, or WhatsApp forwards saying that Indira Gandhi entered JNU with force and made the leader of the students’ union, Sitaram Yechury, apologise and resign. As someone who was subject to both these pieces of misinformation, I admit I ended up believing both of them at first, without a second thought.

While both of those stories are relatively harmless, misinformation has an unfortunate history of causing fatalities. For instance, in Tamil Nadu, a mob mistook a 65-year-old woman for a child trafficker. So when they saw her handing out chocolates to children, they put two and two together and proceeded to lynch her. Because of instances like these, and because misinformation has the power to shape the narrative, there is an urgent need to combat it.

Countries have already begun to take notice and devise measures. For instance, when ISIS was a greater force and Russia was emerging as a misinformation threat, the US acknowledged that it was engaged in a war against misinformation. To that end, the Obama administration appointed Richard Stengel, former editor of TIME magazine, as the Under Secretary for Public Diplomacy in the State Department to deal with the threat. Stengel later wrote a book called Information Wars, in which he acknowledged the limitations of the state in providing an effective counter to misinformation through fact-checking.

When we try to tackle misinformation, we reason through it based on fundamentally incorrect assumptions. Typically, when we think of misinformation, we picture it as a pollutant that hits a population and spreads.
Here we imagine that the population misinformation affects is largely passive and homogenous. This theory does not take into account how people interact with the information they receive, or how their contexts shape it. It is a simple theory of communication and does not appreciate the complexities within which the world operates. Amber Sinha elaborates on this in his book, The Networked Public.

Paul Lazarsfeld and Joseph Klapper debunked this theory of a passive population in the 1950s. Their argument was that context matters. Mass communication and information combined do have the potential to reinforce beliefs, but that reinforcement largely depends on perception, selective exposure, and retention of information. Lazarsfeld and Klapper’s work is a more sobering look at how misinformation spreads. Most importantly, it tells us why fact-checking doesn’t work.

People are not always passive consumers of information. Multiple factors significantly impact how information is consumed, such as perception, selective exposure, and confirmation bias. Two people can interpret the same piece of information differently. This is why we see that the media does not change beliefs and opinions but almost always ends up reinforcing them. So just because people are exposed to facts does not mean the problem is fixed.

I tried to test this myself. To the person who had sent me the story about Indira Gandhi making Sitaram Yechury apologise and resign, I forwarded a link and a screenshot that debunked the forward. To my complete lack of surprise, they did not respond. Similarly, when Kiran Bedi was told that NASA had not confirmed that the Sun sounds like Om, she responded by tweeting, “We may agree or not agree. Both choices are 🙏”.

That makes sense. Remember the last time someone fact-checked you, or blurted out a statement that went against your worldview. No one likes cognitive dissonance.
When our beliefs are questioned, we feel uneasy, and our brain tries to reconcile the conflicting ideas to make sense of the world again. It is no fun having your belief system shaken.

This brings us back to square one. Misinformation is bad and has the potential to conjure divisive narratives and kill people. If fact-checking does not work, how do we counter it? I do not know the answer, but I would argue that it lies in patience and reason. We often think that leading with facts wins us an argument. In recent times, I have been guilty of that more often than I would like. But doing that just leads to cognitive dissonance, reconciliation of facts with beliefs, and regression to older values. We need to fundamentally rethink how we tackle misinformation.

This is why Google’s grant comes at an opportune time. We are yet to see how it will contribute to combating misinformation. While fact-checking is good and should continue, it is not nearly enough to win the information wars.

Read More

We need to revise our approach to anonymised data

Data is a complex, dynamic subject, yet we often like to sort it into large buckets. The Personal Data Protection Bill does this by creating five broad categories: personal data, personal sensitive data, critical personal data, non-personal data, and anonymised data. While these classifications help us make sense of how data operates, it is important to remember that the real world does not work this way.

For instance, think about surnames. A list of Indian surnames in a dataset would not, on its own, be enough to identify people, so you would put that dataset under the ambit of personal data. But in India, where context matters, a surname can tell you a lot more about a person, such as their caste. Surnames alone might not identify individuals, but they can go on to identify whole communities. That makes surnames more sensitive than ordinary personal data, and you could make a case for including them in the personal sensitive category.

And that is the larger point here, data is dynamic, as a result of how it can be combined or used alone in varying contexts. As a result, it is not always easy to pin it down to broad buckets of categories.

This is something that is often not appreciated enough in policy-making, especially in the case of anonymised or non-personal data. Before I go on, let me explain the difference between the two, as there is a tendency to use them interchangeably.

Anonymised data refers to a dataset from which the immediate identifiers (such as names or phone numbers) have been stripped. Non-personal data, on the other hand, is a broader, negative term: anything that is not personal data can technically come under this umbrella, from traffic signal data to a company's growth projections for the next decade.

Not only is there a tendency to use the terms interchangeably, but there is also a false underlying belief that data, once anonymised, cannot be deanonymised. The assumption is false because data is essentially like puzzle pieces: combine enough anonymised data and you can deanonymise and identify individuals, or even whole communities. For instance, if a malicious hacker has access to a history of your location through Google Maps and can combine it with a history of your payments from your bank account (or Google Pay), s/he does not need your name to identify you.
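The linkage idea can be illustrated with a toy sketch. This is not how any real attack or dataset looks; the pseudonyms, timestamps, and places below are entirely invented. The point is only that two datasets, each stripped of names, can still single out a person when joined on shared attributes such as time and place:

```python
# Hypothetical "anonymised" location pings: (pseudonym, timestamp, place).
# No names anywhere in either dataset.
location_pings = [
    ("user_41", "2020-02-01 09:05", "MG Road Metro"),
    ("user_41", "2020-02-01 13:30", "Koramangala Cafe"),
    ("user_87", "2020-02-01 09:05", "Indiranagar Gym"),
]

# Hypothetical "anonymised" card payments: (card_token, timestamp, merchant).
payments = [
    ("card_X9", "2020-02-01 13:30", "Koramangala Cafe"),
    ("card_Z2", "2020-02-01 18:00", "Indiranagar Gym"),
]

def link(pings, pays):
    """Pair up pseudonyms that co-occur at the same time and place."""
    matches = set()
    for user, t1, place1 in pings:
        for card, t2, place2 in pays:
            if t1 == t2 and place1 == place2:
                matches.add((user, card))
    return matches

print(link(location_pings, payments))  # {('user_41', 'card_X9')}
```

If `card_X9` is ever tied to a real identity, for example by the issuing bank, then `user_41`'s entire location history is deanonymised along with it. Neither dataset was "personal data" in isolation; the combination is.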

In the Indian policy-making context, there does not seem to be a realization that anonymisation can be reversed once you have enough data. The recently introduced Personal Data Protection Bill seems to be subject to this assumption.

Through Section 91, it allows “the central government to direct any data fiduciary or data processor to provide any personal data anonymised or other non-personal data to enable better targeting of delivery of services or formulation of evidence-based policies by the Central government”.

There are two major concerns here. Firstly, Section 91 gives the Government the power to gather and process non-personal data, and multiple other sections ensure that this power is largely unchecked. For instance, Section 35 gives the Government the power to exempt itself from the constraints of the bill, and Section 42 ensures that the Data Protection Authority, instead of being independent, is constituted by members selected by the Government. Such unchecked power over the collection and processing of data is problematic, especially because it could give the Government the ability to use this data to identify minorities.

Secondly, it just does not make sense to address nonpersonal data under a personal data protection bill. Even before this version of the bill came out, there had been multiple calls to appoint a separate committee to come up with recommendations in this space. It would have then been ideal to have a different bill that looks at non-personal data. Because the subject is so vast, it does not make sense for it to be governed by a few lines in Section 91 for the foreseeable future.

So the bottom line is that anonymised data and nonpersonal data can be used to identify people. The government having unchecked powers to collect and process these kinds of data has the potential to lead to severely negative consequences. It would be better instead, to rethink the approach to non-personal and anonymised data and have a separate committee and regulation for this.

This article was first published in Deccan Chronicle.

(The writer is a technology policy analyst at the Takshashila Institution. Views are personal)

Read More
High-Tech Geopolitics Prateek Waghre

Budget and Cybersecurity, a missed opportunity

This article originally appeared in Deccan Chronicle.

In the lead-up to the 2020 Budget, the industry looked forward to two major announcements with respect to cyber security. First, the allocation of a specific ‘cyber security budget’ to protect the country’s critical infrastructure and support skill development. In 2019, Rear Admiral Mohit Gupta (head of the Defence Cyber Agency) had called for 10% of the government’s IT spend to be put towards cyber security. Second, a focus on cyber security awareness programmes was seen as critical, especially considering the continued push for ‘Digital India’.

On 1st February, in a budget speech that lasted over 150 minutes, the finance minister made two references to ‘cyber’: once in the context of cyber forensics, to propose the establishment of a National Police University and a National Forensic Science University, and once when cyber security was cited as a potential frontier that quantum technology would open up. This was a step up from the last two budget speeches (July 2019 and February 2019), neither of which referred to ‘cyber’ in any form. In fact, the last time ‘cyber’ was used in a budget speech was in February 2018, in the context of cyber-physical weapons.

Other recent developments, such as the National Security Council Secretariat’s (NSCS) call for inputs for a National Cyber Security Strategy (NCSS), the inauguration of a National Cyber Forensics Lab in New Delhi, and an acknowledgement by Lt Gen Rajesh Pant (National Cyber Security Coordinator) that ‘India is the most attacked in cyber sphere’, signal that the government does indeed consider cyber security an important area.

While the proposal to establish a National Forensic Science University is welcome, it will do little to meaningfully address the skill shortage problem. The Cyber Security Strategy of 2013 had envisioned the creation of 500,000 jobs over a five-year period.
A report by Xpheno estimated that there are 67,000 open cyber security positions in the country. Globally, Cybersecurity Ventures estimates there will be 3.5 million unfilled cyber security positions by 2021, 2 million of them in the Asia-Pacific region.

It is unfair to expect this gap to be filled by state action alone; yet the budget represents a missed opportunity to nudge industry and academia towards meeting this demand at a time when unemployment is a major concern. The oft-reported instances of cyber or cyber-enabled fraud that one sees practically every day in the newspaper clearly point to a low level of awareness and cyber hygiene among citizens. Allocating additional funds for MeitY’s Cyber Swachhta Kendra in the Union Budget would have sent a strong signal of intent towards addressing the problem.

Prateek Waghre is a research analyst at The Takshashila Institution, an independent centre for research and education in public policy.

Read More

Data Protection Bill set to bring yet another shock for companies

The debate and protests around the Citizenship Amendment Act and the National Register of Citizens have dominated headlines around the nation, and rightfully so. While public attention and the news cycle continue to revolve around the issue, the Ministry of Electronics and Information Technology (MeitY) has released a Personal Data Protection Bill.

Having read the bill, Justice B.N. Srikrishna (chair of the committee that drafted the initial report on data protection) has said it has the potential to turn India into an Orwellian state. The statement is based on legitimate grounds, and that should give most people sleepless nights.

The Personal Data Protection Bill does give the government the power to exempt itself from the legislation. It also gives the State significant powers to demand data, and also places significant restrictions on cross-border data flows.

All of this is troubling on multiple levels and is being written about in columns and articles throughout India’s tech policy space. What is not getting enough attention, however, is that the bill is also bad news for the Indian economy, that too when it is the last thing India needs right now.

There are several counts on which the bill, in its current form, will have a negative impact on the economy. Most important among them is the timeline for enforcement. The 2018 version of the Bill provided for a period of adjustment and compliance before the enforcement of its provisions: Section 97’s transitional provisions gave industry 18 months before mandating compliance.

Having a defined period of time that affords the industry space to come into compliance is objectively good policy. You could debate how long that period should be, but having a transition plan at all should be common ground. For example, Europe’s data protection law, the GDPR, was adopted in April 2016 but enforced almost two years later, in May 2018.

What this tells us is that policy does not work like a light switch: flicking it on does not magically ensure the intended effects. The current version of the Bill does away with a transitional period altogether, giving companies that collect data no time to adhere to its requirements. If implemented without a transition period, the bill would give the government grounds to penalise companies for not complying with directives that did not exist a day before the bill was introduced. Bangalore, being the hub of the Indian IT sector, is likely to be impacted the most, with Mumbai, Hyderabad, and Delhi-NCR in tow.

Not only does the bill offer no transition period, it also makes it much harder to carry out data processing outside India. If companies want to outsource the processing of personal sensitive data to a different country, they need to do so under an intra-group scheme approved by the Data Protection Authority (DPA).

There are two things to consider here. Firstly, the DPA will be set up only after the bill is passed. Staffing it and providing it with the right infrastructure and resources could take months from when the bill is enforced. Since there is no transition period, companies that outsource data for processing would legally be unable to do so until the DPA is formed.

Secondly, even once the DPA is formed, thousands of companies will want to apply for an intra-group scheme, with new companies forming every month. Individually assessing each company’s proposal and including it in an intra-group scheme would put undue strain on the DPA.

This redundancy is going to hit small and medium enterprises much harder than big firms. Big companies can likely afford to build processing capacity in India, or pay for costlier alternatives to maintain their standards. Small and medium enterprises, especially Indian firms, will not always have the money to comply within the given timeframe.

On a related note, the bill also creates three tiers of data: personal, personal sensitive, and critical personal data. While the first two are defined within the bill, critical personal data is not. As you would expect, critical personal data is going to be the tier with the most restrictions and the greatest burden of compliance.

For instance, while personal and personal sensitive data can be subject to cross-border transfers, critical personal data cannot. This leaves any company that deals with data in a state of anxiety, forced to stay in limbo until the third tier is defined, with consequences for how it goes about its day-to-day business.

The digital economy is inextricably linked with the traditional economy. All of this, removing a runway for compliance, placing redundancy-ridden restrictions on the cross-border flow of personal sensitive data, and not defining critical personal data is bound to have a negative impact on the Indian economy. If the bill is passed in its current form, we are looking at FDI drying up within this sector. Big companies might have deeper pockets, but localisation laws will also go a long way to make sure that they keep their India-bound spending and outsourcing in check. On the other hand, it is also likely to incentivise small companies and startups to register their businesses elsewhere. All of this is coming at a time when the Indian economy needs it the least.


Rohan is a Policy Analyst at The Takshashila Institution. Views are personal.

This article was first published in Deccan Chronicle.

Read More

Does Amazon do more harm than good?

Amid CEO Jeff Bezos’s visit to India, Amazon’s India website displayed a full-page letter highlighting how Amazon was committed to its small and medium scale business partners. Bezos also announced that Amazon will invest an “incremental US$1 billion to digitise micro and small businesses in cities, towns, and villages across India, helping them reach more customers than ever before”. However, as Bezos brought his ‘charm offensive’ to India, stating how he was inspired by the “boundless energy and grit” of the Indian people, not everyone seemed amused. On one hand, the Union Commerce Minister stated that “Amazon is not doing India a favour by investing… it is probably because it wants to cover its losses incurred to deep discounting”; on the other, small and medium retailers protested against the visit holding ‘Go Back Amazon’ posters. The retailers claimed that Amazon was doing their business more damage than good.

What is the truth?

A typical brick-and-mortar retailer’s capability to sell is constrained by its access to consumers, which in turn is confined by geography: the retailer’s market is restricted to people living in the vicinity of the shop. Amazon, on the other hand, offers retailers access to millions of consumers across India. This expansion of the market benefits not only the retailers but also the final consumers, who now have a plethora of products to choose from.

However, Amazon, apart from being a marketplace connecting sellers and buyers, is also a player on its own platform. It sells products of its own private label brands, such as Solimo, Amazon Essentials, Symbol, and AmazonBasics, ranging from soaps, shirts, and underwear to tech accessories and kitchen supplies. This violates the neutrality of the platform.

Think of the last time you went to the second page of Amazon listings to buy a product. Can’t remember, right?
Most of us tend to buy products, especially standard, low-value ones, from the first five or six listings shown. Amazon has an incentive to favour its own products over those sold by sellers, and has been accused of doing so. The resulting reduction in traffic and sales forces sellers to buy listing advertisements on Amazon. The protests were a manifestation of the low bargaining power that individual sellers have against the world’s biggest e-commerce company.

Now consider the information Amazon has about what products are sold where, at what price points, who the major players in different segments are, and so on. Studies show that Amazon uses its marketplace as a tinkering lab, leveraging this information asymmetry to launch the most successful products on the platform under its own label. Once Amazon’s private label launches the product, it undercuts the retailers on price and places its products favourably on the website, effectively killing competition.

The current standard of ‘consumer welfare’, pegged on short-term price effects, is inadequate for dealing with these outcomes. The de facto ‘consumer welfare’ standard, popularised by Robert Bork through his book The Antitrust Paradox, argues that the goal of antitrust laws should be maximising consumer welfare and protecting competition, not competitors. Since there is no clear evidence of Amazon raising prices in the short term after launching a product, proving consumer harm is difficult; considering the consumer welfare standard alone is therefore insufficient. As Lina M Khan points out, the structures of companies such as Amazon “create anti-competitive conflicts of interests” and provide opportunities to “cross-leverage market advantages across distinct lines of business.” Moreover, with Big Tech companies such as Amazon backed by ever-flowing streams of venture-capital money, many ill effects might only be seen in the longer term.
We should also be cognisant of the fact that sellers are also customers for Amazon; consumer welfare should therefore apply to sellers too. As the Competition Commission of India conducts its investigations, it should examine all the new challenges posed by the likes of Amazon, be cautious in its approach, and propose a path where the penalties laid down for Amazon are not merely a slap on the wrist. The way forward is one in which healthy competition can be sustained and the bargaining power of sellers on the platform is increased.

This article was originally published in the Deccan Herald.

Read More
High-Tech Geopolitics Nitin Pai

Technology is set to be the main front in the US-China trade war

Despite the fanfare, the phase 1 agreement that the US and China signed last week does not represent a truce in the ongoing trade “war". It is not even a temporary ceasefire, as tariffs mostly remain in place and there is no indication how or when they will be lifted. The deal is at best a waypoint in the increasingly adversarial relationship between the world’s only superpower and its prospective challenger.
China promised to buy an additional $200 billion of US agricultural and energy products in two years, but it is hard to see how the Chinese economy can re-direct trade patterns of such magnitude in such a short period. As one US think tank expert told me, it’s not even clear if the US has “that much farm and energy stuff to sell" in the first place. China also solemnly promised not to steal intellectual property from high-technology companies, but how this will be enforced remains an open question. Moreover, Beijing astutely refused to make any commitment to hacking and cyber aggression, taking refuge in the argument that this is not a trade issue. In return, the US agreed to hold back from further increases in tariffs on Chinese goods.
Read More