Commentary
Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.
Joining a New Social Media Platform Does Not Make Sense
Mastodon is what's happening in India right now. Indian Twitter users are moving to the platform and have taken to using hashtags such as #CasteistTwitter and #cancelallBlueTicksinIndia. A key reason is that Twitter has been, to put it mildly, less than perfect at moderating content in India. There is the incident with lawyer Sanjay Hegde that caused this to blow up, along with accusations that Twitter has been blocking hundreds of thousands of tweets in India since 2017, with a focus on accounts from Kashmir.

Enter Mastodon. The platform, developed by Eugen Rochko, is open source, so no one entity gets to decide what content belongs on the communities there. Also, the data on Mastodon is not owned by one single corporation, so you know that your behavior on there is not being quantified and sold to people who would use it to profile and target you. Plus, each server (community) has a relatively small size with a separate admin, moderator, and, by extension, code of conduct. All of this sounds wonderful. The character limit is also 500 characters as opposed to 280 (if that is the sort of thing you consider an advantage).

Mastodon moves the needle forward by a significant increment when it comes to social networking. The idea is for us to move towards a future where user data isn't monetised and people can host their own servers instead. As a tech enthusiast, that sounds wonderful, and I honestly wish this is what Twitter had been.

Keeping all of that in mind, I don't think I will be joining Mastodon. Hear me out. A large part of it is not because Mastodon has problems of its own; let's set those aside for now and move on to the attention economy. Much like goods and services compete for a share of your wallet, social media has for the longest time been competing for attention and mind-space. The more time you spend on a platform, the more ads you will see and the more money it will make. No wonder it is so hard to quit Instagram and Facebook.

Joining a new social media platform today is an investment that does not make sense unless the old one shuts down. There is a high chance of people initially quitting Twitter, only to come back to it while being addicted to another platform. The more platforms you are on, the thinner your attention is stretched. That is objectively bad for anyone who thinks they spend a lot of time on their phone. If you are lucky enough to be one of the few people who do not suffer from that and are indifferent to the dopamine that notifications induce in your brain, this one doesn't apply to you.

Then there are the network effect and inertia. I, for one, am for moving the needle forward little by little. But here, there is little to gain right now, with more to lose. Network effects are when products (in this case, platforms) gain value as more people use them. So it makes sense for you to use WhatsApp and not Signal, as all your friends are on WhatsApp. Similarly, it makes sense for you to be on Twitter, as your favorite celebs and news outlets are there. Mastodon does not have the network effect advantage, so most people who do not specifically have their network on Mastodon do not get a lot of value out of using it.

In addition, there is inertia. Remember when we set aside Mastodon's problems earlier? Here is where they fit in. Mastodon is not as intuitive to use as Twitter or Facebook.
That makes it a deal-breaker for people of certain ages, and it also happens to be a significant con for people who don't want to spend a non-trivial chunk of their time learning about servers, instances, toots, and so on. There also isn't an official Mastodon app. There are, however, a bunch of client apps that can be used instead; the most popular among them is Tusky, but reviews will tell you that it is fairly buggy, and that is to be expected.

There is so much right with Mastodon. It is a great working example of the democratisation of social media. It also happens to exist in an age where it would be near impossible to get funding for or to start a new social media platform. The problem is that people who don't explicitly feel the need or see the value in joining Mastodon are unlikely to split their attention further by joining a new platform. The switching costs, network effects, and inertia are simply too high.

Rohan is a policy analyst at The Takshashila Institution and the co-author of Data Localization in a Globalized World: An Indian Perspective.

This article was first published in Deccan Chronicle.
How to respond to an 'intelligent' PLA
Advancements in Artificial Intelligence (AI) technologies over the next decade will have a profound impact on the nature of warfare. Increasing use of precision weapons, training simulations and unmanned vehicles is merely the tip of the iceberg. AI technologies, going forward, will not only have a direct battlefield impact in terms of weapons and equipment but will also impact planning, logistics and decision-making, requiring new ethical and doctrinal thinking. From an Indian perspective, China's strategic focus on leveraging AI has serious national security implications.

Read the full article on the Deccan Herald website.
Here’s Why Facebook Should Collect Data on Our Political Leanings
As a global community, we should have a more visible and informed choice in what content we want to consume.

The full article is available here.

Rohan is a Policy Analyst at The Takshashila Institution.
Lessons from Facebook and Twitter's Political Ads Policies
Over the course of the last few weeks, we have seen Facebook and Twitter take opposing views on the issue of political ads. While the issue itself does not have an immediate implication for Indian politics, the decisions of the two companies, their actions throughout the episode and reactions to them are emblematic of the larger set of problems surrounding their policies. They serve as a reminder that we should not expect these platforms to be neutral places in the context of public discourse solely through self-regulation.
In late October, Facebook infamously announced that it would not fact-check political ads. Shortly after that, Twitter's CEO Jack Dorsey announced via Twitter that the company would not allow any political ads after November 22. And though Twitter is not alone in this approach, its role in public discourse differs from that of companies like LinkedIn and TikTok that already have similar policies. Google, meanwhile, announced its own political ads policy on November 20. The policy aims to limit micro-targeting across search, display and YouTube ads. Crucially, it reiterated that no advertisers (political or otherwise) are allowed to make misleading claims. At face value, it may seem that one of these approaches is far better than the others, but a deeper look brings forth the challenges all of them will find hard to overcome.
Potential for misuse
To demonstrate the drawbacks of Facebook’s policy, US lawmaker Elizabeth Warren’s Presidential campaign deliberately published an ad with a false claim about Facebook CEO Mark Zuckerberg. In another instance, Adriel Hampton, an activist, signed up as a candidate for California’s 2022 gubernatorial election so that he could publish ads with misleading claims (he was ultimately not allowed to do so).
While Twitter’s policy disallows ads from candidates, parties and political groups/ political action committees (PACs), Facebook claims it will still fact-check ads from PACs. For malicious actors determined to spread misinformation/disinformation through ads, these distinctions will not be much of an impediment. They will find workarounds.
While most of the conversation has been US-centric, both companies have a presence in over 100 countries. A significant amount of local context and human effort is required to consistently enforce policies across all of them. The ongoing trend of substituting human oversight with machine learning could limit the acquisition of local knowledge. For example, does Facebook's policy of not naming whistle-blowers work in every country it has a presence in?
Notably, both companies stressed how little an impact political ads had on their respective bottom lines. Considering the skewed revenue per user in North America and Europe compared with Asia-Pacific and the rest of the world, the financial incentive to enforce such resource-intensive policies equitably is limited. Both companies also have a history of inconsistent responses to moral panics, resulting in an uneven implementation of their policies.
A self-imposed ban on political ads by Facebook and Twitter in Washington to avoid dealing with complex campaign finance rules has resulted in uneven enforcement and a complicated set of rules that have proven advantageous to incumbents. In response to criticism that these rules will adversely impact civil society and advocacy groups, Twitter initially said ‘cause-based ads’ won’t be banned and ultimately settled on limiting them by preventing micro-targeting. Ultimately, both approaches are likely to favour incumbents or those with deeper pockets.
Fixing Accountability
The real problems for Social Media networks go far beyond micro-targeted political advertising, and the shortcomings across capacity, misuse and consequences apply there as well. The flow of misinformation/disinformation is rampant. A study by the Poynter Institute highlighted that misinformation/disinformation outperformed fact-checks by several orders of magnitude. Research by the Oxford Internet Institute and Freedom House has revealed the use of disinformation campaigns online and the co-option of social media by various governments to power the shift towards illiberalism. Conflict and toxicity now seem to be features meant to drive engagement. Rules are implemented arbitrarily and suspension policies are not consistently enforced. The increased usage of machine learning algorithms (which can be gamed by mass reporting) in content moderation is coinciding with a reduction in human oversight.
Social Media networks are classified as intermediaries, which grants them safe harbour, implying that they cannot be held accountable for content posted on them by users. Intermediary is a very broad term, covering everything from ISPs and cloud services to end-user facing websites/applications across various sectors. Stratechery, a website which analyses technology strategy, proposes a framework for content moderation in which both discretion and responsibility are higher the closer a company is to the end-user. Therefore, platforms like Facebook/Twitter/YouTube should carry more responsibility/discretion than ISPs or cloud service providers. The framework does not explicitly call for fixing accountability, which cannot be taken for granted.
Unfortunately, self-regulation has not worked in this context and their status as intermediaries may require additional consideration. Presently, India’s proposed revised Intermediary Guidelines already tend towards over-regulation to solve for the challenges posed by Social Media companies, adversely impacting many other companies. The real challenge for policy-makers and society in countries like India is to strike the balance between holding large Social Media networks accountable while not creating rules that are so onerous they can be weaponised into limiting freedom of speech.
(Prateek Waghre is a Technology-Policy researcher at Takshashila Institution. He focuses on the governance of Big Tech in Democracies)
This article was originally published on 21st November 2019, in Deccan Herald.
We Need Our Own Honest Ads Act
Recent developments in online advertising have been uplifting. Facebook (and by extension, Instagram) has been running a policy that is meant to block predatory ads that target people who are overweight or have skin conditions, pushing unusual and often medically dangerous miracle cures. Google, which makes over $100 billion in online ad revenue, has also released a statement declaring a ban on ads selling treatments that have no established biomedical or scientific basis. Twitter, too, declared that it won't be accepting ads from state-controlled media entities.

This is not to say that the advertising policies of these companies are perfect, as incidents reported by The Verge and CNBC will tell you. However, things have been improving at a steady pace as far as advertising policies are concerned. A major catalyst for this change was the 2016 US election, which saw the potential of online advertising abused for targeting voters. Since then, there has been bipartisan support in the US for achieving greater transparency in online advertising. This includes disclosing who paid for public ads, how many people saw those ads, and how the purchaser can be contacted.

There are two problems with this support for greater transparency in advertising. Firstly, the bipartisan push never ended up becoming law. Secondly, even if it had become law, its impact would have been limited to the US.

It is an interesting story why we still lack a law that enforces greater transparency in advertising, and much of it revolves around Facebook, with its conclusion set to impact other players in online advertising. The bill, called the Honest Ads Act, was introduced in the Senate in 2017. Had it become law, its success or failure would have given other countries a template to work with to achieve greater transparency in advertising. As of now, that will need to continue without precedent.

Days after the bill was introduced, Facebook announced that it would be updating its Advertising Transparency and Authenticity Efforts. Mark Zuckerberg declared his support for the Honest Ads Act through a separate Facebook post, stating, "Election interference is a problem that's bigger than any one platform, and that's why we support the Honest Ads Act". An important side note: Twitter also announced its decision to back the Act, but the focus here is on Facebook because of its size, position, and role in the 2016 US election.

Once Facebook expressed its support for the Act and declared its intent to self-regulate according to the bill, the issue lost momentum. At the time, Zuckerberg's testimony at Capitol Hill was impending, and the news cycle shifted its attention. Senate Majority Leader Mitch McConnell brought the First Amendment into the argument, saying he was sceptical of proposals (like the Honest Ads Act) that would penalize American citizens trying to use the internet to advertise. At this point, you could make the argument that, in retrospect, Facebook could have best supported the Honest Ads Act by not declaring its support.

Regardless, the implications of these events affected players across a wide spectrum. Because there was no legal requirement to do so, other avenues of online ads (read: Twitter, Google) did not need to comply with a set standard that could be used as a yardstick to judge them against. In addition, the problem with the freedom of speech argument is that transparency in ads does not directly impact free speech.
You could extend the same argument to revoke the laws that mandate transparency in TV and radio ads in the US. So where is the crackdown on transparency in TV and radio?

The Honest Ads Act is relevant because it had the potential to set the tone for transparency regulation in other countries. The US is not the most significant user base for these platforms. And as you might expect, having transparency in political ads could be useful for other countries that also hold elections. For example, India has over 270 million Facebook users, a significant percentage of whom participated in the general elections. Understandably, advertising on social media sites such as Facebook was an integral part of most campaign strategies. So it would help to have a law that helps voters identify who is paying for what political ad, and which ads might be factual and which might be false propaganda.

Asking online ad companies such as Facebook to regulate themselves will have exactly the effect that it is having now. They will move towards better ad and transparency policies at their own pace, influenced by the prevailing narrative. And for most countries, that is not enough. Having a law in the countries where these platforms operate is more efficient. It is not just the United States that needs its ads to be honest.

The writer is a Research Analyst with Takshashila Institution, Bengaluru.

This article was first published in Deccan Herald.
Why we must be vigilant about mass facial surveillance
The recent revelations about NSO Group's Pegasus being used to target an estimated two dozen Indian lawyers and activists using vulnerabilities in WhatsApp have once again brought the issue of targeted surveillance of citizens into focus. As the saying goes, no good crisis should go to waste. This is an opportunity to raise public awareness about trends in mass surveillance involving Facial Recognition systems and CCTV cameras that impact every citizen, irrespective of whether or not they have a digital presence today.
The Panopticon, conceptualised by philosopher Jeremy Bentham, was a prison designed so that prisoners could be observed from a central tower without knowing when they were being watched, forcing them to self-regulate their behaviour. Michel Foucault later extended this idea, stating that modern states could no longer resort to violent and public forms of discipline and needed a more sophisticated form of control using observation and surveillance as a deterrent.
Live Facial Recognition combined with an ever expanding constellation of CCTV cameras has the potential to make this even more powerful. Therefore, it suits governments around the world, irrespective of ideology, to expand their mass surveillance programs with stated objectives like national security, identification of missing persons etc. and in the worst cases, continue maximizing these capabilities to enable the establishment of an Orwellian state.
Global trends
China's use of such systems is well documented. As per a study in the Journal of Democracy, there will be almost 626 million CCTV cameras deployed around the country by the end of 2020. It was widely reported in May that its facial recognition database includes nearly all citizens. Facial recognition systems are used in public spaces for purposes ranging from access to services (hotels, flights, public transport, etc.) to public shaming of individuals for transgressions such as jaywalking, by displaying their faces and identification information on large screens installed at various traffic intersections, and even monitoring whether students are paying attention in class.
The former was highlighted by an almost comedic case in September, where a young woman found that her access to payment gateways, ability to check in to hotels/trains etc. was affected after she underwent plastic surgery. In addition, there is also a fear that Facial Recognition technology is being used to surveil and target minorities in Xinjiang province.
In Russia, Moscow mayor Sergei Sobyanin has claimed that the city had nearly 200,000 surveillance cameras. There have also been reports that the city plans to build AI-based Facial Recognition into this large network with an eye on the growing number of demonstrations against the Putin government.
Even more concerning is the shift by countries with a 'democratic ethos' towards deploying and expanding their usage of such systems. Australia was recently in the news for advocating face scans as a condition for accessing adult content. Some schools in the country are also running a trial of the technology to track attendance. France is testing a Facial Recognition-based national ID system. In the UK, the High Court dismissed an application for judicial review of automated facial recognition. The challenge itself was a response to pilot programs run by the police and the installation of such systems by various councils, which, as per petitioners, were carried out without the consent of citizens and without a legal basis.
There was also heavy criticism of Facial Recognition being used at football games and music concerts. Its use in personal spaces, too, continues to expand as companies explore potential uses to measure employee productivity or candidate suitability by analysing facial expressions.
There are opposing currents as well: multiple cities in the US have banned, or are contemplating banning, the deployment of the technology by law enforcement and government agencies. Sweden's Data Protection Authority fined a municipality after a school conducted a pilot to track attendance, on the grounds that it violated the EU's General Data Protection Regulation (GDPR).
Advocacy groups like the Ada Lovelace Institute have called for a moratorium on all use of the technology until society can come to terms with its potential impact. Concerns have been raised on two grounds. First, the accuracy of such systems is currently low, severely increasing the risk of misidentification when they are used by law enforcement agencies. Second, since the technology learns from existing databases (e.g. a criminal database), any bias reflected in such a database, such as disproportionate representation of minorities, will creep into the system.
Also, in many cases there is limited information about where and how such systems are being used. Protestors in Hong Kong and, recently, Chile have shown the awareness to counter law enforcement's use of Facial Recognition by targeting cameras. The means have varied from the use of face-masks and clothing imprinted with multiple faces to pointing numerous lasers at the cameras, and even physically removing visible cameras.
India’s direction
In mid-2019, the National Crime Records Bureau of India put out a tender inviting bids for an Automated Facial Recognition System (AFRS) without any prior public consultation. Meeting minutes of a pre-bid seminar accessed by the Internet Freedom Foundation indicated that there were 80 vendor representatives present.
Convenience is touted as the main benefit of various pilot programs to use 'faces' as boarding cards at airports in New Delhi, Bengaluru and Hyderabad as part of the Civil Aviation Ministry's Digi Yatra program. Officials have sought to allay privacy concerns, stating that no information is stored. City police in New Delhi and Chennai have run trials in the past. The Hyderabad police had, until recently, routinely updated their Twitter accounts with photos of officers scanning people's faces with cameras. Many of these posts were deleted after independent researcher Srinivas Kodali repeatedly questioned the legality of such actions.
Many of the aforementioned trials reported accuracy rates in the low single digits for Facial Recognition. The State of Policing in India (2019) report by Lokniti and Common Cause indicated that roughly 50 per cent of police personnel believe that minorities and migrants are 'very likely' or 'somewhat likely' to be naturally prone to committing crimes. These aspects are concerning when considering capability/capacity and the potential for misuse of the technology. False positives resulting from a low accuracy rate, combined with potentially biased law enforcement and a lack of transparency, could make it a tool for harassment of citizens.
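To see why a low accuracy rate matters at the scale of mass screening, here is a short, purely illustrative calculation in Python. The numbers are assumptions chosen for the example, not figures from any Indian deployment: even a system with a seemingly modest 1 per cent false-positive rate, scanning a large crowd in which genuine matches are rare, produces far more false alarms than real hits.

```python
# Illustrative base-rate arithmetic for face matching at scale.
# All numbers below are assumptions for the example, not real deployment figures.

people_scanned = 1_000_000      # faces checked against a watchlist in some period
actually_on_watchlist = 100     # genuine matches present in that crowd
true_positive_rate = 0.90       # assumed chance a real match is flagged
false_positive_rate = 0.01      # assumed chance an innocent person is flagged

true_alerts = actually_on_watchlist * true_positive_rate
false_alerts = (people_scanned - actually_on_watchlist) * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"True alerts:  {true_alerts:.0f}")    # ~90
print(f"False alerts: {false_alerts:.0f}")   # ~9,999
print(f"Share of alerts that are correct: {precision:.1%}")  # under 1 per cent
```

Under these assumed numbers, roughly 10,000 innocent people would be flagged for about 90 real matches, which is why headline accuracy claims need to be read against the base rate of the population being scanned.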
Schools have attempted to use them to track attendance. Gated communities and offices already deploy a large number of CCTV cameras; a transition to live Facial Recognition is an obvious next step. However, given that trust in tech companies is at a low, and given the existence of Facial Recognition training datasets such as MegaFace (a large dataset used to train Facial Recognition algorithms on images uploaded to the Internet, some as far back as the mid-2000s, without consent), privacy advocates are concerned.
Opposition and future considerations for society
Necessary and Proportionate, a coalition of civil society organisations and privacy advocates around the world, proposes thirteen principles on the application of human rights to communication surveillance, many of which are applicable here as well. To state some of them: legality, necessary and legitimate aims, proportionality, due process along with judicial and public oversight, prevention of misuse, and a right to appeal. Indeed, most opposition from civil society groups and activists against government use of mass surveillance is on the basis of these principles. When looked at through the lenses of intent (stated or otherwise), capacity and potential for misuse, these are valid grounds to question mass surveillance by governments.
It is also important for society to ask and seek to answer some of the following questions: Is the state the only entity that can misuse this technology? What kind of norms should society work towards when it comes to private surveillance? Is it likely that the state will act to limit its own power especially if there is a propensity to both accept and conduct indiscriminate surveillance of private spaces, as is the case today? What will be the unseen effects of normalising mass public and private surveillance on future generations and how can they be empowered to make a choice?
This article was first published in Deccan Herald on 11th November, 2019.
Govt needs to be wary of facial recognition misuse
India is creating a national facial recognition system. If you live in India, you should be concerned about what this could lead to. It is easy to draw parallels with 1984 and say that we are moving towards Big Brother at pace, and perhaps we are. But a statement like that, for better or worse, would accentuate the dystopia and may not be fair to the rationale behind the move. Instead, let us sidestep conversations about the resistance, doublethink, and thoughtcrime, and look at why the government wants to do this and the possible risks of a national facial recognition system.
WHY DOES THE GOVERNMENT WANT THIS?
Let us first look at it from the government's side of the aisle. Having a national facial recognition database can have a lot of pros. Instead of looking at this like Big Brother, the best-case scenario is that the Indian government is looking at better security, safety, and crime prevention. It would aid law enforcement. In fact, the request for proposal by the National Crime Records Bureau (NCRB) says as much: 'It (the national facial recognition system) is an effort in the direction of modernizing the police force, information gathering, criminal identification, verification and its dissemination among various police organizations and units across the country'.
Take it one step further: later down the line, the same database could also be used to achieve gains in efficiency and productivity. For example, schools could take attendance using FaceID-like software, or checking for train tickets could become more efficient (discounting the occasional case of plastic surgery that alters your appearance significantly enough).
POTENTIAL FOR MISUSE
The underlying assumption for this facial recognition system is that people implicitly trust the government with their faces, which is wrong. Not least because even if you trust this government, you may not trust the one that comes after it. This is especially true when you consider the power that facial recognition databases provide administrations.
For instance, China has successfully used AI and facial recognition to profile and suppress minorities. Who is to guarantee that the current or a future government will not use this technology to keep out or suppress minorities domestically? The current government has already taken measures to ramp up mass surveillance. In December last year, the Ministry of Home Affairs issued a notification that authorized 10 agencies to intercept calls and data on any computer.
WHERE IS THE CONSENT?

Apart from the fact that people cannot trust all governments across time with data of their faces, there is also the hugely important issue of consent and the absence of legality. Facial data is personal and sensitive. Not giving people the choice to opt out is objectively wrong.
Consider the fact that once such a database exists, it will be combined with state police forces across the country; the proposal excerpt mentioned above says as much. There is every chance that we are looking at increased discrimination in profiling, with AI algorithms repeating existing biases.
Why should the people not have a say in whether they want their facial data to be a part of this system, let alone whether such a system should exist in the first place?
Moreover, because of how personal facial data is, even law enforcement agencies should have to go through some form of legal checks and safeguards to clarify why they want access to data and whether their claim is legitimate.
DATA BREACHES WOULD HAVE WORSE CONSEQUENCES
Policy, in technology and elsewhere, is often viewed through the outcomes that are intended and anticipated. Data breaches are anticipated and unintended. Surely the government does not plan to share or sell personal and sensitive data for revenue. However, considering past trends with Aadhaar and the performance of the State Resident Data Hubs, leaks and breaches are to be expected. Even if you trust the government not to misuse your facial data, you shouldn't be comfortable trusting third parties who went through the trouble of stealing your information from a government database.
Once the data is leaked and being used for nefarious purposes, what would remedial measures even look like? And how would you ensure that the data is not shared or misused again? It is a can of worms which, once opened, cannot be closed.
Regardless of where on the aisle you stand, you are likely to agree that facial data is personal and sensitive. The technology itself is extremely powerful and thus, can be misused in the wrong hands. If the government builds this system today, without consent or genuine public consultation, it would be almost ensuring that it or future administrations misuse it for discriminatory profiling or for suppressing minorities. So if you do live in India today, you should be very concerned about what a national facial recognition system can lead to.
This article was first published in The Deccan Chronicle. Views are personal.
The writer is a Policy Analyst at The Takshashila Institution.
There’s more to India’s woes than data localisation
The personal data protection bill is yet to become a law and the debate is still rife on the costs and benefits of data localisation. It is yet to be seen officially if the government is going to mandate localisation in the data protection bill and to whom it is going to apply. Regardless of whether or not data localization ends up enshrined in the law, it is worth taking a step back and asking why the government is pushing for it in the first place.
For context, localisation is the practice of storing domestic data on domestic soil. One of the most credible arguments for why it should be the norm is that it will help law enforcement. Most platforms that facilitate messaging are based in the US (think WhatsApp and Messenger). Because of the popularity of these ‘free services,’ a significant amount of the world’s communication takes place on these platforms. This also includes communication regarding crimes and violation of the law.
This is turning out to be a problem because in cases of law violations, communications on these platforms might end up becoming evidence that Indian law enforcement agencies may want to access. The government has already made multiple efforts to make this process easier for law enforcement. In December 2018, the ministry of home affairs issued an order granting powers of “interception, monitoring, and decryption of any information generated, transmitted, received or stored in any computer,” to ten central agencies, to protect security and sovereignty of India.
But this does not help in cases where the information may be stored outside the agencies’ jurisdiction. So, in cases where Indian law enforcement agencies want to access data held by US companies, they are obliged to abide by lawful procedures in both the US and India.
The bottleneck here is that there is no mechanism that can keep up with this phenomenon (not counting the CLOUD Act, as India has not entered into an executive agreement under it).
Indian requests for access to data form a fair share, owing to India's large population and growing internet penetration. Had there been a mechanism that provided for these requests in a timely manner, it would have aided enforcement through the provision of data. The mechanism that exists today is the Mutual Legal Assistance Treaty (MLAT) process. Most requests are US-bound, thanks to the dominance of US messaging, search, and social media apps. Each request has to justify 'probable cause' by US standards. This, combined with the number of requests from around the world, weighs down on the system and makes it inefficient. People have called MLATs broken, and there have been several calls for reform of the system.
A comprehensive report by the Observer Research Foundation (ORF) found that the MLAT process takes, on global average, 10 months for law enforcement requests to receive electronic evidence. Ten months of waiting for evidence is simply too long, for two reasons. Firstly, in cases of law enforcement, time tends to be of the essence. Secondly, countries such as India have a judicial system with a huge backlog of cases; 10-month-long timelines to access electronic evidence make things worse.
Access to data is an international bottleneck for law enforcement. The byproduct of the mass adoption of social media and messaging is that electronic criminal evidence for all countries is now concentrated in the US.
The inefficiency of MLATs is one of the key reasons why data-sharing agreements are rising in demand and in supply, and why the CLOUD Act was so well-received as a solution that reduced the burden on MLATs.
Countries need standards that can speed up access to data for law enforcement, an understanding of what kinds of data are permissible to be shared across borders, and common standards for security.
India's idea is that localising data will help with access to it for law enforcement, at least eventually down the line. It may compensate for not being a signatory to the Budapest Convention. It is unclear how effective localisation will be. Facebook's data stored in India is still Facebook's data.
Facebook is still an American company and would still be subject to US standards of data-sharing, which are among the toughest in the world and include an independent judge assessing probable cause and refusing bulk collection or overreach. This is before we take encryption into account.
For Indian law enforcement, the problem in this whole mess is not where the data is physically stored. It is the process that makes access to it inefficient. Localisation is not a direct fix, if it proves to be one at all. The answer lies in better data-sharing arrangements, based on plurilateral terms. The sooner this is realized, the faster the problems can be resolved.
Rohan is a policy analyst at the technology and policy programme at The Takshashila Institution. Views are personal.
This article was first published in the Deccan Chronicle.
How Pegasus works, strengths & weaknesses of E2E encryption & how secure apps like WhatsApp really are
Pegasus, the software that infamously hacked WhatsApp earlier this year, is a tool developed to help government intelligence and law enforcement agencies battle cybercrime and terror. Once installed on a mobile device, it can collect contacts, files, and passwords. It can also 'overcome' encryption, and use GPS to pinpoint targets. More importantly, it is notoriously easy to install. It can be transmitted to your phone through a WhatsApp call from an unknown number (that does not even need to be picked up), and does not require user permissions to get access to the phone's camera or microphone. All of that makes it a near complete tool for snooping.

While Pegasus is able to hack most of your phone's capabilities, the big news here is that it can 'compromise' end-to-end (E2E) encryption. The news comes at a testing time for encryption in India, as the government deliberates a crackdown on E2E encryption, a decision that we will all learn more about on January 15, 2020. Before we look at how Pegasus was able to compromise E2E encryption, let's look at how E2E encryption works and how it has developed a place for itself in human rights.

E2E encryption is an example of how a bit of math, applied well, can secure communications better than all the guns in the world. The way it works on platforms such as WhatsApp is that once the user (sender) opens the app, the app generates two keys on the device, one public and one private. The private key remains with the sender and the public key is transmitted to the receiver via the company's server. The important thing to note here is that the message is already encrypted using the public key before it reaches the server. The server only relays the secure message, and the receiver's private key then decrypts it. End-to-end encryption differs from standard encryption because in services with standard encryption (think Gmail), the service provider generally holds the keys along with the receiver, and thus can also access the contents of the message.

Some encryption is stronger than others. The strength of an encryption scheme is measured by the size of the key. Traditionally, WhatsApp uses a 128-bit key, which is standard. Here you can learn about current standards of encryption and how they have developed over the years. The thing to keep in mind is that it can take billions of years to crack a secure encryption key, depending on the key size (not taking into account quantum computing):

Key size: Time to crack
56-bit: 399 seconds
128-bit: 1.02 x 10^18 years
192-bit: 1.872 x 10^37 years
256-bit: 3.31 x 10^56 years

E2E encryption has had a complex history with human rights. On one side, governments and law enforcement agencies see E2E encryption as a barrier to ensuring the human rights of their citizens. Examples of mob lynchings being coordinated through WhatsApp, such as these, exist around the world. On the other hand, security in communications, and the anonymity it brings, has been a boon for people who might suffer harm if their conversations were not private. Think peaceful activists who use it to fight for democracy around the world, most recently in Hong Kong. The same goes for LGBTQ activists and whistleblowers.
Even diplomats and government officials operate through the seamless secure connectivity offered by E2E encryption. The general consensus in civil society is that E2E encryption is worth having, as an increasing amount of human communication moves online to platforms such as WhatsApp.

How does Pegasus fit in?

End-to-end encryption ensures that your messages are encrypted in transit and can only be decrypted by the devices that are involved in the conversation. However, once a device decrypts a message it receives, Pegasus can access that data at rest. So it is not the end-to-end encryption that is compromised, but your device's security. Once a phone is infected, Pegasus can mirror the device, literally recording the keystrokes being typed by the user, browser history, contacts, files and so on.

The strength of end-to-end encryption lies in the fact that it encrypts data in transit well. Unless you have the key for decryption, it is impossible to trace the origin of messages or the content being transmitted. The weakness here, as mentioned above, is that the encryption does not apply to data at rest; if messages remained encrypted at rest, they would not be readable by the recipient either.

At this point, how secure apps such as WhatsApp, Signal, and Telegram really are is widely debatable. While the encryption is not compromised, the larger system is, and that has the potential to make the encryption a moot point. WhatsApp came out with an update earlier this year that supposedly fixed the vulnerability, seemingly protecting communications on the platform from Pegasus.

What does this mean for regulation against WhatsApp?

The Pegasus story comes at a critical time for the future of encryption on WhatsApp and on platforms in general. The fact that WhatsApp waited roughly six months to file its lawsuit against NSO will not help the platform's credibility in the traceability and encryption debate. This also brings into question the standards of data protection that Indian citizens and users should be subject to. The data protection bill is yet to become law. With the Pegasus hack putting privacy front and centre, the onus should ideally be on making sure that Indian communications are secure against foreign and domestic surveillance efforts.
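For readers who want to see the key-exchange mechanics described above made concrete, the following is a minimal, illustrative sketch in Python using the widely available cryptography library (installable with pip install cryptography). It is not WhatsApp's actual Signal-protocol implementation; the key types, parameters and message here are assumptions chosen for the example. Each side generates a key pair on its own device, only the public keys travel via the server, and the message is encrypted before it ever reaches the relay.

```python
# Minimal E2E-style sketch: key pairs on the endpoints, only public keys shared,
# encryption happens before anything touches a server. Illustrative only.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each device generates its own key pair; private keys never leave the device.
sender_private = X25519PrivateKey.generate()
receiver_private = X25519PrivateKey.generate()

# Only the public halves would be relayed by the company's server.
sender_public = sender_private.public_key()
receiver_public = receiver_private.public_key()

def derive_key(own_private, peer_public):
    # Both sides compute the same shared secret from their own private key and
    # the other side's public key, then stretch it into a 256-bit AES key.
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo e2e sketch").derive(shared)

# Sender encrypts on the device, before the message reaches any server.
sender_key = derive_key(sender_private, receiver_public)
nonce = os.urandom(12)
ciphertext = AESGCM(sender_key).encrypt(nonce, b"hello over an untrusted relay", None)

# The relay server only ever sees (nonce, ciphertext); it cannot read the plaintext.
# The receiver derives the same key and decrypts on its own device.
receiver_key = derive_key(receiver_private, sender_public)
plaintext = AESGCM(receiver_key).decrypt(nonce, ciphertext, None)
print(plaintext.decode())
```

The property the sketch illustrates is exactly the one Pegasus sidesteps: the server in the middle never sees plaintext, so spyware has to read the message on the device after it has been decrypted.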
Cons of breaking encryption outweigh pros
A bit of math can secure your communications better than all the guns in the world combined. That is the beauty of the end-to-end encryption which currently runs on WhatsApp. It makes messages shared between people private, so that only the sender and the recipient can view what is being said. On a related note, the notification of the intermediary guidelines is likely to be completed by 15 January 2020. These updated guidelines are going to determine the future of end-to-end encryption.

The major trade-off here is privacy versus security. The government's argument is that it needs to access communications between citizens for the purposes of security. The spread of false news on WhatsApp has instigated lynch mobs and resulted in 27 reported deaths in 2017. That is exactly why, in December 2018, the Ministry of Home Affairs issued an order granting powers of "interception, monitoring, and decryption of any information generated, transmitted, received or stored in any computer" to ten central agencies. But on platforms using end-to-end encryption, the interception of information might not be of much use if the government does not have a key for the encryption. The amendments to the intermediary guidelines call for requiring platforms such as Telegram and WhatsApp to "..enable tracing out of such originator of information on its platform as may be required by government agencies who are legally authorised".

The other side of the coin is privacy. There is no way for platforms to take away encryption from criminals but leave it intact for everyone else. If intermediaries enabled traceability and compromised end-to-end encryption, the sender of each message would be identifiable to WhatsApp and, by extension, the government. And while the encryption provides a shield of anonymity to trolls and spreaders of misinformation, it also gives assurance to people who would otherwise have been silenced or suppressed. Think whistleblowers and political protesters. End-to-end encryption allows those people to avoid the fear of being targeted. Encryption also extends into more routine aspects of life. For instance, WhatsApp is a platform where people can talk about personal and sensitive parts of their life, such as a disease or mental health issues, and rest assured that Facebook, the internet, and the government won't target them using that information. At a personal level, the fact that end-to-end encryption keeps communications private between the participants is reason enough not to break it. In the age of the contemporary internet, privacy is a luxury that is being provided at scale.

In addition, there are a host of questions on the side of implementation. For instance, the guidelines are applicable to all intermediaries that have more than 50 lakh users. There is no clarity on whether that means all registered users, daily active users or monthly active users. Moreover, how will the government know whether platforms have met that threshold, and how will it keep track of all the intermediaries that pop up on the App Store/Play Store? More fundamentally, who is an intermediary? Does Google Docs count as a platform, as it also has a chat feature?
Are online games also subject to this? Even if all of these questions are resolved, the 50 lakh threshold might mean that criminals can simply move to smaller, lesser-known platforms that offer end-to-end encryption, taking away significantly from the effectiveness of the exercise.

Adjusting the trade-off between privacy and security is a thankless task that, more often than not, is likely to be decided by the values and interests of the people in power. The job at hand is to make sure that a robust set of processes is put in place if end-to-end encryption is to be broken. We need transparency and the highest standards of due process to make sure that, should traceability be enabled, it is not abused (a precedent for such abuse has been set by the NSA). There needs to be transparency around the process that lets people know who is seeking the data. Standards need to exist around the specificity of what accounts and data can be targeted, to prevent requests for bulk data. The request for access should be backed by a justification of credible facts, all of which should be subject to review by an independent entity or a judge.

None of these provisions currently exists around the intermediary guidelines, and neither is there an indication that they are being considered. Subjectively, the cons of enabling traceability and breaking end-to-end encryption outweigh the pros. However, if the government is going to go ahead with this and include the clause in the January 2020 notification, then it should do it right by placing adequate oversight and safeguards in the amendments.

This article was first published in Asian Age.

(Rohan is a policy analyst at the technology and policy programme at The Takshashila Institution. Views are personal.)
Telecom revolution took India to 21st century. The state is taking it backwards
The manner in which the Indian state has treated telecom is indicative of the disdain it has for a sector that has underpinned the country's rise to an aspiring global power in the last 25 years. If we have to fix the problems we've created, it's important to enumerate the big policy mistakes we have made. Between a rapacious bureaucracy, corrupt politicians, rent-seeking crony businesses and an economics-agnostic judiciary, we have created the conditions for a telecom crisis.

Read more
Where is the debate on data privacy headed?
Even as India pushes for data decryption access from Big Tech for better law enforcement, there is a larger issue of how Big Tech is not quite the paragon of virtue when it comes to upholding user privacy.
If the Indian government does get social media platforms to part with user data, it should remember that with great power over the citizens comes a greater responsibility towards the citizens.
This article was first published in The Hindu. Views are Personal.
What does refusal to sign the Osaka Track mean for India?
The big decision here is whether or not India wants to share its data with anyone under any circumstances.

India recently started sharing maritime data with countries in the Indian Ocean Region. The Information Fusion Centre is actively interacting with the maritime community and has already built linkages with 18 countries and 15 multinational/maritime security centres. On that note, it is worth relooking at India's approach to data sharing and cross-border data flows.

Technology is now a variable that defines relations between countries. Over the past year, we have seen an increasing number of instances that reaffirm the existence of high-tech geopolitics. First, there was the US-imposed ban on Huawei Technologies. Then the Americans considered imposing caps on H1-B visas for countries that implemented data localisation. One of the most important recent developments came at this year's G20 summit, where Japan's Shinzo Abe presented the idea of a broad multilateral framework for the sharing of data. It is worth analysing India's response to it.

The agreement is called the Osaka Track. The idea is that member countries should be able to share and store data across borders without having to worry about security risks. The agreement has many notable signatories, such as the US, the EU, and China. It is India's response that is interesting. India, for better or worse, has not been big on data sharing. So much so that recent news claimed the government was considering getting a domestic messaging service for official communication. With this context in mind (as well as the draft e-commerce policy, the data protection bill, and the RBI data localisation notification), India refused to join the Osaka Track as a signatory. The questions for India here are: what does this mean for the future of Indian data, and how is India likely to conduct itself in this world of high-tech geopolitics?

India's reasons for not signing the pact are two-fold. Firstly, as the sentiment goes, data is national wealth. The idea here is to keep all data possible within Indian borders, much like you would be inclined to do with actual wealth. Secondly, as an official stated, India needs to better understand what the free flow of data might mean. India wants to look at its domestic requirements first and would like to see the issue of cross-border data flows discussed on a WTO (World Trade Organisation) platform. What the foreign policy is broadly saying here (to my understanding) is that it is not in India's best interests to share its data right now. However, once the government has a better understanding of the Osaka Track, it might reconsider.

In the broader global context, the Osaka Track is a step towards an emerging pattern. Data flows are likely to be increasingly regulated through economic blocs and not nations. Europe's General Data Protection Regulation and Convention 108+ are the best examples of this. The Osaka Track was an opportunity for India to follow this trend and facilitate trans-border data flows. India's rejection of it does not mean that other opportunities will not present themselves. Should India decide that data sharing is in its best interests, there are other platforms to make it happen on its own terms. One option would be to establish a data sharing law and standards under the Bay of Bengal Initiative for Multi-Sectoral Technical and Economic Cooperation (BIMSTEC).
Sharing costs of storage and following common processing standards would give India an edge in data geopolitics, if only because it would make powerhouses such as the US rethink applying sanctions to all of BIMSTEC instead of India alone. BIMSTEC, of course, is interchangeable here. India could take the lead and establish a data sharing policy with SAARC (South Asian Association for Regional Cooperation) or with a different combination of countries it might prefer. The big decision here is whether or not India wants to share its data with anyone under any circumstances.

If India is to treat data as wealth and not share it across borders, it may be time to consider what that might mean. An increasing number of government policies are treating data as an asset that should not be shared. Doing so is likely to come at the cost of being ostracised by the US. And if India is to go ahead with this, it makes sense for citizens to ask the government how data is going to be used to achieve progress.

While there are a lot of policy proposals on how data should be regulated in India, there aren't many on how it is going to be used for economic development. Sharing data with countries and/or companies can often crowdsource the initiative for development, as it seems to be doing for security at the Information Fusion Centre, and as Microsoft's collaboration with the Telangana government showed by using data to optimise agricultural yields. However, if India decides to cut itself off, as evidenced by the refusal to sign the Osaka Track, it is best to ask how that crowdsourcing will be substituted. While options to do so domestically might exist (such as releasing community data for entrepreneurs and Indian companies), there need to be indications that they are being considered or carried out at a national level. Because if data is national wealth, then there needs to be a plan for how it should be used to achieve economic development and progress for the nation.

This article was first published in the Asian Age. Views are personal.
Performing well in the sandbox won't be enough
RBI recently came up with a Draft Enabling Framework for Regulatory Sandbox in financial technology. For context, a sandbox is a framework that allows private firms to test innovations in a controlled environment. As far as developments in the fintech regulation space go, this is a good one. A sandbox allows players to run pilot tests of new products and services at a smaller scale and with less capital than usually required.

According to the draft, RBI will consider testing for innovative products and services in the following areas: retail payments, money transfer services, marketplace lending, digital KYC, financial advisory services, wealth management services, digital identification services, smart contracts, financial inclusion products, and cybersecurity products. Some of these, especially digital KYC and financial inclusion, are more front-facing than others. There is also a separate clause for innovative technologies, which include mobile technology applications (payments, digital identity, etc.), data analytics, Application Programming Interface (API) services, applications under blockchain technologies, and Artificial Intelligence and Machine Learning applications.

A notable exemption from the sandbox is cryptocurrencies. This is in keeping with the report of the inter-ministerial group that recommended banning private cryptocurrencies and proposed a fine of up to ₹25 crore as well as up to 10 years' imprisonment. It also echoes the report's stance of encouraging developments in blockchain and distributed ledger technology in general. This is likely due to concerns that private cryptocurrencies can lead to macroeconomic instability and finance terror groups, both of which are fair concerns.

There have been claims that the sandbox is available only to a limited set of participants, just 10-12 companies. It is unclear whether that is true. The eligibility criteria specified in the draft state that the focus of the sandbox will be to encourage innovations where there is an absence of governing regulations; where there is a need to temporarily ease regulations for enabling the proposed innovation; and where the proposed innovation shows promise of easing or effecting the delivery of financial services in a significant way. This does not directly translate into having only 10-12 players.

This should also act as a win for Facebook and, in all probability, WhatsApp. It is an open secret that WhatsApp has been keen to launch a payments service in India, dubbed 'WhatsApp Pay'. In recent times, however, the regulatory climate has proved unfavourable for bringing those efforts to fruition. The regulatory sandbox may serve as the ideal testing ground for the service before its release. A successful stint in the sandbox is not a guarantee of regulatory approval, however, as the draft states. Companies and their services can perform well and still be denied clearance to launch at a national level. That is something firms like WhatsApp and Facebook will have to deal with. Any financial services that pass the sandbox will still need to clear regulatory hurdles such as the data localisation guidelines laid out by RBI and the data protection bill (if and when it becomes law).

There are two key things to keep in mind here. Firstly, the sandbox is likely to help both newcomers to the market and incumbents who plan to try out new ideas in fintech. This includes innovative efforts to increase financial inclusion through services that rely on machine learning and AI.
Thus, it is a boost to the fintech landscape overall and also to India's AI front, the latter of which could use an injection of homegrown talent, applications, and infrastructure. Secondly, returns from the sandbox have the potential to pay dividends in the short to long term, depending on how long the programme lasts.

The RBI, also to its credit, has provided a set of risks and limitations in the draft. It includes the possibility of innovators losing time and flexibility because of due process. The need for regulatory approvals after sandbox testing, and legal issues leading to consumer losses, are also included. None of these risks is an argument for not having the sandbox.

The sandbox proposed is an objectively good idea. Countries around the world, including Thailand, Singapore, and the US, have tried it. In a fintech space that is growing and needs new innovation to foster better development in areas such as financial inclusion and creditworthiness, this is a welcome step. It is too early to say whether the idea will be successful, or whether it will face implementation challenges or end up leading to unintended consequences, such as favouring incumbents over startups or making participation exclusive to a limited set of players. The idea is still in the draft stage and it could be a while before it is carried out. The bottom line is that, despite all these considerations, it is better to have a sandbox than not have one.

This article was first published in Deccan Chronicle. Views are personal.
Wider debate needed on major changes in data protection law
With the developments in Kashmir and the economy dominating so much of the national discussion, it can be hard to keep track of what is happening in tech policy. One thing that might slip through the cracks is the set of changes to the Data Protection Bill. According to a recent report by Medianama, the Ministry of Electronics and Information Technology, or MEITY, is privately seeking responses to new questions from select stakeholders.

The Data Protection Bill is going to be profoundly important in India's tech policy landscape going forward. It will tackle issues around data privacy, data protection, and data processing, none of which have been discussed at length in Indian law. Based on inputs received from the Srikrishna Committee report, it will also focus on data localisation. So when MEITY initially asked for feedback on the draft data protection bill, it received over 600 comments from individuals and organisations. For this round of comments, the ministry has reached out to only 10-15 stakeholders.

This raises a host of questions and concerns about the process. First, as Medianama puts it, why the secrecy? With over 600 comments submitted, the bill is clearly a matter of significant public interest. MEITY and its bureaucracy could argue that 600 submissions are a lot to process, and that not all the comments are relevant. But for a piece of legislation this important, it is surely better to have too many inputs than to pick and choose which voices you would like to elevate. There is also not a lot of transparency in the process. How does one figure out the basis on which these 10-15 stakeholders were selected? This is not to imply that the participants asked for feedback are not a decent sample of stakeholders. But this could have been done better had a rationale or basis for the selection been provided.

Not knowing why some people have been selected and others ruled out matters more when the bill contains sweeping changes. The new version of the bill is reportedly going to address issues in e-commerce and community data. Neither of these topics was part of the Srikrishna Committee report or the October consultation process. It is unclear what the bill's stance might be on either matter. To make an educated guess on e-commerce, the bill might condense and borrow aspects from the draft e-commerce policy released earlier this year. It is anyone's guess which aspects those might be, but it does narrow down the list. As for community data, no such precedent exists. It is frankly shocking that this consultation has been labelled as "clarifications" on the bill. If entire new industries are being addressed in the document, then surely it should be classified as additions or revisions, and thus call for comments from all stakeholders. Picking a handful of stakeholders might not have been acceptable even for a round of mere clarifications on data protection; it is even less acceptable for legislation undergoing such substantial changes.

This lack of transparency also makes one question the importance given to the comments submitted earlier. There is clearly value in having documents that present the perspectives of stakeholders across industry, academia, and civil society. But given the recent turn of events, who can say whether these perspectives have been reflected in the new version that is being circulated for feedback.
The idea here is not to give credit to organisations or individuals who may have caused tangible changes in policy. Instead, it is to ensure that the process that goes into finalising the document is truly multilateral.

The final version of the bill will still have winners and losers. No legislation is objectively perfect. This is especially true in technology, where most laws find it hard to keep up with rapid advances in the industry. If the industry wins through lax laws on data privacy, civil society arguing for stronger privacy protections will de facto lose. What the consultations should aim for, then, is to show that different views on a subject were considered before trade-offs were made in favour of one over the other.

What makes this situation even more bizarre is that, up until this point, the process has been fairly transparent. The Srikrishna Committee's recommendations were comprehensive and publicly available. So was the first round of comments in October 2018, even though the comments submitted were not made available to the public. Why, then, has the second round of inputs been restricted to just a few people without any explanation? Not allowing public comments while adding e-commerce and community data to the mandate is going to have negative implications when the bill finally comes out. A large share of stakeholders in academia, industry, and civil society are going to have fundamental disagreements with the way this was carried out, as well as with its contents.

Indian policy towards all things technology is slowly catching up with advancements. As new legislation comes out regarding other emerging technologies such as artificial intelligence, fintech, and the Internet of Things, it would make sense to involve multiple perspectives in designing it. Failure to do so is likely to have an adverse impact on the adoption of these technologies and, by extension, India's development.

This article was first published in Deccan Chronicle. Views expressed are personal.
The Evolution of Synthetic Thought
The world has never been enough. At least for us, humans. The endeavour to become more than what we are lies at the heart of human civilisation. We have overcome challenges of nature, obstacles of time, and physical and mental impediments. Perhaps nothing reflects the culmination of this collective zeal to surpass our capabilities as much as Transhumanism.

Transhumanism is the belief that human beings can transcend their physical and mental limitations through technology. For some, Transhumanism is an ideal to strive towards; for others, it is both the source of and the answer to all of humanity's problems.

Borne out of a belief system that humankind should reach the pinnacle of its capabilities and beyond, Transhumanism comprises augmentations to overcome limitations. While technological augmentations may be a recent endeavour, even primitive humans used tools to augment their capabilities. From the wooden spears they hunted with and the prosthetic wooden and iron legs that helped them walk, all the way to lances in warfare, humans have employed augmentations throughout history. Eyeglasses, clothing, and ploughs signalled a rise in the use of tools to augment our capabilities.

The rise of medical technology, genetic science, and electronics from the 1990s has opened new frontiers in human capability. We no longer merely use technology as an enabler; we have started adopting it from within, in the form of cybernetics. Armbands, deep-brain stimulators, physical and neural augmentations, mechanical and cybernetic implants, and potentially gene editing are technologies that humans can use to enhance themselves and achieve capabilities previously unheard of.

On one hand, science is driving innovation in augmentation; on the other, Transhumanism has given rise to a significant amount of philosophical thought. Notions of challenging what it means to be human, the virtues and vices of post-humanism, and the dangers of uncontrolled immortality provoke deep questions that do not have answers but encourage much debate and discourse. There is also an entire section of humanity that believes the very notion of Transhumanism is irrelevant, because any such technological advancements are several decades away.

Transhumanism has generated fear and enthusiasm in equal measure. While proponents extol the virtues of embracing technology to enhance our lives, detractors fear what it will mean for being human at all. The widespread availability of Transhumanist technologies could bring radical life extension and overall well-being, while their improper perpetuation could create class divides, encourage oppression, and even alter geopolitical landscapes.

For the first time in human history, we can radically alter our minds and bodies and take shortcuts to the various destinations of natural evolution. This essay looks at Transhumanism as an emerging technological paradigm and attempts to provide an objective view of where Transhumanism is headed and what it means for the rest of the world.

Download the Essay in PDF: https://takshashila.org.in/wp-content/uploads/2019/08/TE-Evolution-of-Synthetic-Thought-CRG-2019-01.pdf
Facebook can’t be taken down, but Zuckerberg can be taken down a notch
It is hard to associate Facebook, and most of Big Tech, with anything positive right now. Privacy breaches and the Cambridge Analytica scandal led the American Federal Trade Commission (FTC) to fine Facebook $5bn. The fine was a joke. The FTC's decision was seen to be so weak that Facebook's stock actually rose in the wake of the levy. You know a corporation is fairly big when a multi-billion-dollar fine barely qualifies as pocket change for it. The question most of the world seems to be asking is whether Facebook is perhaps too big.

You will hear arguments that bigness is not a crime, that no company should be punished for being successful. But that is not remotely the point. The only reason the world now thinks that Facebook needs to be curbed is its horrendous conduct with the privacy of user data. Earlier this year, we found out that Facebook stored millions of passwords in plain text, visible to thousands of employees. When users signed up for two-factor authentication, Facebook used those numbers for targeted ads. What's worse is that the $5bn fine is for privacy violations that the FTC had already sanctioned Facebook for in 2011. Not only did Facebook's conduct fail to improve, it actually got worse (read: Cambridge Analytica).
There have been privacy-focused initiatives from within Facebook that help users take more control of their privacy. Facebook recently announced an upcoming feature called 'Off-Facebook Activity'. The idea is that, since Facebook tracks what you do on the internet even when you are not on Facebook, Off-Facebook Activity will give you an overview of the websites and apps that share your information with Facebook. You still can't delete the information that Facebook collects. However, you can choose to delink that information from your digital profile. The feature is not perfect in concept, has not been released yet, and does not do nearly enough to calm concerns about Facebook's conduct towards user privacy.

This would not have been such a huge problem if Facebook did not have a stranglehold on free speech. Facebook (with its acquisitions of Instagram and WhatsApp) has a monopoly on social media. Its closest competitors are Snapchat and LinkedIn. So even if people want to quit Facebook, they don't have anywhere else to go. Zuckerberg has in fact admitted to Facebook's power over speech itself, stating, "Lawmakers often tell me we have too much power over speech, and frankly, I agree."
Facebook's monopoly means that no matter how horrible its conduct with user data is, it tends to get away with it. This brings us to the question of whether Facebook can be broken up. The idea here is that if Facebook could be unmerged from Instagram and WhatsApp, it would spark competition in privacy practices. Competition on privacy would be better for everyone.

There have been calls to do exactly that. The idea of dismantling Big Tech is a key message of United States Senator Elizabeth Warren's presidential campaign. Facebook's own co-founder, Chris Hughes, has argued for breaking up Facebook too, calling Zuckerberg's power "unAmerican". The problem is that this is a grey area for antitrust law. It is unclear whether existing antitrust law is equipped to force a split of Facebook, Instagram, and WhatsApp. It is up to the Justice Department and the FTC to determine whether a case can be made for it. Even so, you can rest assured that if the U.S. government wanted to break up Facebook, it would be a lengthy process that might ultimately be unsuccessful. The government tried to break up Microsoft in the 1990s, and failed.

The other option is stricter regulation when it comes to privacy. It is certainly the one Facebook prefers. In an op-ed for The New York Times published earlier this year, Nick Clegg, a vice president-level staffer at Facebook, called for better accountability through regulation. He emphasised the need for "significant resources and strong new rules" and added that breaking the company up would not resolve the problems of election interference or user privacy. Of course, Nick Clegg would say that. The problem is that even if better privacy laws did exist, they might not mean much given Facebook's size and dominion. It could just choose to ignore them, as it has in the past, and assuage hurt feelings by paying a fine on occasion. Besides, U.S. privacy laws would not apply overseas. People in India would still suffer privacy violations at the hands of Facebook.

Facebook's size is not a reason to punish it. Its conduct toward user privacy is. It might be impossible to break up Facebook, but it is reasonable to demand accountability of it.

If Facebook is to be truly made accountable, Mark Zuckerberg needs to be reined in. You will hear people say that Facebook's current situation is a failure of capitalism. They will probably say that Big Tech needs big structural changes. They wouldn't be wrong. Capitalism, and the attention economy in particular, is not perfect. But, as of now, these are broad, sweeping arguments, not solutions. If Facebook is to be made more accountable, we need to begin by making Zuckerberg more accountable. Zuckerberg currently controls roughly 60% of Facebook's voting power. This means Facebook's board has no real power to hold him accountable; it is advisory at best.

The fix is that Zuckerberg's power needs to be checked. Creating a privacy czar will achieve little if he or she has no power to check Zuckerberg and his decisions. There are ways to accomplish this; none of them is easy. The most straightforward would be to loosen Zuckerberg's hold on the board by divesting him of a significant part of his shares. The legal precedent for this may not exist. However, it is only fair: the power of a single man controlling the speech of 2 billion individuals is unprecedented. It would help the board hold him accountable rather than simply advise.
It would also steer clear of setting a precedent of companies being punished for being too successful.

This article was first published in The Hindu.
India's Upcoming Digital Tax: How Will Big Tech Cope?
Taxation and regulation are slowly catching up with technology: India is now preparing a framework to tax Big Tech companies. India's desire to do so is part of a global pattern. Earlier this year, France came up with a proposal to tax Google, Facebook, and Amazon. India following suit will have broader international consequences, with a combination of factors determining how Big Tech is taxed.

Digital taxation has been on the government's mind for some time. June 2016 saw India come up with a "Google Tax," an equalization levy on digital advertising. In 2018, the revenue from the tax surpassed 10 billion rupees ($139 million). Prime Minister Narendra Modi's government is also keeping an eye on the tech ecosystem. Modi himself has pushed for Digital India and Startup India, and called for digital payments post demonetization. As India grows as a market for digital technologies, the scope for the government to tax big tech firms such as Facebook and Google grows with it.

India is a huge market for these companies. As of April 2019, Facebook had 300 million users in India. You can expect the number for Google to be significantly higher. The point is that India is a big source of revenue for tech companies. However, because these companies do not have a significant economic presence (SEP) in India, they might not pay their fair share of taxes. This is where India's proposed framework comes in. Multinational tech companies achieve scale without mass; they are structured in a way that minimises the taxes they pay. The proposal currently in the works will change that and make such companies liable to be taxed.

The complication here is that we live in a world where technology is a variable in international relations. Most giant tech companies are American and operate from Silicon Valley. The Trump administration might not like Big Tech, but it will still retaliate against other countries taxing Silicon Valley. The Americans have postured seriously on this before. When faced with the prospect of data localization by India, Trump's White House considered capping H-1B visas. The United States has already announced an inquiry into France's proposal to tax Facebook, Google, Amazon, and Apple. There is a good chance that the matter ends in tariffs for France. This is something India has to consider as it goes ahead with taxing American Big Tech. Trump himself has been aggravated by the tariffs India places on the United States. He called out Modi and described the tariffs as "no longer acceptable" during the G-20 summit. India's push for digital taxation will likely provoke a similar reaction from the White House.

The other stakeholder here is Silicon Valley itself. It is unclear how firms will react to thinner profit margins in India. One possibility is that they accept India's market leverage while lobbying through third parties. At the other end of the spectrum, they could threaten to pull out of the market, leaving India with no direct substitutes. Another possible outcome of an Indian tax is the loss of future investment in India. If the government decided to tax Facebook and Google on revenue generated in India, they could read it as a sign to invest more in other markets. At the moment, India may be too big a market to ignore. However, the imposition of taxes going forward might take away the incentive to innovate for the Indian consumer.
It would also translate into slower deployment of new AI-based technologies by large tech corporations, potentially slowing down India's advancement in the AI race.

All of the options above are highly unlikely, though. What is most likely to happen is a revised framework drawn up with Big Tech at the table. Because both parties, the Modi government and Big Tech, have a lot at stake, a compromise seems like the rational outcome. This way Google and Facebook can have a say in deciding how they are taxed and how much they should have to pay. A mutually agreed tax rate could be beneficial for all stakeholders. It would keep investment flowing while not forcing India to look for domestic substitutes. It would also ensure that India does not come to rely on Chinese substitutes, which would only expand the scale of Chinese digital companies. However, just because this seems like the rational way forward does not mean it is the one that will be taken. There are many ways an Indian digital tax could play out; we can only hope that policymakers will have carefully considered its impact on India's foreign relations.

Modi, Trump, China, and Silicon Valley: it is all a fascinating mess. It also goes to show how technology has inserted itself into foreign policy and geopolitics. If the Modi government takes France's lead, it will be a step toward taxation coming to terms with the digital economy. How this ends globally will determine profit margins in Silicon Valley and development budgets in New Delhi.

This article was first published in The Diplomat.
Where will big data take platforms like Netflix?
Streaming services are cataloguing the entire world's audiovisual content onto their platforms. If you had told someone ten years ago that most of the world's movies and TV shows would be available on demand in their pocket, they would have given you a patronising look of disbelief. If the present of the streaming industry is pathbreaking, the future promises to build even further on it.

Streaming is funded by subscriptions and guided by big data analytics. Knowledge of how consumers behave on platforms such as Netflix and Prime Video lets the services gauge what else they might be willing to pay for. It is hard to say which way streaming is likely to go over the next decade. However, it is possible to make an educated guess based on the frameworks of information economics.

Three main areas are likely to be affected by the continued use of big data to improve streaming: user experience, security, and pricing.

When the news broke that Netflix customises individual thumbnails for each user, it was another endorsement of how the platform uses big data to keep customers hooked. Netflix doesn't just use a film or show's original art; it employs an algorithm to source high-quality images from the content. Then it does more testing to determine what individual subscribers are most likely to click on. Based on that, each user's Netflix homepage looks different, even if they have similar tastes. The idea is to have users spend as much time on Netflix as possible, and personalised thumbnails are a small cog in this big machine. Data on who binge-watches which shows and how long each visit lasts is also crucial when it comes to deciding what to invest in. This is not a new phenomenon; the TV show House of Cards is a case study in it.

Big data tells Netflix (and Prime Video and Hulu and Hotstar) what users want even before they themselves know it. The data-based knowledge that David Fincher's movies were in high demand (an insight based on the number of times people played and paused them and how long they watched for) was a powerful resource for the decision-makers. Combining Fincher with a star-studded cast was not a shot in the dark. Netflix bet $100 million on two seasons (26 episodes) of the show upfront, without watching a single episode. It even went as far as to make different trailers and filter their distribution according to user preferences. This just shows that the information about our tastes and tendencies, as revealed by big data, is empirically reliable.

Going forward, big data analytics will continue to tell companies what users want. This will have a significant impact on how funds are distributed across genres. For instance, the success of Narcos and Stranger Things will drive investment in more original content in their particular genres. It also means evolving content markets all over the world to keep users hooked and to get new users to subscribe (think about the success of Sacred Games in India).
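To make the testing loop concrete: Netflix has not published its exact method, but the per-user thumbnail testing described above can be thought of as a multi-armed bandit problem, where the platform shows a variant, records whether the viewer clicks, and gradually routes more traffic to whichever artwork performs best. The sketch below is a minimal, hypothetical illustration of that idea in Python; the variant names and click rates are invented for illustration and are not Netflix's actual data or algorithm.

```python
# Hypothetical sketch: epsilon-greedy selection of a thumbnail variant.
# Not Netflix's real system; variant names and click rates are made up.
import random

class ThumbnailBandit:
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon                 # share of traffic spent exploring
        self.shows = {v: 0 for v in variants}  # times each thumbnail was shown
        self.clicks = {v: 0 for v in variants} # times each thumbnail was clicked

    def choose(self):
        # Mostly exploit the best-performing thumbnail, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        return max(
            self.shows,
            key=lambda v: self.clicks[v] / self.shows[v] if self.shows[v] else 0.0,
        )

    def record(self, variant, clicked):
        self.shows[variant] += 1
        self.clicks[variant] += int(clicked)

# Simulated viewer with invented preferences: clicks the "close-up" art 12%
# of the time and the "landscape" art 4% of the time.
true_ctr = {"close-up": 0.12, "landscape": 0.04}
bandit = ThumbnailBandit(list(true_ctr))
for _ in range(10_000):
    shown = bandit.choose()
    bandit.record(shown, random.random() < true_ctr[shown])

# Observed click-through rates converge toward the invented preferences,
# and most impressions end up going to the better-performing artwork.
print({v: round(bandit.clicks[v] / max(bandit.shows[v], 1), 3) for v in true_ctr})
```

Run over millions of sessions and segmented by viewer, a loop like this is what lets each homepage look different, even for users with similar tastes.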
No free-loading
The increasing use of big data analytics will also mean tighter security for accounts. So, no chance of four people pooling their money to share one streaming account, and no mooching off your friend's account either. The free-rider problem means streaming giants lose money on every individual who watches content without paying for it. Because Netflix has data on usage patterns (laptop model, user location over time), it can identify when someone other than the paying customer is watching. So it is no wonder that the company is now planning to use AI to keep out account-moochers. Such algorithms have not yet been deployed at scale, but there is reason to believe that this might change soon.

Lastly, big data will be transformative when it comes to pricing streaming services. As companies compete for a higher share of users' wallets, data on how much consumers are willing to pay will be decisive in determining how a service is priced. The marginal cost of adding one more user to a streaming service is negligible, which means pricing is far more flexible than in traditional industries such as carmaking. This is exactly what Netflix has been trying to leverage in India. In July, the company unveiled a mobile-only plan for the price-sensitive market. It is a novel move that might help Netflix compete with Prime Video, Jio TV, and Hotstar in India, all of which are cheaper options. The same could hold true in markets where consumers are willing to pay more for premium services. Data will decide.

User experience, security, and pricing are three key areas where big data analytics could be transformational for the streaming industry. This is by no means an exhaustive list. It would have taken a mental leap ten years ago to conceive of the current streaming scenario, and the same may be true a decade from now. New applications of insights from big data will continue to come to light. And the interesting thing is that, for those of us now aware of the speed at which data engineering and digital ecosystems can evolve, none of these developments is too far off in the future to imagine. In big data analytics, the enabler of the present is also the driver of the future.

This article was first published in The Hindu. Views are personal.
Privacy is dead. So, it’s time to turn data into a bargaining chip.
Tech firms offer services in exchange for your data, but the government will argue it needs that data for national security. Why not trade it, then?
A few years ago, Google bought Nest. Why would the world's biggest search engine acquire a thermostat company? Because through Nest, Google gets to know what temperature you prefer in your home, and when you come and go on weekdays and weekends.

Everyone wants data. It is why The Economist claimed that "(t)he world's most valuable resource is no longer oil, but data".

In today's digital world, fighting for privacy is fighting a losing battle. What we can instead fight for is making privacy a bargaining chip. Giving up your data to different people only makes sense if you know what you get in return.

Last year, when US Senator Orrin Hatch asked Mark Zuckerberg how Facebook remained free, a mildly amused Zuckerberg replied, "Senator, we run ads". The clip went viral and highlighted the need for regulators to get up to speed with technology.

People who understand how Facebook and Google work know that they earn their revenue by selling ads. They monitor your clicks, how much time you spend on a website, and which webpages you visit, in order to decide what they should be showing or selling to you. So, if you spend some time viewing videos of cats or, say, an iPad, Facebook and Google will make sure that the content targeted at you is based on cats or iPads. The workings of targeted advertising mean that it makes sense to think about privacy as a bargaining chip rather than an absolute right.

Debates in technology change fast. Over the past year, different aspects of tech policy have been in the spotlight. There was an argument over intermediary liability and whether platforms should be treated the same as publishers of information. We also have the ongoing debate on data localisation and where data should be physically stored. Facebook's launch of Libra shifted conversations to cryptocurrency and whether Facebook needed to be broken up. In the middle of all this chaos, the argument for user privacy seems to have died down. The news attention cycle is partly to blame.

An equally big, if not bigger, part of the blame should be put on how big tech (Facebook, Google, and Amazon) operates. The business models of a lot of platforms (including Facebook, Google, Reddit, and Twitter) are responsible for it. Let's look at Google. The idea is to offer services in exchange for your data. You don't pay when signing up; instead, you end up giving money to the platform's clients after the application has used your own habits against you. Remember wishfully searching for that specific shoe, only to abandon it midway, and then seeing ads for it pop up for several weeks?

The bottom line, in the case of big tech laying claim to your data, is that it will provide you services in return (think Google Drive, Google Photos, or Google Search).

It is not just big tech that wants your data. The government wants it too. While big tech might offer you services in exchange for that data, the government is not obliged to make any such promises. Instead, the government's argument is that it needs access to data for law enforcement, national security, and supervisory purposes. This is a global trend that spans contexts. For instance, the Reserve Bank of India wants unfettered supervisory access to financial data. India's updated Information Technology Intermediaries Guidelines (Amendment) Rules want data on the originator of content on platforms. The Australian government has exclusive access to its citizens' healthcare data, which cannot be shared outside its borders.
There is also a strong sentiment among states, especially developing economies such as India and China, to view data as a form of national wealth that can be used for development. This, in addition to the argument for law enforcement, makes the state a natural opponent of Facebook and Google when it comes to data access.

These conversations are bound to become more complex as technology advances and the state plays catch-up. We are already looking at years of discussion on Facebook's Libra project, which will also be a defining battle for the short-term future of cryptocurrencies. There is also Japanese PM Shinzo Abe's proposal for a multilateral data-sharing framework, called the Osaka Track, floated at the recent G20 summit. Over time, emerging technologies such as artificial intelligence and the internet of things are bound to raise the stakes as well, as both are closely linked to data.

With so much happening in and around data and technology, it can be dizzying to keep up. The privacy debate gets left behind. The only big company making any noise about privacy seems to be Apple.

So, it is time for us to get a bargaining chip. I know this is a controversial opinion, especially for people who consider privacy to be an absolute right, and rightly so. I am not implying that the battle for privacy is lost; I am implying that it is a losing battle. To borrow Shoshana Zuboff's phrase, the nature of the internet and 'surveillance capitalism' leaves us with little choice.

In such times, thinking of privacy as a bargaining chip is simply the by-product of a pragmatic assessment of the situation. You can use your privacy in a transaction to get goods and services. You pay with your privacy when you sign up for Google or Facebook and opt in to their services. Similarly, you lose your privacy when the government holds sensitive data on you (which may or may not be optional). Giving up data about yourself only makes sense when you know what you get in return. 'Privacy is a bargaining chip' is a framing that cuts across debates in tech policy and helps us make sense of the world around us.

The article was first published in The Print. Views are personal.