Commentary
Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.
Mint | Hustles could yet trip up Indian startups if we don’t take due care
By Nitin Pai
Yes, I live in Bengaluru, and yes, my neighbourhood is the epicentre of India’s startup ecosystem. Still, I was struck by the manner in which the person at the adjacent table at Third Wave Coffee greeted his friend. “What’s your hustle, man?” Over the past decade, the word ‘hustle’ has made inroads into the vocabulary of the country’s tech industry. In a positive sense, it means different things: to move fast and get things done; to work hard and for long hours; to earn a second income; to do freelance work in the gig economy; or to start up a new business. Read the full article here.
The Roadmap To India’s $300 Billion Electronics Manufacturing Dream
By Arjun Gargeyas
On January 24, the Minister for Electronics and Information Technology, Ashwini Vaishnaw, released a vision document on the opportunities and growth of domestic electronics manufacturing. The report, a collaboration with the Indian Cellular and Electronics Association (ICEA), lays out a roadmap for increasing and improving India’s share of electronics exports over the next five years. While the first volume, titled “Increasing India’s Electronics Exports and Share in GVCs”, was released in November 2021, the second volume of the vision document came out this month.
What Zoom’s rise tells us about future of work culture
This article was first published in Deccan Herald.

The shift to working from home would have happened organically anyway, but the COVID-19 pandemic has accelerated the speed of change; it is important to think about the precautions we must take to make this ongoing experiment successful.

The coronavirus pandemic has set off the largest work-from-home experiment in history. This has meant increased use of video conferencing and collaboration platforms that allow many people to interact and collaborate simultaneously in a virtual setting.

Not surprisingly, the company that has benefited most from this ongoing experiment is Zoom, a video conferencing platform now used by millions. Zoom’s share price has more than doubled since the new coronavirus began to spread in December 2019. There has also been a rise in trolling and graphic content on Zoom, an almost definitive sign that it is growing popular among teenagers and not just working professionals.

Zoom’s rise (along with that of other video conferencing platforms like Skype and Slack) is indicative of a broader shift in work culture. This shift to working from home, or working remotely, would likely have happened organically anyway, but the COVID-19 pandemic has accelerated it.

There isn’t much value in arguing about whether this phase will lead to a permanent shift, with the bulk of jobs performed remotely from now on. That question depends on too many variables and is impossible to predict.

But the shift itself needs to be understood as part of an evolving trend. Workspaces have, for the most part, moved from cubicles to open-plan offices. As Chris Bailey notes in Hyperfocus, it is contentious to conclude that open-plan offices improved productivity across the board. What open-plan offices did do was make employees think twice before interrupting their colleagues, and made them more respectful of each other’s time.

The next step in the evolution of the office, moving on from the open plan, is the virtual office (widely anticipated and now catalysed by the COVID-19 threat), with people logging in and conducting meetings from home. This brings us to what the characteristics of this new work-from-home culture will be, and what broad precautions we must take to ensure that remote working is successful for us as a people.

Thinking through the idea of working remotely

The first and most important thing to look out for is the impact this will have on the attention economy. With an increasing number of people working from home, there is going to be a significant reduction in friction. Let me explain: the attention economy runs on the idea of manipulating people to spend more time on platforms. Companies do this by eliminating friction between the user and the content. This is why the feed on Instagram is endless, and why the default option on Netflix is to keep watching instead of stopping. Because everything is either free or so easy to access, attention becomes the currency.

In office environments, by contrast, there is a certain amount of friction in accessing these apps; using Instagram during a meeting or while talking to a colleague has a social cost on your relationship. When working from home, it is going to be significantly easier for employees to give in to their distractions instead of focusing on the task at hand.
It is no wonder that Zoom has begun offering a feature that allows hosts to check whether participants are paying attention, based on whether the Zoom window is active on their screens.

This also opens a can of worms for privacy breaches and for the issue of regulating non-personal data. With a huge number of people shifting to working online for the foreseeable future, the data shared in online meetings becomes more valuable. This gives video conferencing and collaboration platforms an incentive to collect and share an increased amount of data with advertisers: for example, information on when users open the app, and details about the user’s device, such as the model, time zone, city, phone carrier, and the unique advertiser identifier (a number created by the user’s device that is then used to target ads).

In addition, the increased workloads being transferred online will generate growing volumes of non-personal data, making the debate on how that should be regulated more relevant. For context, non-personal data is defined by exclusion: it refers to all data that is not personal. This includes data like company financials, growth projections, and quite possibly most things discussed in office meetings.

It is unlikely that COVID-19 has transformed offices forever. Its role in history is likely to be seen as that of a catalyst, accelerating the shift from offline offices to online offices. But as it does so, we need to take precautions: introducing friction in the attention economy, being conscious of the privacy trade-offs made to facilitate new features, and installing regulation for the governance of non-personal data.

(Rohan Seth is a technology policy analyst at The Takshashila Institution)
Technology is set to be the main front in the US-China trade war
Shutting down internet to curb opposing views is problematic
States around the world are divided on how they should view the internet. On one end of the spectrum, there are calls to treat internet access as something close to a fundamental right; the UN subscribes to this view and publicly advocates for internet freedom and the protection of rights online. On the other end of the spectrum, there is India where, after over a hundred shutdowns in 2019 alone, you could arguably call access to the internet a luxury.
Shutting down the internet for an entire area is an objectively terrible thing to do, and it is no wonder that most states do not take the decision lightly. Even in Hong Kong, after months of protests, the government went as far as banning face masks at public gatherings; when it came to the internet, however, it looked at censoring the internet, not shutting it down. The difference is that under censorship, access to certain websites or apps is restricted, but protesters still have reasonable scope to contact their families and loved ones. And the chronology will tell you that even censorship was considered only after weeks of protests.
In India’s case, it is among the first things the government does. When India revoked Kashmir’s autonomy on August 5, 2019, the government shut down the internet the same day. At the time of writing, it has been almost 150 days with no news of internet access being restored in the Kashmir valley. Naturally, people are now getting on trains to nearby towns with internet access to renew official documents, fill out admission forms, check emails, or register for exams.
There are multiple good arguments as to why the internet should not be shut down in a region. To start with, shutdowns cost countries a lot of money.
According to a report by the Indian Council for Research on International Economic Relations (ICRIER), during 2012-17, 16,315 hours of internet shutdowns cost India’s economy around $3 billion: 12,600 hours of mobile internet shutdowns cost about $2.37 billion, and 3,700 hours of combined mobile and fixed-line shutdowns cost nearly $678.4 million.
Telecom operators have also suffered from the shutdowns that followed the Article 370 decision and the CAA protests, with the Cellular Operators Association of India (COAI) estimating losses close to ₹24.5 million for every hour of internet shutdown. Then consider the impact shutting down the internet has on the fundamental right to freedom of speech and expression, and on the democratic fabric of our country.
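As a rough back-of-the-envelope check, here is the arithmetic implied by those figures. This is my own sketch, not from either report, and the exchange rate of about ₹70 to the US dollar is an assumption:

```python
# Rough arithmetic on the shutdown-cost figures quoted above.
# The rupee-dollar exchange rate is an illustrative assumption.
icrier_total_usd = 3.0e9     # ICRIER: ~$3 billion over 2012-17
icrier_hours = 16_315        # ICRIER: total shutdown hours, 2012-17
coai_per_hour_inr = 24.5e6   # COAI: Rs 24.5 million lost per hour
inr_per_usd = 70             # assumed exchange rate

icrier_per_hour_usd = icrier_total_usd / icrier_hours
coai_per_hour_usd = coai_per_hour_inr / inr_per_usd

print(f"ICRIER implies ~${icrier_per_hour_usd:,.0f} per shutdown hour")
print(f"COAI estimates ~${coai_per_hour_usd:,.0f} per shutdown hour")
```

On these assumptions, the two estimates land within the same order of magnitude: hundreds of thousands of dollars for every hour the internet is off.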
In India’s case, internet shutdowns are also a bad idea because they tend to prolong their own duration and to make future shutdowns more frequent.
Let me explain the duration argument first. Shutdowns tend to happen in regions that are already unstable or may be about to become so. For better or for worse, the violence and brutality resulting from the instability are captured and shared through smartphones. While those videos and photos may not be as effective as independent news stories, when put on social media they combine to build a narrative, and soon enough the whole is greater than the sum of its parts, creating awareness among people who had little or none before. The problem is that the longer the instability and the shutdown last, the more ‘content’ there is to build a narrative. In Assam, and even more so in Kashmir, this is exactly what has happened. At this point, if the government lifts the shutdown in either of those places, it faces the inevitable opening of the floodgates on social media. And the longer this lasts, the more content will be floating around.
Secondly, internet shutdowns make internet shutdowns more frequent. After revoking access to the internet a certain number of times, the current administration seems to have developed a model, a doctrine even, for curbing dissent. Step 1 in that model is shutting down the internet. This has normalised shutdowns as a measure within the government, so that what follows is no longer a calculated response but a knee-jerk reaction, one that kicks the freedom of expression in the teeth every time it is activated.
The broader point here is that taking away the internet is an act of running away from backlash and discourse. To carry it out as an immediate response to protests is, in principle, a turn away from the democratic value of free speech. It is hard to believe that it may be time for the world’s largest democracy to learn from Hong Kong (a government that uses tear gas against its people and then tries to ban face masks) when it comes to dealing with protesters.
(The writer is a technology policy analyst at the Takshashila Institution.)
This article was first published in Deccan Chronicle.
Amazon, Fine Margins, and Ambient Computing
There are some keynotes in the tech world that serve as highlights of the year: Apple’s iPhone event and WWDC (where Apple traditionally deals with software), Google’s I/O, and the Mobile World Congress. Virtually all of these are guaranteed to make the news. Earlier last year, though, it was an Amazon event that captured the news (outshining Facebook’s Oculus event held on the same day in the process).

During the event, Amazon launched 14 new products. By any standard, that is a lot of announcements, products, and things to cover in a single event, so it can be a bit much to keep up with and make sense of what’s happening at Amazon. The short version of the developments is that Amazon is trying to put Alexa everywhere it possibly can. It is competing with Google Assistant and Siri, as well as with your daily phone usage. It wants you to check your phone less and talk to Alexa more.

That would explain why Amazon launched ‘Echo Buds’, which have Bose’s ‘Noise Reduction Technology’ and are significantly cheaper than Apple’s AirPods. There is also an Amazon microwave (also cheaper than its competition), as well as Echo Frames and an “Alexa ring” called Loop. The Echo speaker line has been diversified to suit different pockets (and now includes a deepfake of Samuel L. Jackson’s voice, which is amusing and incentive enough to prefer Alexa over other voice assistants unless the competition upstages it). Amazon also launched a plug-in device called Echo Flex, which seems ideally suited for hallways, in case you want access to Alexa while going from one room to another and are not wearing your glasses, earphones, or ring.

Aside from the huge number of form factors into which Amazon can put Alexa, the other notable thing about these products is how they are priced. You could make the argument that the margins are so thin that the pricing is predatory (a testament to what can be accomplished when one sacrifices profit for market share). Combine that with how they will be featured on Amazon’s website, and you can foresee decent adoption rates, not just in the US but globally, should those products be available.

In the lead-up to the event, Amazon also launched a Voice Interoperability Initiative. The idea is that you can access multiple voice assistants from a single device. Notably, Google Assistant and Siri are not part of the alliance, but Cortana is. You can check out a full list here. The alliance is essentially a combination of the best of the rest. It aims to compensate for the deep system integration that Alexa lacks but that Google Assistant and Siri enjoy on Android and iOS devices.

Besides making Alexa more competitive, the broader aim of the event was to make Amazon a leader in ambient computing. Amazon knows it will be challenging to make people switch from their phones to Alexa, and so it is looking for marginal wins (a practice perfected in-house). That is why so many of the announced products are concepts, or ‘Day 1’ products available on an invite-only basis. The goal is to launch a bunch of things and see what sticks and where Alexa fits most naturally, so that Amazon can capitalise on it later.

It is Amazon’s job to make a pitch for an Alexa-driven world and try to drive us there through its products and services, but not enough has been said about what that world might look like once we are in it. An educated guess is that user convenience will eventually win in such a reality. As will AI, with more data points coming in for training.
This is likely to come at a cost to privacy, depending on Amazon’s compliance with data protection laws (should they become a global norm). To be fair to Amazon, the event had some initial focus on privacy, which then shifted to products. But context matters: for better or worse, these new form factors are a step ahead in collecting user data. The voice interoperability project might also mean that devices will have multiple trigger words and, thus, more accidental data collection. To keep up with that, Amazon will need to improve its practices on who listens to recordings and how.

Amazon’s event has given us all things Alexa at very competitive rates, which sounds great. If you take away one thing from the event, let it be that Amazon wants to naturalise talking to Alexa. Its current strategy is to surround you with the voice assistant wrapped in different products. If it can make you talk to Alexa instead of checking your phone, or instead of using Google Assistant or Siri, even four times a day, that is a win it can build on.
Disney Should Buy Spotify
You may think that winning the streaming race depends on having the best content, but things have already begun to change. As of now, the company with the better bundle will win, and that is why it makes sense for Disney to buy Spotify this year.

To read the full article, visit OZY.

Rohan is a technology policy analyst at The Takshashila Institution.
Data Protection Bill, an unfinished piece of work
Bill demands age verification and consent from guardians of children for data processing
Shashi Tharoor has a strong case when he says that the Personal Data Protection Bill should have gone to the information technology standing committee. It sets a worrying precedent when legislation this important does not go through the proper channels of debate. Given the nature of the Bill, there is tremendous scope for discourse and disagreement.
Let us begin with the most debated aspect of this legislation: the Data Protection Authority (DPA). Because the mandate of the Bill is so large, the Bill itself can only set guidelines and give direction on where the data protection space should go. The heavy lifting of enforcement, monitoring, and evaluation has to fall on the shoulders of a different (and ideally independent) body. In this case, that is the DPA, which has the duty to protect the interests of data principals, prevent misuse of personal data, ensure compliance with the Act, and promote awareness about data protection. The body is meant to enforce the Bill down to auditing and compliance, maintain a public database listing significant data fiduciaries along with a ranking reflecting their level of compliance, and act as a check and balance on the government.
However, the DPA may not end up being the force of objective balance it is made out to be in the Bill. Here is why. The body will have a total of seven members (a chairperson and six others). All of them will be appointed by the government, based on the recommendations of the cabinet secretary, the secretary to the Government of India in the ministry (or department) dealing with legal affairs, and the secretary to the ministry (or department) of electronics and information technology. All of this falls under the mandate of the executive, with no involvement required from the judiciary or, for that matter, the legislature. Moreover, the current version of the Bill does not specify who (or which department) in the central government these recommendations will go to. Is it MeitY? NITI Aayog? The PMO? There is no clarity.
One cannot help but notice a pattern here. The Bill itself is going to go to a committee dominated by members of the ruling party and the enforcer is going to be wholly constituted by the executive.
Where is the feedback loop? Or the chance for scrutiny? You could at this point begin questioning how independent the DPA is going to be in its values and actions.
That is not to say that the Bill is all bad. Specifically, it does a good job of laying out rights over the personal and sensitive personal data of children, something that is not talked about often. The Bill takes a unique approach here, classifying companies that deal with children’s data as guardian data fiduciaries. That is crucial because children may be less aware of the risks and consequences of the processing of their personal data, and of the safeguards and rights available to them. The Bill clearly requires these guardian data fiduciaries to demand age verification and consent from guardians before processing data. Fiduciaries are also barred from profiling, tracking, monitoring, or targeting ads at individuals under 18.
This is a loss for Facebook. The minimum age to be on the social media platform is 13, and Facebook’s business model is to profile, track, monitor, and micro-target its users. One of two things will happen here: Facebook will either have to raise the bar for entry onto the platform to 18, as per the Bill, or it will need to ensure that its algorithms and products do not apply to users below 18. Either way, expect pushback from Facebook, which may or may not result in the section being modified.
The other thing the Bill should add on children’s rights is a requirement to simplify privacy notices and permissions for children, consistent with global standards. For instance, the GDPR mandates asking for consent from children in clear and plain language. There is value in making consent consumable for children and adults alike, so provisions in this regard should apply not just to children but also to adults, mandating a design template for how and when consent should be sought.
In sum, the Bill is an unfinished piece of work in many ways. It has good parts, such as the section on the personal and sensitive personal data of children. But it needs debate and scrutiny from multiple stakeholders to guide the DPA into being the best version of itself, and it is in the government’s hands to make that happen.
Personal Data Protection Bill has its flaws
Data Protection Authority can potentially deal with brokers and the negative externality
Indian tech policy is shifting from formative to decisive. Arguably the biggest increment in this shift comes this week, as the Personal Data Protection Bill will (hopefully) be debated and passed by Parliament. The bill itself has gone through public (and private) consultation, but it is still anyone's guess what the final version will look like.
Based on the publicly available draft, there is a lot right with the bill. The definitions of different kinds of data are clear, and there is plenty of focus on consent. However, there is not enough focus on regulating data brokers, and that can be a problem. Data brokers are intermediaries who aggregate information from a range of sources; they clean, process, and/or sell the data they hold. They generally source this data from what is publicly available on the internet, or from companies that collect it first-hand.
Because the bill does not explicitly discuss brokers, problems lie ahead. Broadly, you could argue that brokers come under either the fiduciary or the processor definitions in the bill, but neither addresses the harms specific to their trade. Imagine, for instance, that a data broker in India sells lists of people who have been convicted of rape, and the list ends up becoming public information.
Similarly, think about cases where databases of shops selling beef, of alcoholics, or of people with erectile dysfunction are released into the wild; the latter two are instances the US is somewhat familiar with. A data broker can ask its clients not to re-sell the data, or expect certain standards of security to be maintained, but there is no way to logistically ensure that a client adheres to this in a responsible manner. The draft bill talks about how to deal with breaches and who should be notified. But breaches are, by definition, unauthorised, whereas a data broker’s whole business model is selling or processing data, all of which is legal.

So how should the Indian government be looking at keeping data brokers accountable? Some would argue that the answer lies in data localisation. But localisation will only ensure that data is stored or processed domestically. Even if the broker is located domestically, it does not matter unless there is a provision in law mandating accountability.
The issue around brokers is also unlikely to be handled in the final version of the bill. Even though it is important and urgent, it does not take precedence over more fundamental issues. What is going to happen here is that data brokers and their activities are going to be subject to the mandate of the Data Protection Authority (DPA) due to be formed after the bill is passed.
Once the DPA is formed, there are a few ways in which it can potentially deal with brokers and the negative externality their role brings.
One option could be to hold data brokers accountable once a breach has occurred and a broker has been identified as culpable. The problem here is that data moves fast. By the time there is a punitive measure in response to a breach, the damage may already have been done. In addition, such a measure would encourage brokers to hide traces of the breaches that lead back to them.
Another alternative could be to ask every data broker to register itself. But that could incentivise more data brokers to move out of the country while maintaining operations in India.
Rohan is a technology policy analyst at The Takshashila Institution.
This article was first published in Deccan Chronicle.
Joining a New Social Media Platform Does Not Make Sense
Mastodon is what’s happening in India right now. Indian Twitter users are moving to the platform and have taken to using hashtags such as #CasteistTwitter and #cancelallBlueTicksinIndia. A key reason is that Twitter has been, to put it mildly, less than perfect at moderating content in India. The incident with lawyer Sanjay Hegde caused this to blow up, along with accusations that Twitter had been blocking hundreds of thousands of tweets in India since 2017, with a focus on accounts from Kashmir.

Enter Mastodon. The platform, developed by Eugen Rochko, is open-source, so no single entity gets to decide what content belongs in the communities there. The data on Mastodon is not owned by one corporation either, so you know your behaviour there is not being quantified and sold to people who would use it to profile and target you. Plus, each server (community) is relatively small, with its own admin, moderator, and, by extension, code of conduct. All of this sounds wonderful. The character limit is also 500 characters as opposed to Twitter’s 280 (if that is the sort of thing you consider an advantage).

Mastodon moves the needle forward by a significant increment when it comes to social networking. The idea is for us to move towards a future where user data isn’t monetised and people can host their own servers instead. As a tech enthusiast, I find that wonderful, and I honestly wish Twitter had been built this way.

Keeping all of that in mind, I don’t think I will be joining Mastodon. Hear me out. A large part of it is not that Mastodon has problems of its own (it does; let’s set those aside for now) but the attention economy. Much like goods and services compete for a share of your wallet, social media has for the longest time been competing for attention and mind-space: the more time you spend on a platform, the more ads you see and the more money it makes. No wonder it is so hard to quit Instagram and Facebook.

Joining a new social media platform today is an investment that does not make sense unless the old one shuts down. There is a high chance of people initially quitting Twitter, only to come back to it while also being addicted to another platform. The more platforms you are on, the thinner your attention is stretched. That is objectively bad for anyone who thinks they spend a lot of time on their phone. (If you are lucky enough to be one of the few people indifferent to the dopamine that notifications induce in your brain, this one does not apply to you.)

Then there is the network effect, and inertia. I, for one, am for moving the needle forward little by little. But here there is little to gain right now, and more to lose. Network effects arise when products (in this case, platforms) gain value as more people use them. It makes sense to use WhatsApp rather than Signal when all your friends are on WhatsApp; similarly, it makes sense to be on Twitter because your favourite celebrities and news outlets are there. Mastodon does not have the network effect advantage, so people who do not already have their network on Mastodon do not get much value out of using it.
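To put a rough number on that disadvantage, here is a toy illustration of the network effect using Metcalfe's heuristic, under which a network's value grows with the number of possible connections between its users. The user counts below are made-up round numbers chosen for scale, not actual platform figures:

```python
# Toy illustration of network effects: a platform's value grows with
# the number of possible pairwise connections, n * (n - 1) / 2
# (Metcalfe's heuristic). User counts below are hypothetical.
def pairwise_connections(users: int) -> int:
    return users * (users - 1) // 2

incumbent_users = 300_000_000   # a Twitter-scale network (assumed)
newcomer_users = 1_000_000      # a Mastodon-scale network (assumed)

ratio = (pairwise_connections(incumbent_users)
         / pairwise_connections(newcomer_users))
print(f"The incumbent offers ~{ratio:,.0f}x more potential connections")
```

On these assumed numbers, a network 300 times larger offers roughly 90,000 times as many potential connections, which is the gap a newcomer has to argue its way past.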
In addition, there is inertia. Remember when we set aside Mastodon’s problems earlier? Here is where they fit in. Mastodon is not as intuitive as Twitter or Facebook. That makes it a deal-breaker for people of certain ages, and a significant con for anyone who does not want to spend a non-trivial chunk of time learning about servers, instances, toots, and so on. There also is no official Mastodon app; a bunch of client apps can be used instead, the most popular among them being Tusky, but reviews will tell you it is fairly buggy, which is to be expected.

There is so much right with Mastodon. It is a great working example of the democratisation of social media, and it exists in an age when it would be near impossible to get funding for, or to start, a new social media platform. The problem is that people who don’t explicitly feel the need for Mastodon, or see the value in joining it, are unlikely to split their attention further by joining a new platform. The switching costs, network effects, and inertia are simply too high.

Rohan is a policy analyst at The Takshashila Institution and the co-author of Data Localization in a Globalized World: An Indian Perspective.

This article was first published in Deccan Chronicle.
How to respond to an 'intelligent' PLA
Advancements in Artificial Intelligence (AI) technologies over the next decade will have a profound impact on the nature of warfare. The increasing use of precision weapons, training simulations, and unmanned vehicles is merely the tip of the iceberg. Going forward, AI technologies will not only have a direct battlefield impact in terms of weapons and equipment but will also shape planning, logistics, and decision-making, requiring new ethical and doctrinal thinking. From an Indian perspective, China’s strategic focus on leveraging AI has serious national security implications.

Read the full article on the Deccan Herald website.
Here’s Why Facebook Should Collect Data on Our Political Leanings
As a global community, we should have a more visible and informed choice in what content we want to consume.

The full article is available here.

Rohan is a Policy Analyst at The Takshashila Institution.
Lessons from Facebook and Twitter's Political Ads Policies
Over the course of the last few weeks, we have seen Facebook and Twitter take opposing views on the issue of political ads. While the issue itself does not have an immediate implication for Indian politics, the decisions of the two companies, their actions throughout the episode, and the reactions to them are emblematic of the larger set of problems surrounding their policies. They serve as a reminder that we should not expect these platforms to become neutral venues for public discourse through self-regulation alone.
In late October, Facebook infamously announced that it would not fact-check political ads. Shortly after, Twitter CEO Jack Dorsey announced via Twitter that the company would not allow any political ads after November 22. And though Twitter is not alone in this approach, its role in public discourse differs from that of companies like LinkedIn and TikTok, which already have similar policies. Google, meanwhile, announced its own political ads policy on November 20, aiming to limit micro-targeting across Search, display, and YouTube ads; crucially, it reiterated that no advertisers (political or otherwise) are allowed to make misleading claims. At face value, it may seem that one of these approaches is far better than the others, but a deeper look brings forth challenges all of them will find hard to overcome.
Potential for misuse
To demonstrate the drawbacks of Facebook’s policy, US lawmaker Elizabeth Warren’s Presidential campaign deliberately published an ad with a false claim about Facebook CEO Mark Zuckerberg. In another instance, Adriel Hampton, an activist, signed up as a candidate for California’s 2022 gubernatorial election so that he could publish ads with misleading claims (he was ultimately not allowed to do so).
While Twitter’s policy disallows ads from candidates, parties and political groups/ political action committees (PACs), Facebook claims it will still fact-check ads from PACs. For malicious actors determined to spread misinformation/disinformation through ads, these distinctions will not be much of an impediment. They will find workarounds.
While most of the conversation has been US-centric, both companies have a presence in over 100 countries. A significant amount of local context and human effort is required to enforce policies consistently across all of them, and the ongoing trend of substituting human oversight with machine learning could limit the acquisition of local knowledge. For example, does Facebook's policy of not naming whistle-blowers work in every country where it has a presence?
Notably, both companies stressed how little impact political ads have on their respective bottom lines. Considering the skewed revenue per user in North America and Europe compared with Asia-Pacific and the rest of the world, the financial incentive to enforce such resource-intensive policies equitably is limited. Both companies also have a history of inconsistent responses to moral panics, resulting in uneven implementation of their policies.
A self-imposed ban on political ads by Facebook and Twitter in Washington, meant to avoid dealing with complex campaign finance rules, has resulted in uneven enforcement and a complicated set of rules that have proven advantageous to incumbents. In response to criticism that these rules would adversely impact civil society and advocacy groups, Twitter initially said ‘cause-based ads’ would not be banned, and ultimately settled on limiting them by preventing micro-targeting. Ultimately, both approaches are likely to favour incumbents or those with deeper pockets.
Fixing Accountability
The real problems for social media networks go far beyond micro-targeted political advertising, and the shortcomings across capacity, misuse, and consequences apply there as well. The flow of misinformation/disinformation is rampant: a study by the Poynter Institute highlighted that misinformation/disinformation outperformed fact-checks by several orders of magnitude. Research by the Oxford Internet Institute and Freedom House has revealed the use of disinformation campaigns online and the co-option of social media by various governments to power the shift towards illiberalism. Conflict and toxicity now seem to be features meant to drive engagement. Rules are implemented arbitrarily, and suspension policies are not consistently enforced. The increased use of machine learning algorithms in content moderation (which can be gamed by mass reporting) is coinciding with a reduction in human oversight.
Social media networks are classified as intermediaries, which grants them safe harbour: they cannot be held accountable for content posted on them by users. ‘Intermediary’ is a very broad term, covering everything from ISPs and cloud services to end-user-facing websites and applications across various sectors. Stratechery, a website that analyses technology strategy, proposes a framework for content moderation in which both discretion and responsibility are higher the closer a company is to the end user. By that logic, platforms like Facebook, Twitter, and YouTube should carry more responsibility and discretion than ISPs or cloud service providers. The framework does not explicitly fix accountability, however, and accountability cannot be taken for granted.
Unfortunately, self-regulation has not worked in this context, and these companies' status as intermediaries may require additional consideration. Presently, India’s proposed revised Intermediary Guidelines already tend towards over-regulation in trying to solve the challenges posed by social media companies, adversely impacting many other companies. The real challenge for policymakers and society in countries like India is to strike a balance between holding large social media networks accountable and not creating rules so onerous that they can be weaponised into limiting freedom of speech.
(Prateek Waghre is a Technology-Policy researcher at Takshashila Institution. He focuses on the governance of Big Tech in Democracies)
This article was originally published on 21st November 2019, in Deccan Herald.
Govt needs to be wary of facial recognition misuse
India is creating a national facial recognition system. If you live in India, you should be concerned about what this could lead to. It is easy to draw parallels with 1984 and say that we are moving towards Big Brother at pace, and perhaps we are. But a statement like that, for better or worse, would accentuate the dystopia and may not be fair to the rationale behind the move. Instead, let us sidestep conversations about the resistance, doublethink, and thoughtcrime, and look at why the government wants to do this and the possible risks of a national facial recognition system.
WHY DOES THE GOVERNMENT WANT THIS?
Let us first look at it from the government’s side of the aisle. A national facial recognition database can have a lot of pros. Instead of seeing this as Big Brother, the best-case scenario is that the Indian government is looking at better security, safety, and crime prevention, and at aiding law enforcement. In fact, the request for proposal by the National Crime Records Bureau (NCRB) says as much: ‘It (the national facial recognition system) is an effort in the direction of modernizing the police force, information gathering, criminal identification, verification and its dissemination among various police organizations and units across the country’.
Take it one step further: later down the line, the same database could be used to achieve gains in efficiency and productivity. For example, schools could take attendance with FaceID-like software, and checking train tickets would be more efficient (discounting the occasional case of plastic surgery that alters someone’s appearance significantly enough).
POTENTIAL FOR MISUSE
The underlying assumption for this facial recognition system is that people implicitly trust the government with their faces, which is wrong. Not least because even if you trust this government, you may not trust the one that comes after it. This is especially true when you consider the power that facial recognition databases provide administrations.
For instance, China has successfully used AI and facial recognition to profile and suppress minorities. Who is to guarantee that the current or a future government will not use this technology to keep out or suppress minorities domestically? The current government has already taken measures to ramp up mass surveillance: in December last year, the Ministry of Home Affairs issued a notification authorising 10 agencies to intercept calls and data on any computer.
WHERE IS THE CONSENT?

Apart from the fact that people cannot trust all governments across time with their facial data, there is also the hugely important issue of consent and the absence of a legal basis. Facial data is personal and sensitive; not giving people the choice to opt out is objectively wrong.
Consider the fact that once such a database exists, it will be shared with state police forces across the country; the proposal excerpt quoted above says as much. There is every chance that we are looking at increased discrimination in profiling, with AI algorithms replicating existing biases.
Why should the people not have a say in whether they want their facial data to be a part of this system, let alone whether such a system should exist in the first place?
Moreover, because of how personal facial data is, even law enforcement agencies should have to go through some form of legal checks and safeguards to clarify why they want access to data and whether their claim is legitimate.
DATA BREACHES WOULD HAVE WORSE CONSEQUENCES
Policy, in technology and elsewhere, is often viewed through the lens of intended and anticipated outcomes. Data breaches are anticipated but unintended. Surely the government does not plan to share or sell personal and sensitive data for revenue. However, considering past trends with Aadhaar and the track record of the State Resident Data Hubs, leaks and breaches are to be expected. Even if you trust the government not to misuse your facial data, you should not be comfortable trusting third parties who went through the trouble of stealing your information from a government database.
Once the data is leaked and being used for nefarious purposes, what would remedial measures even look like? And how would you ensure that the data is not shared or misused again? It is a can of worms that, once opened, cannot be closed.
Regardless of where on the aisle you stand, you are likely to agree that facial data is personal and sensitive. The technology itself is extremely powerful and can therefore be misused in the wrong hands. If the government builds this system today, without consent or genuine public consultation, it all but ensures that it or future administrations will misuse it for discriminatory profiling or for suppressing minorities. So if you do live in India today, you should be very concerned about what a national facial recognition system can lead to.
This article was first published in The Deccan Chronicle. Views are personal.
The writer is a Policy Analyst at The Takshashila Institution.
There’s more to India’s woes than data localisation
The Personal Data Protection Bill is yet to become law, and the debate is still rife over the costs and benefits of data localisation. It remains to be seen whether the government will mandate localisation in the bill and to whom it will apply. Regardless of whether data localisation ends up enshrined in law, it is worth taking a step back and asking why the government is pushing for it in the first place.
For context, localisation is the practice of storing domestic data on domestic soil. One of the most credible arguments for making it the norm is that it will help law enforcement. Most platforms that facilitate messaging are based in the US (think WhatsApp and Messenger). Because of the popularity of these ‘free services,’ a significant amount of the world’s communication takes place on these platforms, including communication regarding crimes and violations of the law.
This is turning out to be a problem because in cases of law violations, communications on these platforms might end up becoming evidence that Indian law enforcement agencies may want to access. The government has already made multiple efforts to make this process easier for law enforcement. In December 2018, the ministry of home affairs issued an order granting powers of “interception, monitoring, and decryption of any information generated, transmitted, received or stored in any computer,” to ten central agencies, to protect security and sovereignty of India.
But this does not help in cases where the information may be stored outside the agencies’ jurisdiction. So, in cases where Indian law enforcement agencies want to access data held by US companies, they are obliged to abide by lawful procedures in both the US and India.
The bottleneck here is that there is no mechanism that can keep up with this phenomenon (not counting the CLOUD Act, as India has not entered into an executive agreement under it).
Indian requests for access to data form a fair share of the total, owing to India’s large population and growing internet penetration. Had there been a mechanism that provided for these requests in a timely manner, it would have aided enforcement through the provision of data. Instead, such requests are routed through the Mutual Legal Assistance Treaty (MLAT) system. Most requests are US-bound, thanks to the dominance of US messaging, search, and social media apps, and each request has to justify probable cause by US standards. This, combined with the number of requests from around the world, weighs down the system and makes it inefficient. People have called the MLAT system broken, and there have been several calls for its reform.
A comprehensive report by the Observer Research Foundation (ORF) found that the MLAT process takes, on global average, 10 months for law enforcement requests to receive electronic evidence. Ten months of waiting for evidence is simply too long, for two reasons. First, in law enforcement, time tends to be of the essence. Second, countries such as India have judicial systems with huge backlogs of cases; 10-month-long timelines for accessing electronic evidence make things worse.
Access to data is an international bottleneck for law enforcement. The byproduct of the mass adoption of social media and messaging is that electronic criminal evidence for all countries is now concentrated in the US.
The inefficiency of MLATs is one of the key reasons why data-sharing agreements are rising in demand and in supply, and why the CLOUD Act was so well-received as a solution that reduced the burden on MLATs.
Countries need to have standards that can speed up access to data for law enforcement, an understanding of what kinds of data are permissible to share across borders, and common standards for security.
India’s idea is that localising data will help law enforcement access it, at least eventually down the line. It may also compensate for India not being a signatory to the Budapest Convention. But it is unclear how effective localisation will be: Facebook’s data stored in India is still Facebook’s data.
Facebook is still an American company and would still be subject to US standards of data-sharing, which are among the toughest in the world and require an independent judge to assess probable cause while refusing bulk collection or overreach. And this is before we take encryption into account.
For Indian law enforcement, the problem in this whole mess is not where the data is physically stored; it is the process that makes access to it inefficient. Localisation is not a direct fix, if it proves to be one at all. The answer lies in better data-sharing arrangements based on plurilateral terms. The sooner this is realised, the faster the problems can be resolved.
Rohan is a policy analyst at the technology and policy programme at The Takshashila Institution. Views are personal.
This article was first published in the Deccan Chronicle.
How Pegasus works, strengths & weaknesses of E2E encryption & how secure apps like WhatsApp really are
Pegasus, the software that infamously hacked WhatsApp earlier this year, is a tool developed to help government intelligence and law enforcement agencies battle cybercrime and terror. Once installed on a mobile device, it can collect contacts, files, and passwords. It can also ‘overcome’ encryption and use GPS to pinpoint targets. More importantly, it is notoriously easy to install: it can be transmitted to your phone through a WhatsApp call from an unknown number (one that does not even need to be picked up), and it does not require user permissions to access the phone’s camera or microphone. All of that makes it a near-complete tool for snooping.

While Pegasus is able to hack most of your phone’s capabilities, the big news here is that it can ‘compromise’ end-to-end (E2E) encryption. The news comes at a testing time for encryption in India, as the government deliberates a crackdown on E2E encryption, a decision we will all learn more about on January 15, 2020.

Before we look at how Pegasus was able to compromise E2E encryption, let us look at how E2E encryption works and how it has developed a place for itself in human rights.

E2E encryption is an example of how a bit of math, applied well, can secure communications better than all the guns in the world. The way it works on platforms such as WhatsApp is that when a user opens the app, the app generates two keys on the device: one public and one private. The private key never leaves the user’s device, while the public key is shared with the other party via the company’s server. The important thing to note is that a message is encrypted with the receiver’s public key before it ever reaches the server; the server only relays the secured message, and only the receiver’s private key can decrypt it. End-to-end encryption differs from standard encryption because in services with standard encryption (think Gmail), the service provider also holds keys and can therefore access the contents of the message.

Some encryption is stronger than others, and the strength of an encryption scheme is measured by the size of its key. Traditionally, WhatsApp uses a 128-bit key, which is standard. Here you can learn about current standards of encryption and how they have developed over the years. The thing to keep in mind is that cracking a secure encryption scheme can take billions of years or more, depending on the key size (not taking quantum computing into account):

56-bit: 399 seconds
128-bit: 1.02 x 10^18 years
192-bit: 1.872 x 10^37 years
256-bit: 3.31 x 10^56 years
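Two quick sketches may help make the mechanics above concrete. First, on key sizes: the figures in the table come from published estimates, but the underlying reason brute force fails is simply that each additional key bit doubles the search space. The guess rate below is an arbitrary assumption for illustration, so the outputs will not match the table exactly:

```python
# Why brute-force time explodes with key size: every extra bit doubles
# the keyspace. The attacker's guess rate is an assumed figure;
# published estimates vary widely.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
guesses_per_second = 1e15  # assumed attacker capacity

for bits in (56, 128, 192, 256):
    years = 2 ** bits / guesses_per_second / SECONDS_PER_YEAR
    print(f"{bits}-bit: ~{years:.3g} years to exhaust the keyspace")
```

Second, the key exchange itself. Here is a minimal sketch of public-key encryption using the PyNaCl library. It is an illustration only: WhatsApp actually implements the far more elaborate Signal protocol, and the keys and message here are generated purely for demonstration:

```python
# Minimal sketch of the E2E pattern described above, using PyNaCl.
from nacl.public import PrivateKey, Box

# Each device generates its key pair locally; private keys never leave.
sender_private = PrivateKey.generate()
receiver_private = PrivateKey.generate()

# Only the public halves are exchanged via the company's server.
sender_public = sender_private.public_key
receiver_public = receiver_private.public_key

# The sender encrypts with the receiver's public key (plus their own
# private key, in NaCl's authenticated-encryption scheme).
ciphertext = Box(sender_private, receiver_public).encrypt(b"hello")

# The server relays the ciphertext but cannot read it; only the
# receiver's private key can decrypt it.
plaintext = Box(receiver_private, sender_public).decrypt(ciphertext)
assert plaintext == b"hello"
```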
E2E encryption has had a complex history with human rights. On one side, governments and law enforcement agencies see E2E encryption as a barrier to ensuring the rights and safety of their citizens; examples of mob lynchings coordinated through WhatsApp exist around the world. On the other hand, the security and anonymity it brings have been a boon for people who might suffer harm if their conversations were not private: think peaceful activists fighting for democracy, most recently in Hong Kong, as well as LGBTQ activists and whistleblowers. Even diplomats and government officials operate through the seamless, secure connectivity offered by E2E encryption. The general consensus in civil society is that E2E encryption is worth having, as an increasing amount of human communication moves online to platforms such as WhatsApp.

How does Pegasus fit in?

End-to-end encryption ensures that your messages are encrypted in transit and can only be decrypted by the devices involved in the conversation. However, once a device decrypts a message it receives, Pegasus can access that data at rest. So it is not the end-to-end encryption that is compromised, but your device’s security. Once a phone is infected, Pegasus can mirror the device, literally recording the user’s keystrokes, browser history, contacts, files, and so on.

The strength of end-to-end encryption lies in the fact that it encrypts data in transit well: unless you have the key for decryption, it is impossible to trace the origin of messages or read the content being transmitted. Its weakness, as mentioned above, is that it does not apply to data at rest. (If messages were still encrypted at rest, users would not be able to read them.)

At this point, how secure apps such as WhatsApp, Signal, and Telegram really are is widely debatable. While the encryption is not compromised, the larger system is, and that has the potential to make the encryption a moot point. WhatsApp came out with an update earlier this year that supposedly fixed the vulnerability, seemingly protecting communications on the platform from Pegasus.

What does this mean for regulation against WhatsApp?

The Pegasus story comes at a critical time for the future of encryption on WhatsApp and on platforms in general. The fact that WhatsApp waited roughly six months to file its lawsuit against the NSO Group will not help the platform’s credibility in the traceability and encryption debate. This also brings into question the standards of data protection Indian citizens and users should be subject to. The data protection bill is yet to become law. With the Pegasus hack putting privacy front and centre, the onus should ideally be on making sure that Indian communications are secure against foreign and domestic surveillance efforts.
The three elements of China’s innovation model
In November 2018, the New York Times published a series that began with a story titled ‘The Land That Failed to Fail’. The central argument of the piece is that, defying Western expectations, the Communist Party has maintained its control in China while adopting elements of capitalism, eschewing political liberalisation, and pursuing innovation. The last of these three, innovation, is the subject of this piece.

What drives innovation in China? This is not merely a question about the mechanics of policy, the might of capital, the determination of dogged entrepreneurs, or the brilliance conjured up in university dormitories. Increasingly, it is a question that has acquired geopolitical significance, not just in the context of power politics but also in the debate over fundamental values of political and economic organisation. In other words, the question that China’s march towards becoming a “country of innovators” raises is whether a political system that prioritises control can foster genuine innovation.

Answering this requires an understanding of the key elements of the Chinese model of innovation. To my mind, there are three key components of this model: state support, a systems approach towards the development of new technologies and businesses, and the building of an effective “bird-cage.” There are, of course, other factors that support innovation, like the pursuit of prestige, the desire to rebalance the economy, the need to enhance the effectiveness of governance, and the size of the consumer market. But it is the first three components that form the key pillars of China’s innovation model.

Read More...
All Roads Lead to the Middle Kingdom
In January 2017, Chinese President Xi Jinping stood at the podium in Davos defending economic globalisation. He argued that the world needed to “adapt to and guide economic globalisation, cushion its negative impact, and deliver its benefits to all countries and all nations,” and that in this process, “China’s development is an opportunity for the world.” All of this was, of course, against the backdrop of the beginning of Donald Trump’s presidency in the US.

Addressing deputies at the National People’s Congress in March 2018, Xi doubled down on that message: “China will contribute more Chinese wisdom, Chinese solutions, and Chinese strength to the world, to push for building an open, inclusive, clean, and beautiful world that enjoys lasting peace, universal security, and common prosperity. Let the sunshine of a community with a shared future for humanity illuminate the world!”

Both of those speeches reflected strength. The essential message they conveyed was that the world needed China, and that under Xi, China was now surer about its destiny and keener than ever to play a larger international role. Yet as 2018 unfolded, this narrative came under severe strain. To assess how, we need to look at three dimensions: Xi’s status as the core of the Communist Party, the pushback against BRI, and the deepening competition with the US. It is the interplay of these three that is shaping China’s future.

Read More...
China’s big plan for AI domination is dazzling the world, but it has dangers built in. Here’s what India needs to watch out for.
China has been one of the early movers in the AI space, and evaluating its approach to AI development can help identify important lessons and pitfalls that Indian policy makers and entrepreneurs must keep in mind.
Breaking down China’s AI ambitions
The Social Credit System is about much more than surveillance and loyalty, as popularly understood. Nudging people to adopt desirable behaviour and enhancing social control are part of the story, but there are larger drivers of this policy: it is fundamentally linked to the Chinese economy and its transformation into a more market-driven one.
In 2017, China unveiled a plan to develop the country into the world’s primary innovation centre for artificial intelligence. It identified AI as a strategic industry, crucial for enhancing economic development, national security, and governance.

The Chinese government’s command-innovation approach to AI development is crafting a political economy that tolerates sub-optimal and even wasteful outcomes in the quest to expand the scale of the industry. Consequently, the industry is likely to be plagued by concerns about overinvestment, overcapacity, product quality, and global competitiveness. In addition, increasing friction over trade with other states and President Xi Jinping’s turn towards techno-nationalism, along with tightening political control, could further undermine China’s AI industry. Before we dive into the challenges, here’s some background.

Read more here.