Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.

Why India needs to be the centre for content moderation reform

You could put a price tag on what it costs to keep platforms clean of harmful content. $52 million is a good starting point (and an underestimate). But the lessons that come out of this experience have the potential to be priceless, not just in terms of how much money they can potentially save in counselling costs, but in terms of preventing the mental harm that content moderation causes the people who undertake it.

This article was first published in Deccan Chronicle. Read more.

Read More

What Zoom’s rise tells us about future of work culture

This article was first published in Deccan Herald.

The shift to working from home would have happened organically, but the COVID-19 pandemic has accelerated the speed of change; it is important to think about the precautions we must take to make this ongoing experiment successful.

The coronavirus pandemic has become the reason for the largest work-from-home experiment in history. This phenomenon has meant an increased use of video conferencing and collaboration platforms that allow many people to simultaneously interact and collaborate in a virtual setting.

Not surprisingly, the company that has benefited most from this ongoing experiment is Zoom, a video conferencing platform that is now being used by millions of users. Zoom's share price has more than doubled since the new coronavirus began to spread in December 2019. There has also been a rise in trolling and graphic content on Zoom, an almost definitive sign that it is rising in popularity among teenagers and not just working professionals.

Zoom's rise (along with that of other video conferencing platforms like Skype and Slack) is indicative of a broader shift in work culture. This shift to working from home, or working remotely, would likely have happened organically anyway, but the COVID-19 pandemic has accelerated its speed.

There isn't much value in arguing about whether this phase will lead to a permanent shift, with the bulk of jobs being performed remotely from now on. That question depends on too many variables, and it is impossible to predict.

But the shift itself needs to be understood as part of an evolving trend. Workspaces have, for the most part, moved from cubicles to open-plan offices. As Chris Bailey notes in Hyperfocus, it is contentious to conclude that open-plan offices improved productivity across the board.
What open-plan offices did was make employees think twice before interrupting their colleagues, and made them more respectful of each other's time. The future of the office space, moving on from open-plan offices, is the virtual office (widely anticipated and now catalysed by the COVID-19 threat), with people logging in and conducting meetings from home. This brings us to what the characteristics of this new work-from-home culture will be, and what broad precautions we must take to ensure that remote working is successful for us as a people.

Thinking through the idea of working remotely

The first and most important thing to look out for here is the impact this is going to have on the attention economy. With an increasing number of people working from home today, there is going to be a significant reduction in friction. Let me explain: the attention economy runs on the idea of manipulating people to spend more time on platforms. Companies do this by eliminating friction between the user and the content. This is why the feed on Instagram is endless, and why the default option on Netflix is to keep watching more instead of stopping. Because everything is either free or so easy to access, attention becomes the currency.

In office environments, by contrast, there is a certain amount of friction in accessing these apps. Using Instagram while talking to a colleague is going to have a social cost on your relationship. However, when working from home, it is going to be significantly easier for employees to give in to their distractions instead of focusing on the tasks at hand. It is no wonder that Zoom has begun offering a feature that allows hosts to check whether participants are paying attention, based on whether or not the Zoom window is active on their screens.

In addition, this also opens a can of worms for privacy breaches and the issue of regulating non-personal data.
Because a huge number of people are shifting to working online for the foreseeable future, the value of online meetings increases in terms of the data being shared on them. This gives video conferencing and collaboration platforms an incentive to collect and share an increased amount of data with advertisers: for example, information on when users open the app, and details about the user's device, such as the model, time zone, city, phone carrier, and the unique advertiser identifier (a unique number created by user devices that is then used to target ads).

In addition, increased workloads being transferred online will also generate increasing volumes of non-personal data, making the debate on how that should be regulated more relevant. For context, non-personal data is a residual term that refers to all data that is not personal. This includes data like company financials, growth projections, and quite possibly most things discussed in office meetings.

It is unlikely that COVID-19 has transformed offices forever. In this regard, its role in history is likely to be seen as that of a catalyst, accelerating the shift from offline offices to online offices. But as it does so, we need to take precautions: introducing friction in the attention economy, being conscious of the privacy trade-offs being made to facilitate new features, and installing regulation for the governance of non-personal data.

(Rohan Seth is a technology policy analyst at The Takshashila Institution)

Read More

Shutting down internet to curb opposing views is problematic

States around the world are divided over how they should view the internet. At one end of the spectrum, there are calls to treat the internet as something close to a fundamental right. For instance, the UN subscribes to this view and publicly advocates for internet freedom and the protection of rights online. At the other end of the spectrum, there is India, where, after over a hundred shutdowns in 2019 alone, you could arguably define access to the internet as a luxury.

In my personal opinion, shutting down the internet for an entire area is an objectively horrible thing to do. It's no wonder that states tend not to take it lightly. Even in Hong Kong, after months of protests, the government felt it acceptable to ban face masks at public gatherings; when it came to the internet, however, it looked at censoring the internet, not shutting it down. The difference is that under censorship, access to certain websites or apps is restricted, but there is reasonable scope for protesters to contact their families and loved ones. The chronology will tell you that even internet censorship as a measure was considered only after weeks of protests.

In the case of India, a shutdown is among the first things the government does. So when India revoked Kashmir's autonomy on August 5, 2019, the government shut down the internet the same day. At the time of writing, it has been almost 150 days, with no news of access to the internet being restored in the Kashmir valley. Naturally, people are now getting on trains to nearby towns with internet access to renew official documents, fill out admission forms, check emails, or register for exams.

There are multiple good arguments as to why the internet should not be shut down for entire regions. For one, shutdowns cost countries a lot of money.

According to a report by the Indian Council for Research on International Economic Relations (ICRIER), during 2012-17, 16,315 hours of internet shutdowns cost India's economy around $3 billion: the 12,600 hours of mobile internet shutdowns cost about $2.37 billion, and the 3,700 hours of combined mobile and fixed-line internet shutdowns nearly $678.4 million.
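As a quick sanity check on those figures, the implied per-hour cost can be computed directly. The back-of-the-envelope sketch below uses only the ICRIER totals quoted above; all three categories work out to roughly the same order of magnitude per shutdown hour.

```python
# Implied per-hour cost of internet shutdowns in India during 2012-17,
# derived from the ICRIER totals quoted above (costs in US dollars).
shutdowns = {
    "all internet":        (16_315, 3.00e9),
    "mobile internet":     (12_600, 2.37e9),
    "mobile + fixed-line": (3_700,  678.4e6),
}

for kind, (hours, total_cost) in shutdowns.items():
    # Average economic cost per hour of shutdown for this category.
    print(f"{kind}: ~${total_cost / hours:,.0f} per hour of shutdown")
```

Each category implies a cost between roughly $180,000 and $190,000 per hour, which suggests the three ICRIER figures are internally consistent.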

Telecom operators have also suffered because of the shutdowns that came as byproducts of the Article 370 revocation and the CAA protests, with the Cellular Operators Association of India (COAI) estimating the cost at close to ₹24.5 million for every hour of internet shutdown. Then consider the impact shutting down the internet has on the fundamental right to freedom of speech and expression, and the impact it has on the democratic fabric of our country.

In the case of India, internet shutdowns are also a bad idea because they tend to prolong their own duration and to make future shutdowns more frequent.

Let me explain the duration argument first. Shutdowns tend to happen in regions that are already unstable or may be about to become so. For better or for worse, the violence and brutality resulting from the instability are captured and shared through smartphones. While those videos and photos may not be as effective as independent news stories, when put on social media they combine to build a narrative. Soon enough the whole is greater than the sum of its parts, creating awareness among people who had little or none before. The problem is that the longer the instability and the internet shutdown last, the more 'content' there is to build a narrative. In the case of Assam, and even more so in Kashmir, this is exactly what has happened. At this point, if the government rescinds the shutdown in either of those places, it faces the inevitable opening of the floodgates on social media. And the longer this lasts, the more content is going to be floated around.

Secondly, internet shutdowns make internet shutdowns more frequent. After revoking access to the internet a certain number of times, the current administration seems to have developed a model, even a doctrine, for curbing dissent.

Step 1 in that model is shutting down the internet. This has led to shutdowns being normalised as a measure within the government. It is no longer a calculated response but a knee-jerk reaction that kicks the freedom of expression in the teeth every time it is activated.

The broader point here is that taking away the internet is an act of running away from backlash and discourse.

To carry it out as an immediate response to protests is, in principle, to turn away from the democratic value of free speech. It is hard to believe that it may be time for the world's largest democracy to learn from Hong Kong (a state that uses tear gas against its people and then tries to ban face masks) when it comes to dealing with protesters.

(The writer is a technology policy analyst at the Takshashila Institution.)

This article was first published in Deccan Chronicle.

Read More

Amazon, Fine Margins, and Ambient Computing

There are some keynotes in the tech world that serve as highlights of the year. There is Apple's iPhone event, and WWDC, where Apple traditionally deals with software developments. Then there is Google's I/O, and also the Mobile World Congress. Virtually all of these are guaranteed to make the news. Earlier last year, it was an Amazon event that captured the news (outshining Facebook's Oculus event, held on the same day, in the process).

During the event, Amazon launched 14 new products. By any standards, that is a lot of announcements, products, and things to cover in a single event, and so it can be a bit much to keep up with and make sense of what's happening at Amazon. The short version of the developments is that Amazon is trying to put Alexa everywhere it possibly can. It's competing with Google Assistant and Siri, as well as with your daily phone usage. It wants you to check your phone less and talk to Alexa more.

That would explain why Amazon has launched Echo Buds, which have Bose's noise reduction technology and are significantly cheaper than Apple's AirPods. There is also an Amazon microwave (also cheaper than its competition), as well as Echo Frames and an 'Alexa ring' called Loop. The Echo speaker line has also been diversified to suit different pockets (and now includes a deepfake of Samuel L. Jackson's voice, which is amusing, and incentive enough to prefer Alexa over other voice assistants unless the competition upstages it). Amazon also launched a plug-in device called Echo Flex, which seems ideally suited for hallways, in case you want access to Alexa while going from one room to another and are not wearing your glasses, earphones, or ring.

Aside from the huge number of form factors in which Amazon can now put Alexa, the other notable thing about these products is how they are priced.
You could make the argument that the margins are so thin that the pricing is predatory (a testament to what can be accomplished when one sacrifices profit for market share). Combine that with how these products will be featured on Amazon's website, and you can foresee decent adoption rates, not just in the US but also globally, should those products become available.

In the lead-up to the event, Amazon also launched a Voice Interoperability Initiative. The idea is that you can access multiple voice assistants from a single device. Notably, Google Assistant and Siri are not part of the alliance, but Cortana is. You can check out a full list here. The alliance is essentially a combination of the best of the rest. It aims to compensate for the deep system integration that Alexa lacks but Google Assistant and Siri have on Android and iOS devices.

Besides making Alexa more competitive, the broader aim of the event is to make Amazon a leader in ambient computing. Amazon knows that it is going to be challenging to have people switch from their phones to Alexa, and so it likely wants marginal wins (a practice perfected in-house). That's why so many of its announced products are concepts, or 'Day 1' products available on an invite-only basis. The goal is to launch a bunch of things and see what sticks and where Alexa feels most natural, so that Amazon can capitalise on it later.

It is Amazon's job to make a pitch for an Alexa-driven world and try to drive us there through its products and services, but not enough has been said about what that world might look like once we are in it. An educated guess is that user convenience will eventually win in such a reality. As will AI, with more data points coming in for training. This is likely to come at a cost to privacy, depending on Amazon's compliance with data protection laws (should they become a global norm).

To be fair to Amazon, the event had some initial focus on privacy, which then shifted to products. However, the context matters.
For better or worse, these new form factors are a step ahead in collecting user data. The voice interoperability project might also mean that devices will have multiple trigger words and, thus, more accidental data collection. To keep up with that, Amazon will need to improve its practices on who listens to recordings and how.

Amazon's event has given us all things Alexa at very competitive rates, which sounds great. If you are going to take away one thing from the event, let it be that Amazon wants to naturalise you talking to Alexa. Its current strategy is to surround you with the voice assistant wrapped in different products. If it can make you talk to Alexa instead of checking your phone, or using Google Assistant or Siri, even four times a day, that is a win it can build on.

Read More

Disney Should Buy Spotify

You may think that winning the streaming race depends on having the best content, but things have already begun to change. As of now, the company with the better bundle will win, and that's why it makes sense for Disney to buy Spotify this year.

To read the full article, visit OZY.

Rohan is a technology policy analyst at The Takshashila Institution.

Read More

Data Protection Bill, an unfinished piece of work

Bill demands age verification and consent from guardians of children for data processing

Shashi Tharoor has a strong case when he says that the Personal Data Protection Bill should have gone to the information technology standing committee. It sets a poor precedent when issues as important as this Bill do not go through proper channels of debate. Because of the nature of the Bill, there is a tremendous amount of scope for discourse and disagreement.

Let us begin with the most debated aspect of this legislation: the Data Protection Authority (DPA). Because the mandate of the Bill is so large, the Bill itself can only set guidelines and give direction on where the data protection space should go. The heavy lifting of enforcement, monitoring, and evaluation has to fall on the shoulders of a different (and ideally independent) body. In this case, that is the DPA, which has a duty to protect the interests of data principals, prevent any misuse of personal data, ensure compliance with the Act, and promote awareness about data protection. The body needs to enforce the Bill, down to auditing and compliance; maintain a public database listing significant data fiduciaries along with a ranking that reflects the level of compliance each is achieving; and act as a check and balance on the government.

However, the DPA may end up not being the force of objective balance that it has often been made out to be in the Bill. Here is why. The body will have a total of seven members (a chairperson and six others). All of them will be appointed by the government, based on the recommendations of the cabinet secretary, the secretary to the Government of India in the ministry (or department) dealing with legal affairs, and the secretary to the ministry (or department) of electronics and information technology. All of this falls under the mandate of the executive, with no involvement required from the judiciary or, for that matter, the legislature. Also, the current version of the Bill does not specify who (or which department) in the central government these recommendations will go to. Is it MeitY? NITI Aayog? The PMO? There is no clarity.

One cannot help but notice a pattern here. The Bill itself is going to go to a committee dominated by members of the ruling party and the enforcer is going to be wholly constituted by the executive.

Where is the feedback loop? Or the chance for scrutiny? You could at this point begin questioning how independent the DPA is going to be in its values and actions.

That is not to say that the Bill is all bad. Specifically, it does a good job of laying out the rights attached to the personal and sensitive personal data of children, something that is not talked about often. The Bill takes a unique approach here: it classifies companies that deal with children's data as guardian data fiduciaries. That is crucial because children may be less aware of the risks, consequences, and safeguards concerned, and of their rights, in relation to the processing of personal data. The Bill clearly requires these guardian data fiduciaries to carry out age verification and obtain consent from guardians for data processing. Fiduciaries are also not allowed to profile, track, monitor, or target ads at individuals under 18.

This is a loss for Facebook. The minimum age to be on the social media platform is 13, and Facebook's business model is to profile, track, monitor, and micro-target its users. One of two things will happen here. Facebook will either have to raise the bar for entry onto the platform to 18, as per the Bill, or it will need to ensure that its algorithms and products do not apply to users who are below 18. Either way, expect pushback from Facebook on this, which may or may not result in the section being modified.

The other thing the Bill should add on children's rights is a requirement to simplify privacy notices and permissions for children, consistent with global standards. For instance, the GDPR mandates asking for consent from children in clear and plain language. There is value in making consent consumable for children and adults alike. So provisions in this regard should apply not just to children but also to adults, mandating a design template for how and when consent should be asked for.

In sum, the Bill is an unfinished product in many ways. It has good parts, such as the section on the personal and sensitive personal data of children. However, it needs debate and scrutiny from multiple stakeholders to guide the DPA to be the best version of itself, and it is in the government's hands to make that happen.

Read More

2020 cybersecurity policy has to enable global collaboration

The rapid expansion of digital penetration in India brings with it the need to strengthen cybersecurity. The critical nature of the myriad cyber threats that India faces was underscored by the recent breaches at the Kudankulam nuclear power plant and the Indian Space Research Organisation. These were just two of the 1,852 cyber-attacks that are estimated to have hit entities in India every minute in 2019. Symantec's 2019 Internet Security Threat Report ranks India second on the list of countries affected by targeted attack groups between 2016 and 2018.

It's clear that India faces expanded and more potent cyber threats. Given this fact, the new national cybersecurity policy, set to be announced early next year, should improve on the shortcomings of the previous policy of 2013. The most significant of these were the absence of clear, measurable targets, the failure to set standards for the private sector, and a limited focus on international collaboration.

In many ways, the broad thrust of the 2013 policy was on point. It argued for the need to build a "secure and resilient cyberspace", given the significance of the IT sector in fostering growth while leading to social transformation and inclusion. This called for creating a "secure computing environment and adequate trust and confidence in electronic transactions, software, services, devices and networks". Since then, certain steps have been taken to operationalise the policy. These include the establishment of the National Cyber Security Coordination Centre and the Cyber Swachhta Kendra, along with announcements to set up sectoral and state CERTs and to expand the number of Standardisation, Testing and Quality Certification facilities. However, much more needs to be done, and at a faster pace.

While it is no one's argument that state capacity can be augmented overnight, setting clear targets can help drive action towards an identified goal. Moreover, the lack of such targets in the 2013 policy means that it is extremely difficult today to assess whether the policy had the desired impact. Five-year plans are well-written documents in this respect, whether or not you agree with the goals they outline for the nation, or even with whether the five-year approach is right at all.

The most quantifiable item on the agenda of the 2013 cybersecurity policy was the objective of creating a workforce of 500,000 professionals skilled in cybersecurity within five years through capacity building, skill development, and training. The objective set a number that one could look at five years later to see whether it exceeded or fell short of expectations. And the data in this regard is sobering. In 2018, IBM estimated that India was home to nearly 100,000 trained cybersecurity professionals. What is further alarming is that it estimated the total number needed at nearly three million.
The 2020 policy must, therefore, not just identify clear targets but also identify the ways and means through which those targets should be met.

Almost everything else in the 2013 document was fairly ambiguous. It contained repeated references to adopting and adhering to global standards for cybersecurity. However, there was no clarity on which specific standards should be followed and how long industry should take to adopt them.

This brings us to the second shortcoming. The policy at the time was hoping to balance a trade-off between encouraging innovation and ensuring that basic standards for security and hygiene were met. When it came to the private sector, it repeatedly used words such as "encourage", "enable" and "promote", being careful not to make anything mandatory. Even when it did mandate something, say, global best practices in cybersecurity for critical infrastructure, it is hard to say how it planned to declare the mandate a success or a failure. This is again a pitfall that the 2020 policy must avoid. The policy must establish or identify standards that industry should adopt within a fixed timeframe. There is also a need for the government to engage with the private sector, particularly when it comes to sharing skills and expertise.

Finally, when it comes to international collaboration, the 2013 policy argued for developing bilateral and multilateral relationships in the area of cybersecurity with other countries, and for enhancing national and global cooperation among security agencies, CERTs, defence agencies and forces, law enforcement agencies, and judicial systems. Since then, India has entered into a number of cybersecurity-related MoUs. However, there is an urgent need to put in place domestic frameworks, for instance with regard to data protection, which will enable broader global collaboration and participation in rule-setting. Unfortunately, this has not been happening.
For instance, India was not a signatory to the Budapest Convention, which would have allowed easier access to data for law enforcement. It also did not enter into an executive agreement under the US-initiated CLOUD Act. On a related note, the government also did not sign the Osaka Track, a plurilateral data-sharing agreement proposed at the 2019 G20 Summit. These are important dialogues that India must be part of if it is to build a resilient and thriving cyber ecosystem.

Read More

Personal Data Protection Bill has its flaws

The Data Protection Authority can potentially deal with brokers and the negative externality their role brings

Indian tech policy is shifting from formative to decisive. Arguably the biggest step in this shift comes this week, as the Personal Data Protection Bill will (hopefully) be debated and passed by Parliament. The Bill itself has gone through public (and private) consultation, but it is still anyone's guess what the final version will look like.

Based on the publicly available draft, there is a lot right with the Bill. The definitions of different kinds of data are clear, and there is a lot of focus on consent. However, there is not enough focus on regulating data brokers, and that can be a problem. Data brokers are intermediaries who aggregate information from a range of sources. They clean, process, and/or sell the data they hold. They generally source this data from what is publicly available on the internet, or from companies that collect it first-hand.

Because the Bill does not explicitly discuss brokers, problems lie ahead. Broadly, you could argue that brokers fall under the Bill's existing categories of data fiduciaries or data processors, but neither captures their role well. Imagine a scenario where brokers in India sell lists of people who have been convicted of rape, and the list ends up becoming public information.

Similarly, think about cases where databases of shops selling beef, of alcoholics, or of people with erectile dysfunction are released into the wild. The latter two are instances the US is somewhat familiar with. A data broker can ask its clients not to re-sell the data, or expect certain standards of security to be maintained, but there is no way to logistically ensure that the client is going to adhere to this in a responsible manner. The draft Bill talks about how to deal with breaches and who should be notified. But breaches are, by definition, unauthorised. A data broker's whole business model is selling or processing data, all of which is legal. So, how should the Indian government be looking to keep data brokers accountable? Some would argue that the answer may lie in data localisation. But localisation will only ensure that data is stored and processed domestically. Even if the broker is located domestically, it doesn't matter unless there is a provision in law mandating accountability.

The issue around brokers is also unlikely to be handled in the final version of the Bill. Even though it is important and urgent, it does not take precedence over more fundamental issues. What is likely to happen is that data brokers and their activities will become subject to the mandate of the Data Protection Authority (DPA), due to be formed after the Bill is passed.

Once the DPA is formed, there are a few ways in which it can potentially deal with brokers and the negative externality their role brings.

One option could be to hold data brokers accountable once a breach has occurred and a broker has been identified as culpable. The problem here is that data moves fast. By the time there is a punitive measure in response to a breach, the damage may have already been done. In addition, such a measure would also encourage brokers to hide the traces of breaches that lead back to them.

Another alternative could be to ask every data broker to register itself. But that would incentivise more data brokers to move out of the country while maintaining operations in India.

Rohan is a technology policy analyst at The Takshashila Institution.

This article was first published in Deccan Chronicle.

Read More
Economic Policy | Anupam Manur

Bengaluru needs more high-tech companies, not fewer 

The Karnataka government is set to release a new industrial policy next month with the goal of encouraging investment in tier-II cities. As in the past, this goal is likely to be framed in zero-sum terms, i.e. achieved by pushing IT companies to move away from Bengaluru and into other cities instead.

We will limit this article's focus to what such a policy direction would mean for high-tech sectors such as biotech, aerospace, and IT. On the face of it, this push towards creating an alternative centre of gravity for the high-tech industry seems an intuitive answer for achieving balanced regional growth. And yet, this view is wrong, because it doesn't square with the empirical experience of high-tech clusters elsewhere in the world.

Read more at: https://www.deccanherald.com/opinion/bengaluru-needs-more-high-tech-companies-not-fewer-780314.html

Read More

Joining a New Social Media Platform Does Not Make Sense

Mastodon is what's happening in India right now. Indian Twitter users are moving to the platform and have taken to using hashtags such as #CasteistTwitter and #cancelallBlueTicksinIndia. A key reason for this is that Twitter has been, to put it mildly, less than perfect in moderating content in India. There is the incident with lawyer Sanjay Hegde that caused this to blow up, along with accusations that Twitter has been blocking hundreds of thousands of tweets in India since 2017, with a focus on accounts from Kashmir.

Enter Mastodon. The platform, developed by Eugen Rochko, is open-source, so no one entity gets to decide what content belongs in the communities there. Also, the data on Mastodon is not owned by one single corporation, so you know that your behaviour there is not being quantified and sold to people who would use it to profile and target you. Plus, each server (community) has a relatively small size, with a separate admin, moderator, and, by extension, code of conduct. All of this sounds wonderful. The character count is also 500 characters as opposed to 280 (if that is the sort of thing you consider an advantage).

Mastodon is moving the needle forward by a significant increment when it comes to social networking. The idea is for us to move towards a future where user data isn't monetised and people can host their own servers instead. As a tech enthusiast, I find that wonderful, and I honestly wish this is what Twitter had been.

Keeping all of that in mind, I don't think I will be joining Mastodon. Hear me out. A large part of my reason is not that Mastodon has its own problems; let's set those aside for now and move on to the attention economy. Much like how goods and services compete for a share of your wallet, social media has for the longest time been competing for attention and mind-space. The more time you spend on a platform, the more ads you will see, and the more money it will make.
No wonder it is so hard to quit Instagram and Facebook.Joining a new platform for social media today is an investment that does not make sense unless the other one shuts down. There is a high chance of people initially quitting Twitter, only to come back to it while being addicted to another platform. The more platforms you are on, the thinner your attention is stretched. That is objectively bad for anyone who thinks they spend a lot of time on their phone.If you’re lucky to be one of the few people who do not suffer from that and are indifferent to the dopamine that notifications induce in your brain, this one doesn’t apply to you. Then there is the network effect and inertia. I for one, am for moving the needle forward little by little. But here, there is little to gain right now, with more to lose.Network effects are when products (in this case, platforms), gain value when more people use them. So, it makes sense for you to use WhatsApp and not Signal, as all your friends are on WhatsApp. Similarly, it makes sense for you to be on Twitter as your favorite celebs and news outlets are on there. Mastodon does not have the network effect advantage, so most people who do not specifically have their network on Mastodon, do not get a lot of value out of using it.In addition, there is inertia. Remember when we set aside Mastodon’s problems earlier, here is where they fit in. Mastodon is not as intuitive as using Twitter or Facebook. That makes it a deal-breaker for people of certain ages and also happens to be a significant con for people who don’t want to spend a non-trivial chunk of their time learning about servers, instances, toots, and so on.There also isn’t an official Mastodon app, however, there are a bunch of client apps that can be used instead, most popular among them is Tusky, but reviews will tell you that it is fairly buggy and that is to be expected. There is so much right with Mastodon. 
It is a great working example of the democratisation of social media. It also happens to exist in an age where it would be near impossible to get funding for or to start a new social media platform. The problem is that for people who don’t explicitly feel the need or see the value in joining Mastodon, are unlikely to split their attention further by joining a new platform. The switching costs, network effects, and inertia are simply too high.Rohan is a policy analyst at The Takshashila Institution and the co-author of Data Localization in a Globalized World: An Indian Perspective.This article was first published in Deccan Chronicle.

Read More

Govt needs to be wary of facial recognition misuse

India is creating a national facial recognition system. If you live in India, you should be concerned about what this could lead to. It is easy to draw parallels with 1984 and say that we are moving towards Big Brother at pace, and perhaps we are. But a statement like that, for better or worse, would accentuate the dystopia and may not be fair to the rationale behind the move. Instead, let us sidestep conversations about the resistance, doublethink, and thoughtcrime, and look at why the government wants to do this and the possible risks of a national facial recognition system.

WHY DOES THE GOVERNMENT WANT THIS?

Let us first look at it from the government’s side of the aisle. A national facial recognition database can have a lot of pros. Instead of looking at this as Big Brother, the best-case scenario is that the Indian government is looking at better security, safety, and crime prevention. It would aid law enforcement. In fact, the request for proposal by the National Crime Records Bureau (NCRB) says as much: ‘It (the national facial recognition system) is an effort in the direction of modernizing the police force, information gathering, criminal identification, verification and its dissemination among various police organizations and units across the country’.

Take it one step further: later down the line, the same database could also be used to achieve gains in efficiency and productivity. For example, schools could take attendance with FaceID-like software, and checking train tickets would become more efficient (discounting the occasional case of plastic surgery that alters your appearance significantly).

POTENTIAL FOR MISUSE

The underlying assumption for this facial recognition system is that people implicitly trust the government with their faces, which is wrong. Not least because even if you trust this government, you may not trust the one that comes after it. This is especially true when you consider the power that facial recognition databases provide administrations.

For instance, China has successfully used AI and facial recognition to profile and suppress minorities. Who is to guarantee that the current or a future government will not use this technology to keep out or suppress minorities domestically? The current government has already taken measures to ramp up mass surveillance. In December last year, the Ministry of Home Affairs issued a notification that authorized 10 agencies to intercept calls and data on any computer.

WHERE IS THE CONSENT?

Apart from the fact that people cannot trust all governments across time with data of their faces, there is also the hugely important issue of consent and the absence of legality. Facial data is personal and sensitive. Not giving people the choice to opt out is objectively wrong.

Consider the fact that once such a database exists, it will be shared with state police forces across the country; the proposal excerpt quoted above says as much. There is every chance that we are looking at increased discrimination in profiling, with AI algorithms repeating existing biases.

Why should the people not have a say in whether they want their facial data to be a part of this system, let alone whether such a system should exist in the first place?

Moreover, because of how personal facial data is, even law enforcement agencies should have to go through some form of legal checks and safeguards to clarify why they want access to data and whether their claim is legitimate.

DATA BREACHES WOULD HAVE WORSE CONSEQUENCES

Policy, in technology and elsewhere, is often viewed through what outcomes are intended and anticipated. Data breaches are anticipated but unintended. Surely the government does not plan to share or sell personal and sensitive data for revenue. However, considering past trends with Aadhaar and the performance of State Resident Data Hubs, leaks and breaches are to be expected. Even if you trust the government not to misuse your facial data, you should not be comfortable trusting third parties who went through the trouble of stealing your information from a government database.

Once the data is leaked and being used for nefarious purposes, what would remedial measures even look like? And how would you ensure that the data is not shared or misused again? It is a can of worms that, once opened, cannot be closed.

Regardless of where on the aisle you stand, you are likely to agree that facial data is personal and sensitive. The technology itself is extremely powerful and, in the wrong hands, can be misused. If the government builds this system today, without consent or genuine public consultation, it would all but ensure that it or future administrations misuse it for discriminatory profiling or for suppressing minorities. So if you live in India today, you should be very concerned about what a national facial recognition system could lead to.

This article was first published in The Deccan Chronicle. Views are personal.

The writer is a Policy Analyst at The Takshashila Institution.

Read More