MisDisMal-Information Edition 14

This newsletter is published at techpolicy.substack.com. The following is an excerpt from Edition 14.

Go here for the complete edition

Of Things that aren’t there, Midinformation, Belarus and Twitter offices

Things that aren’t there

The fans that weren’t and the laws that aren’t

Earlier this week, Mumbai Mirror carried a story about Badshah, alias Aditya Prateek Singh Sisodia (turns out he is my namesake, sort of), claiming that he had “bought around 7.2 crore views for Rs 72 lakh for one of his songs, called Pagal Hai, in a bid to set a world record.” The police went so far as to say he confessed, but the rapper himself states that he ‘categorically denied all allegations’.
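A quick back-of-the-envelope check of the figures quoted in the story (no assumptions beyond the reported numbers themselves):

```python
# Figures as reported: 7.2 crore views (7.2 * 10^7) allegedly
# bought for Rs 72 lakh (72 * 10^5).
views = 7.2 * 10**7       # 7.2 crore views
cost_rupees = 72 * 10**5  # Rs 72 lakh

# Implied price per view
price_per_view = cost_rupees / views
print(price_per_view)  # -> 0.1, i.e. 10 paise per view
```

In other words, the alleged going rate works out to roughly 10 paise a view.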

But as Karthik S points out, the business of ‘fake likes and views’ may be unethical, but it is not illegal, yet. And Bhanuj Kappal, writing in Livemint, notes that he isn’t the only one.

*Aside: The New York Times published an interactive story on follower factories in early 2018.*

Indeed, a report on Government Responses to Disinformation on Social Media Platforms indicates that while some countries (Denmark, Sweden) have referenced the use of bots in the context of elections, none of them have taken any legislative action so far. A draft version of the Interstate Broadcasting Treaty in Germany proposes instituting an obligation on social media intermediaries ‘to identify social bots’. The primary considerations are electoral. The other related thread comes from the direction of linking social media profiles to real-world identities. China and Belarus enforce this. India and Kyrgyzstan have proposals that reference this. Brazil did too, but subsequent drafts of the ‘disinformation law’ appear to have relaxed this requirement.

So while the ‘influencer industry’ may be relieved for now, the rest of us probably shouldn’t be. Jenna Hand, writing for First Draft, covers government overreach during the pandemic.

Hungary, Romania, Algeria, Thailand and the Philippines are among the countries that have instituted new laws or invoked emergency decrees giving authorities the power to block websites, issue fines or imprison people for producing or spreading false information during the pandemic. In Cambodia and Indonesia, social media users have been arrested after allegedly posting false news about the coronavirus. In Egypt, a journalist who had been critical of the government’s response to the pandemic and was detained for “spreading fake news” contracted the virus in custody and died before he could be tried. Even in South Africa, where freedom of expression is a constitutional right, politicians criminalized the publication of any statement made “with the intention to deceive any other person” about Covid-19, government measures to address the disease or — in a sign of the country’s grim experience with HIV/AIDS — a person’s infection status.

And the International Press Institute is maintaining a tracker of countries that have passed ‘fake news laws’ during the pandemic. My sense is that it understates matters since countries have also used existing laws instead of passing new ones.

The Google searches that weren’t there…

Ok, mini rant time. News18 Buzz ran a story titled ‘“Is Kamala Harris Hindu?” What Many Indians Searched for After Biden Picked US Vice President Candidate’. I was intrigued to see what numbers they would back it up with. Well, here they are:

After her nomination was announced, many in India started to look her up, but instead of looking at her achievements, the most common search terms were ‘Kamala Harris religion’ ‘Kamala Harris Hindu’ ‘Kamala Harris religion’

That’s it. If only Google had a tool that indicated relative search interest and search trends…

I guess they do. It indicated that searches for her religion (red) or for whether she is a Hindu (blue) were likely far lower than searches for her name in general (yellow). It is possible that some of those searching for her by name were interested in her religion, and even a small percentage of us (Indians) will be “many”, but I will let the artist formerly known as Marky Mark sum up how I feel.
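For readers unfamiliar with how Google Trends numbers work, here is a minimal sketch of the relative-interest idea: raw counts are never shown; each series is rescaled so that the single highest point across all compared terms becomes 100. The counts below are invented purely for illustration, not actual search data.

```python
# Hypothetical weekly search volumes for three queries (made-up numbers).
raw_counts = {
    "Kamala Harris":          [5000, 90000, 40000],
    "Kamala Harris religion": [100, 4000, 1500],
    "Kamala Harris Hindu":    [80, 3500, 1200],
}

# Find the single highest point across all compared series...
peak = max(max(series) for series in raw_counts.values())

# ...and rescale every value so that this peak maps to 100.
relative = {
    term: [round(100 * v / peak) for v in series]
    for term, series in raw_counts.items()
}

print(relative["Kamala Harris"])           # -> [6, 100, 44]
print(relative["Kamala Harris religion"])  # -> [0, 4, 2]
```

On this kind of scale, a term that is dwarfed by the headline query barely registers, which is exactly why eyeballing the comparative chart matters more than a bare list of “common search terms”.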

The enemy that isn’t there?

If you have been following discourse around information disorder, you will be all too familiar with the tendency to blame as much as possible on foreign interference. Thankfully, the conversation seems to be moving towards domestic disinformation too.

Writing in Foreign Policy, Seva Gunitsky:

Treating disinformation as an alien disease ignores the fact that it is perfectly compatible with democratic norms and thrives inside democratic states.

The same factors that promote healthy democracies also promote the spread of disinformation. Democratic deliberation requires free flows of information and multiple competing narratives.

What we see emerging now is the “democrat’s dilemma”—controlling information is inimical to democracy, but allowing it to spread unchecked creates disinformation that can undermine democratic discourse. Increasingly, this trade-off appears hardwired into modern democratic regimes.

Some of this realisation (some, not all) is being driven by a growing understanding of QAnon.

I’ve covered QAnon substantially in the last few editions. And a lot of that has been with a sense of déjà vu. This may partly be because I recently read Rohit Chopra’s book analysing politics in the Indian social media sphere [goodreads link].

This understanding, though, is also troubling because it becomes difficult to envision a way out when you can’t even have a civil conversation. Anne Applebaum writes in The Atlantic about what you can do when the facts don’t get through. For the record, I am not really a fan of the Lincoln Project’s content, which she references, but I can see why it appeals to many.

The Post that wasn’t there

Earlier in the week, Bangalore witnessed some violence in reaction to a post by an MLA’s nephew. By the following morning the hashtag ‘BangaloreRiots’ was trending on Twitter, and everyone and their second cousins were being asked to ‘unequivocally’ condemn the violence. I am going to encourage you to consider this beyond the obvious, though. In the information ecosystem, things rarely happen in isolation. If dangerous speech exists, it isn’t going to pause just because someone (or in this case, many someones) did something we don’t agree with.

There are 3 scenarios that could play out (ok, there are more, but stay with me):

A – Stay quiet. Some people will get called out for not saying something (most won’t). There will be some attacks on these people (squiggly blue lines) in addition to the existing chorus (solid blue lines).

B – Qualified criticism. The existing chorus will continue. Some will get called out, and may choose to defend themselves. This may lead to clashes with the existing chorus (criss-crossing squiggly green and blue lines), or even mobilise new voices that may have stayed quiet otherwise.

C – Unequivocal criticism. Everyone joins the existing chorus.

Now in reality, the entire ecosystem will be a sum of myriad such choices playing out. They will all create tension and information pollution as well as have long term effects – although degrees may vary.

But this isn’t why this sub-section exists in this section of the edition. It exists because something else didn’t exist. What? Pooja Chaudhuri from Altnews investigates the claims that the post in question was a response to another post denigrating a Hindu deity. It wasn’t.

In response to the events in Bangalore:

  • Telangana’s DGP and Hyderabad’s Police Commissioner simultaneously urged caution and threatened strict action.
  • Also linked to the Bangalore incident, a report in TOI states that Kolkata’s police commissioner also “cautioned social media users and asked them to refrain from posting fake news.” It further stated that ~200 people were prosecuted in April and May for “fake posts”.

Information that isn’t anywhere

Ok, I may be getting a little carried away with this theme; this is the last one, I promise.

An Xiao Mina writes about missing information, or midinformation, which applies rather well to the whole COVID-19 situation:

In the case of emerging knowledge, it might be helpful to think not just about misinformation but midinformation. We know a little now, we’ll know more later, and we may never know everything ever. In other words, information stands in the middle, and we’re trying. Scientists are gaining some clarity, but it’s going to take some time for scientific consensus to build, and for public understanding to catch up.

Midinformation, in other words, is the sort of information crisis that happens when not all the facts are known. In that vacuum of knowledge, all kinds of rumors, conspiracies, misunderstandings and misconceptions can emerge, because it’s comforting to have an anchor that feels true and reliable.

To my untrained mind, this is reminiscent of the concept of data voids, which Michael Golebiewski and Danah Boyd defined in the context of search engines.

“There are many search terms for which the available relevant data is limited, nonexistent, or deeply problematic. … We call these low-quality data situations ‘data voids.’”

And, in another extremely interesting post, Tommy Shane extends the concept of data voids beyond search engines to social media platforms, asserting that they are search engines too, given the way people interact with them. He has three asks of platforms:

1) A Google trends equivalent for social media platforms.

2) More precision from Google trends.

3) A connection between interest and results.