This newsletter is published at techpolicy.substack.com
An excerpt from Edition 21 is reproduced below.
Conspiracies, Rumour and Humour
Just a joke, or not just a joke? Framing counts
From 28th September – 2nd October, EU Disinfo Lab hosted a virtual conference on Disinformation (yes, yes, I know everyone is). Wednesday featured Emmanuel Choquette, who spoke about humour, free speech and hate speech. The session will eventually be posted online, but I want to focus on 3 key points he made. I’m listing them in reverse order.
1) Humour, in the context of discourse, is not necessarily neutral. In fact, it shares similar characteristics with other forms of discourse (slide included).
2) It does have mediated effects, as he demonstrated with an experiment that exposed people to pointed humour based on racial stereotypes.
3) He also referenced a study he carried out between 2006 and 2018, building on [trigger language warning] work by Lanita Jacobs-Huey, which indicated that certain ethnic groups were more likely to be targeted by humour/jokes than others.
So, no, this doesn’t mean we have to stop joking around (phew). But we do have to realise that in certain contexts it may not ‘just be’ joking around.
And in some cases, humour can also be used as a vehicle to counter disinformation. Taiwan’s Digital Minister Audrey Tang spoke about this in a conversation moderated by Joan Donovan. The underlying intention is to counter the rage and distrust that situations like pandemics generate, an approach phrased as ‘Humour over Rumour’, along with the claim that ‘Humour has a greater than R0’ value. Greater than what? That wasn’t very clear. Worth noting: while this approach seemingly worked well in the context of medical information, it remains to be seen how effective it can be at countering political disinformation. As I covered in edition 2 (What me-me worry): humour does work well for the propagation of political disinformation.
I am going to try to get through this edition without mentioning QAnon (crap!). But that still leaves us plenty to talk about pertaining to conspiracy theories.
TikTok user @Tofology has come up with a really interesting inverted pyramid (tin-foil hat?) to classify conspiracy theories, starting from those grounded in reality at the bottom and moving further away from reality as you climb higher (speculation, leaving reality, science denial and past the point of anti-semitic no return). Somewhere along the upward journey (into despair), it also incorporates harm. Now, for those of us in India, TikTok is still blocked, but fortunately she posted it on Twitter as well.
And another Twitter user was helpful enough to illustrate the pyramid as an image. The examples are mostly U.S.-centric, but with a little modification it can be made more location-neutral. I have to admit, I am confused about the ‘harm’ aspect though, because even theories near the bottom (grounded in reality) can fall at different places on that spectrum.
I’m getting a lot of requests to post this here as well. I’m working on a revision (I didn’t expect this to blow up) that I’ll post soon. Some things need to be moved/clarified (e.g. Antifa Wildfires, and moon landing + flat earth, which are moving up).
Catherine Stihler, the chief executive of Creative Commons, writes about defeating conspiracy theories through government transparency. Now, I don’t know if that will defeat them (and to be fair, the headline was probably chosen by the publication). The concluding sentences raise an extremely pertinent issue:
There is an urgent need to address humanity’s greatest global challenges through collaboration and accessing information.
It’s time to unlock knowledge for everyone, everywhere.
This creates a tension between the need to pay for quality information/knowledge (since it has high costs of production) and the need to ensure this knowledge is disseminated evenly. The pandemic has made it painfully evident that public knowledge (I realise the term is doing some heavy lifting here), like public health, should not be exclusive.
Nathan Allebach has a long read (~60 minutes), Conspiracy Theories in America. Two sections in particular stood out to me:
- Declining Institutions and Distrusting Experts
People need agreed-upon information for a democracy to function. When someone handpicks data they want to believe from counter authorities and demand others trust it, while simultaneously dismissing expert consensus on that very data, it’s a massive problem.
Experts and institutions should not be immune from criticism for cases of corruption, costly mistakes, and the like, but the public response should be accountability and solution-based, rather than based in paranoia and populism. Scientists, researchers, journalists, and experts generally haven’t been the most effective at communicating messages over the years.
If people distrust expert consensus on an issue, they’ll always find a source to justify their beliefs, which is how pseudoscience and conspiracy theories gain momentum, and if people feel lied to by institutions, it creates a vacuum for these exploitative forces.
- From Extremely Online To Extremists
As expected, this is U.S.-centric, but there are some aspects which apply across contexts:
- The cross-pollination of views across different platforms.
- “The culture wars around feminism, LGBTQ, and Islam in particular became proxies for reactionaries to rally around online.”
- “Trolling online was once contained to a small community, but now opened to millions of people. Extremists learned that if they could make a splash, people would react, then journalists would be pressured to cover it, which would amplify the message and accelerate distrust in those media sources. They gamed the system.”
A matter of Frames
I’m going to use this to segue to my next point: in my mind, these points about the dual use of humour, as well as the amplification of conspiracies, relate to how things are ‘framed’. And this idea of framing applies, perhaps, as much to what we would refer to as traditional or mainstream media. A few recent events got me thinking about this even more:
- An English-language national daily in India using a hate-inspired hashtag in its social media updates, one which implies a conspiracy involving marriage.
- A legal reporting outfit using a different hate-inspired hashtag to report on court proceedings.
- This editorial by the NY Times editorial board, calling on social media platforms to have a clear, transparent, coordinated plan to deal with a scenario in which one candidate (we know which one) claims an illegitimate victory.
Now, it is good to push platforms to have a clear plan so that crucial information flows around important events are not affected by arbitrary decision-making. But, as these two tweets show, this conversation is not complete without involving the original intermediaries of information and their role in framing issues.