This is an excerpt from Edition 47 of MisDisMal-Information
In this edition:
- Networked Misinformation and Content Cartels
- Between hate and a hard place
- En-gendering disinformation
Networked Misinformation and Content Cartels
As Renee DiResta (2021) writes in The Atlantic, “Misinformation is networked; content moderation is not.” In this fragmented moderation landscape, content whose spread is limited on one platform can have wide reach on another.
I don’t usually start with a quote, but this one seemed apt. It is from a recent paper that analysed how Donald Trump’s tweets that were flagged/labelled [visual indicator only] or restricted [restricted forms of engagement such as viewing, retweeting, replying, etc.] spread across Twitter, and how the same messages spread across Facebook, Instagram and Reddit (HKS Misinformation Review – Zeve Sanderson, Megan A. Brown, Richard Bonneau, Jonathan Nagler and Joshua A. Tucker).
Some key findings from the paper:
- On Twitter: Tweets that were labelled spread further than those that were neither labelled nor restricted.
- On other networks: In general, for posts containing the same ‘messages’, those that were restricted on Twitter spread further than those that were labelled or not labelled. But there are some subtleties to highlight:
- Facebook: Messages with/without labels had a similar “average number of posts on public Facebook pages and groups”. Messages that were restricted had “a higher average number of posts, were posted to pages with a higher average number of page subscribers, and received a higher average total number of engagements.”
- Instagram: On the average number of posts, the pattern was similar to Facebook. However, engagement differed: “posts with a hard intervention received the fewest engagements, while posts with no interventions received the most engagements.”
- Reddit: Reddit doesn’t report engagement numbers in the same way as other platforms, so researchers had to use subreddit size (users) and frequency of posts: “messages that received a hard intervention on Twitter were posted more frequently and on pages with over five times as many followers as pages in which the other two message types were posted.”
The authors are careful to point out that these results don’t suggest that the “Streisand effect” is in action since the nature of the messages themselves could have played a part.
In conclusion, they say:
Here, we show how content moderation policies on one platform may fail to contain the spread of misinformation when not equally enforced on other platforms. When considering moderation policies, both technologists and public officials should understand that moderation decisions currently operate in a fractured online information environment structured by private platforms with divergent community standards and enforcement protocols.
I think this is a crucial point. And recognising this ecosystemic nature of mis/disinformation was one of the reasons we proposed the term Digital Communication Networks (DCNs), with 3 components: capabilities, operators and networks. In fact, the networked nature of the information ecosystem means there are implications beyond just mis/disinformation (also something we highlighted in the paper). It is also important to move away from the binary of treating users as either passive consumers of information + narratives or active disinformers, and away from the assumption that a limited set of actors exercises control over the information ecosystem. For this last bit, I find Kate Starbird’s Participatory Disinformation model pretty useful (I wrote about it in 39: Of Polarisation, propaganda, backfire and participatory disinformation) because it identifies the presence of closed feedback loops, varying incentives and challenges with control. Note: challenges with control don’t necessarily make these networks fragile.
But returning to the idea of misinformation being networked and siloed content moderation not being an adequate response, we are likely to see regulatory forces pushing DCN operators towards ‘more cooperation’. At this point, we’d do well to recap some of the costs of ‘Content Cartels’ that Evelyn Douek wrote about (Knight First Amendment Institute):
- Compounding accountability deficits
- Creating a false patina of legitimacy
- Augmenting the power of the powerful
These are all sub-heads from the essay but pretty self-explanatory, so I won’t elaborate.
The GIFCT [Global Internet Forum to Counter Terrorism] represents an interesting case study (Emma Llansó explains this very well on an episode of the Lawfare podcast). Also worth reading is this interview of Erin Saltman by Issie Lapowsky on the GIFCT’s struggles to expand the definition of ‘violent extremism’ (The Protocol).
Aside: Incidentally, the 177-page report referred to there included the BJP as an example of a Level 1 (Fringe Group) engaged in non-violent extremism (see page 51 for the framework and page 62 for the table). See for yourself who actually seems to have covered this story (Google Dork Link). Spoiler: not a very long list.