This newsletter is published at techpolicy.substack.com
An excerpt from Edition 22 is reproduced below.
Of Q-A-Gone, Laughter, Models and Roles
This is one of the few occasions I won’t be starting off with an India-centric bit. And while the context may be foreign, I think the ideas still apply.
Well, it happened: Facebook went after QAnon (again), but seemingly with more intent this time.
In the first month, we removed over 1,500 Pages and Groups for QAnon containing discussions of potential violence and over 6,500 Pages and Groups tied to more than 300 Militarised Social Movements. But we believe these efforts need to be strengthened when addressing QAnon.
Starting today, we will remove any Facebook Pages, Groups and Instagram accounts representing QAnon, even if they contain no violent content. This is an update from the initial policy in August that removed Pages, Groups and Instagram accounts associated with QAnon when they discussed potential violence while imposing a series of restrictions to limit the reach of other Pages, Groups and Instagram accounts associated with the movement. Pages, Groups and Instagram accounts that represent an identified Militarised Social Movement are already prohibited.
Shayan Sardarizadeh, who has been monitoring various groups and doing a weekly “this week in qanon” thread (see Twitter Search), has seen a big drop in the number of groups/pages/profiles he was tracking.
Holy hell. Just had a look at my list of QAnon groups and pages on Facebook and it’s bloodbath out there. Absolute bloodbath. I’m down to 31 groups and 49 pages. I had 220 groups and 205 pages just last week 😮 On Instagram, I’m down to 258 accounts, I had nearly 400 last week.
For all intents and purposes, this is a good thing. Though such conspiracy theories are a hybrid stock-and-flow problem, and such interventions mainly address the “stock”. They may stem the flow for a short period – but by now, we should know all too well that such movements adapt. All the way back in Edition 12, there was a piece by Abby Ohlheiser about fact-checks and account bans not being enough to stop QAnon.
As of the time I am writing this (Thursday evening in India), no similar announcements from other platforms have been made (sometimes these things tend to happen in spurts, or fall like dominoes). One domino seems to have fallen, and it was certainly one I didn’t expect: Etsy says it will remove QAnon-related merchandise. Q-ware (I totally made that term up) has been around on various e-commerce platforms, as this article from mid-August indicates. How much of it sells? That I really don’t know.
We’ve also been here before, sort of. Just three weeks ago, Sheera Frenkel and Tiffany Hsu wrote in the New York Times about Facebook’s QAnon ban 1.0 not being effective enough.
The QAnon movement has proved extremely adept at evading detection on Facebook under the platform’s new restrictions. Some groups have simply changed their names or avoided key terms that would set off alarm bells. The changes were subtle, like changing “Q” to “Cue” or to a name including the number 17, reflecting that Q is the 17th letter of the alphabet. Militia groups have changed their names to phrases from the Bible, or to claims of being “God’s Army.”
Others simply tweaked what they wrote to make it more palatable to the average person. Facebook communities that had otherwise remained insulated from the conspiracy theory, like yoga groups or parenting circles, were suddenly filled with QAnon content disguised as health and wellness advice or concern about child trafficking.
And, then of course, there’s the likelihood of this content being recommended organically.
From the same piece:
Perhaps the most jarring part? At times, Facebook’s own recommendation engine — the algorithm that surfaces content for people on the site — has pushed users toward the very groups that were discussing QAnon conspiracies,
The Guardian reports that Facebook has missed some high-profile Australian accounts linked to it.
Kaitlyn Tiffany covers how Reddit essentially got rid of QAnon by getting lucky.
Unfortunately, Reddit is not particularly good at explaining how it accomplished such a remarkable feat. Chris Slowe, Reddit’s chief technology officer and one of its earliest employees, told me, point-blank: “I don’t think we’ve had any focused effort to keep QAnon off the platform.”
In a nutshell, it took action against subreddits for doxxing and got very lucky on the timing.
This isn’t a Facebook-only problem by any means, though. Around the same time, Clive Thompson, in WIRED, wrote about the role of YouTube’s Up Next in sending people down the conspiracy-theory, information-disorder rabbit-hole, and YouTube’s travails with ‘borderline content’ – the term used to describe content that is close to being against some policy or the other, but not quite there – and not very well defined, as the article points out.
In 2018 a UC Berkeley computer scientist named Hany Farid teamed up with Guillaume Chaslot to run his scraper again. This time, they ran the program daily for 15 months, looking specifically for how often YouTube recommended conspiracy videos. They found the frequency rose throughout the year; at the peak, nearly one in 10 videos recommended were conspiracist fare.
In 2019, YouTube started rolling out changes and claimed that it had reduced the watch time of borderline content by 70% – a claim that couldn’t be independently verified, though the downward trend seems evident.
Berkeley professor Hany Farid and his team found that the frequency with which YouTube recommended conspiracy videos began to fall significantly in early 2019, precisely when YouTube was beginning its updates. By early 2020, his analysis found, those recommendations had gone down from a 2018 peak by 40 percent.
And then came the pandemic, and QAnon:
“If YouTube completely took away the recommendations algorithm tomorrow, I don’t think the extremist problem would be solved. Because they’re just entrenched,” [Becca] Lewis tells me. “These people have these intense fandoms at this point. I don’t know what the answer is.”
One of the former Google engineers I spoke to agreed: “Now that society is so polarized, I’m not sure YouTube alone can do much,” the engineer noted. “People who have been radicalized over the past few years aren’t getting unradicalized. The time to do this was years ago.”
I have two points to make here.
- Recommendation Engines: This paper by Jennifer Cobbe and Jatinder Singh asserts that:
Focusing on recommender systems, i.e. the mechanism by which content is recommended by platforms, provides an alternative regulatory approach that avoids many of the pitfalls with addressing the hosting of content itself.
collectively, these studies and investigations show that open recommending can play a significant role in the dissemination of disinformation, conspiracy theories, extremism, and other potentially problematic content. Generally, these recommender systems do not deliberately seek to promote such content specifically, but they do deliberately seek to promote content that could result in users engaging with the platform without concern for what that content might be.
- They make the point that while a safe-harbour regime is the likely response to hosting (or was, anyway), the task of recommending involves higher discretion and should therefore be accompanied by more responsibility. And while regulation pertaining to content transmission and hosting can raise freedom of expression concerns, a regulatory approach focused on the recommendation engine can “side-step” that, since the “onward algorithmic dissemination and amplification of such content by service providers would be the focus of regulation”.
The paper proposes six principles that certainly merit some further thought.
- Open recommending (recommendation on user-generated content platforms like Facebook, YouTube, TikTok, etc.) must be lawful, and service providers should be prohibited from doing it where they violate these principles or other applicable laws.
- Service providers should have conditional liability protections for recommending illegal user-generated content and should lose liability protection for recommending while under a prohibition.
- Service providers should have a responsibility to not recommend certain potentially problematic content.
- Service providers should be required to keep records and make information about recommendations available to help inform users and facilitate oversight.
- Recommending should be opt-in, users should be able to exercise a minimum level of control over recommending, and opting-out again should be easy.
- There should be specific restrictions on service providers’ ability to influence markets through recommending.
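The dynamic Cobbe and Singh describe can be made concrete with a toy sketch. This is purely illustrative – the item names, scores, `borderline` flag and demotion factor are all invented, and no platform’s actual system is shown. The point it demonstrates is theirs: an engagement-optimising ranker is agnostic about *what* it promotes, so high-engagement borderline content rises to the top unless the ranking step itself is made policy-aware.

```python
# Hypothetical sketch: engagement-only ranking vs. a policy-aware variant.
# All names and numbers are invented for illustration.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    predicted_engagement: float  # e.g. expected watch time or click probability
    borderline: bool             # flagged "borderline" by some upstream classifier


def rank_by_engagement(items):
    """Pure engagement optimisation: content type plays no role in ranking."""
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)


def rank_policy_aware(items, demotion=0.5):
    """Same objective, but borderline items are demoted before ranking."""
    def score(i):
        return i.predicted_engagement * (demotion if i.borderline else 1.0)
    return sorted(items, key=score, reverse=True)


items = [
    Item("gardening tips", 0.40, borderline=False),
    Item("conspiracy video", 0.90, borderline=True),
    Item("news explainer", 0.55, borderline=False),
]

print([i.title for i in rank_by_engagement(items)][0])  # conspiracy video tops the feed
print([i.title for i in rank_policy_aware(items)][0])   # news explainer tops instead
```

The demotion approach is one of several ways a platform could operationalise a “responsibility not to recommend” – nothing here implies it is what any platform actually does.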
- And, secondly, as this article by Abby Ohlheiser states – it didn’t have to be this way. She spoke with the likes of Shireen Mitchell, Katherine Cross and Ellen Pao, who had seen the warning signs early [targeted harassment, abuse] and even experienced them, but were not taken seriously at the time. *In Edition 5, I linked to an episode of The Lawfare Podcast featuring Danielle Citron, where she had made the point that vulnerable groups are where we see the harms first.* The crux is this:
I’m not proposing to tell you the magical policy that will fix this, or to judge what the platforms would have to do to absolve themselves of this responsibility. Instead, I’m here to point out, as others have before, that people had a choice to intervene much sooner, but didn’t. Facebook and Twitter didn’t create racist extremists, conspiracy theories, or mob harassment, but they chose to run their platforms in a way that allowed extremists to find an audience, and they ignored voices telling them about the harms their business models were encouraging.
And this is where I’ll bring it home. We’ve seen platforms respond (I am deliberately not saying intervene) in the U.S. and Western Europe far faster (it is relative, of course) than we have seen in the ‘Global South’. That they continue to lack (or seem to, anyway) the know-how to intervene meaningfully in places like India – is a choice.
Is Laughter the best medicine?
There was one quote from the last article that also struck me, and it is linked to the ‘killjoy’ spree I have been on for the last few editions (emphasis added).
Whitney Phillips, an assistant professor at Syracuse University who studies online misinformation, published a report in 2018 documenting how journalists covering misinformation simultaneously perform a vital service and risk exacerbating harmful phenomena. It’s something Phillips, who is white, has been reckoning with personally. “I don’t know if there’s a specific moment that keeps me up at night,” she told me, “but there’s a specific reaction that does. And I would say that’s laughter.” Laughter by others, and laughter of her own.
This goes back to the ‘not just a joke’ aspect I covered in Edition 21 and the utility of parody from Edition 18. Whether we like to admit it or not, humour has been weaponised – and, even harder to admit, we all likely participate in it.