From foreign meddling to pink slime (US2020 Disinformation news, ed. 2)

Mathew Lowry
7 min read · Sep 17, 2020

This edition’s 9 articles span the real meaning of “foreign meddling” and domestic flashpoints, social media platform preparations for Election Night and beyond, and how the media must go beyond factchecking as they tackle “pink slime” (yes, it’s a thing).

It’s the second edition (here’s the first), and I’m already tweaking my process: as I Hubbed the 9 Resources tagged #us2020 and #disinformation covered below, I used my own words rather than selecting, copying, pasting & editing. In the process I think I created what Zettelkasten practitioners call Permanent Notes rather than Literature Notes, except that mine are public, appearing as an integral part of the dedicated Overview Page. The links below, of course, go straight to the articles themselves.

Looking in the mirror for the problem

The first article I Hubbed for this edition influenced how I curated most of the rest. I cannot recommend it enough:

Its sobering perspective: Russian trolls may get a Twitter hashtag trending for a few hours, but with just one press conference the President got scores of people to poison themselves with bleach. Who’s the actual danger here?

In the final analysis, ‘foreign meddling’ can only exploit and exacerbate a society’s existing flashpoints, and this election year provided a Perfect Storm Trifecta of Black Lives Matter, COVID-19 and climate crisis wildfires (so far). All three were poorly managed at best, and some were deliberately made worse for electoral gain, in a country where the major sources of disinformation are now domestic (see previous edition and everything tagged #polarisation).

Even “solving” foreign disinformation, in other words, would not solve much: the underlying conditions, the true problem, would remain.

Also echoing the previous edition: the reaction to foreign meddling can be worse than its actual effect. This works in two ways:

  • in much the same way that an immune system over-response does more damage than the virus it’s fighting, the media response to Russian disinformation can magnify it;
  • while relatively few people may actually be swayed by the message content, its very existence can convince people that the truth is unknowable and/or make them paranoid and fearful.

Perhaps we should paraphrase FDR and admit that the only thing we should fear about disinformation is the fear of disinformation.

The above ideas are found in two other articles I Hubbed recently:

Fearing Peace Data

The NYTimes, for example, manages to encompass both the fear of Russian disinformation and the fear of that fear. While documenting how the ‘Peace Data’ operation shows how much more sophisticated Russian operatives have become since 2016, it also points out that their activities promoting the site on Twitter and Facebook were “almost overt, designed to be detected”.

Which is, of course, a great way to instil that sense of paranoia — after all, if the FBI did detect the ‘Peace Data’ operation, what are they not detecting…?

Exploiting BLM

Meanwhile, the Win Black/Pa’lante team profiled in the LA Times (below) is tackling a bot-based campaign designed to divide and suppress the Latino vote. Their ‘war room’ floods cyberspace with its own counter-programming and calls out misinformation without amplifying it:

And while on the topic of exploiting the BLM flashpoint in American society, check out this interview with Color of Change president Rashad Robinson on Twitter’s removal of fake accounts impersonating African American men who had left the Democrats to support Trump over BLM.

It looks like it worked until it didn’t, with some of those accounts’ tweets getting over 10,000 RTs. I love Robinson’s metaphor: echoing Clay Johnson, he frames platform responsibility in terms of information diet and health:

“we would have a lot of poison food on our shelves if we didn’t have infrastructure to hold [food] companies accountable… we get this poisoning messaging from fake people that incites violence, and all the companies do is say ‘we’re trying… judge us by our efforts’… if there was food that was poisoning us we wouldn’t care how hard the companies were trying… they need to be held to a higher standard”

Platform (ir)responsibility

And while on the subject of the role of social media platforms, Mathew Ingram is distinctly unimpressed by Facebook’s efforts to date:

As is POLITICO, with a good overview of Silicon Valley’s ongoing failure:

A new tag: delegitimise

In the same vein, NiemanLab has a damning analysis both of how Facebook, Instagram, YouTube and Twitter handle Covid-19 disinformation and of their election-related disinformation policies:

Apparently fewer than 5% of the 912 posts flagged to these platforms for misinformation were dealt with, while their policies are “filled with gray areas that don’t always make it clear which types of election-related misinformation must be taken down”.

Of particular concern are policies regarding disinformation designed to delegitimise the election results. Although the above analysis finds the platforms’ policies insufficient, this is something the platforms are apparently preparing for:

These are the first two posts tagged #delegitimise on my Hub, but they chime with Trump’s attacks on postal voting, so I doubt they’ll be the last.

In fairness to the platforms, having “to potentially treat the president as a bad actor” is a new challenge. Facebook, in particular, is in a difficult position, given its commitment to not factcheck political ads or politicians’ posts, making any post-election action look like hypocritical censorship (see #disinformation and #censorship).

Journalism, (d)evolving

Beyond Factchecking

The mainstream media have been tackling the “President Bad Actor” challenge since HuffPost put Trump’s election campaign in their entertainment section. Factchecking, in particular, was flavour of the month for a while, but Trump has demonstrated it to be largely ineffective, literally laughing it off:

The essential problems, according to @DanFroomkin:

  • because factcheckers need to establish non-partisan status, they are equally tough on all sides … but all sides don’t lie the same
  • factchecks are read by the converted, not by those who need them, and the latter are unlikely to believe them anyway due to motivated reasoning
  • lies are told 1000 times and factchecked once.

But the article does more than set out the problems — it offers solutions, and I recommend reading it, whether you’re a journalist or a journalism consumer (more on #factchecking and #disinformation).

A deep dive into Pink Slime

Oozing into the local news void left by social media platforms and Google, pink slime — “shadowy, politically backed ‘local news websites’ designed to promote partisan talking points and collect user data” — is multiplying: a network of 450 sites identified in Dec 2019 had grown to 1200 by July:

This startling growth, researched extensively by the Columbia Journalism Review, is:

  • driven by diversification, with single-issue pink slime sites now focusing on everything from religious orientation to business news,
  • enabled by AI, with over 90% of the stories in the largest two networks algorithmically generated using technology created by a single company.

More: #AI and #disinformation.

That’s it for this edition. As always, all resources are tagged #us2020 and #disinformation (or get the RSS) on my Hub, where you can also browse and subscribe to my newsletter and everything else I Like, Think and Do. This newsletter is underpinned by a Zettelkasten Overview as part of my enhanced Personal Content Strategy. I am @mathewlowry on Twitter.

Written by Mathew Lowry

Piloting innovative online communications since 1995. Editor: Knowledge4Policy. Founder: MyHub.ai. Personal Hub: https://myhub.ai/@mathewlowry/
