
Building Credibility Indexes from Fact-Checking to #TackleFakeNews

Fake News is suddenly on the boil in the Brussels Bubble. Here are some ideas I brainstormed earlier this week. Comments welcome.

I’ve been invited to three workshops and meetings about fake news over the last six days — somewhat ironic, given that I shifted my attention away from fake news after this December 2016 opus.

(There was widespread agreement back then that the term ‘fake news’ is so poorly defined and abused that it’s actually dangerous. Still, it’s been adopted by the Bubble, so I’ll use it here.)

One of those recent meetings was with EurActiv founder Christophe Leclerq, who had published his ideas on tackling fake news last month, including:

c) Bundling fact-checkers’ inputs: leverage media and NGO initiatives, currently fragmented… Focused fact-checking, but also normal media curation...

- #TackleFakeNews : Co-regulation? Leveraging media fact-checking into platform algorithms

We brainstormed some ideas on top of this starting point, and are sharing them here to collect better ones. So if you have some, share (below).

Starting point

Christophe’s starting point was this: “Credibility Index Providers” (CIPs) can add value to fact-checkers and better help social platforms identify and tackle fake news. There are therefore three players in Figure 1, below:

  • Fact-Checkers: both dedicated fact-checking organisations and other publishers meeting certain criteria (discussed below)
  • Credibility Index Providers: curate fact-checkers’ work to add value and derive machine-readable Credibility Indexes for content and sources
  • the Social Platforms themselves: use CIP(s) to improve their handling of fake news and improve user experience.
[Figure 1: The Basic Architecture of the Credibility Index ecosystem]

A Social Platform (top right) displays a piece of content (“Main Article”) in its feed. It automatically accompanies it with a summary of the piece’s credibility provided by a Credibility Index Provider, which auto-summarises the collective opinion of the Fact-Checkers it curates.

If the user clicks on this summary, they arrive on the Credibility Index Provider’s page dedicated to the Main Article (centre), which:

  • summarises and links through to the more detailed analyses by the individual Fact Checkers it has curated (cf Rotten Tomatoes, metacritic and other review aggregators)
  • provides a discussion space for comparing, contrasting and debating the accuracy of the Fact Checkers’ analyses (cf the home of all Wikipedians: the Discussion Page underpinning each Wikipedia page)

(update: note that the CIP is a better place for these debates than the Social Platform, where debates amplify the news in question — better to quarantine this debate in the CIP, where a minority will congregate, rather than on the much larger Platform).

The user can then dig further, visiting each Fact-Checker’s analysis of the original Main Article (bottom and left, in blue).

The user can then dig even further, as the Fact-Checkers should include links to the Cited Sources they use (bottom and left).

Many of these sources will be cited by more than one Fact-Checker — in the Figure, Cited Source A (bottom) is cited by three Fact-Checkers, and so is also Highlighted directly by the Credibility Index Provider.

Btw, often-Cited Sources could also be highlighted by the Social Platform itself. The above Figure sets out one way they can use the content provided by the Credibility Index Provider; others are explored below.

User experiences

What exactly each Platform displays will vary with the Platform.


Last year, for example, Facebook began putting fact-checker content in the Related Content they displayed, not always successfully (left ;)), alongside news shared in Newsfeed.

This may have been considered better than directly contradicting the content with a flag, which can backfire.

In any case, Facebook’s latest Newsfeed tweaks seem to have removed this as part of their general retreat from distributing news, to howls of outrage from those who bet their farm on their Facebook page rather than learning from the Great 2014 Newsfeed tweak.

The Credibility Index Provider (CIP) needs to simultaneously add further detail to the information displayed on the Platform (above), and add value to the Fact Checkers’ work by:

  • Comparing and contrasting different Fact Checkers’ analyses of the Main Article’s veracity
  • Deriving an overall average and other information — e.g., the degree (or lack) of consensus between them
  • Aggregating all the Fact Checkers’ Cited Sources in one place, thereby demonstrating whether they are using the same or different sources, and/or interpreting them similarly or differently
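To make those derivations concrete, here is a minimal Python sketch of how a CIP might compute the average, the degree of consensus, and the aggregated Cited Sources. The record layout is a hypothetical assumption: each fact-checker’s verdict is a dict with a score between 0 (false) and 1 (true) plus a list of cited source URLs.

```python
from statistics import mean, stdev
from collections import Counter

def summarise(reviews):
    """Combine several fact-checkers' verdicts on one Main Article.

    `reviews` is a list of hypothetical records:
    {"checker": str, "score": float (0=false .. 1=true), "sources": [url, ...]}
    """
    scores = [r["score"] for r in reviews]
    cited = Counter(url for r in reviews for url in r["sources"])
    return {
        "average": mean(scores),
        # Low spread = strong consensus; stdev needs at least two scores.
        "spread": stdev(scores) if len(scores) > 1 else 0.0,
        "most_cited": cited.most_common(3),
    }
```

The exact schema and score scale would of course be fixed by the technical standard discussed further down, not invented per-CIP.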

This adds an extra level of transparency to the fact-checkers’ work, and hence to the fact-checking information published by the Platform.

The CIP can also provide a multilingual bridge between fact-checkers working in different languages.

It allows the Platforms to access increasing volumes of fact-checking, as it will reward high-quality fact-checking work by all media (see below).

Adding a community discussion to this layer, as mentioned above, is a nice to have — this debate has to go somewhere, and the CIP seems the right level as it’s where users can easily compare the Fact-Checkers’ analyses, sources, etc.

Advantages

There are several reasons we see merit in this approach:

The approach allows Social Platforms to do more than ‘flag’ an article as suspect. Instead, they could display:

  • the average score of all curated fact-checkers, rather than just one
  • the strength of the consensus: are all fact-checkers’ scores close to the average, or is there great divergence? An average, after all, tells you that you’re comfortably warm — it might be worth knowing that your head’s in the fridge and your feet are in the oven
  • the most Cited Source by all fact-checkers
  • the most Cited Source which supports the Main Article, and the most Cited Source which disputes it
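The last two bullets could be derived mechanically. A sketch in Python, again assuming hypothetical review records carrying a 0-to-1 score and a list of cited source URLs (the 0.5 support threshold is an illustrative guess):

```python
from collections import Counter

def top_sources(reviews, threshold=0.5):
    """Most-cited source overall, plus the most-cited source among
    fact-checkers who support vs dispute the Main Article.
    Field names and the 0-1 score scale are illustrative assumptions."""
    overall, supporting, disputing = Counter(), Counter(), Counter()
    for review in reviews:
        bucket = supporting if review["score"] >= threshold else disputing
        for url in review["sources"]:
            overall[url] += 1
            bucket[url] += 1

    def top(counter):
        return counter.most_common(1)[0][0] if counter else None

    return top(overall), top(supporting), top(disputing)
```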


Crucially, they can try all of these ideas out if they want to. The Credibility Index Provider gives them a feed they can experiment with. They may even find that different approaches work in different contexts.

This approach allows fact-checkers to inform the Platform’s users without actually affecting (censoring) what the Platform displays or displaying a ‘flag’ over contested content, which can actually validate fake news in some minds.

Instead, the user sees a quick summary of what an external Index Provider has to say about the article’s credibility. They can then investigate further themselves — they can click through, study various fact-checkers’ views, contrast and compare, discuss...

This brings the benefits of fact-checking while providing transparency. It might be a good way of restoring trust.

And if a ‘War of the Credibility Indexes’ emerges (see below), this approach allows the Platform to legitimately claim that they are providing ample information with which users can make their own judgement.

As set out below, Fact-checkers will need to meet quality standards, and publish content according to technical standards, if they are to be curated by a Credibility Index Provider. Why should they bother?

Our working hypothesis was simply that being curated in this way multiplies their impact and provides their content with greater visibility.


But this approach could also allow a wider variety of organisations to be treated as Fact-checkers, and encourage higher-quality journalism.

A publisher simply has to meet the quality standards to be curated by a Credibility Index Provider. An existing media publisher could thus choose to publish some or all of their content according to the standard, have it curated by a CIP, and gain audience. This would encourage and reward high-quality media, and increase diversity in fact-checking content online.

It would also reward the Cited Sources themselves in roughly the same way.

When: Rapid Reaction Fact-Checking

Figure 1 assumes that the Social Platform looks to the Credibility Index Provider for content on “Main Article” when it is published into the newsfeed. If it finds something, it includes it in Related Content. If not, bad luck.


We can do better. Fact-checkers’ resources are far from infinite, and it’s important to fact-check something as soon as it takes off. So in Figure 2, it is the Social Platform which brings content to the Fact-Checker’s attention:

[Figure 2]

As soon as a piece of content which has not been fact-checked meets certain criteria (virality, controversy, virulent comments…), the Platform pushes it to the Credibility Index Provider’s “Marketplace”, which in turn alerts participating Fact-Checkers.
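The push trigger could start as a handful of simple thresholds. In this Python sketch every field name and number is an illustrative guess on my part, not a known Platform signal:

```python
def needs_fact_check(item, already_checked):
    """Decide whether the Platform should push an item to the CIP's
    fact-checking queue. Fields and thresholds are purely illustrative."""
    if item["id"] in already_checked:
        return False
    viral = item["shares_per_hour"] > 500          # taking off
    heated = (item["report_count"] > 20            # controversy
              or item["comment_sentiment"] < -0.5)  # virulent comments
    return viral or heated
```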

(Btw, since Monday I’ve realised we need a better name than Marketplace, as I assume the Fact Checkers won’t be ‘bidding’, in competition with one another.)

Some thoughts on the ecosystem

Further notes from our conversation.

Returning to Christophe’s original post, co-regulation could require platforms of a certain size to use a Credibility Index Provider. That implies that several Credibility Index Providers should exist — no one wants a state-imposed monopoly dictating Truth. Well, I don’t, anyway.

That seems to imply a mandated organisation for developing and imposing some sort of quality standard on Credibility Index Providers, and thus in turn on fact-checkers. These standards should reflect best practices in fact-checking (transparency, citing sources, etc.).

That in turn implies auditing agencies, checking compliance with the standard, in an analogous fashion to ISO certification.

War of the Credibility Indexes

This could result in a War of the Credibility Indexes (“Get your Fact-checking at Fox & Friends!”; “Fact-checking, brought to you by GreenPeace!”), although Platforms will probably only choose one CIP each. They’ll need to choose one transparently to avoid accusations of kowtowing to the State’s preferred version of reality.

Finally, a technical standard would be needed to ensure content flows from fact-checkers to Credibility Index Providers (I see no reason why a Fact-Checker cannot be curated by several, but we’ll see).

This technical standard should reflect the above quality standards, and hence in turn the best practices underpinning those quality standards. It should not be a significant technical challenge.
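For illustration only: a record flowing from Fact-Checker to CIP might look like this Python dict, loosely modelled on schema.org’s existing ClaimReview vocabulary (the `citations` field is my own hypothetical addition, not part of ClaimReview, and all URLs are invented):

```python
# A minimal, hypothetical record a Fact-Checker might publish for a CIP
# to ingest, loosely modelled on schema.org's ClaimReview vocabulary.
fact_check = {
    "@type": "ClaimReview",
    "url": "https://factchecker.example/review/123",
    "claimReviewed": "Claim made by the Main Article",
    "itemReviewed": {"url": "https://news.example/main-article"},
    "reviewRating": {"ratingValue": 2, "worstRating": 1, "bestRating": 5},
    # Hypothetical extension: the Cited Sources the CIP aggregates.
    "citations": ["https://source-a.example", "https://source-b.example"],
}
```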

That’s as far as we got. If it is to go any further, it needs to be developed with the Platforms and fact-checkers themselves.

So we welcome ideas, opinions and relevant observations. Please Respond or connect (below) to contribute some. And if you found this interesting, your Applause will help others find it.

Browse previous editions

My Hub curates 100s of annotated resources about fake news, Facebook, social media, filter bubbles and more. Subscribe to get my next posts, plus the best of all the Stuff I Curate, in your Inbox, or get just the Highlights via my CuratorBot. It can also put us in touch, or we can connect via my ‘orrible brochureware site.

Written by

Piloting innovative online communications since 1995. Editor: medium.com/Knowledge4Policy. Founder: MyHub.ai. Personal Hub: https://myhub.ai/@mathewlowry/