Trust Me (image via Wikimedia)

Should we put machines we don’t understand in charge of public policy?

Mathew Lowry
5 min read · Sep 26, 2017

AI tools are increasingly being used to support public participation in policy. While they offer a lot, they could invisibly skew policy if used carelessly.

I recently analysed the “Future of Europe” participation process, which included a large Facebook component: instead of using a Facebook poll, users ‘voted’ for one of five future EU scenarios using Facebook Reactions.

Clever social media marketing: each Reaction boosted the post’s reach, reducing campaign costs.

While the Commission also used other channels, the Facebook component was potentially very influential, as the results would have been an easy-to-process pie chart of “what Europeans want for the future of Europe”.

In any case, the EC ignored the results and adopted a sixth scenario.

Facebook’s algorithm decided who could vote on the “Future of Europe”

But what if Juncker had sought insights from that pie chart? Would he have known that Facebook’s algorithm determined who got a chance to vote? And if so, how would one use the data? Without detailed knowledge of their algorithm, one simply cannot compensate for Facebook’s influence.

In my post, I suggested a number of ‘success factors’ for public participation in EU policy. One of them is resource credibility — “participants can see sufficient resources in place to process their contributions”.

The “Future of Europe” Facebook tactic scores highly here — it doesn’t take a large team to buy Facebook advertising and create a pie chart, as long as you ignore the comments.

The problem lies in the use of an algorithm which policymakers cannot understand — in this case, Facebook’s algorithm, the very definition of inscrutability.

And Facebook’s is not the only inscrutable algorithm influencing public participation in policy.

Is AI the answer? Ask DORIS

The importance of ‘resource credibility’ to a participation process reflects the fact that it is very resource-intensive to process thousands of participants’ textual contributions. Why would anyone spend their time contributing something valuable and useful to a participation process when they know there are simply not enough people in place to read and use it?

In preparing my workshop, however, I heard of DORIS, an IT tool designed to solve this problem, which the EC is set to use more widely in future.

I asked whether DORIS was used for the Future of Europe process. No answer yet.

DG CNECT’s Data Oriented Services team developed DORIS to help process stakeholder contributions to EU Digital Single Market policy.

It uses Natural Language Processing (NLP) to extract and visualise knowledge from the masses of content received.
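
There’s no public detail on how DORIS does this, but to give a flavour of the kind of processing involved, here’s a minimal sketch in Python (my own illustration, not DORIS’s actual code, using invented example contributions): it clusters contributions by textual similarity and surfaces the top terms of each cluster as a “theme”.

```python
# Illustration only: DORIS's internals aren't public. This sketches a
# generic NLP technique: cluster contributions by TF-IDF similarity,
# then surface the top terms of each cluster as a "theme".
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

contributions = [
    "The Digital Single Market should harmonise VAT rules for SMEs.",
    "Cross-border parcel delivery costs are too high for small sellers.",
    "Geo-blocking stops me buying services from other member states.",
    "VAT compliance is the biggest barrier for small online businesses.",
]

# Represent each contribution as a weighted bag of words
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(contributions)

# Group similar contributions into two themes
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Print the highest-weighted terms of each theme
terms = vectorizer.get_feature_names_out()
for c in range(2):
    top = kmeans.cluster_centers_[c].argsort()[::-1][:4]
    print(f"Theme {c}:", ", ".join(terms[i] for i in top))
```

Real tools presumably go much further (entity extraction, sentiment, multiple languages), but even this simple pipeline makes editorial choices, like which words count as ‘stop words’ and how many themes exist, that shape what policymakers see.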

DORIS is now being made available across the EC, in much the same way the EC adopted CNECT’s Newsroom (15 years after it was developed).

Inscrutable AI

It’s a shame there’s no detailed information available on how DORIS works, because machine-learning algorithms can be as inscrutable as Facebook’s.

Most NLP software is built around machine-learning algorithms which are ‘trained’ with existing data. Many — not all — are termed inscrutable because it is practically impossible to trace how they reach their conclusions.

Which is problematic, because their conclusions can be invisibly influenced by the very human prejudices AI is supposed to avoid. It all depends on the data used to train them. And their inscrutability makes spotting this influence almost impossible.
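
To make this concrete, here’s a toy sketch (entirely synthetic data, nothing to do with DORIS): a model trained on past decisions that were skewed against one group quietly learns to reproduce the skew, and nothing in its output flags the problem.

```python
# Toy demonstration with synthetic data: a model trained on biased
# historical decisions reproduces the bias in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

merit = rng.normal(size=n)            # a genuine, job-relevant score
group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)

# Historical outcomes: merit mattered, but group 1 was systematically
# marked down. That human prejudice is now hidden in the training data.
past_decision = merit - 1.5 * group + rng.normal(scale=0.5, size=n) > 0

model = LogisticRegression().fit(np.column_stack([merit, group]), past_decision)

# Two equally meritorious candidates, one from each group
print(model.predict_proba([[1.0, 0]])[0, 1])  # group 0: high probability
print(model.predict_proba([[1.0, 1]])[0, 1])  # group 1: much lower
```

Note that the model never sees a rule saying “mark down group 1”; it infers it from the data, which is exactly why the skew is so hard to spot from outside.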

To understand why, check out the resources tagged inscrutable on my Hub. Alternatively, listen to this HBR podcast:

I covered this podcast in my latest newsletter, from which this is extracted:

“Weapons of Math Destruction sets out the serious legal and ethical problems posed by the machine-learning AI algorithms increasingly used by companies, governments and other authorities. These algorithms are usually secret … the entire process is not accountable… they are usually unfair and often illegal.

What makes this worse is that they’re also often wrong… example: Fox News, where women were systematically prevented from succeeding. A machine learning algorithm using Fox’s data to predict which job applicants would succeed would filter out the women…

O’Neil advocates that companies bring in specialised algorithm auditors, and concludes: “every data science institute…must take ethics seriously... regulations that already exist around anti-discrimination law, disparate impact and fair hiring practices have to be enforced in the realm of big data”
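
To make O’Neil’s auditing idea concrete: one standard check, sketched below with invented numbers, is the US “four-fifths rule” for disparate impact. If one group’s selection rate falls below 80% of another’s, the algorithm’s output deserves scrutiny.

```python
# Minimal disparate-impact check (invented numbers, for illustration).
# The "four-fifths rule": a selection-rate ratio below 0.8 is a red flag.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rate_men = selection_rate(selected=120, applicants=400)    # 0.30
rate_women = selection_rate(selected=45, applicants=300)   # 0.15

ratio = rate_women / rate_men                              # 0.50
print(f"Impact ratio: {ratio:.2f}")
print("Red flag: investigate" if ratio < 0.8 else "Passes four-fifths rule")
```

The point is that an auditor with access to an algorithm’s outputs can run this kind of check even when the algorithm itself is inscrutable.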

From company to government

O’Neil’s focus was on companies, which increasingly use AI to support Human Resources processes. But applying these tools to public participation opens up many other, far more ironic possibilities.

Imagine, for example, that a government body responsible for fighting discrimination runs a public participation process… and the AI algorithms it uses to process contributions contain hidden biases which lead it to water down its anti-discrimination laws.

Of course, I have no idea whether DORIS is inscrutable, or whether the data (if any) it uses is problematic. Maybe DORIS has been audited by the sort of specialists Cathy O’Neil suggests (note: she’s launched a company to do just that). Perhaps all DORIS’ programmers took ethics classes.

But another success factor for public participation is transparency, so maybe the EC should address these issues publicly.

And before you use an AI to process your participants’ contributions, I’d ask it some questions if I were you.

Any other interesting tools like DORIS out there?

Connect to contribute

All four of us involved in the EWRC workshop would be very happy to integrate your case studies and other ideas. The easiest way is to Respond to this post. No Medium account? Other options here.

If you found this interesting, your Applause will help others find it

Browse previous editions

You can find resources tagged #participation, #AI and #inscrutable on my Hub. Subscribe (left) to get the best of the stuff I curate in your Inbox, or get just the High3lights from myCuratorBot. He can also put us in touch, or we can connect here.

Written by Mathew Lowry

Piloting innovative online communications since 1995. Editor: Knowledge4Policy. Founder: MyHub.ai. Personal Hub: https://myhub.ai/@mathewlowry/