For the 6th episode of his Futurized podcast, Trond Undheim asked me why surveillance capitalism inevitably leads to polarised, undemocratic and dysfunctional societies, and what we must do about it.
To prepare I first pilot-created a Zettelkasten page on my Hub centred on #surveillance & #polarisation. This yielded 26 relevant articles (and then many more, as I played with tags), along with the notes I made on each as I Hubbed them. I added thoughts on the intersection of those tags to the Zettelkasten page as I read, so all resources underpinning this post (now and in future) are found there.
Trond and I had an excellent chat (watch it if you must), starting with Shoshana Zuboff’s definition of surveillance capitalism before moving to the reason I suggested the topic for a future-focused podcast: if we don’t solve surveillance capitalism, we don’t have a future.
Surveillance capitalism leads inexorably to societies so polarized they can barely function as democracies. Regulation will only help if we can find alternative platforms outside the surveillance capitalism paradigm.
If we don’t solve surveillance capitalism…
I’m no anti-capitalist: capitalism is far from perfect, but it beats what’s been tried elsewhere. Surveillance capitalism is simply its latest form, sitting alongside and on top of earlier forms, which are all still with us today.
Like those forms, surveillance capitalism likes to present itself as an unalloyed good which is somehow an inevitable and irresistible outcome of natural law. Both statements are simply false. As Zuboff points out in her excellent book, surveillance capitalism is:
- the result of a deliberately “hands off” regulation strategy, which Big Tech now spends millions on lobbying to maintain;
- profoundly undemocratic and dehumanising.
There’s no point me simply reiterating her arguments here, so please read Zuboff’s book if you disagree or are unsure. Alternatively, trust me.
Optimised for enragement
I wanted to focus Trond’s podcast episode on just one consequence of surveillance capitalism: it polarises societies to the point they can barely function as democracies.
I doubt Zuckerberg et al deliberately set out to design platforms to polarise society. What they did design were platforms to promote engagement, because engagement generates user data — which is to surveillance capitalism what oil, coal and steel are to industrial capitalism: lifeblood.
optimising for engagement results in platforms optimised for enragement
Unfortunately for just about everyone outside the Kremlin’s Internet Research Agency, optimising a platform for engagement results in a platform optimised for outrage. The pillars of surveillance capitalism, in other words, are platforms optimised for enragement. And that polarises society.
That’s a problem. If we can’t agree on the problems we face, or show even the slightest respect for each other or for the democratic elections we hold, we won’t solve anything.
Surveillance capitalism is not the only factor driving polarisation, of course: in the US, broadcast media and talkback radio have also played an important part, as has tabloid media in the UK.
But TV and newspapers have been around for a long time, while social platforms have only existed for a decade or so. The podcast, remember, was a discussion about the future, so spend a moment on a short mental exercise:
- think back: do you remember what things were like 5 years ago?
- so now extrapolate forward: where will we be in 5 years?
Now add other future trends, like augmented reality. And now factor in the continued decline of independent, affordable-to-all quality journalism, which Trond and I also touched upon in our chat.
Finally, cast around: are there any new forces emerging to prevent further polarisation? Nope. So when Trond asked me how I thought this would end:
“If we don’t change course, in the future we will be less well informed, more polarised, massively manipulated, living in more corrupt and less democratic societies, and unable to solve the challenges we face”
We have two solutions: regulate today’s social platforms, or find alternatives.
Can we regulate it?
I’m not an expert on regulation (although the authors of these 19 resources probably are), but to say the issue is problematic is a massive understatement, touching on everything from freedom of speech to artificial intelligence.
Added to that, coordinating national regulations for global platforms is practically impossible. And all of this is made 1000% more difficult by Big Tech’s massive lobbying spend aimed at preventing regulation in the first place.
regulating surveillance capitalism would represent an existential threat to some of the largest, richest and most powerful companies on the planet. Don’t hold your breath.
At the end of the day, nothing will stand between a big company and its core business model. Surveillance capitalism is not a little extra revenue on the side — regulating it would represent an existential threat to some of the largest, richest and most powerful companies on the planet. While we can expect plenty more PR and band-aid solutions, I wouldn’t hold my breath.
In any case, regulation will have no effect if people don’t see alternatives.
Can we find alternatives?
What would platforms built outside the surveillance capitalism paradigm look like?
Paradigms, by their very nature, tend to blind those inside them to the possibilities lying outside, so the best way to find some answers to the above question is to stop wringing our hands and start building stuff.
Instead of collecting and selling data about people, MyHub.ai collects metadata about content… so we optimise for content discovery, not engagement
MyHub.ai may be a first step in this direction, as its business model means it’s optimised for content discovery, not engagement.
Instead of collecting and selling data about people, MyHub.ai collects metadata about content. This then trains an AI, which is in turn monetised (video, left).
Although we’ll have social features soon (some ideas here), we’ll always focus on improving our users’ information diets, as that trains our AI.
Rather than turning users into dopamine-enslaved engagement zombies, endlessly clicking and liking and scrolling, it makes business sense for us to help our users get the most out of content.
we optimise for content discovery and improving our users’ information diets, as that’s how we train our AI
Which is why MyHub.ai is simultaneously a content/productivity tool and a social/content platform, with (planned) features ranging from ‘follow that user’ to data visualisations that help users explore connections between the different pieces of content they’re reading, writing and sharing.
And that could be really important. New ideas come from spotting connections between existing ideas that no one else has seen. As I remarked to Trond towards the end of our chat, everyone’s reading list is as unique as their DNA — no one on Earth has read exactly the same content as you. So any tool that helps you get more value out of content could help you uncover new ideas that no one else can conceive of.