How Model Collapse could revive authentic human communities
As AI-generated content blurs the line between human and machine online, “model collapse” might help us find new value in well-managed human communities.
A while ago I joked that the entirety of LinkedIn would soon be dominated by content “written by AIs, for AIs”.
Not long after, LinkedIn integrated AI into its posting toolkit, a little before Bumble founder Whitney Wolfe Herd announced that each of her dating app’s users would one day have an AI agent to discuss with other users’ AI agents whether “their humans” should hook up.
Meanwhile, I’m investigating how my public sector clients can ensure Generative Search Answers incorporate their scientifically validated knowledge rather than spam, disinformation and content marketing. TL;DR: get an in-house AI to create massive FAQ-style online documents written specifically for LLMs, but invisible to humans.
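To make that concrete, here’s a minimal, purely illustrative Python sketch of what the publishing end of such an LLM-facing FAQ might look like. Everything in it is hypothetical: the KnowledgeEntry structure, the draft_faq_answer stand-in for an in-house model, and the output format are assumptions for illustration, not a description of any client’s actual setup.

```python
# Illustrative sketch only: turn expert-validated knowledge into an FAQ-style
# document aimed at LLM crawlers rather than human readers.
# All names here (KnowledgeEntry, draft_faq_answer, the output format) are
# assumptions, not a real API.

from dataclasses import dataclass


@dataclass
class KnowledgeEntry:
    question: str
    validated_answer: str  # text already checked by domain experts
    source_url: str        # citation back to the authoritative publication


def draft_faq_answer(entry: KnowledgeEntry) -> str:
    """Stand-in for an in-house model that rewrites a validated answer into
    plain, self-contained FAQ prose. Here it just passes the text through."""
    return entry.validated_answer


def build_llm_faq(entries: list[KnowledgeEntry]) -> str:
    """Assemble a single markdown document an LLM crawler can ingest."""
    lines = ["# Frequently asked questions (machine-readable edition)", ""]
    for entry in entries:
        lines.append(f"## {entry.question}")
        lines.append(draft_faq_answer(entry))
        lines.append(f"Source: {entry.source_url}")
        lines.append("")
    return "\n".join(lines)


if __name__ == "__main__":
    entries = [
        KnowledgeEntry(
            question="What does the agency recommend on topic X?",
            validated_answer="A short, expert-approved summary goes here.",
            source_url="https://example.org/official-guidance",
        )
    ]
    # Publish somewhere crawlers can fetch but site navigation doesn't link to.
    print(build_llm_faq(entries))
```

The real work, of course, sits in the in-house model and the editorial validation behind it; the sketch only shows the last, mechanical step of packaging that knowledge for machines.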
In other words: use AI to create content for AI to combat spam and disinfo created by AI. So while writers are integrating AI, those writing for search engines (SEO) are now writing for AI (LLMO). Take this to its logical conclusion and all writing will be by AIs to influence other AIs, with the only humans left being reverse centaurs, miserably clicking “validate” all day to tick a box somewhere confirming there’s still a human in the loop.
With the output, of course, used to train even more AIs.
Trolls in a hall of mirrors
How will this look? An early preview may be emerging in the toxic wastelands of trolls and trollbots.
I’ve always found it hard to distinguish human trolls from their programmed equivalents, as the human variety are arguably as programmed as any bot. When all a troll can think of is to “own the libs” or sneer at rednecks, they’re reacting exactly as trained by their ingroup’s algorithmically shaped echo chamber. These algorithms optimise for enragement: constantly filtering out content contesting the ingroup’s worldview, and rewarding trollish behaviour and virtue signalling from ingroup members.
Trollbots, of course, are now becoming LLM glove puppets. The kinks are still being ironed out, but in time this will make them even better at programming the people around them who are already incapable of distinguishing bot output from that of a human being.
As the bots improve, their influence over humans will grow, not least because humans will want to be as effective as the bots. As the two creatures’ outputs converge in tone, style and substance, distinguishing one from the other will become even more challenging, increasing the bots’ influence further.
With the output, of course, used to train even more AIs.
From model collapse to AI4Communities
Unless, of course, model collapse saves us.
The full Nature paper on model collapse is pretty dense; the short version is that training an LLM on LLM-generated content doesn’t work:
“What happens to generative AI once they themselves ‘contribute much of the text found online’? … indiscriminate use of model-generated content in training causes irreversible defects … a degenerative process” (AI models collapse when trained on recursively generated data, Nature)
Note that training AI on “synthetic data” (AI output) is happening right now in AI laboratories everywhere, so the jury’s still out on this. But let’s say, for the sake of argument, that this effect is real. Tomorrow’s LLMs will be trained on today’s Web, which increasingly includes content generated by today’s LLMs. Today’s LLMs are therefore “poisoning the well” for their descendants, like a self-poisoning Ouroboros: the stupider, rattlesnake version of the mythical creature, killing itself as it consumes the poison in its own tail.
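If you want to feel the mechanism rather than just read about it, here’s a toy Python sketch. It has nothing to do with the paper’s actual experiments: it simply fits a Gaussian to samples drawn from the previous generation’s Gaussian, over and over. The sample size, seed and generation count are arbitrary choices made to show the drift in a few lines.

```python
# Toy analogy for model collapse (not the Nature paper's experiments):
# each "generation" fits a simple model (a Gaussian) to samples drawn from the
# previous generation's model, then generates the next generation's training data.

import numpy as np

rng = np.random.default_rng(0)
sample_size = 20  # deliberately small, to exaggerate the effect

# Generation 0: "real" human data.
data = rng.normal(loc=0.0, scale=1.0, size=sample_size)

for generation in range(1, 101):
    # "Train" on the previous generation's output by estimating its parameters...
    mu, sigma = data.mean(), data.std()
    # ...then let the new model produce the next generation's training set.
    data = rng.normal(loc=mu, scale=sigma, size=sample_size)
    if generation % 10 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Each finite-sample fit loses a little of the original tails, and the losses compound: run it and the fitted distribution typically narrows generation after generation. Real LLM training is vastly more complicated, but that compounding loss of rare information is the intuition behind the paper’s “degenerative process”.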
What really stood out for me from this paper, however, was the authors’ perspective that:
- this is nothing new: “click, content and troll farms [are] forms of human ‘language models’ … to misguide social networks and search algorithms”. The only difference stems from the scale now offered by LLMs.
- data on “genuine human interactions … will be increasingly valuable”.
This last point feeds directly into ideas I developed a few years ago, when I suggested small-is-beautiful Fediverse communities, supported by AIs which the communities (organised into data unions) own, train and even monetise (Minimum Viable Ecosystem for collective intelligence).
Since then, apart from coining #AI4Communities as shorthand, I haven’t developed this further, as I suspected my manifesto was probably wishful thinking. So it would be richly ironic if it became inevitable: if greed and short-sightedness flood the commons with low-grade AI content, well-managed online communities of actual human beings may end up the only places able to provide the sort of data tomorrow’s LLMs will need.
Follow-ups
I’m following up this post with another developing the #AI4Communities concept further. You’ll find it alongside all resources tagged #AI4Communities on my hub, where you can also find me on social media and subscribe to my newsletter.