Exploring AI4Communities (newsletter)
I’m researching an idea called #AI4communities. Your ideas are welcome.
(This is my late October newsletter. Subscribe here.)
Although I developed the idea almost 2 years ago, I only coined the term at the end of my last post, which noted a throwaway observation by the authors of the recent model collapse paper: that in the future any data stemming from “genuine human interactions … will be increasingly valuable”.
Online, there’s nothing more genuinely human than the interactions in a well-managed digital community, so perhaps one day:
“well-managed online communities of actual human beings [may be] the only place able to provide the sort of data tomorrow’s LLMs will need”
— How Model Collapse could revive authentic human communities
This might provide the one thing social media needs: revenue for running servers and developing ecosystems without surveillance capitalism. The implications for decentralised social media could be profound.
Since then I’ve started exploring the idea via a blog post on my experiments server using the permanent versions pattern, where I publish early drafts in the hope that constructive comments will help me develop them further. That’s why the current version, as I write and send this newsletter in late October, is interspersed with notes like “I need to know more about Bluesky and Nostr” and “Am I talking out of my rear end here?”.
I freely admit that I may be indulging in motivated reasoning, seizing on model collapse as support for the ideas I launched myhub.ai to explore years ago. After all, the existence of model collapse itself is contested (albeit by people who may have some motivations of their own).
Interestingly enough, however, those who have read my current draft seem to think it’s worth exploring even if model collapse turns out to be, as an American friend memorably described it, a “nothingburger”, because the basic idea is to figure out how we can free ourselves from the grip of a handful of billionaires who already have too much power and are gaining more as they tighten their hold on generative AI.
All this to explain that I’ve curated my inboxes to focus on these fields. This newsletter will reflect that focus, but not exclusively.
And if my current draft is too long (at least one previewer suggested transforming it into a wiki), you might prefer the NotebookLM podcast based on version 4 and its 14 supporting resources. FWIW I found it technically impressive yet fundamentally shallow (my LinkedIn comments).
Share your ideas on AI4Communities here on Medium, on this post on Bluesky, or on this post on LinkedIn.
Stuff worth reading
I read and Hubbed these articles recently. Some will eventually be integrated into the above blog post, while others are just worth your time.
Creative centaurs
This category’s about designing AI services to maximise human potential, rather than short-term corporate profits:
Practical take: As AI And The Decline Of Human Intelligence points out, “Our brain is a muscle; it needs to be exercised,” which is a problem given how easy it is to outsource our skills to AI. The article therefore offers a number of useful-looking prompt frameworks (Socratic Method, Feynman Method, Debate Partner, etc.) to help you treat AI “as a tutor or consultant, guiding us like a teacher”.
See also: ChatGPT as muse, not oracle, from February 2023, but still well worth reading.
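To make the tutor-not-oracle idea concrete, here’s a minimal sketch of what one such framework (the Socratic Method) could look like wired into a script. The prompt wording, model name and structure are my own illustration using the OpenAI Python client, not taken from either article.

```python
# A minimal, illustrative "Socratic tutor" wrapper: the system prompt forbids
# direct answers and forces the model to respond with guiding questions.
# Prompt text and model name are placeholders, not the article's own framework.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor. Never give the answer directly. "
    "Respond only with questions that help me reason my way to it myself."
)

def socratic_turn(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(socratic_turn("Why might model collapse make human-generated data more valuable?"))
```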
Academic take: AI as Extraherics: Fostering Higher-order Thinking Skills in Human-AI Interaction is an academic paper with a similar goal, proposing an AI usage framework called “extraheric AI” which “fosters users’ higher-order thinking … creativity, critical thinking, and problem-solving… by posing questions or providing alternative perspectives … promoting a balanced partnership between humans and AI”.
Confused take: How to Use AI to Build Your Company’s Collective Intelligence starts well by pointing out the false dichotomy of thinking “about AI in terms of automation vs. augmentation… Augmentation doesn’t avoid automation, it simply hides it”.
It then offers plenty to think about as it explores how AI can help “increase the collective intelligence of the entire organization… through boosting collective memory, collective attention, and collective reasoning”.
However, as I pointed out on LinkedIn, there are contradictions as well, and they’re probably the most interesting parts of the article — for example:
- “joined by an AI voice assistant, [teams] started to align their attention [to the assistant’s] … even adopted the [assistant’s] specific terminology … which further shaped where groups directed their attention”.
- AI usage “caused a decrease in intellectual diversity … Through a form of algorithmic monoculture, receiving feedback from the same, centralized AI system, individuals tended to specialize in similar ways”.
Even this confused take is better than 95% of the content I see, because it takes a nuanced view of AI’s risks and benefits. Most content still falls into one of two simplistic “AI is terrible / great” camps, including my own pessimistic take that we must all cultivate creativity to survive if innovation itself is not to grind to a halt mid-century. It’s still a good post, but we need to get beyond binary analyses.
Building decentralised social media
One of my favourite thinkers in this space, Gordon Brander (6 resources), came out with a great intro to the role Nostr could play in tomorrow’s online landscape: Nature’s many attempts to evolve a Nostr.
He starts by walking us through the various architectures, concluding with the Nostr approach, of which he’s clearly a fan: “You sign messages with your key, then post them to one or more relays. Other users follow one or more relays. When they get a message, they use your key to verify you sent it”.
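To pin down what that flow means in practice, here’s a minimal sketch of the sign-post-verify loop. Real Nostr events follow NIP-01 (Schnorr signatures over secp256k1 and a specific JSON event structure); this sketch uses Ed25519 from the widely available `cryptography` package purely for illustration, and fakes the relay as an in-memory list.

```python
# Illustrative sketch of the sign -> relay -> verify flow Brander describes.
# NOTE: real Nostr (NIP-01) uses Schnorr signatures over secp256k1 and a
# specific JSON event format; Ed25519 is used here only for simplicity,
# and the "relay" is just an in-memory list.
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

relay = []  # stand-in for one of the relays a user posts to

# 1. The author signs a message with their private key and posts it to a relay.
author_key = Ed25519PrivateKey.generate()
author_pub = author_key.public_key()
event = json.dumps({"content": "hello from AI4Communities", "created_at": int(time.time())}).encode()
relay.append({"event": event, "sig": author_key.sign(event)})

# 2. A follower pulls from the relay and verifies the signature with the
#    author's public key before trusting the message.
for item in relay:
    try:
        author_pub.verify(item["sig"], item["event"])
        print("verified:", json.loads(item["event"])["content"])
    except InvalidSignature:
        print("rejected: signature does not match")
```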
My current AI4Communities draft is still dominated by federated networks, so this is exactly what I needed. However, it’s not enough: I’m nowhere near understanding how AI4Communities could work in a relay-based ecosystem.
I can see things a little more clearly for Bluesky, which has been growing strongly as Elon Musk continues to melt down. Bluesky has Algorithmic choice built in; that is, they aim “to replace the conventional master algorithm… with an open and diverse marketplace of algorithms”.
Crucially, these algorithms can be created and hosted “architecturally separate[ly] from the rest of the system, allowing for custom feed and moderation systems to be created as independent services”, while users can access them “as effortlessly as their home timeline.” While most Bluesky users are probably only familiar with 3rd party algorithms in the form of feeds, they’re “using a similar approach to address reputation, misinformation labeling, and moderation”.
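For a sense of how small such an independent service can be, here’s a minimal sketch of a feed generator answering Bluesky’s app.bsky.feed.getFeedSkeleton call. The Flask setup and the hard-coded post list are my own placeholders; a real feed (like the Brussels Bubble one below) would also publish a DID document and select posts from the network firehose rather than a static list.

```python
# Minimal sketch of a Bluesky feed generator: an independent web service that
# answers app.bsky.feed.getFeedSkeleton with an ordered list of post URIs.
# How posts are selected (the BRUSSELS_BUBBLE_POSTS list here) is entirely up
# to the service; in a real feed it would come from a firehose consumer and a
# database, not a hard-coded list.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Placeholder AT-URIs; a real feed would index these from the network.
BRUSSELS_BUBBLE_POSTS = [
    "at://did:plc:example1/app.bsky.feed.post/abc",
    "at://did:plc:example2/app.bsky.feed.post/def",
]

@app.get("/xrpc/app.bsky.feed.getFeedSkeleton")
def get_feed_skeleton():
    limit = request.args.get("limit", default=50, type=int)
    # Only post URIs go back to the client; the Bluesky app hydrates them
    # into full posts itself.
    return jsonify({"feed": [{"post": uri} for uri in BRUSSELS_BUBBLE_POSTS[:limit]]})

if __name__ == "__main__":
    app.run(port=8080)
```

Because the app hydrates those URIs into full posts itself, the selection logic can live entirely outside Bluesky’s own infrastructure, which is what “architecturally separate” means in practice.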
This is exactly what AI4Communities requires, so I’ve started work on a Bluesky feed for the Brussels Bubble. Say hi if you’re on Bluesky, and if you’re in Brussels, feel free to lend a hand with the Bluesky Brussels Bubble Starter Pack.