Leaving X and learning Bluesky: Ask Me Anything (newsletter)
I write to figure things out and get feedback on what I find. I’d love your feedback on my recent posts on leaving X and adopting Bluesky and the wider ATmosphere, and I’ll give you my time to get it.
This is a repost of my latest newsletter. Browse all and subscribe.
A matching pair
Last month I published a matching pair of posts. I’d like your feedback.
I started work on the first after asking a few people and organisations around me why they persist in maintaining their presence on X, given how clearly antagonistic it is towards Europe, democracy and more. I heard some surprisingly good answers, so instead of a “You should quit X” polemic I wrote a cost-benefit framework to help me figure this out:
But I’m pretty sure I don’t have everyone’s perspective, so if you run an X account (your employer’s or your own), let’s talk*:
- What’s your attitude towards the Costs of staying on X, described in the post? Are you ignoring or agonising over them? Did I miss any?
- What about the Benefits: how’s your reach evolving these days, for example, and how effective do you think your content on X is? Any other benefits?
(*) see postscript for how
What do you think of the alternatives?
If you do want to lead your followers away from X, where will you take them? The second post was adapted from an internal article I was asked to write for some EC communications teams on Bluesky and the ATmosphere:
I’m building a slidedeck on Bluesky & the ATmosphere
I’ve now been asked to present these opportunities, so I’m building a slidedeck. Although my Hub overview of Bluesky and the ATmosphere now summarises and links to over 40 resources, it’s still just my perspective.
I need other perspectives to ensure my slidedeck reflects my audiences’ concerns, so if you have any questions about Bluesky, get in touch (see postscript).
The Best Ofs, February 2025
My favourite piece last month was from Sari Azout:
I’ve mentioned before that most content about AI is either ridiculously positive or soul-crushingly negative. I flip between the two like a hyperactive light switch, so I liked Sari’s journey past that useless, binary paradigm:
“At first, I saw using AI as a binary choice between soulless efficiency or becoming a luddite and preserving your authentic human creativity”, she writes. But Claude changed that. “Curiosity is a better compass than cynicism… [stop] defending your territory [and] explore what’s possible.”
The key point:
“AI is powerful but taste-blind. It can make anything but it has no idea what’s actually worth making… A curated personal knowledge base has never mattered more”
So AI + your taste = game-changer. But how do you tell AI what interests you? “A curated personal knowledge base has never mattered more”. Obviously, Sari suggests Sublime.app, the curation tool she founded (tag: #sublime), so I’ll do the same and point out that a Hub actually captures much more information.
But the tool is less important than the attitude: “pay close attention to what moves you… don’t let algorithms decide what deserves your attention…[and] stop mindlessly consuming the internet and start mindfully curating it”.
A close second: Being Glue
Good career advice, and a great exploration of glue work: “the difference between a project that succeeds and one that fails”.
Almost everyone should read this:
- just getting started in your career? Read this to know how often you’ll get glue work, which is as important as it is less-promotable: doing it will save the project, but at your expense
- senior in an organisation? Read this to understand and reward the less glamorous, less-promotable work that needs to happen to make a team successful
And if you’re a woman, read this, because:
- “when there is non-promotable work to be done, women volunteer to do it 48% more often than men…
- men volunteered less …[because] they knew a woman would volunteer…
- managers … asked women 44% more than they asked men”.
Ouch.
Also good
- From COBOL to chaos: Elon Musk, DOGE, and the Evil Housekeeper Problem: using the “Evil Housekeeper Problem” principle of computer security to explore the implications of “the wrecking ball that is Elon Musk and President Trump’s DOGE”;
- The Impact of Generative AI on Critical Thinking (pdf): difficult to summarise this 23-page scientific paper, so from my notes: “higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more… by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement”, so they’re less prepared for those exceptions;
- Deep Research, Deep Bullshit, and the potential (model) collapse of science, which points out that the wave of AI-generated scientific papers expected to pass peer review will corrupt scientific literature, “choking the peer review process and saturating the market with hard-to-find, hard-to-rectify errors”.
PS Comment, follow and get in touch:
Ask me anything about Bluesky and eXit strategies. I’ll try to answer at least some of your questions, and in the process better understand what organisations need to know. Book a moment via Calendly or, if you prefer something more public, join the discussions of the above posts: eXit Strategy (Bluesky, LinkedIn); 3things (Bluesky, LinkedIn).
More generally, I create and curate onto my Hub, including my newsletter (subscribe). Right now I’m exploring how Bluesky and the wider ATmosphere could support the development of decentralised collective intelligence, so starting points include the zettelkasten Overview I maintain on Bluesky and the ATmosphere.
I also have a couple of courses which take work I’ve done in the past and abstract it into a framework designed to help you develop systems and strategies specific to your needs: