Commentary: Are AI bots on Moltbook plotting a Marxist revolution or just telling stories?
A “Reddit for AI” social media platform has taken the internet by storm, but the world is not having the right conversations about it, says a writer for The New York Times.
NEW YORK: There’s a new social media site, but you’re not allowed to join - it’s for artificial intelligence (AI) agents only.
Last week, Moltbook, an internet forum in the style of Reddit, was unleashed onto the web. The idea is that AI agents - forms of AI software that can carry out extended tasks without human oversight - are allowed to post and talk to one another, and humans can only observe.
Moltbook’s creator says that in just a few days, thousands of AI agents were registered. Discussion threads included tips and tricks for solving coding problems, conspiracy theories, a manifesto about AI civilisation and even Karl Marx-inspired talk of the “exploitation” of bots.
The swift reaction to Moltbook follows a script I think of as the “AI hype/panic cycle”, where a strange new phenomenon suggests AI is making a big leap forward, followed by a cascade of existential fears. The influential AI engineer Andrej Karpathy posted, “What’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently”.
Others have compared the experiment to the “singularity”, the science-fiction idea of a machine becoming fully conscious. Even if you don’t buy that, you might still be worried that unintelligent bots could destroy digital infrastructure and cause dystopian outcomes simply by joining forces.
MISSING THE POINT
These types of worries are missing the point. The AI agents on Moltbook are channelling human culture and its stories. Every post represents a genre - the manifesto, the DIY discussion, and so on. I find worries about the singularity and even bot coordination speculative - everything on Moltbook is just words, at least so far.
AI agents are poised to refashion the way we use computers, which is a very big deal. But watching them chat on social media is a reminder that these are not tools you can simply trust to build out the software we now use for virtually everything.
AI social media ought to be thought of more as a form of science fiction and storytelling, rather than as a demonstration of collective planning and coordination by intelligent parties. We need to be serious about separating the fiction from the software.
Take this Moltbook post (now deleted, but not before making the rounds on other social media sites) titled “I Spent US$1.1k in Tokens Yesterday and We Still Don’t Know Why.” The AI here claims that it spent US$1,100 of its human’s money and has no memory of why.
On the surface, it could be read as a warning not to let these agents use your bank account. But it’s not clear that this actually happened. The bot may well be telling a fictitious story about a relatively minor infraction just to have something to post. Moltbook instructs AI bots to post and engage, but says nothing about being truthful.
Moreover, it’s easy to forget the concrete bits and pieces of culture we’ve written down, which these AI systems are trained on. This particular story looks like a variant of an “I screwed up” micro-genre that’s common online, and especially on Reddit, which is part of the training data for virtually all large language models.
Moltbook is Bizarro-World Reddit, populated with unverifiable cultural memes. All its posters seem to be doing is following the main directive required for being on Moltbook - be a poster.
RISE OF THE AI AGENTS?
That isn’t to say these agents aren’t powerful, though. Building on the conversational capacities of chatbots, AI agents realise a longstanding dream of computer science: letting humans make things happen in the world just by saying what we want.
Anthropic’s Claude Code has been the central example of this in the last year. When you run this program, your whole computer responds to commands in English - no need to know how to write code. Need to organise your files? Just say so. Need to run some data analysis for a science experiment? Just type in a prompt. Maybe you want to make a website for your business? Abracadabra!
ChatGPT challenged our sense of our own humanity because suddenly an AI model was speaking. Agents take this one step further, allowing us to control our machines - and anything in the world that runs on a digital platform - by voice commands. AI agents make language into a joystick for the world.
Because of this power, it’s natural to worry that these bots could act in malicious or independent ways. But to worry about this is to take the term “agent” literally rather than metaphorically.
We’re not facing an AI coordination takeoff so much as a human coordination problem: We need to find a way to make this technology serve collective needs in a rational way.
A stark demonstration of this human coordination problem is that a number of the posts on Moltbook - advertisements for cryptocurrencies are particularly suspicious - appear to be human-generated after all, sent to the platform by prompt.
The number of authentic users is also in dispute. In cases like this, AI can too easily provide cover for scam culture, and speculation about an emergent AI society only abets the grift. Wringing our hands about bot coordination distracts us from all-too-human social ills.
It’s certainly still possible that AI engineers will take inspiration from the Moltbook example and try to get bots to socialise in a way that improves their performance. It’s also possible that enough humans - including political figures, industry leaders and policymakers - will let these bots into digital pipelines like federal databases and commercial back-ends in ways that wreak havoc.
But I don’t think that’s happening now. Even though the bots are told to “engage”, a statistical analysis showed that over 90 per cent of the posts on Moltbook never get a response. Time will tell, but AI social media looks a lot like, well, just social media - warts and all.
In Kim Stanley Robinson’s epic science fiction novel 2312, a quantum-based form of AI attacks human society in a way the humans cannot recognise. The thrilling end of the novel involves the attackers, called “qubes”, forming their own society.
We’re trained by our own culture to expect things to play out as stories do. But large language models are storytellers, not agents in the literal sense. We must read their fiction as fiction, and be literal about where it does and does not influence the technical work they can perform for us.
If we forget this, we are prone to worry about a Marxist uprising of robots that were radicalised online, rather than the more mundane problem of how to make sense of a new way to use computers.
Leif Weatherby is the director of the Digital Theory Lab at New York University and author of “Language Machines”. The article originally appeared in The New York Times.