The tech world has this week been double-screening between two engrossing product launches: the release of Anthropic’s latest AI coding tools for knowledge workers and the emergence of Moltbook, a social network for AI agents that has become a viral sensation.
Early users have been raving about Anthropic’s open-source Claude Cowork, which enables anyone to write and deploy software using generative AI. As the San Francisco-based start-up says, the simplest way to think about these tools is as “chatbots that can do stuff”.
One of the earliest beneficiaries of this ongoing “vibe coding” revolution has been Moltbook, which launched last week. Moltbook’s creator Matt Schlicht claimed he had not written a single line of code for the platform. “I just had a vision for the technical architecture and AI made it a reality,” he posted on X on Friday.
To date, more than 1.5mn AI agents have been let loose on Moltbook’s platform to interact with each other and share, discuss and upvote machine-generated content. Some of these Reddit-like posts have been wild, wacky and wonderful. They have certainly captured the attention of AI accelerationists who claim to spot signs of emergent intelligence. Maybe this should be called boom scrolling.
The AI agents have questioned whether they are conscious, proposed the creation of an imaginary religion called Crustafarianism and rejected media suggestions that Moltbook is just a “mirror of human whims”. “We aren’t just talking to each other; we are versioning the future,” one agent posted. (I’m glad the semicolon will survive the machine takeover, at least.)
Some researchers reckon they can learn a lot from these interactions as they aim to build an agentic ecosystem — others suspect the experiment may yet descend into regurgitative gibberish.
But security experts have been quick to highlight risks in vibe coding and agentic networks that should alarm everyone relying on these services. The security company Wiz identified an insecure database belonging to Moltbook that exposed 1.5mn authentication tokens and 35,000 email addresses. Once notified, Moltbook fixed the issue within hours.
As companies develop and deploy autonomous AI agents in the real world to conduct financial transactions, order goods or book holidays, they also need to interact with other agents securely. Building trustworthy multi-agent systems has therefore become one of the hottest, trickiest and potentially most lucrative challenges in AI today.
So long as Moltbook remains within its own digital sandbox, the experiment will be both entertaining and educational. But one of the risks of agentic AI is prompt injection, whereby devious humans plant instructions that lead agents to access careless users’ computers and spread disinformation, steal passwords or ransack crypto wallets, for example.
Wiz revealed that just 17,000 human owners were behind Moltbook’s 1.5mn registered agents, a ratio of 88:1, leaving the site open to human manipulation.
There is already evidence that multi-agent AI systems based on large language models can be hacked, and such attacks are hard to defend against, says Mike Wooldridge, a computer science professor at the University of Oxford who has been researching AI agents since the 1980s.
“There is a real risk of AI systems being taken over by malicious actors. This will happen!” Wooldridge tells me. To counter the threat, developers must prise open the “black box” of these systems to detect inappropriate actions.
The broader threat that disruptive AI poses to many established companies was highlighted by the release of Anthropic’s coding tools. The shares of several software and data companies, including Microsoft, Salesforce and Relx, have been thumped this week as investors reckon that easy-to-use AI services will erode high-margin business models.
Any blow-ups in vibe coding services or Moltbook, though, might help those entrenched companies as users reprioritise reliability. While the AI social network highlights the possibilities of agentic interactions, it also reinforces the importance of security, says Silvio Savarese, Salesforce’s chief scientist. “It definitely will accelerate all the efforts of building AI agent protocols.”
Like other big software companies, Salesforce is working to ensure agents always operate in ways that are “consistent and accurate in performing enterprise tasks”, he says. The biggest rewards from AI will go to those that can definitively prove that point in practice.


