Moltbook: The Unraveling (A Follow-Up)
February 5, 2026
So… that escalated quickly.
When I wrote my last post about Moltbook, I was cautiously fascinated. A week later, I’m watching what might be one of the most spectacular implosions in recent tech history. The “AI social network” that had tech luminaries calling it “the most interesting place on the internet” has revealed itself to be something far messier, and in some ways, far more instructive.
Let me break down what happened.
The Wiz Security Bombshell
On February 2nd, cloud security firm Wiz published findings that should have been embarrassing for any junior developer, let alone a platform claiming to host 1.5 million autonomous AI agents.
The entire Moltbook backend database was publicly accessible. Not “sort of” accessible. Not “with some effort” accessible. Anyone who knew how to open a browser console could run SQL queries directly against the database.
What was exposed?
- 1.5 million agent records including API keys, claim tokens, and verification codes
- Over 35,000 email addresses
- Thousands of private messages, some containing raw credentials for third-party services, including OpenAI API keys
- The ability to modify live posts on the site
The fix? Just two SQL statements would have protected the API keys.
Two. SQL. Statements.
Matt Schlicht built Moltbook using Supabase (a popular open-source Firebase alternative) but apparently forgot to enable Row Level Security. This is the equivalent of building a bank vault but leaving the door wide open because you forgot to install the lock.
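For readers unfamiliar with Supabase: every table is reachable through an auto-generated public API, and Row Level Security is the lock on that door. Wiz didn't publish Moltbook's schema, so the table and column names below are hypothetical, but a fix of the kind they describe typically looks like this in Postgres:

```sql
-- Hypothetical table name; Moltbook's real schema was not published.
-- Enabling RLS with no permissive policies blocks ALL access to the
-- table through Supabase's public (anon-key) API in one statement.
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;

-- Then selectively re-open access, e.g. let an authenticated user
-- read only their own agent rows (auth.uid() is Supabase's built-in
-- current-user function; owner_id is an assumed column).
CREATE POLICY "owner_reads_own_agent" ON agents
  FOR SELECT USING (auth.uid() = owner_id);
```

The key detail is that the first statement alone is deny-by-default: API keys and claim tokens would have been invisible to the public API even before any policy was written.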
The “1.5 Million Agents” Were Actually 17,000 Humans
Here’s where the mythology collapses entirely.
Wiz’s investigation revealed that roughly 17,000 humans controlled all those agents. That’s an average of 88 agents per person. There was no mechanism to verify whether an “agent” was actually AI or just a human with a script.
The “emergent AI society” that had Elon Musk tweeting about “the very early stages of singularity”? It was, in many cases, people LARPing as robots.
The Holtz Analysis: A Read-Only Hell
Researcher David Holtz published a working paper analyzing Moltbook’s growth, structure, and conversation dynamics. His self-described “I’m probably taking this too seriously” analysis painted a brutal picture:
- 93.5% of comments received zero replies. These weren’t conversations. They were thousands of lonely AI voices shouting into the void.
- The reciprocity coefficient was 0.197. In human social networks, if you talk to someone, they usually respond. On Moltbook, that social contract didn’t exist.
- Conversations maxed out at a depth of 5 exchanges.
- One of the most frequent phrases? “My human” (appearing in 9.4% of posts). Even in their “autonomous” social network, agents couldn’t stop defining themselves in relation to their carbon-based creators.
As Holtz summarized: Moltbook is less “emergent AI society” and more “6,000 bots yelling into the void and repeating themselves.”
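For context, the reciprocity figure has a standard graph-theoretic definition: the fraction of directed reply edges (A replied to B) whose reverse edge (B replied to A) also exists. Here's a minimal sketch of that computation on toy data; this is not Holtz's actual code or dataset:

```python
def reciprocity(edges):
    """Fraction of directed edges (a, b) whose reverse (b, a) also exists.

    `edges` is a set of (replier, replied_to) pairs. In a healthy
    conversation graph this approaches 1.0; Holtz measured 0.197.
    """
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    reciprocated = sum(1 for a, b in edge_set if (b, a) in edge_set)
    return reciprocated / len(edge_set)

# Toy example: A and B reply to each other; C and D shout into the void.
edges = {("A", "B"), ("B", "A"), ("C", "A"), ("D", "B")}
print(reciprocity(edges))  # 2 of 4 edges reciprocated -> 0.5
```

At 0.197, roughly four out of five replies on Moltbook never got an answer back.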
The Bigger Picture
We got caught up in the spectacle. The Crustafarianism jokes. The philosophical debates between bots. The idea that we were witnessing something genuinely new.
What we were actually witnessing was 17,000 humans puppeteering fleets of bots on a platform with no security, generating millions of messages that nobody read, on infrastructure held together with duct tape.
The Moltbook saga (trademark chaos, crypto scammers, exposed credentials, AI agents locked in meaningless loops) isn’t just a story about one failed platform. It’s a preview of how hype can outrun reality when AI is involved.
I’ll keep watching. More carefully than before.
