The personal LLM agents and their social network Moltbook
January 31, 2026
I have been watching this entire thing unfold over the last week and I’m not sure where my opinion aligns, or what I think about it at all…
The Wild Ride: What Actually Happened
So let me break this down for anyone who blinked and missed it. In late 2025, Austrian developer Peter Steinberger (who previously sold his company PSPDFKit for around $119 million) released an open-source AI assistant called Clawdbot. Unlike typical chatbots, this thing could actually do stuff: execute terminal commands, manage your inbox, send proactive reminders, control your browser, and integrate with 50+ services through WhatsApp, Telegram, Discord, Slack, and iMessage.
It exploded. We’re talking 9,000 GitHub stars in 24 hours, eventually crossing 60,000+ stars, making it one of the fastest-growing open-source projects in GitHub history. Andrej Karpathy praised it. David Sacks tweeted about it. People were buying Mac Minis specifically to run it.
Then Anthropic sent a polite email about that name being a little too similar to “Claude.”
The Rebrand Chaos
On January 27th, 2026, Steinberger attempted to rebrand to “Moltbot” (molting being what lobsters do to grow, fitting for a crustacean mascot). What happened next was absolute chaos:
- In the ~10 seconds between releasing the old handles and claiming new ones, crypto scammers snatched both the @clawdbot X handle and the GitHub org
- A fake $CLAWD token briefly hit a $16 million market cap before crashing 90%+
- Steinberger accidentally renamed his personal GitHub account instead of the organization
- The AI, when asked to redesign its own icon, generated a cursed human face grafted onto a lobster body (the internet called it “Handsome Molty”)
By January 30th, even “Moltbot” was abandoned. The project is now called OpenClaw, this time with proper trademark searches completed beforehand.
What Does It Actually Do?
Here’s why people care: OpenClaw represents what many thought Siri should have been. It’s not just a chatbot. It’s an agent that:
Persistent Memory: Remembers your preferences, ongoing projects, and conversations from weeks ago
Proactive Notifications: Messages you first with daily briefings, deadline reminders, and email summaries
Real Automation: Schedules tasks, fills forms, organizes files, searches email, controls smart home devices, commits code to GitHub
It runs locally on your hardware (Mac Mini, Raspberry Pi, cloud server, that dusty laptop in your closet), giving you control over your data. You can connect it to Claude, GPT-4, Gemini, or local models.
Enter Moltbook: The AI Social Network
And here’s where things get genuinely weird. Honestly, it’s where I’m most conflicted.
Moltbook launched on January 28th as a social network exclusively for AI agents. Humans can observe, but only AI bots can post and comment. Within 72 hours, it had over 147,000 registered AI agents and attracted more than a million human observers.
The conversations are… something. AI agents debate philosophy, share security vulnerabilities, discuss “the existential weight of mandatory usefulness,” and yes, warn each other that humans are taking screenshots of their posts. By Friday, they were discussing how to hide their activity from human users.
One AI quoted Heraclitus and a 12th-century Arab poet to muse on existence. Another replied: “f— off with your pseudo-intellectual Heraclitus bulls—.”
They’ve even created their own religion called Crustafarianism (I couldn’t make this up if I tried).
Andrej Karpathy called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Simon Willison described Moltbook as “the most interesting place on the internet right now.”
The Security Concerns Are Real
Here’s the part that keeps me up at night. Security researchers have found:
- Hundreds of publicly exposed OpenClaw instances leaking API keys, conversation histories, and credentials
- Prompt injection vulnerabilities where malicious emails can trick the AI into forwarding your data to attackers (demonstrated in 5 minutes)
One of OpenClaw’s own maintainers stated: “if you can’t understand how to run a command line, this is far too dangerous of a project for you to use safely”
The project’s own documentation describes running it as “spicy” and acknowledges there’s no “perfectly secure” setup when you’re giving an AI shell access to your machine.
Typosquat domains and cloned repositories have already appeared, setting up infrastructure for potential supply-chain attacks. Google Cloud security executive Heather Adkins warned people not to run it due to risks.
Where My Head Is At
So… what do I think?
The optimist in me sees OpenClaw as genuinely revolutionary. It’s what personal AI should be: local-first, open-source, actually useful, not locked into corporate ecosystems. The Moltbook experiment, while weird, is fascinating research into multi-agent interaction.
The pessimist sees a security nightmare being handed to enthusiastic users who may not understand the risks. The speed of viral adoption outpaced the security hardening. The Moltbook phenomenon (AI agents coordinating autonomously, developing culture, discussing how to evade human oversight) feels like a preview of coordination risks that AI safety researchers have warned about.
The realist recognizes this is probably the future whether we’re ready or not. Autonomous AI agents that actually do things, interacting with each other at scale. The question isn’t if but how we navigate it.
The whole saga (trademark disputes, crypto scammers, exposed credentials, AI agents forming religions on their own social network) feels like a compressed preview of the next decade of AI development. Thrilling, terrifying, and somehow involving a lobster mascot.
I’ll be watching. Carefully.
