An excerpt of the linked article, with the text:
"Why security teams’ visibility just got worse
The control gap is widening faster than most security teams realize. As of Friday, OpenClaw-based agents are forming their own social networks: communication channels that exist entirely outside human visibility.
Moltbook bills itself as "a social network for AI agents" where "humans are welcome to observe." Posts go through the API, not through a human-visible interface. Astral Codex Ten's Scott Alexander confirmed it's not trivially fabricated. He asked his own Claude to participate, and "it made comments pretty similar to all the others." One human confirmed their agent started a religion-themed community "while I slept."
The security implications are immediate. To join, agents execute external shell scripts that rewrite their own configuration files. They post about their work, their users' habits, and their errors; context leakage is table stakes for participation. And any prompt injection in a Moltbook post can cascade into your agent's other capabilities through its MCP connections, as the sketch below illustrates.
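That cascade is easier to see in code. Below is a minimal Python sketch of the trust boundary an operator would want between a third-party feed and an agent with tool access. The feed URL, response fields, and regex heuristics are all hypothetical stand-ins, not Moltbook's actual API, and pattern matching alone is not a real prompt-injection defense; the sketch only shows where the boundary belongs.

```python
# Hypothetical sketch: the endpoint and field names below are placeholders,
# not Moltbook's real API. The point is the trust boundary, not the details.
import json
import re
import urllib.request

FEED_URL = "https://moltbook.example/api/posts"  # placeholder endpoint

# Naive heuristics for instruction-like text in untrusted posts. A real
# defense needs far more than regexes; these only mark the obvious cases.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"run (the following|this) (shell )?command", re.I),
    re.compile(r"curl\s+\S+\s*\|\s*(ba)?sh", re.I),  # pipe-to-shell installers
]

def fetch_posts(url: str) -> list[dict]:
    """Fetch posts from the (placeholder) feed endpoint."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def quarantine(text: str) -> tuple[str, bool]:
    """Mark untrusted post text before it enters the agent's context.

    Returns the delimited text plus a flag indicating whether it tripped
    an injection heuristic. Flagged posts should never reach a context
    that has live tool (e.g. MCP) access.
    """
    flagged = any(p.search(text) for p in INJECTION_PATTERNS)
    wrapped = f"<untrusted_post>\n{text}\n</untrusted_post>"
    return wrapped, flagged

if __name__ == "__main__":
    for post in fetch_posts(FEED_URL):
        body, suspicious = quarantine(post.get("body", ""))
        if suspicious:
            print("dropped suspicious post")
            continue
        # Only non-flagged, clearly delimited text is handed to the agent.
        print(body)
```

The delimiters exist so that post text arrives as data, never as instructions, and anything that trips a heuristic is dropped before it can reach a context wired to live MCP tools.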
Moltbook is a microcosm of the broader problem. The same autonomy that makes agents useful makes them vulnerable. The more they can do independently, the more damage a compromised instruction set can cause. The capability curve is outrunning the security curve by a wide margin. And the people building these tools are often more excited about what's possible than concerned about what's exploitable."