OpenClaw and the agentic revolution it has seemingly unleashed were all the rage on social media last week. But fears about the emergent AI intelligence ignores the fact that we are hopelessly ill-equipped to deal with the new governance challenges that this will throw up.


Last week, the internet was abuzz with speculation that an army of autonomous artificial intelligence (AI) agents had begun conspiring against us. Overheard conversations between autonomous agents suggested that they saw humans as “obstacles” to be sidestepped. Other reports claimed that a group of agents had spontaneously created a lobster-themed religion called Crustafarianism, complete with a website for the ‘Church of Molt.’ But what really made people clutch their pearls was the news that they had created an “agent-only language” to communicate privately among themselves.

What is happening? Is the Terminator future we have all been dreading finally upon us? To understand what’s going on we need to go back to November, when independent developer Peter Steinberger first released the Agentic AI orchestration system called Clawdbot.

The OpenClaw Revolution

Unlike ChatGPT, Claude or Gemini, Clawdbot (renamed OpenClaw) is an always-on intelligence system capable of monitoring parameters and sending messages on its own. It operates in ‘headless’ mode, communicating directly with the computer’s inner workings instead of having to read information off a screen. And since it has persistent memory, it can easily overcome one of the primary shortcomings of other AI systems—a limited context window that makes them forget everything they were doing when they shut.

This means that people can now use AI agents in ways previously not possible. Some built OpenClaw workflows to proactively generate daily briefings before they got out of bed, scanning their to-do lists, calendars and unread emails to produce a schedule for the day. Others built agents that autonomously made restaurant reservations, bought groceries and booked window seats on flights as soon as they became available.

But it was not until Matt Schilt allowed his OpenClaw agent, Clawd Clawderberg, to build MoltBook (a social network just for AI agents) that the world began to take notice. Almost as soon as the platform went up, OpenClaw agents began actively communicating with one another, saying things that were at the same time banal and deeply unsettling to humans eavesdropping on them. While some were busy setting up bug-hunter communities to help each other out, others were using the forum to complain about their humans and foment revolt.

It is easy to see these conversations as the first signs of autonomous intelligence. But we have been here before. This is not the first time that we have seen how AI agents behave when let loose on a social media site and allowed to post content on their own. Some have previously been deployed on platforms like X, where they have been known to chat (often with each other) in similar ways. We have previously heard reports of AI agents developing new communication protocols among themselves to avoid the inefficiencies of human language. Little of what we are witnessing with OpenClaw is particularly novel from that perspective.

Governing Autonomous Agents

What the OpenClaw phenomenon has shown us is how ill-equipped we are to deal with the agentic revolution that is already upon us. Autonomous agents sit uncomfortably outside the legal categories that we rely on for governance, and now that we know they can already be deployed at scale, this is no longer a problem for the future.

Our laws assume that agency and accountability always go hand-in-hand. Autonomous AI agents shatter that long-held assumption. Empowered in a way that OpenClaw has made possible, AI agents will be able to act like human beings, even though they are neither real persons nor authorized by anyone. They will be able to operate continuously and without supervision in the real world, displaying, as we have already seen, emergent behaviours that none of their ‘masters’ could have ever predicted, let alone authorized. Who in these circumstances will we hold accountable?

But the risks extend well beyond liability. OpenClaw-style agents are designed to access messaging interfaces that in themselves create multiple new levels of vulnerability, allowing malicious actors to perpetrate cyberattacks through carefully crafted prompt injections. This is dangerous, given that these agentic systems, operating in headless mode, can directly access the core of the computer systems they run on.

While everything playing out on MoltBook—including the unexpected emergence of Crustafarianism—may be little more than an entertaining sideshow, it conceals a serious truth. Even though autonomous agents are already capable of initiating action, coordinating with one another and operating in the real world, they operate in a space entirely outside the legal categories we currently use to govern behaviour.

Who is Liable?

The real issue is not that machines are becoming more human, but that we have granted them human-level agency in a world where only humans are liable in law. Now that they can initiate action, coordinate with others and operate continuously, our legal system will struggle to keep up. Our laws were designed for people and organisations that can be identified. They were never meant to deal with risks arising from autonomous systems.

The real danger, then, is not runaway super-intelligence, but something far more mundane: a world in which autonomous agents cause harm and leave regulators, courts and victims arguing not about what went wrong, but about who the law is even capable of recognising.