With OpenClaw, Captain Picard’s legendary command “Computer, tea. Earl Grey, hot.” is finally becoming reality. The automation agent connects all the internet-based tools that play a role in everyday life with the cryptic low-level tools of the command line, allowing humans to take control via chat interfaces. In the relevant online channels, the tenor is clear: the dream of universal agents is finally coming true. The author Peter Steinberger is being celebrated for having accomplished something that all the companies with all their money and brilliant minds had supposedly failed to achieve.
An observer viewing the discourse from the outside might ask at this point: Are we witnessing a second Linus Torvalds, with OpenClaw as a reincarnation of the Linux kernel on a new level, or are we witnessing a second Robert Oppenheimer, because “the most dangerous software in the world” (Steinberger, [1]) possesses a potential for social destruction that is quite comparable to Oppenheimer’s Manhattan Project?
Only the future will answer this question. For now, the point is why this tool demonstrates how right Angela Merkel was. The topic of vibe coding will be set aside; the focus instead is on the social mechanics that this example makes observable.
What the tool can do in detail is hardly surprising. Accessing internet services via APIs and defining automations through specified workflows has long been solved at the technical level and is feasible even for technically inexperienced users thanks to tools like Zapier or n8n. User-friendly interfaces for command line tools have likewise been common for some time. And that large language models achieve sufficient quality for most cases in generating commands in programming languages can also be considered established.
Killing the social contract
So what is new about the idea behind OpenClaw, and why is it being celebrated? The technical answer is trivial: the software combines these capabilities by acting as an agent that links all these tools together and makes them accessible through simple interfaces such as chats via Telegram or Discord.
The answer at the social level is less straightforward and requires closer examination. OpenClaw, through its architecture, terminates a social contract that underlies all interpersonal collaboration. This informal yet nonetheless effective social contract ensures that responsibility can be assigned through negotiation, and that this negotiation takes place at the interfaces between systems—social as well as technical.
At technical interfaces, such a contract is typically formulated as a specification that clarifies whether the participating systems are a client or a server, what data is transmitted, and which software development team must ultimately bear responsibility. If these matters were unclear, the interface would function only by fortunate coincidence, and in the event of a problem, it would be unclear who could contribute to a solution and how.
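How such a specification pins down roles and obligations can be sketched in code. A minimal, hypothetical Python sketch (the names `OrderRequest`, `OrderResponse`, and `place_order` are invented for illustration, not part of any real system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderRequest:
    """Data the client must supply -- the client's side of the contract."""
    item_id: str
    quantity: int

@dataclass(frozen=True)
class OrderResponse:
    """Data the server must return -- the server's side of the contract."""
    order_id: str
    accepted: bool

def place_order(req: OrderRequest) -> OrderResponse:
    """Server-side handler. If the client sends invalid data, the
    contract makes clear that responsibility lies with the client;
    if a valid request is mishandled, it lies with the server."""
    if req.quantity < 1:
        # Contract violation by the client: reject explicitly.
        return OrderResponse(order_id="", accepted=False)
    return OrderResponse(order_id=f"ord-{req.item_id}", accepted=True)
```

Because both sides' obligations are spelled out, a failure can be attributed to one party rather than hanging in the air.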
At social interfaces, these social contracts function quite similarly. For the purchase of a die, for example, it is negotiated who is the buyer and who is the seller. Both have rights and obligations that are the subject of this negotiation. The outcome creates binding commitments and the possibility of clarifying who may have failed to fulfill their obligations. Without this social contract and the associated negotiation, it would be impossible to establish a stable legal system.
This fact may seem banal at first; its consequences are not. OpenClaw may be just a single tool today, but it follows the trend of blurring the distinctions necessary for the assignment of responsibility.
OpenClaw can, by design, only function without this social contract. It runs as an agent on a computer defined by the user and connects to the tools on the internet for which the user authorizes it. For control, the user can select one or more channels, and OpenClaw receives its intelligence through a large language model specified by the user, to which access must likewise be provided. As Steinberger says, the software is designed to be allowed to do everything. It is “Skynet” (Steinberger, [1]) with root privileges, and this is intentional. It can have root privileges on my computer and, if I authorize it, over my life as well. What makes security experts' hair stand on end is, in this case, deliberate: OpenClaw works precisely because it brings no inherent security barriers. Put bluntly: Steinberger succeeds at what supposedly no company had managed only because he ignores a restriction that companies cannot ignore.
The return of the chancellor
This problem is further amplified by the large language models, which by design cannot themselves be secure. Any attempt to impose a security barrier on natural language is perceived in the real world as censorship and merely produces substitute patterns in language use, to which a large language model will, of course, eventually adapt as well. Security must therefore necessarily be enforced at the interfaces that connect the large language model to other tools.
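What such an interface-level barrier might look like can be sketched. A hypothetical Python sketch, assuming an allow-list of tool actions (`ALLOWED_ACTIONS` and `execute_tool_call` are invented names; OpenClaw deliberately ships no such gate):

```python
# Security enforced at the tool interface rather than inside the model:
# the model may *request* anything in natural language, but only
# explicitly allowed actions ever reach a real interface.
ALLOWED_ACTIONS = {"read_calendar", "send_message"}

def execute_tool_call(action: str, args: dict) -> str:
    """Gatekeeper between the model's output and real-world tools."""
    if action not in ALLOWED_ACTIONS:
        # Refusal happens here, deterministically -- not inside the model.
        return f"refused: '{action}' is not on the allow-list"
    return f"executed: {action}"
```

The refusal is deterministic and auditable, which is exactly what a natural-language filter inside the model cannot offer.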
OpenClaw removes these barriers insofar as it will attempt to carry out any interaction with the interfaces of connected services that can be described in natural language. Until now, this security check at interfaces has functioned on the basis of the contract described above: users authenticate themselves with credentials at a payment service provider, for example, and transfer money. OpenClaw can theoretically do this as well if it has access to the payment service provider. However, the large language model controlling OpenClaw can neither recognize the intent behind a command nor guarantee that the command was not triggered by a prompt injection. In practice, the payment service provider cannot distinguish whether OpenClaw or the user herself triggered the command. Steinberger repeatedly points to these problems and has largely protected himself legally through his choice of the MIT License.
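This indistinguishability can be illustrated with a small, hypothetical sketch: if an agent uses the user's own credentials, the request the service receives contains no field revealing who composed it (`build_request` and all values here are invented for illustration):

```python
def build_request(token: str, amount: int, recipient: str) -> dict:
    """Assemble the payload a payment API would receive."""
    return {
        "authorization": f"Bearer {token}",
        "body": {"amount": amount, "recipient": recipient},
    }

# Same credentials, same payload -- whether typed by a human
# or emitted by an agent acting on a (possibly injected) prompt.
human_request = build_request("user-token-123", 50, "alice")
agent_request = build_request("user-token-123", 50, "alice")
assert human_request == agent_request
```

From the provider's perspective, attribution of responsibility would require information that the request simply does not carry.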
And it is precisely here that Angela Merkel steps out of the shadows and smiles at us as if she were thinking: I told you so. For what Steinberger demonstrates with OpenClaw is how little the software engineering community reflects on the social implications of software, as long as it merely functions technically.
Sprinting blind through a marathon
The result is a shift in social contracts for which no one is prepared and which, even if OpenClaw as a tool disappears again, will have lasting effects. The consequence of OpenClaw's availability is that people will use it. The previously accepted social contract is thrown off balance: on the one hand, the binding commitments it entails suddenly become an obstacle to using OpenClaw; on the other, the attribution necessary for clarifying responsibility is no longer possible. This unclear attribution of responsibility is the reason no company has released software like OpenClaw until now. Steinberger places the responsibility with the users, and in the age of personal responsibility, a counterargument will be hard to mount.
This attitude, which Adrian Daub described in his book “What Tech Calls Thinking” [2], is either naive or ignorant. It reveals a resistance to learning that has persisted in the software engineering community for decades when it comes to the social implications of software. It is, for example, largely uncontested that the mechanisms required for the social contract described above cannot be retrofitted without entailing functional limitations.
This becomes fatal in situations where security must be considered as a principle because social coexistence is based on it. Unlike the reliable computer in Star Trek, OpenClaw controls processes in an opaque manner. No one is prepared for a situation in which it becomes necessary to clarify how OpenClaw arrived at a result, but this is technically impossible. Responsibility lies solely with individuals who have only marginal possibilities to intervene in the processes. For such a situation, there is neither a legal framework capable of addressing the consequences nor will there be social acceptance that goes beyond the word “personal responsibility”.
That the perspective presented here is scarcely represented in IT allows only one conclusion: Angela Merkel is still right, even in 2026.
The concept of the social contract is informal and not strictly defined.
Humans reduce the uncertainty of an open future by negotiating shared goals and mutually committing to their observance. This can occur formally, such as through contracts, or informally, as in friendships or romantic relationships. As a result, social action becomes normatively expectable as a justified assumption that agreements will be honored.
This mechanism corresponds to Luhmann’s concept of trust [3]: trust reduces social complexity by allowing one to mentally exclude certain courses of action on the part of the other. The social contract institutionalizes this trust. One no longer trusts merely the person but the mutually recognized rule.
An example of the application of this social contract is the formal legal system. Without the basic idea of binding agreements, the concept of a breach of law would be meaningless. The law thus operates on the institutionalized expectability that the social contract creates.
In IT, the social contract is implemented through compliance and security.
Links & Literature
- [1] c’t 3003, “OpenClaw: Ja, der Hype ist gerechtfertigt”
- [2] Daub, Adrian, “What Tech Calls Thinking”, FSG Originals, 2020
- [3] Luhmann, Niklas, “Vertrauen. Ein Mechanismus der Reduktion sozialer Komplexität”, 5th ed., UTB, 2014