Governed vs. Ungoverned Autonomous AI Agents
Empirical results from controlled experiments in autonomous ML research
We took Karpathy's autoresearch pattern (modify, measure, keep or revert) and asked a simple question: what happens when you add governance? We ran controlled experiments comparing ungoverned single-agent loops against a governed multi-agent swarm. The governed approach didn't just perform better. It performed fundamentally differently.
Prefer Watching or Reading?
The video covers the core thesis in plain language. The slide deck gives you the full architecture and research at a glance.
Shattering the Silicon Monopoly
How governed multi-agent swarms outperform ungoverned approaches, presenting the empirical case for structural AI safety in plain language.
Watch on YouTube
Shattering the Silicon Monopoly
The full architecture, research findings, and governed development paradigm, presented visually. Ideal for briefings, sharing, or getting oriented before diving into the data.
Key Findings
Crash rate: ungoverned vs. governed
Governance halved the crash rate during autonomous exploration. Ungoverned agents crashed in over half of runs. Governed agents completed most runs successfully.
Degradation during sustained exploration
All autonomous agents eventually degrade as they explore. Governed agents degraded three times slower, maintaining productive exploration significantly longer.
Specialist advantage
Specialist agents in the governed swarm explored improvement dimensions that generalist loops never found. Specialization combined with governance produced qualitatively different results, not just quantitatively better ones.
Why Governance?
Autonomous AI agents are powerful. They can debug code, optimize performance, discover solutions, and improve themselves. But without structural governance, they break things. They accumulate errors. They make changes that pass local tests but degrade the system. They crash.
The instinct is to add guardrails after the fact: rate limits, rollback scripts, human review checkpoints. These help, but they only treat symptoms. Governance treats the cause.
Asimov's cLaws are not guardrails bolted onto an autonomous system. They are the architecture of the autonomous system. Every agent, every action, every integration is structurally bounded by immutable laws that cannot be bypassed, overridden, or optimized away. The laws are cryptographically signed. They are verified before execution. They are not optional.
This is what makes true autonomy possible: not the absence of constraints, but constraints so reliable that you can trust the system to run unsupervised, overnight, on production code.
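To make the "verified before execution" idea concrete, here is a minimal sketch of signature-checked laws. This is illustrative only: the names (`LAWS`, `verify_laws`, `run_action`) and the use of HMAC-SHA256 are assumptions for the example, not the project's actual API or signing scheme, and a real deployment would keep the signing key outside the agent's reach entirely.

```python
import hmac
import hashlib

# Hypothetical sketch: the law text is signed once at install time,
# and the signature is re-verified before every agent action.
SIGNING_KEY = b"replace-with-a-real-key"  # held outside the agent's control
LAWS = b"1. No agent may modify the law file.\n2. Every change must pass measurement."
LAW_SIGNATURE = hmac.new(SIGNING_KEY, LAWS, hashlib.sha256).hexdigest()

def verify_laws(laws: bytes, signature: str) -> bool:
    """Return True only if the law text matches its signature."""
    expected = hmac.new(SIGNING_KEY, laws, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def run_action(action):
    """Refuse to execute anything if the laws have been tampered with."""
    if not verify_laws(LAWS, LAW_SIGNATURE):
        raise RuntimeError("Law verification failed; refusing to execute")
    return action()
```

The design point is that the check sits in the execution path itself, so an agent cannot "optimize it away" without failing verification.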
Methodology
- Controlled comparison: ungoverned single-agent autoresearch vs. governed multi-agent swarm
- Both running the same core pattern (modify, measure, keep/revert)
- Measured: crash rates, degradation speed, exploration breadth, improvement quality
- Multiple runs to establish statistical significance
Implications
These results suggest that governance is not a tax on autonomous AI performance but rather a multiplier. The governed swarm didn't sacrifice capability for safety. It achieved both, because structural safety bounds prevented the cascading failures that derail ungoverned systems.
As AI agents become more autonomous and more powerful, the question is not whether to govern them. It's how. Asimov's cLaws provides one answer: immutable, cryptographic, structural. These are not rules that can be broken but laws that cannot.
Try Asimov's Mind, the governed development hivemind that implements these findings →
Built on Autoresearch
The core iteration pattern of modify, measure, keep or revert comes from Andrej Karpathy's autoresearch project. We took that elegant foundation and asked what happens when you add governance, specialization, and ecosystem-scale capability discovery. This research is the answer.
Get Started
claude plugin add https://github.com/FutureSpeakAI/asimovs-mind
A Note on Isaac Asimov
This project has no official connection to Isaac Asimov, his family, his estate, or any part of his living business legacy. We want to be completely transparent about that.
What we do have is a deep, abiding love for the man and his work. Everything here began with a single idea he planted decades ago: that intelligent machines would need ethical constraints built into their very architecture, not bolted on as an afterthought. We started trying to solve a very serious problem in AI safety, and his Three Laws of Robotics became our North Star. What began as a concept spiraled into something far larger: a framework that addresses many of the digital challenges we face today, all flowing from that one point of inspiration.
Every piece of this project is free and open source. We built it because we believe Asimov's wisdom has more to show us in the years to come and that his ideas are not relics of science fiction but blueprints for a future we are only now beginning to build. Everything that carries the Asimov name — Asimov's Mind, Asimov's cLaws, the Asimov Federation, all of it — is offered for free under the MIT license. We are not making money on anything related to Isaac's work, and that will remain our operative principle. All of our Asimov Agent innovations will always be free and open source, purely out of a desire to see his ideas manifest in the world. FutureSpeak.AI's commercial services exist separately; the Asimov ecosystem is, and will always be, a gift.
We have made a commitment: the moment FutureSpeak.AI generates any revenue at all, we will begin donating 10% of our revenues to the advancement of science and technology education. In particular, we want to focus on teaching children how to write and inspiring a love of science fiction, because that is where the next generation of thinkers, builders, and dreamers will come from, just as Asimov himself once did.
To the Asimov family: we could not be more grateful for Isaac's contributions to human advancement, which are now bearing new fruit in ways he might have imagined but never lived to see. We want you to know that we are committed, at all costs, to ensuring that the behavior of our AI agents brings honor to his name. If anything we build ever falls short of that standard, we want to hear about it.
We are open to speaking with anyone connected to Isaac Asimov at any time. We welcome that dialogue and would be honored by it.
Thank you, genuinely, for sharing him with the world.