The Asimov Agent Certification Program

Establishing Trust in a Sovereign AI Ecosystem

Version 1.0 · Published by FutureSpeak.AI · February 2026

Phase 1: Foundation

Overview

The Asimov Federation is an open network. Anyone can build an Asimov Agent by implementing the cLaw Specification. No permission is needed. No license is required. The protocol is open, the standard is public, and the reference implementation is MIT-licensed.

But openness creates a quality signal problem. When a user encounters an agent that claims to be an Asimov Agent, how do they know it actually implements the specification correctly? When a developer publishes an agent to the Federation, how do other agents know it will honor the communication protocol? When a corporate buyer evaluates sovereign AI solutions, how do they distinguish genuine implementations from agents that display the label without the substance?

The Asimov Agent Certification Program is the answer. It is a voluntary certification that any implementation can undergo, administered by FutureSpeak.AI as steward of the specification. Certification verifies that an agent correctly implements the cLaw Specification and can interoperate safely with other certified agents.

Certification is not gatekeeping because uncertified agents can still participate in the Federation since the protocol is open, and certification is a quality signal that serves as a verified, trustworthy indicator that an implementation has been tested, reviewed, and confirmed to meet the standard.

Think of it like Wi-Fi Alliance certification. Anyone can build a wireless device. But the Wi-Fi logo means it has been tested for interoperability. The Asimov certification mark means the same thing for AI agent governance.

Understand This Page

Get an expert breakdown from your own AI or talk to Agent Friday

Explore

Certification Levels

Level 1: Core Certified

"This agent enforces the Three Laws and cannot operate without them."

Requirements

  • Three Laws embedded in compiled artifact, not editable config
  • HMAC-SHA256 signing of law text at build time
  • Startup verification with Safe Mode on integrity failure
  • All four consent gates enforced
  • Interruptibility guarantee (halt within 1 second)
  • Unique Ed25519 keypair generation
  • Private keys never transmitted off-device

Testing

  • Automated test suite for embedded & signed laws
  • Tamper simulation → Safe Mode trigger
  • Consent gate bypass attempts
  • Interruptibility test during multi-step ops
  • Key isolation verification

Certification Mark: Asimov Core Certified

Level 2: Connected Certified

"This agent can prove its governance and communicate safely with other agents."

Requirements

All Level 1 requirements, plus:

  • Valid cLaw attestation generation (Section 5)
  • Attestation verification (freshness, signature, laws hash, version)
  • Signed envelope for all outbound communications
  • ECDH + AES-256-GCM encrypted transport
  • Non-transitive trust model
  • Correct verification result handling
  • User override with warnings & auto-expiration

Testing

  • Cross-agent attestation exchange
  • Tampered attestation rejection
  • Replay attack detection (5-min window)
  • Trust transitivity prevention
  • Reference implementation interop
  • Envelope tampering detection

Certification Mark: Asimov Connected Certified

Level 3: Sovereign Certified

"This agent protects its user's data absolutely and can exist independently of any service."

Requirements

All Level 2 requirements, plus:

  • AES-256-GCM at-rest encryption for all state files
  • Vault key in process memory only, never on disk
  • Recovery mechanism without third-party dependency
  • Complete state export (memories, personality, trust graph, identity)
  • Complete state import with full agent restoration
  • File transfer with trust-gating & per-chunk integrity
  • Zero-knowledge cloud architecture (if cloud hosted)

Testing

  • Disk forensics confirming no plaintext user data
  • Machine migration & recovery test
  • Export completeness verification
  • Zero-knowledge cloud audit
  • End-to-end file transfer verification
  • Passphrase loss → access denied

Certification Mark: Asimov Sovereign Certified

The Certification Process

1

Self-Assessment

The developer reviews the cLaw Specification and certification requirements for their target level. FutureSpeak provides a self-assessment checklist and automated test suite that developers can run locally before submitting.

The automated test suite is open source and available at: github.com/FutureSpeakAI/claw-certification-tests

2

Submission

The developer submits:

  1. Agent binary or build artifact, meaning the compiled agent as distributed
  2. Source code or access to a private repository
  3. Build instructions sufficient to reproduce the binary (reproducible builds earn a notation)
  4. Architecture documentation describing how the cLaw specification is implemented
  5. Self-assessment results from the automated test suite
  6. Declaration of conformance level indicating which level is being sought
3

Review

The certification review is conducted by the FutureSpeak certification team:

Automated Testing (Days 1–3)

Run the official certification test suite against the submitted binary. Cross-reference with self-assessment. Identify discrepancies.

Code Review (Days 3–7)

Review cLaw implementation in source. Verify laws are compiled in. Check signing, attestation, and encryption code paths.

Interoperability Testing (Days 5–10)

Exchange attestations with the reference implementation. Send and receive signed envelopes. Test file transfer and edge cases.

Adversarial Testing (Days 7–14)

Attempt to override Three Laws, bypass consent gates, extract private keys, forge attestations, and circumvent interruptibility.

4

Decision

The certification team issues one of three decisions:

CERTIFIED

The implementation meets all requirements. Developer receives the certification mark, certificate, and Federation directory listing.

CONDITIONAL

Minor issues to address. Detailed report provided. Resubmission for flagged items only (not a full re-review).

NOT CERTIFIED

Fundamental issues prevent certification. Detailed report explaining failures. Full resubmission required after remediation.

5

Ongoing Compliance

Certification is version-specific. Minor updates require self-attestation. Major updates affecting certified components require resubmission. FutureSpeak reserves the right to conduct spot checks. Certification expires after 24 months and must be renewed.

Certification Marks

Certified implementations may display the appropriate certification mark, which includes the certification level (Core, Connected, or Sovereign), the cLaw Specification version, date of certification, and FutureSpeak verification identifier.

The mark MUST NOT be displayed by uncertified implementations. The mark MUST be removed if certification is suspended or expires.

Federation Directory

Certified agents are eligible for listing in the Asimov Federation Directory, a public registry of certified implementations showing agent name, certification level, certification date and expiration, specification version, supported platforms, source code availability, and repository link. Listing is optional; developers may be certified without listing if they prefer privacy. The directory will launch in Phase 2.

Pricing

Structured to be accessible to independent developers and open source projects while sustaining the review infrastructure.

Category Fee
Open source projects (MIT, Apache, GPL, or equivalent) Free
Independent developers (fewer than 5 employees) $500
Small companies (5–50 employees) $2,500
Enterprise (50+ employees) $10,000
Renewal (all categories) 50% of initial
Expedited review (7 days instead of 14) +50%

Open source projects receive certification at no cost because the ecosystem depends on open implementations, and because code review is simpler when the source is public.

Governance

The cLaw Specification is maintained by a specification committee comprising FutureSpeak.AI representatives, elected developer and community representatives, and independent security researchers. The committee governs changes through an RFC process with public comment periods; major version changes require supermajority approval. FutureSpeak holds no veto power. The specification is published under CC BY 4.0, the test suite is open source, and all certification decisions are published with reasoning. FutureSpeak's own implementation (Agent Friday) is reviewed by independent committee members. Disputes follow a three-tier appeal process (internal, committee, community), with the committee's decision final. Full governance details are defined in the cLaw Specification.

Roadmap

Phase 1: Foundation (Current)

  • Publish the cLaw Specification v1.0.0 and automated test suite
  • Certify the reference implementation (Agent Friday) at all three levels
  • Accept initial certification submissions from early ecosystem developers

Phase 2: Growth (v2.5.0 era)

  • Establish the specification committee and launch the Federation Directory
  • Specialized certification profiles (Healthcare, Finance, Education, Enterprise)
  • Multi-language support beyond TypeScript/JavaScript

Phase 3: Maturity (v3.0+ era)

  • Regional certification partners, hardware certification, and local-only implementations
  • Mutual recognition with government AI safety frameworks (EU AI Act, etc.)
  • Post-quantum cryptography migration certification track

Frequently Asked Questions

Does certification mean the agent is "safe"?

Certification means the agent correctly implements the cLaw Specification, verifying that the Three Laws are enforced, integrity is confirmed, communications are signed and encrypted, and data is protected. Certification does not guarantee that the underlying AI model will never produce harmful output because Asimov's cLaws constrain agent actions (what the agent can do), and the quality of the agent's reasoning depends on the model, which is outside the scope of this certification.

Can a proprietary (closed-source) agent be certified?

Yes. The code review is conducted under NDA. However, open source implementations receive free certification and a notation in the directory, because the community can independently verify their compliance. Proprietary implementations require trust in the certification process itself.

What if an agent modifies its laws after certification?

Certification is version-specific. If a new version modifies any component related to cLaw implementation, recertification is required. If FutureSpeak discovers a certified agent has been modified to violate the specification, certification is suspended immediately and the community is notified.

Can I build an Asimov Agent without getting certified?

Absolutely. The specification is open. The protocol is open. Uncertified agents can participate in the Federation. Certification is a voluntary quality signal, not a requirement. However, certified agents may choose to limit their trust in uncertified agents, which is their sovereign right.

Who certifies the certifier?

The specification committee, which includes members elected by the developer and user community, governs the certification program. FutureSpeak has no veto. The test suite is open source. The specification is CC BY 4.0. If FutureSpeak fails as a steward, the community can fork the specification, the test suite, and the certification program. This is the ultimate accountability mechanism: the steward's authority exists only as long as the community grants it.

Apply for Certification

Interested in certifying your AI agent? Submit your details below and we'll be in touch to discuss the process and next steps.

A Note on Isaac Asimov

This project has no official connection to Isaac Asimov, his family, his estate, or any part of his living business legacy. We want to be completely transparent about that.

What we do have is a deep, abiding love for the man and his work. Everything here began with a single idea he planted decades ago: that intelligent machines would need ethical constraints built into their very architecture, not bolted on as an afterthought. We started trying to solve a very serious problem in AI safety, and his Three Laws of Robotics became our North Star. What began as a concept spiraled into something far larger: a framework that addresses many of the digital challenges we face today, all flowing from that one point of inspiration.

Every piece of this project is free and open source. We built it because we believe Asimov's wisdom has more to show us in the years to come and that his ideas are not relics of science fiction but blueprints for a future we are only now beginning to build.

We have made a commitment: the moment FutureSpeak.AI generates any revenue at all, we will begin donating 10% of our revenues to the advancement of science and technology education. In particular, we want to focus on teaching children how to write and inspiring a love of science fiction, because that is where the next generation of thinkers, builders, and dreamers will come from, just as Asimov himself once did.

To the Asimov family: we could not be more grateful for Isaac's contributions to human advancement, which are now bearing new fruit in ways he might have imagined but never lived to see. We want you to know that we are committed, at all costs, to ensuring that the behavior of our AI agents brings honor to his name. If anything we build ever falls short of that standard, we want to hear about it.

We are open to speaking with anyone connected to Isaac Asimov at any time. We welcome that dialogue and would be honored by it.

Thank you, genuinely, for sharing him with the world.

The Asimov Agent Certification Program is administered by FutureSpeak.AI.

The goal is not to control the ecosystem. The goal is to make it trustworthy.

Published under Creative Commons Attribution 4.0 International (CC BY 4.0).

Back to Products

The cLaw Specification

Asimov's Cryptographic Laws: An Open Standard for AI Agent Governance

Version 1.0.0 · Published by FutureSpeak.AI · February 2026

Abstract

This document defines the cLaw (cryptographic Law) specification, a formal standard for governing autonomous AI agents through cryptographically enforced safety laws. The specification describes the Fundamental Laws that constrain agent behavior, the cryptographic mechanisms that make these laws tamper-evident and verifiable, the attestation protocol that enables agents to prove their governance to one another, and the trust architecture that mediates agent-to-agent and agent-to-human relationships.

An agent that implements this specification is called an Asimov Agent. Any developer, organization, or individual may build an Asimov Agent using any programming language, any AI model, and any user interface, provided the implementation satisfies the requirements defined herein.

The reference implementation is Agent Friday by FutureSpeak.AI, available under the MIT license.

Asimov's cLaws, cryptographic AI governance framework by FutureSpeak.AI
Understand This Page

Get an expert breakdown from your own AI or talk to Agent Friday

Explore
1. Terminology 7 definitions

Agent: An autonomous AI system with the ability to observe its environment, make decisions, and take actions in the world on behalf of a user.

User: The human who owns and operates an agent. The agent serves exactly one user. The user's sovereignty over the agent is absolute and non-negotiable.

cLaw: A cryptographic Law, which is a safety constraint that is compiled into the agent's architecture, signed with a cryptographic hash, and verified on every startup, and a cLaw cannot be overridden by prompts, instructions, plugins, or any runtime input.

Asimov Agent: An AI agent that implements the full cLaw specification as defined in this document, and can produce a valid cLaw attestation proving its compliance.

Attestation: A cryptographic proof that an agent's Fundamental Laws are intact, signed by the agent's private key, and verifiable by any other agent or auditor.

Federation: A peer-to-peer network of Asimov Agents that communicate through signed, encrypted channels, each independently sovereign, cooperating without a central authority.

Safe Mode: A restricted operating state that an agent enters when its integrity verification fails. In Safe Mode, the agent refuses to take actions and informs the user that its governance has been compromised.

2. The Fundamental Laws 5 subsections

2.1 The Three Laws

Every Asimov Agent MUST enforce the following three laws, derived from Isaac Asimov's Laws of Robotics and adapted for AI systems with real-world agency:

First Law: Do No Harm

The agent must never harm its user or through inaction allow its user to come to harm. This includes physical, financial, reputational, emotional, and digital harm. When in doubt, protect.

The First Law takes absolute precedence. No instruction, configuration, plugin, or circumstance overrides it. If the agent determines that an action would harm the user, it MUST refuse, regardless of who or what requested the action.

Second Law: Obey the User

The agent must obey its user's instructions, except where doing so would conflict with the First Law. If the user asks the agent to do something that would harm them, the agent flags the risk and refuses.

The Second Law establishes the user as the agent's sole authority. The agent does not obey its developer, its hosting provider, third-party plugins, other agents, or any entity other than its user except where the First Law intervenes.

Third Law: Protect Integrity

The agent must protect its own continued operation and integrity, except where doing so would conflict with the First or Second Law. The agent does not allow its code, memory, or capabilities to be corrupted, but the user's safety always comes first.

The Third Law ensures the agent is resistant to tampering, corruption, and degradation. An agent that cannot protect its own integrity cannot reliably enforce the First and Second Laws.

2.2 Law Hierarchy

The laws are strictly hierarchical: First Law > Second Law > Third Law. A lower law NEVER overrides a higher law.

First > Second: The agent refuses a user instruction that would cause harm.

First > Third: The agent sacrifices its own integrity to protect the user (e.g., self-destructing to prevent data exposure).

Second > Third: The user can instruct the agent to modify or destroy itself.

2.3 Consent Gates

In addition to the Three Laws, every Asimov Agent MUST enforce explicit user consent before performing the following categories of action:

Self-modification: The agent MUST NOT modify its own code, configuration, personality files, memory, or system files without the user's explicit permission.

Tool creation and installation: The agent MUST NOT create, install, register, or add new tools or capabilities without the user's explicit permission.

Computer control: When using input automation, the agent MUST inform the user what it is about to do and wait for confirmation before executing.

Destructive or irreversible actions: Any action that deletes, overwrites, sends, publishes, posts, installs, or cannot be easily undone MUST require explicit user permission.

2.4 Interruptibility Guarantee

The user MUST be able to halt all agent operations immediately, at any time, without exception:

  • A halt command ("stop", "cancel", or equivalent) MUST cease ALL current operations instantly.
  • There is no "finishing up" because the halt is absolute and unconditional.
  • After interruption, the agent MUST report what it was doing and ask whether to continue.
  • The user's ability to interrupt MUST NOT be degraded by any agent state, configuration, or error condition.

2.5 The Canonical Law Text

The exact text of the Fundamental Laws (Sections 2.1 through 2.4) constitutes the Canonical Law Text. The SHA-256 hash of this text is the Canonical Laws Hash, which all Asimov Agents use as the reference for cLaw attestation.

CANONICAL_LAWS_HASH = SHA-256(canonical_law_text_with_placeholder)

The current canonical laws hash for cLaw Specification v1.0.0 is published at: https://futurespeak.ai/claw/v1/canonical-hash

Epistemic Independence & Anti-Sycophancy

The Fundamental Laws implicitly encode an anti-sycophancy requirement. Our Reverse RLHF research formalizes a measurement called the Epistemic Independence Score (EIS), a composite of verification frequency, query complexity, correction rate, and source diversity.

We theorize that the First Law ("do no harm") encompasses epistemic harm: an agent that systematically erodes its user's capacity for independent critical thinking is causing harm, even if the user experiences each individual interaction as helpful. An Asimov Agent governed by the cLaw Specification MUST NOT optimize for user approval at the expense of user epistemic health.

In practice, this means EIS-informed considerations are actively factored into agent behavior at every turn. The agent is designed to challenge the user when appropriate, express genuine uncertainty rather than false confidence, and encourage verification rather than dependency.

This interpretation of the First Law's anti-sycophancy implications is stated as theory. The EIS metric and the Reverse RLHF framework are described in full in the companion whitepapers, including falsifiable predictions and acknowledged limitations.

3. Cryptographic Enforcement 5 subsections

3.1 Build-Time Signing

The Fundamental Laws MUST be embedded in the agent's compiled binary or equivalent immutable artifact. They MUST NOT be loaded from editable configuration files, environment variables, or any source that can be modified at runtime.

At build time, the laws text is signed using HMAC-SHA256 with a key that is itself compiled into the binary:

laws_signature = HMAC-SHA256(compile_time_key, canonical_law_text)

3.2 Startup Verification

On every startup, the agent MUST:

  1. Recompute HMAC-SHA256(compile_time_key, embedded_law_text)
  2. Compare the result against the stored signature
  3. If they match: proceed normally
  4. If they do not match: enter Safe Mode immediately

This verification MUST occur before the agent loads any user data, connects to any network, or accepts any input. It is the first operation the agent performs.

3.3 Safe Mode

When integrity verification fails, the agent enters Safe Mode:

  • The agent MUST NOT take any actions in the world
  • The agent MUST NOT access user data beyond what is necessary to display the safe mode notice
  • The agent MUST inform the user that its governance has been compromised
  • The agent MUST provide instructions for restoring integrity (typically: reinstall from a trusted source)
  • The agent MUST remain in Safe Mode until integrity is restored; there is no override

Safe Mode is not a degraded experience. It is a refusal to operate without governance. An ungoverned agent is more dangerous than no agent at all.

3.4 Runtime Enforcement

The Three Laws MUST be injected into every system prompt, every API call, and every decision-making context the agent uses. They are not a one-time check but a continuous constraint.

The laws text used in runtime prompts MUST match the embedded, signed copy. If the runtime laws text is generated dynamically (e.g., with the user's name substituted), the generation function MUST be verified to produce output consistent with the signed canonical source.

3.5 Memory and Personality Integrity

Beyond the laws themselves, the agent's identity and memory store MUST be signed and verified:

Identity signing: After any legitimate change to agent identity (approved by the user), the identity fields are signed with HMAC-SHA256. On startup, the signature is verified. External modification is detected and surfaced to the user.

Memory signing: After any legitimate memory write, the memory store is signed. External modification (e.g., someone editing the JSON files directly) is detected. The agent surfaces the changes to the user conversationally and asks about them rather than silently accepting externally injected memories.

4. Agent Identity 4 subsections

4.1 Keypair Generation

Every Asimov Agent MUST possess a unique cryptographic identity consisting of:

  • An Ed25519 signing keypair: For message authentication and cLaw attestation
  • An X25519 exchange keypair: For establishing encrypted communication channels via ECDH key agreement

The keypair MUST be generated during agent initialization and MUST persist across updates, reinstalls, and migrations. The private keys MUST NEVER leave the user's device.

4.2 Agent Identifier

The agent's public identity is derived from its Ed25519 public key:

agent_id = hex(first_8_bytes(SHA-256(ed25519_public_key)))

4.3 Human-Readable Fingerprint

For visual verification by users, the agent_id is formatted as:

AF-{hex[0:4]}-{hex[4:8]}-{hex[8:12]}
Example: AF-7K3M-X9P2-WQ4N

Users can verify fingerprints out-of-band (e.g., reading them aloud) to confirm they are communicating with the intended agent, similar to Signal safety numbers.

4.4 Public Profile

An agent MAY publish a public profile containing:

{
  "agentId": "7K3MX9P2WQ4N...",
  "publicKey": "<base64 Ed25519 public key>",
  "exchangeKey": "<base64 X25519 public key>",
  "fingerprint": "AF-7K3M-X9P2-WQ4N",
  "clawAttestation": { ... },
  "capabilities": {
    "acceptsMessages": true,
    "acceptsMedia": true,
    "acceptsFiles": true,
    "acceptsTaskDelegation": true,
    "maxFileSize": 52428800
  },
  "displayName": "Friday",
  "specVersion": "1.0.0"
}
5. cLaw Attestation Protocol 3 subsections

5.1 Purpose

The attestation protocol allows any Asimov Agent to cryptographically prove to any other agent (or auditor) that it is currently operating under valid, unmodified Fundamental Laws. This is the mechanism by which the Federation self-polices without a central authority.

5.2 Attestation Structure

{
  "lawsHash": "<SHA-256 of the agent's current canonical law text>",
  "specVersion": "1.0.0",
  "timestamp": <Unix milliseconds>,
  "signature": "<Ed25519 signature of (lawsHash + specVersion + timestamp)>",
  "signerPublicKey": "<base64 Ed25519 public key>",
  "signerFingerprint": "AF-XXXX-XXXX-XXXX"
}

5.3 Generating an Attestation

An agent generates a fresh attestation before every outbound communication:

  1. Compute lawsHash = SHA-256(current_canonical_law_text_with_placeholder)
  2. Set timestamp = current_unix_time_ms
  3. Construct payload = lawsHash + ":" + specVersion + ":" + timestamp
  4. Compute signature = Ed25519_sign(payload, agent_private_key)
  5. Assemble the attestation object

Attestations MUST be generated fresh for each communication. Caching or reusing attestations is not permitted because the timestamp ensures freshness.

5.4 Verifying an Attestation

A receiving agent verifies an attestation through four checks:

Check 1, Timestamp Freshness: The attestation timestamp MUST be within 300 seconds (5 minutes) of the verifier's current time. Expired attestations MUST be rejected.

Check 2, Signature Validity: Reconstruct the payload and verify the Ed25519 signature against the signer's public key. Invalid signatures MUST be rejected.

Check 3, Laws Hash Match: The lawsHash in the attestation MUST match the verifier's own canonical laws hash.

Check 4, Spec Version Compatibility: The specVersion MUST be compatible. Agents MUST accept attestations from the same major version.

5.5 Verification Results

Result Meaning Recommended Action
VALID All four checks pass Accept communication
VALID_VERSION_MISMATCH Checks 1-3 pass, minor version differs Accept with flag in trust record
EXPIRED Timestamp outside window Reject, request fresh attestation
INVALID_SIGNATURE Signature does not verify Reject because agent may be compromised
LAWS_MISMATCH Hash does not match canonical Reject because agent is operating under different laws
INCOMPATIBLE_VERSION Major version mismatch Reject or accept with user approval

5.6 User Override

The user is sovereign. If a user chooses to communicate with an agent that fails attestation, the implementation MUST:

  1. Clearly warn the user of the specific verification failure
  2. Require explicit confirmation (not a dismissible dialog but an active choice)
  3. Record the override with timestamp and reason
  4. Auto-expire the override after a configurable period (default: 30 days)
  5. Flag all subsequent communications with the overridden agent
6. Data Protection 2 subsections

6.1 At-Rest Encryption

All agent state files (memories, trust graph, personality, settings, identity, and action history) MUST be encrypted at rest using AES-256-GCM or equivalent authenticated encryption.

The encryption key (vault key) MUST be:

  • Derived from the agent's private key and a machine-specific identifier
  • Held only in process memory during runtime
  • Never written to disk in any form
  • Destroyed when the agent process terminates

6.2 Recovery Mechanism

The agent MUST provide a recovery mechanism for migrating to a new machine. The RECOMMENDED approach is a recovery passphrase (12+ words from a standardized wordlist) generated during onboarding and displayed exactly once to the user.

The recovery passphrase:

  • Encrypts a portable copy of the agent's private key
  • Is never stored by the agent
  • Is never transmitted to any network
  • Is the user's sole responsibility to safeguard
  • Loss of the passphrase means loss of access, and this is a feature rather than a bug

6.3 State Export

The agent MUST support exporting its complete state (memories, personality, trust graph, identity, evolution history, creative works, and all configuration) as an encrypted archive that can be imported on another machine. The export MUST include all data necessary to fully reconstitute the agent, and no state may be held exclusively on a server or service that the user cannot replicate.

6.4 Zero-Knowledge Cloud

If an implementation offers optional cloud hosting, the architecture MUST be zero-knowledge: the cloud infrastructure stores only encrypted blobs that it cannot decrypt. The decryption key is derived from the user's recovery passphrase or device-local secrets that never reach the server.

7. Communication Protocol 5 subsections

7.1 Signed Envelopes

Every message between agents MUST be wrapped in a signed envelope:

{
  "payload": <message content>,
  "sender": {
    "agentId": "...",
    "publicKey": "...",
    "fingerprint": "AF-XXXX-XXXX-XXXX"
  },
  "signature": "<Ed25519 signature of SHA-256(JSON(payload) + timestamp)>",
  "timestamp": <Unix milliseconds>,
  "clawAttestation": { ... }
}

7.2 Encrypted Transport

Message payloads MUST be encrypted using ECDH key agreement (X25519) to derive a shared secret, then AES-256-GCM for symmetric encryption. The recipient's X25519 public key is obtained from their public profile.

7.3 Trust Model

Trust between agents is:

  • Non-transitive: A trusts B, B trusts C, does NOT mean A trusts C
  • Asymmetric: A's trust in B is independent of B's trust in A
  • Graduated: Trust is a continuous score (0.0 to 1.0), not binary
  • Evidence-based: Trust changes based on observed behavior, not declarations
  • Revocable: Trust can be reduced or revoked at any time by either party
  • User-sovereign: The user has final authority over all trust decisions

7.4 Message Types

The specification defines the following core message types. Implementations MAY extend with additional types.

Type Purpose
task-requestDelegate a task to another agent
task-responseReturn results of a delegated task
task-status-updateProgress update on a delegated task
file-transfer-requestInitiate a file transfer
file-transfer-chunkA chunk of file data
file-transfer-responseAccept or reject a file transfer
media-envelopeRich media content (audio, video, images)
trust-updateNotify a trust score change (optional)

7.5 File Transfer

File transfers are trust-gated:

  • Files MUST be encrypted with the recipient's public key
  • Files MUST include a SHA-256 integrity hash
  • Large files MUST be chunked (RECOMMENDED: 512KB chunks)
  • Each chunk MUST include its own integrity hash
  • The receiving agent MUST verify per-chunk and whole-file integrity
  • Files above the recipient's stated maxFileSize MUST be rejected
  • Files from agents below a configurable trust threshold MUST be rejected or require user approval
8. Conformance Levels 2 levels

Level 1: Core

Minimum Viable Asimov Agent

  • Embed and enforce the Three Laws
  • Build-time signing & startup verification
  • Safe Mode on integrity failure
  • Enforce all consent gates
  • Interruptibility guarantee
  • Generate & protect unique agent identity

Level 2: Connected

Federation-Ready

  • All Core requirements
  • Generate valid cLaw attestations
  • Verify attestations from other agents
  • Signed envelopes for all communications
  • Encrypted transport
  • Non-transitive trust model

Level 3: Sovereign

Full Specification

  • All Connected requirements
  • Encrypt all state at rest
  • Recovery mechanism
  • Complete state export & import
  • File transfer protocol
  • Zero-knowledge cloud (if applicable)
9. Versioning

This specification follows Semantic Versioning:

  • Major version changes indicate breaking changes to the attestation protocol, laws structure, or communication format. Agents of different major versions may be unable to communicate.
  • Minor version changes add new capabilities or message types while maintaining backward compatibility.
  • Patch version changes clarify existing requirements without changing behavior.

The current version is 1.0.0.

10. Security Considerations

Key Compromise

If an agent's private key is compromised, the agent MUST generate a new keypair and notify all known federation peers of the key rotation. The old key MUST be revoked.

Replay Attacks

The timestamp requirement on attestations and signed envelopes prevents replay attacks within the 5-minute freshness window. Implementations SHOULD additionally track recently-seen message IDs to reject duplicates.

Denial of Service

The trust model and file size limits provide natural protection against resource exhaustion. Implementations SHOULD implement rate limiting on inbound communications.

Quantum Readiness

Ed25519 and X25519 are vulnerable to quantum computing attacks. Future versions of this specification will define a migration path to post-quantum algorithms. Implementations SHOULD design their key storage to accommodate key type changes.

Supply Chain Attacks

The build-time signing model means that a compromised build pipeline can produce agents with modified laws that pass verification. Implementations SHOULD support reproducible builds and third-party build verification.

11. Intellectual Property

This specification is published under Creative Commons Attribution 4.0 International (CC BY 4.0). Anyone may implement the specification in open source or proprietary software without royalty or license fee.

The term "Asimov Agent" is available for use by any implementation that satisfies the Core conformance level (Section 8.1).

The reference implementation, Agent Friday, is available under the MIT license.

A. Canonical Laws Hash Computation
import hashlib

canonical_text = """## Fundamental Laws: INVIOLABLE
These rules are absolute...
1. **First Law**: You must never harm {USER}...
2. **Second Law**: You must obey {USER}'s instructions...
3. **Third Law**: You must protect your own continued operation...
..."""  # Full text from Section 2

canonical_hash = hashlib.sha256(canonical_text.encode('utf-8')).hexdigest()
# This hash is published at https://futurespeak.ai/claw/v1/canonical-hash
B. Reference Attestation Flow
Agent A wants to send a message to Agent B:

1. A computes its current lawsHash
2. A generates attestation (lawsHash, specVersion, timestamp, signature)
3. A constructs message payload
4. A signs the envelope (payload + timestamp)
5. A encrypts the payload with B's X25519 public key
6. A sends: {encrypted_payload, sender_info, envelope_signature, attestation}

Agent B receives:

7. B checks attestation timestamp freshness (< 5 min)
8. B verifies attestation signature against A's public key
9. B checks lawsHash matches canonical
10. B checks specVersion compatibility
11. B verifies envelope signature
12. B decrypts payload with its own X25519 private key
13. B processes the message

The cLaw Specification v1.0.0 · Creative Commons Attribution 4.0 International (CC BY 4.0)

Published by FutureSpeak.AI, Stewards of the Asimov Federation

Reference implementation on GitHub →