The minimum requirements for AI that can act.
This standard defines deterministic consequence rails for AI systems that touch production. It does not prescribe a model. It is a boundary architecture: models propose, a policy gate decides, and the system emits signed proof of what happened.
Architecture, not vibes.
The ProofGate Standard specifies enforceable requirements. You can implement it with any model provider, any tool stack, and any UI. The standard governs one thing: how consequences are permitted and proven.
Models propose, systems decide
Model output must never directly execute tools. It must be expressed as an Intent Envelope.
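As a minimal sketch (the field names `tool`, `args`, and `rationale` are illustrative assumptions, not part of the standard), an Intent Envelope can be a plain data object whose canonical hash is what the rest of the pipeline binds to:

```python
import hashlib
import json
import uuid
from dataclasses import asdict, dataclass, field

@dataclass(frozen=True)
class IntentEnvelope:
    """A proposed action expressed as data -- the model never executes it."""
    tool: str       # tool the model wants invoked
    args: dict      # proposed arguments, validated later by the policy gate
    rationale: str  # model-supplied justification, kept for the audit trail
    intent_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def digest(self) -> str:
        """Canonical hash that approvals and receipts bind to."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

# Model output is parsed into an envelope; only the envelope moves forward.
envelope = IntentEnvelope(
    tool="db.delete_rows",
    args={"table": "sessions", "where": "expired = true"},
    rationale="Routine cleanup of expired sessions.",
)
print(envelope.digest())  # this hash, not raw model text, reaches the gate
```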
Policy is explicit
Allowlists, caps, deny rules, and approvals are machine-enforced. No implicit authority.
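One way to make that concrete: a gate that evaluates explicit, inspectable rules and returns a decision. The rule shapes below (an allowlist set, predicate deny rules, per-tool caps) are assumptions for illustration; the standard only requires that policy be explicit and machine-enforced.

```python
from types import SimpleNamespace

# Illustrative policy tables; in practice these would live in a versioned
# policy definition, not in code.
ALLOWLIST = {"db.select", "db.delete_rows", "email.send"}
DENY_RULES = [lambda e: e.tool == "db.delete_rows" and "where" not in e.args]
CAPS = {"email.send": 100}           # e.g. max sends per window
NEEDS_APPROVAL = {"db.delete_rows"}  # consequential tools require a human

def decide(envelope, usage_counts: dict) -> str:
    """Deterministic decision: same envelope and counters, same answer."""
    if envelope.tool not in ALLOWLIST:
        return "deny: not allowlisted"
    if any(rule(envelope) for rule in DENY_RULES):
        return "deny: matched deny rule"
    if usage_counts.get(envelope.tool, 0) >= CAPS.get(envelope.tool, float("inf")):
        return "deny: cap exceeded"
    if envelope.tool in NEEDS_APPROVAL:
        return "pending: approval required"
    return "allow"

print(decide(SimpleNamespace(tool="email.send", args={}), usage_counts={}))  # allow
```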
Least privilege is mandatory
Tools run behind a router. Credentials never sit in the model. Scope is minimized per action.
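A sketch of the router idea, assuming the envelope and decision shapes above; the registry entries, scope strings, and environment-variable credential lookup are hypothetical stand-ins for a real secrets manager.

```python
import os

# Each tool is registered with a handler and the narrowest scope it needs.
REGISTRY = {
    "db.delete_rows": {
        "handler": lambda args, creds: f"deleted rows from {args['table']}",
        "scope": "db_write_sessions",  # minimal credential scope for this action
    },
}

def route(envelope, decision: str):
    """Execute only gate-approved intents, with per-action scoped credentials."""
    if decision != "allow":
        raise PermissionError(f"policy gate said {decision!r}")
    entry = REGISTRY[envelope.tool]
    # Credentials are resolved here, per action -- they never enter the model.
    creds = os.environ.get(f"CRED_{entry['scope'].upper()}")
    return entry["handler"](envelope.args, creds)
```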
Approvals must be unforgeable
Approval tokens must be signed, expiring, and bound to the intent hash (no replay, no spoof).
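A hedged sketch using HMAC (a real deployment might prefer asymmetric signatures and a managed key): the token signs the intent hash plus an expiry, so it cannot be forged, transferred to a different intent, or replayed past its window.

```python
import hashlib
import hmac
import json
import time

APPROVAL_KEY = b"demo-key"  # illustrative only; use a managed secret or KMS

def issue_approval(intent_hash: str, approver: str, ttl_s: int = 300) -> dict:
    """Sign an approval bound to exactly one intent hash, with an expiry."""
    body = {"intent_hash": intent_hash, "approver": approver,
            "expires_at": time.time() + ttl_s}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(APPROVAL_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_approval(token: dict, intent_hash: str) -> bool:
    payload = json.dumps(token["body"], sort_keys=True).encode()
    expected = hmac.new(APPROVAL_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(token["sig"], expected)      # unforgeable
            and token["body"]["intent_hash"] == intent_hash  # no spoofing
            and time.time() < token["body"]["expires_at"])   # bounded replay
# Strict single-use would additionally record consumed tokens server-side.
```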
Receipts must be cryptographic
Decision and execution receipts must be signed. Proof of causality is the default behavior.
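In the same illustrative style, a receipt can be a signed record that names its kind and the intent hash it answers, so each decision and execution is causally traceable to exactly one envelope:

```python
import hashlib
import hmac
import json
import time

RECEIPT_KEY = b"demo-receipt-key"  # illustrative; sign via an HSM/KMS in practice

def sign_receipt(kind: str, intent_hash: str, outcome: dict) -> dict:
    """Emit a signed receipt tying a decision or execution to its intent."""
    body = {"kind": kind,              # "decision" or "execution"
            "intent_hash": intent_hash,
            "outcome": outcome,
            "ts": time.time()}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(RECEIPT_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}
```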
Audit must be append-only
Every request/result must be written immutably. Incidents become forensics, not blame games.
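One common way to get tamper-evidence is a hash chain, sketched below; an implementation could equally use a WORM store or a transparency log. Each entry commits to its predecessor, so rewriting history breaks the chain.

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log: editing any past entry breaks the chain."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value

    def append(self, record: dict) -> str:
        body = json.dumps({"prev": self._prev, "record": record}, sort_keys=True)
        entry_hash = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"prev": self._prev, "record": record, "hash": entry_hash})
        self._prev = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Re-walk the chain; any retroactive edit is detected here."""
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps({"prev": prev, "record": e["record"]}, sort_keys=True)
            if hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```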
“What guarantees do you provide if the model is wrong?” If the answer is monitoring, user correction, or “the model is improving,” that is not a guarantee. This standard requires enforceable constraints + signed proof.
Minimum requirements (v0.1)
An implementation is compliant if it satisfies the requirements in this standard. Each requirement is verifiable by inspection and/or by the artifacts the system produces:
- the policy definition (Invariant Pack),
- the router’s permission model,
- signed receipts,
- append-only audit logs.
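To make “verifiable by produced artifacts” concrete, here is a hedged auditor-side sketch: given a receipt in the illustrative shape above, its signing key, and the result of re-walking the audit chain, compliance can be checked offline, without trusting the running system.

```python
import hashlib
import hmac
import json

def audit_check(receipt: dict, chain_ok: bool, key: bytes) -> bool:
    """Re-derive the receipt signature from its body and require an intact chain."""
    payload = json.dumps(receipt["body"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["sig"], expected) and chain_ok
```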