Impossibility and machine-checked governance for agentic AI.
Mainstream AI "governance" in 2026 is a runtime-enforcement plane: rate limits, PII guards, prompt-injection detection, audit logs. Necessary work, but it leaves an upstream question unasked: is the rule being enforced actually coherent?
The legitimacy thread pursues that question across two layers: an impossibility theorem in the Arrow lineage, and a machine-checked semantic legitimacy kernel that makes the theorem executable.
In the Arrow-Young-Thomson tradition, no governance rule over scarce, peer-relative claims can jointly satisfy consistency, solidarity, and cross-claimant monotonicity. Strategyproofness follows as a bridge-derived consequence of solidarity and monotonicity, not as an independent fourth axiom. Every deployed system therefore sacrifices at least one of the three primitives, typically without declaring which one, bounding how much, or specifying a monitor for the sacrificed axiom. The impossibility theorem and its Young-style graph diagnostics make that sacrifice explicit and checkable.
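The trilemma and the bridge can be stated as a Lean-style signature. Every name below (`Rule`, the three primitive predicates, `impossibility`, `bridge`) is an illustrative assumption, not the kernel's actual API; the point is only the shape of the claims.

```lean
-- Hypothetical carrier and primitives; all names are illustrative.
axiom Rule : Type
axiom Consistency        : Rule → Prop
axiom Solidarity         : Rule → Prop
axiom CrossClaimantMono  : Rule → Prop
axiom Strategyproof      : Rule → Prop

-- The impossibility: no rule satisfies all three primitives jointly.
axiom impossibility :
  ¬ ∃ r : Rule, Consistency r ∧ Solidarity r ∧ CrossClaimantMono r

-- Strategyproofness as a bridge-derived consequence of two primitives,
-- not an independent fourth axiom.
axiom bridge :
  ∀ r : Rule, Solidarity r → CrossClaimantMono r → Strategyproof r
```

Stating the bridge as a theorem rather than an axiom is what lets the kernel check that strategyproofness was never silently assumed on its own.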
Under the verification gate, every unproved obligation in the Lean development is tracked explicitly as a `sorry`, an `admit`, or a first-party axiom.
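A minimal sketch of what such a gate's classifier could look like, assuming a line-based scan of Lean source; the enum names and the heuristic are illustrative, not the project's actual contract:

```rust
// Hypothetical verification-gate classifier; names and the line-based
// heuristic are illustrative assumptions, not the real gate.
#[derive(Debug, PartialEq)]
enum Obligation {
    Discharged,      // no escape hatch on this line
    Sorry,           // placeholder proof
    Admit,           // tactic-level escape hatch
    FirstPartyAxiom, // axiom declared inside the project itself
}

// Classify one line of Lean source by the escape hatch it introduces, if any.
fn classify_line(line: &str) -> Obligation {
    let t = line.trim();
    if t.contains("sorry") {
        Obligation::Sorry
    } else if t.contains("admit") {
        Obligation::Admit
    } else if t.starts_with("axiom ") {
        Obligation::FirstPartyAxiom
    } else {
        Obligation::Discharged
    }
}
```

A real gate would work over elaborated declarations rather than raw text, but the output categories would be the same three escape hatches plus "discharged".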
The work is orthogonal to gateway products: they enforce; this audits.
Formal guarantees stop at the GovernanceGraph model. Source-to-graph extraction (Rust + tree-sitter) is heuristic and empirical, not formally verified. Byte-stable fixtures, parity tests, provenance records, and committed snapshots back the trusted base; the contract states the boundary explicitly so reviewers can target attacks at the right layer.
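To make the boundary concrete, here is a hedged sketch of what the in-memory GovernanceGraph could look like. The node kinds, field names, and well-formedness check are assumptions for illustration; only the idea of a deterministic, snapshot-stable graph comes from the text above.

```rust
// Hypothetical GovernanceGraph sketch; kinds and fields are illustrative.
use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy, PartialEq)]
enum NodeKind {
    Rule,     // an enforceable governance rule
    Resource, // a scarce resource the rule allocates
    Claimant, // an agent holding a claim on the resource
}

#[derive(Debug, Default)]
struct GovernanceGraph {
    // BTreeMap gives deterministic iteration order, which keeps
    // committed snapshots byte-stable across runs.
    nodes: BTreeMap<String, NodeKind>,
    edges: Vec<(String, String)>, // directed (from, to) pairs
}

impl GovernanceGraph {
    fn add_node(&mut self, id: &str, kind: NodeKind) {
        self.nodes.insert(id.to_string(), kind);
    }

    // Reject dangling edges so snapshots stay well-formed by construction.
    fn add_edge(&mut self, from: &str, to: &str) -> Result<(), String> {
        if !self.nodes.contains_key(from) || !self.nodes.contains_key(to) {
            return Err(format!("dangling edge {from} -> {to}"));
        }
        self.edges.push((from.to_string(), to.to_string()));
        Ok(())
    }
}
```

Everything upstream of a value of this type (parsing, tree-sitter queries, heuristics) sits outside the formal guarantee; everything downstream of it is fair game for the Lean kernel.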
Extraction targets include OpenAI's Codex CLI (codex-rs), Anthropic's open-source Claude Agent SDK, and OpenClaw; the proprietary Claude Code product is treated separately, metadata-only.
The Rust crate and Lean library are in active development in a private repository; a public release is planned.