The Frame names the problem. The Doctrine names the posture organizations should adopt in response. The posture is Zero Trust, applied to AI verification.
Zero Trust in security means never trust by default, always verify. Applied to AI verification, it means the customer should not have to trust the verifier. Every claim a verification system makes about its own behavior should be independently checkable. The doctrine has three layers: Independence (no AI verifies its own work), Doctrine (rules enforced architecturally), Accountability (decisions survive challenge).
Independence: no single AI verifies its own work
The Independence principle is Zero Trust applied to the verification layer. When a single AI family verifies its own output, the customer is back inside the perimeter trust model. OpenAI’s verdict on an OpenAI-produced output is not a verdict. It is a self-assessment with a different prompt. The same model family has the same blind spots, the same training-data biases, and the same failure modes. Verification by the same model family is the cognitive equivalent of a single auditor signing off on their own books.

The Zero Trust commitment is that no single model family verifies its own work. Verification requires independent agreement across model families with different training data, different objectives, and different failure modes. When multiple independent providers agree, that agreement carries information that no single provider can replicate. When they disagree, the disagreement is also informative, and the disagreement should be recorded.

The independence principle has implementation implications that are not in scope here. The principle is what matters at the doctrine level. The buyer’s checklist later in this framework names the specific commitments to demand from any vendor claiming to verify AI output. The principle is the gate every commitment passes through: if a verification system relies on a single model family for its verdicts, it does not satisfy the Zero Trust posture.

Doctrine: rules enforced architecturally
The Doctrine principle is Zero Trust applied to the analytical layer. The standard failure mode for analytical processes is that the rules exist in documentation but not in execution. A style guide says reviewers must check causal claims. The reviewer is under deadline pressure. The check does not happen. The output ships, and the documentation is silent on whether the check was actually performed. The rule existed; the enforcement did not.

The Zero Trust commitment is that rules are enforced by the architecture, not by operator preference. If the system requires evidence before a citation can reach the analytical layer, then the system must contain a gate that cannot be overridden, even by the operator, even when commercially convenient. The gate is the rule. The architecture enforces the doctrine.

This principle is what distinguishes a serious verification posture from a marketing claim. Any vendor can write “we require evidence” in their documentation. Few can demonstrate that evidence is required by a deterministic gate that the vendor cannot bypass. The deterministic gate is the architectural enforcement. Without it, the doctrine is aspirational.

The principle generalizes beyond evidence gates. Any rule that the verification system claims to enforce should be enforced architecturally. Refusals that the system claims to log should be logged automatically, not on operator discretion. Rubric versions that the system claims to apply should be applied by hash-binding, not by operator selection. Doctrine that lives only in documentation is not doctrine. Doctrine that the architecture enforces is.

Accountability: every decision survives independent challenge
The Accountability principle is Zero Trust applied to the audit layer. When a verification system makes a decision (an output is approved, a citation is admitted, a refusal is issued), the decision should survive expert challenge. Surviving challenge requires more than an audit log. The log itself has to be tamper-evident and accessible to parties the verifier does not control.

The Zero Trust commitment is that every decision the verification system makes is logged in a form that the verifier cannot alter without breaking the record, and the integrity of the record is verifiable by parties outside the verifier’s control. The standard mechanism is cryptographic anchoring: hashes of the decision ledger committed to a public chain (or equivalent infrastructure) that the verifier does not control, cannot quietly alter, and will not lose access to even if the company changes hands.

The architectural consequence is that any verification system worth taking seriously publishes commitments that anyone can independently verify. The public hash of a rubric version. The public hash of a source document. The cryptographic certificate that binds an output to the specific model board, the specific rubric, and the specific evidence set that produced it. None of these require trust in the verifier. All of them produce checks that the verifier cannot evade.

The accountability principle extends to internal organizational use. A C-suite reader of an analytical output should not have to trust the analyst, the desk lead, or the chief of staff to forward the right version. The reader should be able to verify the cryptographic match between the document on screen and the certificate attached to it. The trust model is the hash, not the messenger.

What the three principles produce, taken together
The three principles produce a set of architectural commitments that any serious verification system carries. The list below is general, not specific to any vendor’s implementation. Each commitment is a consequence of the Zero Trust posture. None of them is a feature. Removing any of them is a violation of the constitutional posture, not a product trade-off.
Independent verification across model families
No single AI family verifies its own output. Verdicts require agreement across independent providers with different training data, different objectives, and different failure modes. Disagreement is informative and is recorded, not hidden.

What to look for: A vendor that names which model families participate in verification, what happens when they disagree, and how dissent is logged.
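The agreement-with-recorded-dissent rule can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation: the two-thirds quorum, the pass/fail verdict vocabulary, and the family names are all assumptions made for the example.

```python
# Sketch of an independence check: a verdict counts only when independent
# model families agree, and dissent is recorded rather than discarded.
# The quorum threshold and verdict labels are illustrative assumptions.
from collections import Counter

def board_verdict(votes: dict[str, str], quorum: float = 2 / 3):
    """votes maps a model family to its verdict (e.g. 'pass' or 'fail').

    Returns (verdict, dissent). verdict is None when no option reaches
    the quorum, which is itself an informative, loggable outcome.
    """
    if len(votes) < 2:
        raise ValueError("independence requires at least two model families")
    tally = Counter(votes.values())
    top, count = tally.most_common(1)[0]
    if count / len(votes) < quorum:
        return None, dict(votes)  # no quorum: record the full disagreement
    dissent = {family: v for family, v in votes.items() if v != top}
    return top, dissent

verdict, dissent = board_verdict(
    {"family-a": "pass", "family-b": "pass", "family-c": "fail"}
)
# verdict is "pass"; dissent keeps {"family-c": "fail"} instead of hiding it
```

The design point is that dissent is a return value, not a side channel: any caller that gets a verdict also gets the record of who disagreed.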
Architectural enforcement of doctrine
Rules that the system claims to enforce are enforced by deterministic gates, not operator discretion. If the system requires evidence before a citation reaches the analytical layer, the gate cannot be turned off, even by the vendor, even when commercially convenient.

What to look for: A vendor that can demonstrate the rule fires deterministically, not on policy. “We require X” is not a doctrine. “X cannot ship without Y, here is the code path” is.
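“X cannot ship without Y” has a concrete shape in code: make the rule part of the type’s constructor, so there is no code path that produces the object without passing the check. The class names below are hypothetical, chosen only to illustrate the evidence-gate example from the text.

```python
# A minimal sketch of an architecturally enforced gate: a Citation cannot
# be constructed without evidence, so no operator flag or config setting
# can skip the check. Class and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    source_url: str
    excerpt: str

@dataclass(frozen=True)
class Citation:
    claim: str
    evidence: tuple[Evidence, ...]

    def __post_init__(self):
        # The gate lives in the constructor: it fires on every
        # instantiation, and there is no override parameter to disable it.
        if not self.evidence:
            raise ValueError(f"citation rejected: no evidence for {self.claim!r}")

# Admitted: the citation carries its evidence.
cited = Citation("X causes Y", (Evidence("https://example.org", "…"),))

# Refused: constructing a citation with no evidence raises, always.
try:
    Citation("unsupported claim", ())
except ValueError:
    pass  # the gate fired deterministically
```

This is the difference between a policy and a gate: the policy version would be an `if` statement an operator could route around; the gate version makes the unverified state unrepresentable.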
Cryptographic anchoring of decisions
Every verification decision is committed to a tamper-evident record. The integrity of the record is verifiable by parties outside the verifier’s control. Standard implementation is a public chain (blockchain, transparency log, or equivalent infrastructure) that the verifier does not control and cannot quietly alter.

What to look for: A vendor that can show you the public anchor for any given decision, and that anyone, including you, can independently verify the anchor without going through the vendor.
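The verify-without-the-vendor property can be sketched with a hash chain: recompute the ledger digest locally and compare it to the publicly committed value. The ledger entry format here is an assumption for illustration, and the fetch of the public anchor is elided; the point is that the comparison itself never goes through the verifier.

```python
# Sketch of independent anchor verification. Each entry is folded into a
# running SHA-256 chain, so altering, dropping, or reordering any entry
# changes the final digest and breaks the match with the public anchor.
import hashlib
import json

def ledger_digest(entries: list[dict]) -> str:
    """Hash-chain the decision entries into a single hex digest."""
    running = b""
    for entry in entries:
        canonical = json.dumps(entry, sort_keys=True).encode()
        running = hashlib.sha256(running + canonical).digest()
    return running.hex()

def verify_against_anchor(entries: list[dict], public_anchor: str) -> bool:
    # Purely local check: no call to the verifier is involved.
    return ledger_digest(entries) == public_anchor

entries = [
    {"decision": "approved", "output_id": "out-1"},
    {"decision": "refused", "output_id": "out-2"},
]
anchor = ledger_digest(entries)  # the value the verifier commits publicly

assert verify_against_anchor(entries, anchor)
tampered = entries[:1] + [{"decision": "approved", "output_id": "out-2"}]
assert not verify_against_anchor(tampered, anchor)  # quiet edit is visible
```

Production systems typically use Merkle trees rather than a flat chain so individual decisions can be proven without replaying the whole ledger, but the trust property is the same.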
Public commitments and refusal logs
Refusals are logged automatically, not at operator discretion. The log is regularly reviewed and queryable. Over time, the refusal pattern becomes a discriminating signal that anyone can examine, and that signal cannot be quietly curated by the vendor.

What to look for: A vendor that publishes the refusal log structure and review cadence, and that lets you audit specific refusals against the published policy.
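“Logged automatically” means the refusal and its log entry are one operation, not two. A minimal sketch, assuming an invented policy-code vocabulary: the only way to issue a refusal is through a method that appends to the log, so there is no discretionary step where logging could be skipped.

```python
# Sketch of automatic refusal logging: issuing a refusal and recording it
# are the same call, so an operator cannot produce one without the other.
# The policy codes and entry fields are illustrative assumptions.
import datetime

class RefusalLog:
    def __init__(self):
        self._entries: list[dict] = []

    def refuse(self, request_id: str, policy_code: str) -> dict:
        # Logging is not a separate, skippable step: this method is the
        # only constructor of refusal records, and it always appends.
        entry = {
            "request_id": request_id,
            "policy_code": policy_code,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        self._entries.append(entry)
        return entry

    def query(self, policy_code: str) -> list[dict]:
        """Queryable by policy, so the refusal pattern can be examined."""
        return [e for e in self._entries if e["policy_code"] == policy_code]

log = RefusalLog()
log.refuse("req-1", "NO_EVIDENCE")
log.refuse("req-2", "RUBRIC_MISMATCH")
log.refuse("req-3", "NO_EVIDENCE")
# query("NO_EVIDENCE") returns two entries: the pattern is inspectable
```

In a real system the log itself would also be hash-chained and anchored, as in the previous commitment, so that curation after the fact is detectable.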
Rubric-version transparency
The rules used to grade outputs are public-hash-committed for each customer. Customers can verify they are being graded against the rubric version they were sold, not a quietly updated one.

What to look for: A vendor that publishes a public hash of the active rubric version per customer, and a change log showing every rubric update with the date and the reason.
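The customer-side check is a one-line hash comparison. A minimal sketch, with the rubric text and the published value invented for illustration: hash the rubric you were sold and compare it to the vendor’s public commitment; any quiet edit changes the digest.

```python
# Sketch of rubric-version verification. SHA-256 over the canonical rubric
# text; the "published" hash here is computed inline for illustration, but
# in practice it would be read from the vendor's public commitment.
import hashlib

def rubric_hash(rubric_text: str) -> str:
    return hashlib.sha256(rubric_text.encode("utf-8")).hexdigest()

sold_rubric = "v3: causal claims require two independent sources"
published_hash = rubric_hash(sold_rubric)  # the vendor's public commitment

# The rubric you are graded against matches what you were sold.
assert rubric_hash(sold_rubric) == published_hash

# A quietly updated rubric no longer matches the published hash.
quietly_updated = "v3: causal claims require one source"
assert rubric_hash(quietly_updated) != published_hash
```

The comparison assumes a canonical serialization of the rubric (encoding, whitespace, field order); without one, honest re-serializations would produce spurious mismatches.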
Source-document hash binding
The cryptographic match between the document an end-reader sees and the certificate that attests to its provenance is verifiable without going through the verifier. A C-suite reader does not have to trust the analyst, the desk lead, or the chief of staff to forward the right version.

What to look for: A vendor whose certificate format includes a hash of the source document, and where the verification of that hash can be performed independently.
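The reader’s check is local: hash the bytes on screen and compare against the digest carried in the certificate. The certificate fields below are hypothetical, chosen to illustrate the binding, not any vendor’s format.

```python
# Sketch of source-document binding: the certificate carries a hash of the
# document, and the reader verifies the match without asking the analyst,
# the desk lead, or the verifier. Certificate fields are assumptions.
import hashlib

def document_matches_certificate(document: bytes, certificate: dict) -> bool:
    return hashlib.sha256(document).hexdigest() == certificate["doc_sha256"]

report = b"Q3 analysis: revenue grew 4% on constant currency."
certificate = {
    "doc_sha256": hashlib.sha256(report).hexdigest(),
    "rubric_version": "v3",
}

# The document on screen is the one the certificate attests to.
assert document_matches_certificate(report, certificate)

# Any edit in transit, however small, breaks the match.
assert not document_matches_certificate(report + b" (edited)", certificate)
```

For the certificate itself to be trustworthy, it must in turn be signed and anchored, which is the role of the public-chain commitments above; the hash binds the document to the certificate, and the anchor binds the certificate to the record.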
Doctrine survives institutional change
Certificates issued before any future acquisition, merger, or change of control remain verifiable against the public chain. New certificates issued after a change of control carry a different signature visible in the chain. Customers can detect a regime change without the verifier having to disclose one.

What to look for: A vendor whose public chain entries include a stable issuer identity that cannot be silently replaced. If the issuer key changes, the change is visible in the public record.
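Regime-change detection reduces to scanning the public record for the point where the issuer identity changes. A minimal sketch under stated assumptions: the chain-entry format is invented, and a real check would verify the signatures cryptographically rather than just compare key identifiers.

```python
# Sketch of regime-change detection: walk the public chain entries and
# flag every index where the issuer key differs from the previous entry.
# The entry format is an illustrative assumption; real systems would
# verify signatures against the keys, not merely compare identifiers.
def issuer_changes(chain_entries: list[dict]) -> list[int]:
    """Return the indices where the issuer identity changed."""
    changes = []
    for i in range(1, len(chain_entries)):
        if chain_entries[i]["issuer_key"] != chain_entries[i - 1]["issuer_key"]:
            changes.append(i)
    return changes

chain = [
    {"cert_id": "c1", "issuer_key": "key-A"},
    {"cert_id": "c2", "issuer_key": "key-A"},
    {"cert_id": "c3", "issuer_key": "key-B"},  # change of control shows here
]
assert issuer_changes(chain) == [2]
```

Because the chain is public and append-only, the verifier cannot rewrite older entries to hide the transition: pre-acquisition certificates keep verifying under key-A, and everything after the change visibly carries key-B.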