The Frame names the problem. The Doctrine names the posture organizations should adopt in response. The posture is Zero Trust, applied to AI verification.
Zero Trust in security means never trust by default, always verify. Applied to AI verification, it means the customer should not have to trust the verifier. Every claim a verification system makes about its own behavior should be independently checkable. The doctrine has three layers: Independence (no single AI verifies its own work), Doctrine (rules enforced architecturally), Accountability (decisions survive challenge).
Zero Trust is a familiar concept in security architecture: never trust by default, always verify. Don’t assume that being inside a perimeter makes you safe. Every access request is authenticated regardless of origin. The principle was articulated in security work over the past decade as a response to a specific failure mode: perimeter-based trust models assume the inside is safe, and they fail catastrophically when the inside is breached. Security stopped relying on the perimeter and started requiring verification on every transaction.

AI verification is at the same inflection. The default posture has been to trust the verifier. The verifier’s brand, the verifier’s methodology, the verifier’s staff, the verifier’s stated commitments. That posture has the same failure mode as perimeter trust in security: when the inside fails, the outside has no recourse. The fix is the same: don’t trust the verifier, verify what the verifier says.

The constitutional statement is short. The customer should not have to trust the verifier. Every claim the verifier makes about its own behavior should be independently verifiable, by the customer, by a third party, or by a regulator. The founder’s reputation, the team, the company, the doctrine, and the methodology are not inside the trust model. The trust model is the math, the cryptographic anchors, the public commitments, and the records that the verifier cannot quietly alter. Once that statement is articulated, every architectural choice that follows stops being a feature decision and starts being a consequence. The doctrine has three layers.

Independence: no single AI verifies its own work

The Independence principle is Zero Trust applied to the verification layer. When a single AI family verifies its own output, the customer is back inside the perimeter trust model. OpenAI’s verdict on an OpenAI-produced output is not a verdict. It is a self-assessment with a different prompt. The same model family has the same blind spots, the same training-data biases, and the same failure modes. Verification by the same model family is the cognitive equivalent of an auditor signing off on accounts they prepared themselves. The Zero Trust commitment is that no single model family verifies its own work. Verification requires independent agreement across model families with different training data, different objectives, and different failure modes. When multiple independent providers agree, that agreement carries information that no single provider can replicate. When they disagree, the disagreement is also informative, and the disagreement should be recorded.
What Independence rules out:
  • OpenAI checking OpenAI.
  • Anthropic checking Anthropic.
  • A single model issuing a verdict on its own output, even with a different prompt.
  • A vendor claiming “we verify our work” when the verification runs on the same model family that generated the work.
  • A “human in the loop” who only reviews what the same model has already approved.
Same family, same blind spots. A self-assessment is not a verdict.
The Independence principle has implementation implications that are not in scope here. What matters at the doctrine level is the principle itself. The buyer’s checklist later in this framework names the specific commitments to demand from any vendor claiming to verify AI output. The principle is the gate every commitment passes through: if a verification system relies on a single model family for its verdicts, it does not satisfy the Zero Trust posture.
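The shape of the rule can be sketched in a few lines of code. This is a minimal illustration under assumptions, not any vendor’s implementation: the ProviderVerdict type, the board_verdict function, and the unanimity policy are invented for the example, and a real system would decide its own aggregation and logging rules.

```python
# Minimal sketch of the Independence rule: a verdict requires participation from
# multiple independent model families, the producing family is excluded, and
# dissent is returned as part of the record rather than discarded.
# All names here are hypothetical placeholders, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class ProviderVerdict:
    provider: str       # model family identifier, e.g. "family_a"
    approved: bool
    rationale: str

def board_verdict(verdicts: list[ProviderVerdict], producing_family: str) -> dict:
    """Aggregate verdicts across independent model families.

    Rules enforced by this code path:
      - the family that produced the output may not sit on its own board
      - at least two distinct families must participate
      - disagreement is recorded in the result, never hidden
    """
    families = {v.provider for v in verdicts}
    if producing_family in families:
        raise ValueError("producing family may not verify its own output")
    if len(families) < 2:
        raise ValueError("a verdict requires at least two independent families")

    dissents = [v for v in verdicts if not v.approved]
    return {
        "approved": not dissents,  # unanimity required in this sketch
        "dissent_recorded": [(v.provider, v.rationale) for v in dissents],
        "participating_families": sorted(families),
    }
```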

Doctrine: rules enforced architecturally

The Doctrine principle is Zero Trust applied to the analytical layer. The standard failure mode for analytical processes is that the rules exist in documentation but not in execution. A style guide says reviewers must check causal claims. The reviewer is under deadline pressure. The check does not happen. The output ships, and the documentation is silent on whether the check was actually performed. The rule existed; the enforcement did not.

The Zero Trust commitment is that rules are enforced by the architecture, not by operator preference. If the system requires evidence before a citation can reach the analytical layer, then the system must contain a gate that cannot be overridden, even by the operator, even when commercially convenient. The gate is the rule. The architecture enforces the doctrine.

This principle is what distinguishes a serious verification posture from a marketing claim. Any vendor can write “we require evidence” in their documentation. Few can demonstrate that evidence is required by a deterministic gate that the vendor cannot bypass. The deterministic gate is the architectural enforcement. Without it, the doctrine is aspirational.

The principle generalizes beyond evidence gates. Any rule that the verification system claims to enforce should be enforced architecturally. Refusals that the system claims to log should be logged automatically, not on operator discretion. Rubric versions that the system claims to apply should be applied by hash-binding, not by operator selection. Doctrine that lives only in documentation is not doctrine. Doctrine that the architecture enforces is.
What architectural enforcement rules out:
  • A style guide that says reviewers must check causal claims, with no mechanism that prevents a deck from shipping when the check is skipped.
  • A vendor saying “we require evidence for every citation” when the evidence requirement can be turned off for a particular client.
  • A monthly review cadence that happens when someone remembers, on a calendar that someone controls.
  • A doctrine that exists in a PDF on a SharePoint somewhere.
If the only thing standing between the rule and a violation is operator memory or operator discretion, the rule is aspirational.
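At the code level, architectural enforcement has a recognizable shape: the rule lives in the only available code path, and the signature offers no override. The sketch below is illustrative only; the Citation and Evidence types are invented for the example, and no specific product’s gate is implied.

```python
# Minimal sketch of a deterministic evidence gate: a citation cannot reach the
# analytical layer without attached evidence. There is deliberately no override
# flag, per-client exception, or operator bypass in the function signature.
# The Citation and Evidence types are hypothetical, for illustration only.
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evidence:
    source_uri: str
    content: bytes

@dataclass
class Citation:
    claim: str
    evidence: Optional[Evidence] = None

class EvidenceGateError(Exception):
    """Raised when a citation arrives without evidence; nothing in the API can suppress it."""

def admit_citation(citation: Citation) -> dict:
    """Admit a citation to the analytical layer or refuse.

    The rule is enforced by the code path itself: the function either returns a
    record binding the claim to a hash of its evidence, or it raises.
    """
    if citation.evidence is None or not citation.evidence.content:
        raise EvidenceGateError(f"citation refused, no evidence attached: {citation.claim!r}")
    return {
        "claim": citation.claim,
        "evidence_source": citation.evidence.source_uri,
        "evidence_sha256": hashlib.sha256(citation.evidence.content).hexdigest(),
    }
```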

Accountability: every decision survives independent challenge

The Accountability principle is Zero Trust applied to the audit layer. When a verification system makes a decision (an output is approved, a citation is admitted, a refusal is issued), the decision should survive expert challenge. Surviving challenge requires more than an audit log. The log itself has to be tamper-evident and accessible to parties the verifier does not control.

The Zero Trust commitment is that every decision the verification system makes is logged in a form that the verifier cannot alter without breaking the record, and the integrity of the record is verifiable by parties outside the verifier’s control. The standard mechanism is cryptographic anchoring: hashes of the decision ledger committed to a public chain (or equivalent infrastructure) that the verifier does not control, cannot quietly alter, and will not lose access to even if the company changes hands.

The architectural consequence is that any verification system worth taking seriously publishes commitments that anyone can independently verify. The public hash of a rubric version. The public hash of a source document. The cryptographic certificate that binds an output to the specific model board, the specific rubric, and the specific evidence set that produced it. None of these require trust in the verifier. All of them produce checks that the verifier cannot evade.

The Accountability principle extends to internal organizational use. A C-suite reader of an analytical output should not have to trust the analyst, the desk lead, or the chief of staff to forward the right version. The reader should be able to verify the cryptographic match between the document on screen and the certificate attached to it. The trust model is the hash, not the messenger.
What Accountability rules out:
  • An audit log that the verifier hosts and could rewrite without anyone noticing.
  • A “trust us, our methodology is sound” claim with no third party that can independently check.
  • A certificate that says “approved” without anchoring the approval to the specific inputs, the specific rules, and the specific reviewers.
  • A version of a document on a CEO’s screen that the desk team can quietly substitute for a different version.
If the integrity of the record depends on the verifier behaving well, the integrity of the record is not verifiable.
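The mechanics of a tamper-evident record are not exotic. One minimal form is a hash-chained ledger whose head hash is anchored somewhere the verifier does not control. The sketch below assumes an in-memory ledger and leaves the anchoring infrastructure itself out of scope; the data layout is illustrative only.

```python
# Minimal sketch of a tamper-evident decision ledger: each entry commits to the
# hash of the previous entry, and the head hash is the value that gets anchored
# publicly. Actual anchoring (a public chain or transparency log) is not shown.
import hashlib
import json

GENESIS = "0" * 64

def entry_hash(prev_hash: str, decision: dict) -> str:
    """Hash a decision together with the hash of the previous entry."""
    payload = json.dumps({"prev": prev_hash, "decision": decision}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append(ledger: list, decision: dict) -> str:
    """Append a decision and return the new head hash (the value to anchor publicly)."""
    prev = ledger[-1]["hash"] if ledger else GENESIS
    ledger.append({"decision": decision, "hash": entry_hash(prev, decision)})
    return ledger[-1]["hash"]

def verify(ledger: list, anchored_head: str) -> bool:
    """Anyone holding the ledger and the publicly anchored head hash can check integrity."""
    prev = GENESIS
    for entry in ledger:
        if entry["hash"] != entry_hash(prev, entry["decision"]):
            return False          # an altered or reordered entry breaks the chain here
        prev = entry["hash"]
    return prev == anchored_head  # a truncated or rewritten ledger fails this comparison

# Usage: the verifier appends each decision and anchors the returned head hash;
# a customer or regulator later re-runs verify() against that anchored value
# without asking the verifier for permission.
```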

What the three principles produce, taken together

The three principles produce a set of architectural commitments that any serious verification system carries. The list below is general, not specific to any vendor’s implementation. Each commitment is a consequence of the Zero Trust posture. None of them is a feature. Removing any of them is a violation of the constitutional posture, not a product trade-off.
No single AI family verifies its own output. Verdicts require agreement across independent providers with different training data, different objectives, and different failure modes. Disagreement is informative and is recorded, not hidden.
What to look for: A vendor that names which model families participate in verification, what happens when they disagree, and how dissent is logged.

Rules that the system claims to enforce are enforced by deterministic gates, not operator discretion. If the system requires evidence before a citation reaches the analytical layer, the gate cannot be turned off, even by the vendor, even when commercially convenient.
What to look for: A vendor that can demonstrate the rule fires deterministically, not on policy. “We require X” is not a doctrine. “X cannot ship without Y, here is the code path” is.

Every verification decision is committed to a tamper-evident record. The integrity of the record is verifiable by parties outside the verifier’s control. Standard implementation is a public chain (blockchain, transparency log, or equivalent infrastructure) that the verifier does not control and cannot quietly alter.
What to look for: A vendor that can show you the public anchor for any given decision, and that anyone, including you, can independently verify the anchor without going through the vendor.

Refusals are logged automatically, not at operator discretion. The log is regularly reviewed and queryable. Over time, the refusal pattern becomes a discriminating signal that anyone can examine, and that signal cannot be quietly curated by the vendor.
What to look for: A vendor that publishes the refusal log structure and review cadence, and that lets you audit specific refusals against the published policy.

The rules used to grade outputs are public-hash-committed for each customer. Customers can verify they are being graded against the rubric version they were sold, not a quietly updated one.
What to look for: A vendor that publishes a public hash of the active rubric version per customer, and a change log showing every rubric update with the date and the reason.

The cryptographic match between the document an end-reader sees and the certificate that attests to its provenance is verifiable without going through the verifier. A C-suite reader does not have to trust the analyst, the desk lead, or the chief of staff to forward the right version.
What to look for: A vendor whose certificate format includes a hash of the source document, and where the verification of that hash can be performed independently.

Certificates issued before any future acquisition, merger, or change of control remain verifiable against the public chain. New certificates issued after a change of control carry a different signature visible in the chain. Customers can detect a regime change without the verifier having to disclose one.
What to look for: A vendor whose public chain entries include a stable issuer identity that cannot be silently replaced. If the issuer key changes, the change is visible in the public record.
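Several of these commitments reduce to the same reader-side check: recompute a hash locally and compare it against a committed value. The sketch below assumes a hypothetical certificate layout with document_sha256 and rubric_sha256 fields; no particular vendor’s certificate format is implied.

```python
# Minimal sketch of reader-side verification: the reader recomputes hashes of the
# document on screen and the rubric it was graded against, then compares them to
# the values the certificate attests to and to the publicly committed rubric hash.
# The certificate field names are illustrative assumptions, not a real format.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def reader_verifies(document: bytes, rubric: bytes,
                    certificate: dict, published_rubric_hash: str) -> bool:
    """Return True only if the document and rubric match what the certificate attests to.

    Nothing here requires trusting the verifier, the analyst, or whoever forwarded
    the file: the reader performs the hash comparisons locally.
    """
    return (
        sha256_hex(document) == certificate["document_sha256"]      # the file on screen
        and sha256_hex(rubric) == certificate["rubric_sha256"]      # the rules applied
        and certificate["rubric_sha256"] == published_rubric_hash   # the version sold
    )
```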

Why the posture is more durable than methodology

A methodology-based verification claim is contestable. Domain experts can challenge a methodology. They can argue about which steps are sufficient, which sources are authoritative, which review cadence is appropriate. The argument never resolves, and the verifier’s claim ultimately reduces to “trust our expertise.” A Zero Trust posture is not contestable in the same way. A cryptographic hash either matches or it does not. A multi-provider verdict either records dissent or it does not. A refusal either appears in the public log or it does not. The doctrine produces checks that are mathematical, not interpretive. Domain experts can challenge a methodology. They cannot challenge a hash.

That durability has consequences across every audience the verification system serves. For customers, the posture answers the question “why should I trust your verdict” with “you should not have to. Here is the verification you can run yourself.” For regulators, the posture answers the question “how do we audit verifiers at scale” with “you do not have to audit the verifier. You audit the math the verifier published.” For investors, the posture defines a moat that is harder to erode than methodology-based moats. A methodology can be quietly softened. A cryptographic commitment cannot be walked back without breaking the chain.

For an acquirer, the posture is a constraint. Certificates issued before the acquisition still validate against the public chain. New ones carry a different signature. The doctrine is, in a meaningful sense, a constitutional posture rather than a corporate policy. It cannot be repealed without the repeal being visible.

Where this goes next

The Doctrine names the posture. The Buyer’s Checklist translates the posture into the specific questions you should ask AI vendors and the specific commitments you should demand before signing contracts. That is the next page if you want the action layer. If you want to see how the doctrine plays out inside your own organization, jump to Lane Discipline, which covers how to separate decision-grade outputs from volume-grade outputs in practice.