EU AI Act — Article 6 (right to explanation)
Article 6 right-to-explanation — generate a per-action explanation that walks every policy decision, approval, and receipt in human-readable form, with a signed JSON envelope the data subject can re-verify.
Article 6 (together with Article 13) of Regulation (EU) 2024/1689
requires providers of high-risk AI systems to make available, on
request, a clear and meaningful explanation of an individual
decision made by the AI system. Aira's eu_ai_act_art6 output is
that explanation — one action at a time, with every signed piece of
evidence behind it.
What's in an explanation
Aira walks the full chain for a single action:
- Action metadata — agent, action type, model, input/output hashes, instruction hash, status, created-at.
- Policy decision chain — every PolicyEvaluation that ran, in evaluation order. For each: mode (rules / AI / consensus / content-scan), decision (allow / deny / require_approval), confidence, reasoning, model votes (for consensus mode), the evaluation's own Ed25519 signature, and when it ran.
- Approval chain — every HumanAuthorization that signed off on the action: authorizer email, HMAC signature, signed-at.
- Receipt — the final signed receipt: payload hash, Ed25519 signature, signing key id, RFC 3161 timestamp presence, created-at.
- Regulation — the framework (eu_ai_act) and the articles this explanation satisfies (6, 13, 14).
- Envelope — an Ed25519 signature over the canonical JSON of everything above. See article6-envelope for how to verify.
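Assembled, an explanation response looks roughly like this. This is a hedged sketch: the field names follow the descriptions above, not a schema guarantee, and the authoritative shape is whatever the live endpoint returns.

```json
{
  "action": { "agent_id": "...", "action_type": "...", "status": "...", "created_at": "..." },
  "policy_chain": [
    { "mode": "rules", "decision": "allow", "confidence": 0.98, "reasoning": "...", "signature": "..." }
  ],
  "approval_chain": [
    { "authorizer_email": "...", "hmac_signature": "...", "signed_at": "..." }
  ],
  "receipt": { "payload_hash": "...", "signature": "...", "signing_key_id": "...", "created_at": "..." },
  "regulation": { "framework": "eu_ai_act", "articles": [6, 13, 14] },
  "_envelope": { "signature": "...", "signing_key_id": "...", "content_hash": "..." }
}
```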
Inferred lifecycle fallbacks
If an action has no linked PolicyEvaluation row (e.g., it ran
cleanly and no policy matched), the explanation still surfaces a
policy chain entry derived from the action's lifecycle:
- status == "denied_by_policy" → inferred deny
- Has a signed HumanAuthorization → inferred require_approval
- Otherwise (notarized, no deny, no gate) → inferred allow
Each derived entry is tagged evidence_quality: "inferred" so a
reader can distinguish cryptographically-signed evaluations from
lifecycle-reconstructed ones. The signed _envelope covers both kinds equally; the evidence_quality tag is what tells the reader which kind of evidence each entry carries.
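The fallback rules above amount to a small decision function. A minimal sketch, assuming hypothetical field names (`status`, `human_authorizations`) rather than the actual Aira schema:

```python
def infer_policy_decision(action: dict) -> dict:
    """Reconstruct a policy-chain entry from an action's lifecycle
    when no signed PolicyEvaluation row is linked to it."""
    if action["status"] == "denied_by_policy":
        decision = "deny"
    elif action.get("human_authorizations"):  # any signed HumanAuthorization
        decision = "require_approval"
    else:                                     # notarized, no deny, no gate
        decision = "allow"
    # Tag the entry so readers can tell it apart from signed evaluations.
    return {"decision": decision, "evidence_quality": "inferred"}

# Example: an action that was gated behind a human sign-off.
entry = infer_policy_decision({
    "status": "notarized",
    "human_authorizations": [{"authorizer_email": "cto@example.com"}],
})
```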
Fetch
```python
from aira import Aira

client = Aira(api_key="aira_live_...")
explanation = client.get_action_explanation("action-uuid")

print(explanation.action["agent_id"])
for step in explanation.policy_chain:
    print(step["decision"], step.get("policy_name"))
```

```typescript
import { Aira } from "aira-sdk";

const aira = new Aira({ apiKey: "aira_live_..." });
const explanation = await aira.getActionExplanation("action-uuid");
```

```bash
curl https://api.airaproof.com/api/v1/actions/{action_uuid}/explanation \
  -H "Authorization: Bearer $AIRA_API_KEY"
```

PDF export
The same data is available as a regulator-ready PDF:
```bash
curl https://api.airaproof.com/api/v1/actions/{action_uuid}/explanation/pdf \
  -H "Authorization: Bearer $AIRA_API_KEY" -o explanation.pdf
```

The PDF renders the envelope block at the bottom, so the paper copy carries the same signature info as the JSON form, and a regulator reading the PDF can re-verify the JSON later by entering the signing key id and content hash into the verify endpoint.
Verify the envelope
GET /api/v1/actions/{id}/explanation returns an _envelope block.
Post the whole response back to the public verify endpoint (no API
key required):
```bash
curl -X POST https://api.airaproof.com/api/v1/verify/explanation \
  -H "Content-Type: application/json" \
  -d @saved-explanation.json
```

Response:
```json
{
  "valid": true,
  "checks": {
    "key_known": true,
    "content_hash_matches": true,
    "signature_valid": true
  },
  "signing_key_id": "aira-signing-key-v1",
  "request_id": "req_..."
}
```

Full details: article6-envelope.
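The three checks in that response can also be reproduced offline if you hold the signing public key. A minimal sketch of the scheme using the `cryptography` package, demonstrated with a freshly generated keypair since the real Aira key and exact canonicalization rules are not reproduced here:

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for Aira's signing key; in practice you would load the
# published public key matching the envelope's signing_key_id.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

explanation = {"action": {"agent_id": "agent-1"}, "policy_chain": []}

# Canonical JSON (assumed scheme): stable key order, no insignificant whitespace.
canonical = json.dumps(explanation, sort_keys=True, separators=(",", ":")).encode()
content_hash = hashlib.sha256(canonical).hexdigest()
signature = private_key.sign(canonical)

# Verification: recompute the canonical bytes and check the Ed25519 signature.
# Raises cryptography.exceptions.InvalidSignature on any mismatch.
public_key.verify(signature, canonical)
```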
Data-subject workflow
The typical flow when a data subject invokes their right to an explanation:
- Compliance officer pulls the action id from the affected decision (dashboard search by input hash, agent, or time).
- GET /actions/{id}/explanation to fetch the structured data.
- GET /actions/{id}/explanation/pdf for a human-readable copy to send.
- Archive the JSON alongside the PDF. The signed envelope means the data subject (or their counsel, or a regulator) can re-verify the response months or years later without trusting your local storage.
Explanations are deterministic per action — the same action always produces the same signature — so repeat requests don't thrash the signing key or pile up redundant report rows.
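That determinism is a property of canonicalization: as long as the explanation's content is unchanged, the canonical bytes, and therefore the hash that gets signed, are identical on every request. A sketch of the property, assuming sorted-key, whitespace-free JSON as the canonical form:

```python
import hashlib
import json

def canonical_hash(doc: dict) -> str:
    """Hash the canonical JSON form: sorted keys, no insignificant whitespace."""
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

# Two dicts with the same content but different key order hash identically,
# so re-fetching the same action re-signs the same bytes.
a = {"action_id": "uuid-1", "decision": "allow"}
b = {"decision": "allow", "action_id": "uuid-1"}
assert canonical_hash(a) == canonical_hash(b)
```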
EU AI Act — Article 9 (risk management)
Article 9 risk register — how Aira classifies your actions into Annex III categories, renders the register, and persists per-agent risk observations for later queries.
Annex IV technical documentation
Generate the full Annex IV technical file — nine sections, mapped 1:1 to the EU AI Act requirements, derived from the cryptographic evidence Aira already holds.