EU AI Act mapping

Article-by-article mapping from the EU AI Act (Regulation (EU) 2024/1689) to the Aira capability that satisfies each requirement. Includes the specific code, config, or API call for each article.

For high-risk AI systems, the EU AI Act imposes binding obligations from August 2, 2026. This page maps each binding article relevant to deployers (Chapter III, Sections 2 and 3) to a specific Aira capability you can point an auditor at. Every row is a runnable technical control, not a Confluence doc.

How to read this page

The EU AI Act divides obligations by actor:

  • Providers — put AI systems on the market (model vendors, framework builders)
  • Deployers — use those systems under their own authority in the EU (almost everyone building agents)

This page focuses on the deployer obligations, because that's who Aira customers typically are. If you're a provider, the Art. 9, 10, 11, 15, and 17 rows also apply to the systems you ship.

For each article we show:

  1. What the text actually requires (in plain English, not legalese)
  2. The Aira capability that satisfies it
  3. A concrete technical control — code, config, or API call
  4. What you'd hand an auditor who asks "prove it"

Mapping matrix

Article | Requirement | Aira capability
Art. 9 | Risk management system | Policy engine + drift detection + audit log
Art. 10 | Data and data governance | Content scan, details hashing, replay context
Art. 11 | Technical documentation | Compliance bundles with framework metadata
Art. 12 | Record-keeping / automatic event logging | Ed25519-signed action receipts, eu_ai_act_art12 compliance bundles
Art. 13 | Transparency & information to deployers | W3C DID + public agent profiles + reputation
Art. 14 | Human oversight | Human-in-the-loop approval primitive + consensus policy mode
Art. 15 | Accuracy, robustness, cybersecurity | Content scan, rate limiting, Ed25519 crypto, RFC 3161 timestamps
Art. 17 | Quality management system | Agent versioning, drift baselines, compliance snapshots
Art. 26 | Deployer obligations — monitoring + incident reporting | Webhooks, drift alerts, audit trail, legal hold
Art. 27 | Fundamental rights impact assessment (FRIA) | Action type labels, agent capabilities, evidence packages

Article-by-article detail

Article 9 — Risk management system

Requirement. Establish, implement, document and maintain a risk management system throughout the lifecycle of high-risk AI systems. Continuously identify, analyse, evaluate and mitigate foreseeable risks.

How Aira satisfies it. Three layered controls:

  1. Policy engine — stops known-risk actions before they execute (rules for deterministic limits, AI/consensus for judgment calls, content scan for PII/credentials).
  2. Drift detection — catches novel risk patterns that weren't in your threat model at design time. Per-agent behavioral baselines score every window against the expected distribution and alert when something new appears.
  3. Audit log — every risk evaluation is immutable and queryable so your risk-management team can review what the system actually did, not what the spec said it would do.

Concrete control:

# Rule: all wire transfers > €10K require human approval
aira.create_policy(
    name="wire-transfer-high-value",
    mode="rules",
    conditions=[
        {"field": "action_type", "op": "eq", "value": "wire_transfer"},
        {"field": "amount_eur", "op": "gte", "value": 10000},
    ],
    decision="require_approval",
)

# Baseline: establish the expected behavior distribution
aira.seed_synthetic_baseline(
    agent_id="payments-agent",
    expected_distribution={"wire_transfer": 0.7, "refund": 0.25, "query": 0.05},
    expected_actions_per_day=120,
)

# Monitor: periodic drift check
alert = aira.run_drift_check("payments-agent", lookback_hours=24)
if alert and alert.severity == "critical":
    # Your incident response flow
    ...

Auditor-ready evidence: compliance bundle for any date range, listing every policy evaluation + every drift alert for the period.


Article 10 — Data and data governance

Requirement. Training, validation, and test datasets must be subject to appropriate data governance practices. Examine datasets for biases likely to affect the health and safety of persons or fundamental rights.

How Aira satisfies it. Aira doesn't train the models — your LLM provider does. But Aira does give you:

  1. Content scan — 30+ curated regex patterns across pii, credentials, and prompt_injection libraries. Runs in-process on every authorize() call. Critical hits (SSN, credit card, API key) deny the action; warnings require human approval.
  2. Details hashing — every action's input details are SHA-256 hashed into action_details_hash before storage, so the audit trail can prove which input produced which decision without storing the raw PII in the audit column (a verification sketch follows this list).
  3. Replay context — every receipt commits system_prompt_hash, tool_inputs_hash, and model_params so you can reproduce any decision six months later for a fairness review without trusting memory.
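
For the details hashing in item 2, a minimal sketch of recomputing the hash on your side. It assumes action_details_hash is the hex SHA-256 of the canonically serialized details JSON (sorted keys, compact separators); the canonicalization and the aira.get_action fetch helper are assumptions to confirm against your receipt schema.

import hashlib
import json

def expected_details_hash(details: dict) -> str:
    # Assumption: canonical JSON form with sorted keys and compact separators
    canonical = json.dumps(details, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

original_details = {"amount_eur": 2500, "beneficiary": "vendor-x"}  # what the agent submitted
receipt = aira.get_action("your-action-uuid")  # hypothetical fetch helper; use your own lookup
assert receipt.action_details_hash == expected_details_hash(original_details)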

Concrete control:

# Enable content scan on every action for this org
aira.create_policy(
    name="pii-and-credentials-scan",
    mode="content_scan",
    scan_config={
        "libraries": ["pii", "credentials", "prompt_injection"],
        "custom_patterns": [],
    },
)

Auditor-ready evidence: the policy_evaluations row for any denied action shows exactly which pattern fired (us_ssn, github_pat, etc.) — with the matched content already redacted to [REDACTED] so you can show the auditor without leaking the original PII.


Article 11 — Technical documentation

Requirement. Technical documentation shall be drawn up before a high-risk AI system is placed on the market and kept up to date. Must include the general characteristics, capabilities and limitations of the system.

How Aira satisfies it. Compliance bundles with the eu_ai_act_art12 framework mapping produce a self-contained JSON document that covers:

  • Every action the system took in the date range
  • The policies that evaluated each action
  • The model versions used
  • The agents involved (with their W3C DIDs)
  • A Merkle root committing the whole set

Concrete control:

bundle = aira.create_compliance_bundle(
    framework="eu_ai_act_art12",
    period_start="2026-08-01T00:00:00Z",
    period_end="2026-08-31T23:59:59Z",
    title="August 2026 — payments-agent technical documentation",
)
# Download the self-contained JSON
doc = aira.export_compliance_bundle(bundle.id)
# Hand this file to an auditor. It includes every receipt's signed payload,
# the Merkle root, the JWKS URL, and verification instructions in English.

Auditor-ready evidence: the exported bundle. One file, offline-verifiable, no Aira dependency.


Article 12 — Record-keeping

Requirement. High-risk AI systems must technically allow for the automatic recording of events (logs) over their lifetime. Logs must record events relevant for identifying situations that may result in the system presenting a risk, and deployers must retain the logs under their control for at least six months (Art. 26(6)).

How Aira satisfies it. This is Aira's core capability.

  1. Every authorize() call persists a signed PolicyEvaluation row
  2. Every notarize() call persists an ActionReceipt with an Ed25519 signature + replay context
  3. Signatures are committed to a Merkle tree via periodic settlements
  4. Each settlement includes an RFC 3161 trusted timestamp so nothing can be backdated
  5. Every receipt is publicly verifiable without an Aira account

Concrete control: using Aira at all satisfies this one. Every agent action is logged automatically by the two-step flow. See the Actions API reference.
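
A minimal sketch of that two-step flow. The method names follow this page's terminology, but the exact signatures and response fields are assumptions; check the Actions API reference for the real ones.

# Step 1: authorize, which persists a signed PolicyEvaluation row before anything runs.
auth = aira.authorize(
    agent_id="payments-agent",
    action_type="wire_transfer",
    details={"amount_eur": 2500, "beneficiary": "vendor-x"},
)

if auth.decision == "allow":  # field names illustrative
    # ... execute the actual wire transfer here ...
    # Step 2: notarize, which persists the Ed25519-signed ActionReceipt with replay context.
    aira.notarize(auth.action_id, outcome="success")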

Auditor-ready evidence:

  • Individual receipt: GET https://api.airaproof.com/api/v1/verify/action/{action_uuid} — returns the signed payload, public key, and algorithm. No auth. A verification sketch follows this list.
  • Batch evidence: compliance bundle with framework="eu_ai_act_art12".
  • Retention: receipts are immutable; even Aira staff cannot mutate a signed row without breaking the chain. Store your key ids + the public JWKS snapshot and you have a lifetime audit trail.
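
A sketch of the offline check an auditor could run against the individual-receipt endpoint, using the requests and cryptography packages. It assumes the response carries base64-encoded public_key and signature fields and the signed payload as a string; field names and encodings are assumptions to confirm against the actual response.

import base64
import requests
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

resp = requests.get(
    "https://api.airaproof.com/api/v1/verify/action/your-action-uuid"  # no auth required
).json()

# Field names and encodings below are assumptions; check the actual response schema.
public_key = Ed25519PublicKey.from_public_bytes(base64.b64decode(resp["public_key"]))
try:
    public_key.verify(
        base64.b64decode(resp["signature"]),
        resp["signed_payload"].encode("utf-8"),
    )
    print("receipt signature valid")
except InvalidSignature:
    print("receipt signature INVALID")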

Article 13 — Transparency and provision of information

Requirement. High-risk AI systems shall be designed and developed so that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately.

How Aira satisfies it. Three layers:

  1. W3C DID per agent — every agent has a verifiable identity at did:web:airaproof.com:agents:<slug>. Anyone can resolve the DID and see the agent's capabilities, current model, and reputation.
  2. Public agent profiles — GET /api/v1/agents/public/<slug> returns name, capabilities, reputation, and registration date without auth (sketched after this list).
  3. Ask Aira — natural-language interface over your org's policy config, audit log, drift history, and compliance snapshots. Your compliance team can ask "why did payments-agent deny the wire to vendor-x last Tuesday?" and get the matching action id plus signed evaluation row as the answer.
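
A sketch of pulling the public profile from item 2. The endpoint is the one listed above; the response key names are assumptions based on the fields it is said to return, and the slug is illustrative.

import requests

# No auth required; any third party can resolve a registered agent's public profile.
profile = requests.get(
    "https://api.airaproof.com/api/v1/agents/public/payments-agent"
).json()
print(profile["name"], profile["capabilities"], profile["reputation"])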

Auditor-ready evidence: public agent DID document + the conversation transcripts from Ask Aira.


Article 14 — Human oversight

Requirement. High-risk AI systems must be designed so natural persons can effectively oversee them. Deployers must be able to decide not to use the system, override or reverse its output, or intervene in its operation.

How Aira satisfies it. Two orthogonal controls:

  1. Human approval primitive. Any policy can set decision: require_approval. When the policy fires, the action transitions to pending_approval, the agent is held, and a secure single-use approval link is sent to your configured approvers. An approver clicks through a public page, reviews the full action context, and approves or denies. On denial, action.status becomes denied_by_human, the side effect never runs, and the signed evaluation row persists (runtime handling is sketched below).
  2. Consensus policy mode. Multiple models vote on a decision. Disagreement between models automatically triggers human review.

Concrete control:

aira.create_policy(
    name="loan-approval-consensus",
    mode="consensus",
    ai_prompt="Approve this loan application based on standard criteria",
    ai_models=["claude-sonnet-4-6", "gpt-4o", "gemini-2-pro"],
    decision="require_approval",  # disagreement holds for human review
    approvers=["risk-officer@company.com", "head-of-credit@company.com"],
)
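
At runtime, the deployer side also has to respect the hold. A minimal sketch of handling the require_approval outcome, using the same illustrative method and field names as the Art. 12 sketch:

auth = aira.authorize(
    agent_id="loan-agent",  # illustrative agent
    action_type="loan_approval",
    details={"applicant_id": "A-1042", "amount_eur": 15000},
)

if auth.decision == "require_approval":  # field names illustrative; see the Actions API
    # The action is now pending_approval and the configured approvers received a
    # single-use link. Do not run the side effect until a human decides.
    print(f"action {auth.action_id} held for human review")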

Auditor-ready evidence: the human_authorizations table row for each approved/denied action shows the approver's email, the decision, and the timestamp — all part of the compliance bundle.


Article 15 — Accuracy, robustness, and cybersecurity

Requirement. High-risk AI systems shall achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle.

How Aira satisfies it. This is more of a defense-in-depth story than a single capability.

Requirement | Aira control
Accuracy — document levels, metrics | Consensus policy mode logs every model vote so you can compute agreement metrics per model per action type
Robustness — resilience to errors, faults, inconsistencies | Every policy engine call is wrapped in try/except with structured error codes; actions that fail transition to failed state with outcome="failed" and an audit row
Cybersecurity — resilience to attempts by unauthorized parties to alter use/performance/outputs | Ed25519 signatures prevent tampering; per-org rate limits prevent flooding; content scan blocks prompt injection attempts before they reach the agent; RFC 3161 timestamps prevent backdating

Auditor-ready evidence:

  • Accuracy metrics — consensus policy evaluations over a date range (pull from the compliance bundle; a sketch follows this list)
  • Robustness — failed action rate from the audit log
  • Cybersecurity — content scan hit counts over time + any POLICY_DENIED outcomes with reason prompt_injection
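
A sketch of computing a per-model agreement metric from an exported bundle. The export's JSON layout assumed here (an actions list whose policy evaluations carry a model_votes mapping) is an assumption; adapt the key names to the real schema.

from collections import defaultdict

doc = aira.export_compliance_bundle(bundle.id)  # bundle created as in the Art. 11 example

# Key names below are assumptions about the export layout; adjust to the real schema.
agree, total = defaultdict(int), defaultdict(int)
for action in doc["actions"]:
    for ev in action.get("policy_evaluations", []):
        votes = ev.get("model_votes")  # e.g. {"claude-sonnet-4-6": "approve", ...}
        if not votes:
            continue
        majority = max(set(votes.values()), key=list(votes.values()).count)
        for model, vote in votes.items():
            total[model] += 1
            agree[model] += vote == majority

for model in total:
    print(model, f"{agree[model] / total[model]:.1%} agreement with the majority vote")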

Article 17 — Quality management system

Requirement. Providers of high-risk AI systems must have a quality management system that covers, among other things, techniques for design, design control, testing, validation, data management, risk management, post-market monitoring, reporting of serious incidents, and record keeping.

How Aira satisfies it. Operational infrastructure: agent versioning, compliance snapshots, and drift baselines. Every material change to an agent is versioned and the old version is retained for audit; every deployment can snapshot the exact policy config + signing keys + compliance state at that moment.
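
A sketch of wiring those primitives into a release step. The seed_synthetic_baseline and create_compliance_bundle calls are the ones shown in the Art. 9 and Art. 11 examples; the surrounding release hook and its parameters are your own code, shown here only to illustrate the pattern.

# In your deployment pipeline, after promoting a new agent version:
def on_agent_release(agent_id: str, version: str, period_start: str, period_end: str):
    # Re-establish the expected behavior distribution for the new version
    aira.seed_synthetic_baseline(
        agent_id=agent_id,
        expected_distribution={"wire_transfer": 0.7, "refund": 0.25, "query": 0.05},
        expected_actions_per_day=120,
    )
    # Snapshot the compliance state covering the outgoing version's window
    bundle = aira.create_compliance_bundle(
        framework="eu_ai_act_art12",
        period_start=period_start,
        period_end=period_end,
        title=f"{agent_id} {version}: pre-release compliance snapshot",
    )
    return aira.export_compliance_bundle(bundle.id)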


Article 26 — Deployer obligations

Requirement. Deployers of high-risk AI systems shall use them in accordance with instructions, assign human oversight, monitor their operation, suspend use when serious risks are identified, and inform the provider + market surveillance authorities of serious incidents.

How Aira satisfies it. The audit trail + webhook layer is the deployer's incident reporting fabric:

  • Monitoring — drift checks run on a schedule, alert on severity: critical
  • Suspension — policy engine can flip any action type to require_approval in one API call, effectively pausing use until a human reviews (sketched after the webhook example below)
  • Incident reporting — webhook events (agent.drift_detected, action.policy_denied, case.requires_human_review) give you a real-time feed to your incident response tooling
  • Legal hold — the legal-hold primitive flags an action as non-repudiable and pins all its audit evidence

Concrete control:

# Register a webhook for incident events
aira.create_webhook(
    url="https://incidents.example.com/aira",
    events=[
        "agent.drift_detected",
        "action.policy_denied",
        "case.requires_human_review",
    ],
)
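
The suspension path uses the same policy primitive as Art. 9: one call holds an action type for human review until you remove the rule. A minimal sketch; the agent_id condition field is an assumption, since the Art. 9 example only shows action_type and amount_eur conditions.

# Suspend: hold every wire_transfer from payments-agent for human review
aira.create_policy(
    name="suspend-payments-agent-wire-transfers",
    mode="rules",
    conditions=[
        {"field": "agent_id", "op": "eq", "value": "payments-agent"},  # field name assumed
        {"field": "action_type", "op": "eq", "value": "wire_transfer"},
    ],
    decision="require_approval",
)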

Article 27 — Fundamental rights impact assessment

Requirement. Before deploying a high-risk AI system, deployers that are public bodies or private entities providing public services must perform a Fundamental Rights Impact Assessment (FRIA) covering the categories of natural persons likely to be affected, the specific risks of harm, and the human oversight measures.

How Aira satisfies it. FRIAs are a paperwork requirement, but Aira gives you the raw material:

  • Action type labels — every authorize() call tags the action with a type, so you can enumerate the categories of operations the system performs in a period (a sketch follows this list)
  • Agent capabilities — registered at agent creation time, listed on the public DID document
  • Evidence packages — tie a specific FRIA document to the compliance bundle covering the period it assessed, so an auditor can trace from the FRIA's claimed scope back to the actions that fell inside it
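
A sketch of enumerating those action-type categories for a FRIA period from an exported bundle. The actions and action_type key names are assumptions about the export layout.

from collections import Counter

doc = aira.export_compliance_bundle(bundle.id)  # bundle covering the FRIA period

# Key names are assumptions about the export layout; adjust to the real schema.
by_type = Counter(a["action_type"] for a in doc["actions"])
for action_type, count in by_type.most_common():
    print(f"{action_type}: {count}")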

Frequently asked

Does using Aira make my system "low-risk"?

No. Aira is a technical control, not a risk classification. Whether your AI system is high-risk depends on its use case (Annex III of the Regulation), not on what infrastructure you run it on. Aira helps you meet the obligations for high-risk systems — it doesn't reclassify them.

If I self-host, do I still get the evidence property?

Yes. Self-hosted deployments run the same code + same crypto primitives. The public verification URL becomes your own domain (e.g., api.yourcompany.com/api/v1/verify/action/{id}) and the JWKS lives at your domain too. An auditor can still verify offline — they just need your JWKS URL instead of ours.

What about providers, not deployers?

Providers have the heavier burden (Art. 9, 10, 11, 15, 17) plus a conformity assessment. Aira's compliance bundles were designed with provider technical documentation in mind and map directly to the Annex IV template. Reach out to customers@softure-ug.de if you're classified as a provider and want help assembling the full conformity file.

Can I rely on this mapping for a formal audit?

This page is a technical reference, not legal advice. You should validate it with your own counsel and your notified body. We welcome corrections — open an issue at github.com/aira-proof/docs if you think we've over-claimed on any article.
