Introduction
Aira makes AI decisions trustworthy, verifiable, and legally defensible — one API call at a time.
What is Aira?
Aira is an API that sits between your AI application and its decisions. It fans out every high-stakes case to multiple AI models, scores their agreement, and returns a cryptographically signed receipt proving what each model decided, when, and whether they agreed.
```bash
curl -X POST https://api.airaproof.com/api/v1/cases \
  -H "Authorization: Bearer aira_live_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "details": "Should we approve this loan application?",
    "models": ["gpt-5.4", "claude-sonnet-4-6", "gemini-3.1-flash-lite"]
  }'
```

You get back:
- Each model's independent decision (APPROVE / DENY / REVIEW)
- A consensus score with disagreement detection
- A cryptographic receipt with Ed25519 signature and RFC 3161 trusted timestamp
- Automatic human review flagging when models disagree
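A response might look like the following. This is an illustrative shape only — the field names (`case_id`, `responses`, `consensus_score`, `human_review_required`, `receipt`) are assumptions based on the list above, not the documented schema:

```json
{
  "case_id": "case_abc123",
  "responses": [
    {"model": "gpt-5.4", "decision": "APPROVE", "confidence": 0.91},
    {"model": "claude-sonnet-4-6", "decision": "APPROVE", "confidence": 0.88},
    {"model": "gemini-3.1-flash-lite", "decision": "REVIEW", "confidence": 0.55}
  ],
  "consensus_score": 0.67,
  "human_review_required": false,
  "receipt": {
    "hash": "sha256:…",
    "signature": "ed25519:…",
    "timestamp": "2026-02-14T09:30:00Z"
  }
}
```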
Why Aira?
AI is making decisions that affect people's lives — credit approvals, medical diagnoses, legal analysis, insurance underwriting. Regulators are now demanding proof of what AI decided and why.
| Regulation | Deadline | Penalty |
|---|---|---|
| EU AI Act (Articles 12, 13, 14) | August 2, 2026 | Up to 7% of global revenue |
| US Federal Reserve SR 11-7 | Active now | Supervisory action |
| CFPB Adverse Action (ECOA) | Active now | $45M+ fines (Goldman Sachs precedent) |
| Colorado AI Act | June 30, 2026 | State enforcement |
Aira gives you the compliance artifacts these laws require — out of the box.
Core Concepts
Case
A case is a single execution of your request across multiple AI models. You send the `details` payload once; Aira fans it out to 2-10 models in parallel, collects their independent responses, and scores their agreement.
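The fan-out pattern Aira performs server-side can be sketched like this. The `ask_model` function is a stand-in with canned answers, not a real SDK call; the point is that every model receives the same details and responds independently, in parallel:

```python
# Illustrative sketch of the fan-out step; ask_model is a stand-in,
# not a real model API call.
from concurrent.futures import ThreadPoolExecutor

def ask_model(model: str, details: str) -> dict:
    # Stand-in for a real model call; returns a structured decision.
    canned = {
        "gpt-5.4": "APPROVE",
        "claude-sonnet-4-6": "APPROVE",
        "gemini-3.1-flash-lite": "REVIEW",
    }
    return {"model": model, "decision": canned[model]}

def run_case(details: str, models: list[str]) -> list[dict]:
    # Fan the same details out to every model in parallel and
    # collect their independent responses in submission order.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [pool.submit(ask_model, m, details) for m in models]
        return [f.result() for f in futures]

responses = run_case(
    "Should we approve this loan application?",
    ["gpt-5.4", "claude-sonnet-4-6", "gemini-3.1-flash-lite"],
)
print([r["decision"] for r in responses])  # → ['APPROVE', 'APPROVE', 'REVIEW']
```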
Consensus
Aira uses structured agreement scoring — each model returns a decision (APPROVE/DENY/REVIEW), confidence score, and key factors. Agreement is scored on these structured fields, not free-text similarity. This makes consensus deterministic and explainable.
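One simple way to score agreement on structured fields is the fraction of models that voted for the modal decision. This is a minimal sketch of that idea, not Aira's exact algorithm (which also weighs confidence scores and key factors):

```python
# Minimal sketch of structured agreement scoring (assumed logic):
# score = fraction of models voting for the most common decision.
from collections import Counter

def consensus_score(responses: list[dict]) -> float:
    decisions = [r["decision"] for r in responses]
    top_decision, top_count = Counter(decisions).most_common(1)[0]
    return top_count / len(decisions)

responses = [
    {"model": "gpt-5.4", "decision": "APPROVE", "confidence": 0.91},
    {"model": "claude-sonnet-4-6", "decision": "APPROVE", "confidence": 0.88},
    {"model": "gemini-3.1-flash-lite", "decision": "REVIEW", "confidence": 0.55},
]
print(round(consensus_score(responses), 2))  # → 0.67 (2 of 3 agree)
```

Because the score is computed on discrete decision fields rather than free-text similarity, rerunning it on the same responses always yields the same number.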
Receipt
Every case execution produces an immutable, cryptographically signed receipt. The receipt includes a SHA-256 hash of all inputs and outputs, an Ed25519 digital signature, and an RFC 3161 trusted timestamp from an independent authority. Receipts are append-only — they can never be updated or deleted.
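A client can independently re-derive the receipt's input/output hash. The sketch below assumes (hypothetically) that the hash covers the canonical JSON of inputs and outputs; the exact canonicalization Aira uses is not specified here:

```python
# Sketch of reproducing a receipt hash client-side. The canonical-JSON
# scheme (sorted keys, compact separators) is an assumption, not the
# documented Aira canonicalization.
import hashlib
import json

def receipt_hash(inputs: dict, outputs: dict) -> str:
    canonical = json.dumps(
        {"inputs": inputs, "outputs": outputs},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

h1 = receipt_hash({"details": "loan app"}, {"decision": "APPROVE"})
h2 = receipt_hash({"details": "loan app"}, {"decision": "APPROVE"})
assert h1 == h2        # deterministic: same payload, same hash
assert len(h1) == 64   # SHA-256 hex digest
```

The Ed25519 signature and RFC 3161 timestamp then attest that this exact hash existed at a specific time, which is what makes the receipt tamper-evident.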
Human Review
When model disagreement exceeds a configurable threshold (default: 0.4), Aira automatically flags the decision for human review. This satisfies EU AI Act Article 14 (human oversight) natively — the disagreement score is the oversight trigger.
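The trigger logic above reduces to a simple threshold check. In this sketch the disagreement score is taken as `1 - consensus`, which is an assumption; the default threshold of 0.4 comes from the text:

```python
# Sketch of the human-review trigger. The disagreement formula
# (1 - consensus) is assumed; the 0.4 default comes from the docs.
def needs_human_review(consensus: float, threshold: float = 0.4) -> bool:
    disagreement = 1.0 - consensus
    return disagreement > threshold

print(needs_human_review(0.67))  # 2-of-3 agreement, disagreement 0.33 → False
print(needs_human_review(0.50))  # split decision, disagreement 0.50 → True
```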