# Aira Gateway
Route any LLM call through Aira with one URL change. Every call authorized, every response signed. OpenAI, Anthropic, and any OpenAI-compatible provider.
## What is the Gateway?
The Aira Gateway is a transparent proxy that sits between your application and any LLM provider. You change two lines of code -- the base URL and one extra header -- and every LLM call is automatically authorized, content-scanned, and notarized with an Ed25519-signed receipt.
No SDK wrapping. No decorator. Your existing OpenAI or Anthropic code keeps working exactly as before; Aira handles the compliance layer in the network path.
| Without Gateway | With Gateway |
|---|---|
| App calls api.openai.com directly | App calls api.airaproof.com/gateway/openai/v1 |
| No audit trail | Every call gets a signed receipt |
| No policy enforcement | Policies checked before forwarding |
| No content scanning | Prompt scanned for PII, credentials, prompt injection |
## Quick Start

### OpenAI (2-line change)
**Python**

```python
import openai
from aira.gateway import gateway_openai_kwargs

client = openai.OpenAI(
    api_key="sk-...",  # your OpenAI key
    **gateway_openai_kwargs(aira_api_key="aira_live_..."),
)

# Use the client exactly as before
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

# Every response includes the receipt UUID in its headers; read it
# through the SDK's raw-response interface
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(raw.headers.get("x-aira-action-uuid"))
```

**TypeScript**

```typescript
import OpenAI from "openai";
import { gatewayOpenAIConfig } from "aira-sdk/gateway";

const client = new OpenAI({
  apiKey: "sk-...", // your OpenAI key
  ...gatewayOpenAIConfig({ airaApiKey: "aira_live_..." }),
});

// Use the client exactly as before
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
```

**cURL**

```bash
curl -X POST https://api.airaproof.com/gateway/openai/v1/chat/completions \
  -H "Authorization: Bearer sk-..." \
  -H "X-Aira-Api-Key: aira_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

### Anthropic (2-line change)
**Python**

```python
import anthropic
from aira.gateway import gateway_anthropic_kwargs

client = anthropic.Anthropic(
    api_key="sk-ant-...",  # your Anthropic key
    **gateway_anthropic_kwargs(aira_api_key="aira_live_..."),
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
```

**TypeScript**

```typescript
import Anthropic from "@anthropic-ai/sdk";
import { gatewayAnthropicConfig } from "aira-sdk/gateway";

const client = new Anthropic({
  apiKey: "sk-ant-...", // your Anthropic key
  ...gatewayAnthropicConfig({ airaApiKey: "aira_live_..." }),
});

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
});
```

**cURL**

```bash
curl -X POST https://api.airaproof.com/gateway/anthropic/v1/messages \
  -H "x-api-key: sk-ant-..." \
  -H "X-Aira-Api-Key: aira_live_..." \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

## Authentication Model
Every gateway request carries two credentials:
| Header | Purpose | Example |
|---|---|---|
| Authorization: Bearer sk-... | Forwarded to the LLM provider (your key) | OpenAI API key |
| x-api-key: sk-ant-... | Forwarded to the LLM provider (Anthropic format) | Anthropic API key |
| X-Aira-Api-Key: aira_live_... | Authenticates with Aira | Your Aira API key |
Aira never stores your LLM provider key. The Authorization (or x-api-key for Anthropic) header is forwarded to the upstream provider untouched. The X-Aira-Api-Key header is consumed by Aira and not forwarded.
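The split between forwarded and consumed headers can be sketched as a plain dict. `build_gateway_headers` below is a hypothetical helper written for illustration; it is not part of the Aira SDK:

```python
def build_gateway_headers(provider_key: str, aira_key: str,
                          provider: str = "openai") -> dict:
    """Return the provider-auth header plus the Aira header.

    The provider key travels untouched to the upstream; the Aira key
    is consumed by the gateway and never forwarded.
    """
    if provider == "anthropic":
        headers = {"x-api-key": provider_key}  # forwarded upstream (Anthropic format)
    else:
        headers = {"Authorization": f"Bearer {provider_key}"}  # forwarded upstream
    headers["X-Aira-Api-Key"] = aira_key  # consumed by Aira, not forwarded
    return headers

print(build_gateway_headers("sk-...", "aira_live_..."))
```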
## What Happens to Each Call
Every request that passes through the gateway follows a four-step pipeline:
```
Client → [Authorize] → [Scan] → [Forward] → [Notarize] → Client
```

1. **Authorize** -- Aira creates an action record and evaluates your org's policies. If a policy denies the call, Aira returns an error in the provider's native format without contacting the upstream.
2. **Scan** -- The prompt content is scanned for PII, credentials, and prompt injection using your org's configured content-scan libraries. Scan results are recorded on the action.
3. **Forward** -- The original request body is forwarded verbatim to the upstream LLM provider. Streaming is supported end-to-end.
4. **Notarize** -- After the response completes, Aira hashes the response, records latency and status, and mints an Ed25519-signed receipt. The receipt is linked to the action UUID.
## Response Headers
Every gateway response includes:
| Header | Description |
|---|---|
X-Aira-Action-Uuid | The UUID of the Aira action created for this call. Use it to look up the receipt. |
To retrieve the signed receipt for any gateway call:
```bash
curl "https://api.airaproof.com/api/v1/receipts?action_id=<action-uuid>" \
  -H "Authorization: Bearer <your-aira-token>"
```

## Policy Denials
When a policy blocks a request, Aira returns an error in the provider's native format so your SDK error handling works unchanged.
Policy denials also return a `receipt_uuid` in the error response. The denial is cryptographically verifiable -- call `GET /api/v1/verify/action/{action_uuid}` to confirm the signed denial receipt independently.
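Both provider formats shown below nest the receipt under the `error` key, so one extractor covers both. A minimal sketch; `extract_receipt_uuid` and `verify_url` are illustrative helpers, not SDK functions:

```python
def extract_receipt_uuid(body: dict):
    """Pull receipt_uuid out of an OpenAI- or Anthropic-format denial body."""
    return body.get("error", {}).get("receipt_uuid")

def verify_url(base: str, action_uuid: str) -> str:
    """Build the URL for independently verifying a signed denial receipt."""
    return f"{base}/api/v1/verify/action/{action_uuid}"

openai_denial = {
    "error": {
        "message": "Request denied by Aira policy",
        "type": "aira_policy_denied",
        "code": "policy_denied",
        "receipt_uuid": "rcpt_01JAC...",
    }
}
print(extract_receipt_uuid(openai_denial))  # rcpt_01JAC...
```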
### OpenAI format (403)

```json
{
  "error": {
    "message": "Request denied by Aira policy",
    "type": "aira_policy_denied",
    "code": "policy_denied",
    "receipt_uuid": "rcpt_01JAC..."
  }
}
```

### Anthropic format (403)
```json
{
  "type": "error",
  "error": {
    "type": "aira_policy_denied",
    "message": "Request denied by Aira policy",
    "receipt_uuid": "rcpt_01JAC..."
  }
}
```

## Pending Approval (429)
When human approval is required, the gateway returns a 429 with a Retry-After: 30 header. The response body includes the action UUID so you can poll for approval or handle it in the dashboard.
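One way to poll is to honor the `Retry-After` header and re-issue the same request until it is no longer held. A sketch under that assumption; `retry_delay` is an illustrative helper, and the loop is pseudocode around a stand-in for your HTTP call:

```python
def retry_delay(headers: dict, default: float = 30.0) -> float:
    """Seconds to wait before retrying, taken from Retry-After if present."""
    try:
        return float(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default

# Sketch of a polling loop (send_request is a stand-in for your HTTP call):
# import time
# while True:
#     status, headers, body = send_request()
#     if status != 429:
#         break
#     time.sleep(retry_delay(headers))

print(retry_delay({"Retry-After": "30"}))  # 30.0
```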
OpenAI format:
```json
{
  "error": {
    "message": "Request held for human approval. Check the Aira dashboard.",
    "type": "aira_pending_approval",
    "code": "pending_approval",
    "aira_action_uuid": "550e8400-e29b-41d4-a716-446655440000"
  }
}
```

## Custom Upstream URL
By default, the gateway routes to:
- OpenAI: `https://api.openai.com`
- Anthropic: `https://api.anthropic.com`
To route to a different OpenAI-compatible provider (DeepSeek, Mistral, Together, Fireworks, or your own vLLM instance), set the X-Aira-Upstream-Url header:
```bash
curl -X POST https://api.airaproof.com/gateway/openai/v1/chat/completions \
  -H "Authorization: Bearer sk-..." \
  -H "X-Aira-Api-Key: aira_live_..." \
  -H "X-Aira-Upstream-Url: https://api.deepseek.com" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

The gateway appends the correct path (`/v1/chat/completions` or `/v1/messages`) to whatever base URL you provide.
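The path-appending behavior can be mirrored in a few lines. `upstream_request_url` is an illustrative helper (not an SDK function) showing what the gateway does with the base URL from `X-Aira-Upstream-Url`:

```python
def upstream_request_url(base_url: str, provider: str = "openai") -> str:
    """Append the provider-specific path to a caller-supplied base URL,
    mirroring the gateway's handling of X-Aira-Upstream-Url."""
    path = "/v1/messages" if provider == "anthropic" else "/v1/chat/completions"
    return base_url.rstrip("/") + path

print(upstream_request_url("https://api.deepseek.com"))
# https://api.deepseek.com/v1/chat/completions
```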
## Streaming
The gateway supports streaming end-to-end. Set "stream": true in your request body as usual. The SSE chunks are forwarded to your client in real time. Notarization happens after the final chunk is delivered, so it never adds latency to the stream.
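Because notarization happens after the final chunk, the client-side streaming loop is unchanged. A sketch of accumulating OpenAI-style content deltas into the full text; the plain dicts below are stand-ins for SDK chunk objects:

```python
def assemble_stream(chunks) -> str:
    """Concatenate the content deltas from OpenAI-style stream chunks."""
    parts = []
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                parts.append(delta["content"])
    return "".join(parts)

fake_chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
    {"choices": [{"delta": {}}]},  # final chunk; notarization happens after this
]
print(assemble_stream(fake_chunks))  # Hello
```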
## Self-Hosted Deployments
The gateway works identically on self-hosted deployments. Replace api.airaproof.com with your own API domain:
**Python**

```python
import openai
from aira.gateway import gateway_openai_kwargs

kwargs = gateway_openai_kwargs(
    aira_api_key="aira_live_...",
    gateway_url="https://api.yourdomain.com",
)
client = openai.OpenAI(api_key="sk-...", **kwargs)
```

**TypeScript**

```typescript
import OpenAI from "openai";
import { gatewayOpenAIConfig } from "aira-sdk/gateway";

const client = new OpenAI({
  apiKey: "sk-...",
  ...gatewayOpenAIConfig({
    airaApiKey: "aira_live_...",
    gatewayUrl: "https://api.yourdomain.com",
  }),
});
```

The `ENABLE_GATEWAY` feature flag is on by default. If the gateway is disabled on your instance, requests to `/gateway/*` return 404.
## SDK Helper Reference

### Python
```python
from aira.gateway import gateway_openai_kwargs, gateway_anthropic_kwargs

# Returns: {"base_url": "...", "default_headers": {"X-Aira-Api-Key": "..."}}
gateway_openai_kwargs(aira_api_key="aira_live_...", gateway_url="https://...")

# Returns: {"base_url": "...", "default_headers": {"X-Aira-Api-Key": "..."}}
gateway_anthropic_kwargs(aira_api_key="aira_live_...", gateway_url="https://...")
```

### TypeScript
```typescript
import { gatewayOpenAIConfig, gatewayAnthropicConfig } from "aira-sdk/gateway";

// Returns: { baseURL: "...", defaultHeaders: { "X-Aira-Api-Key": "..." } }
gatewayOpenAIConfig({ airaApiKey: "aira_live_...", gatewayUrl: "https://..." });

// Returns: { baseURL: "...", defaultHeaders: { "X-Aira-Api-Key": "..." } }
gatewayAnthropicConfig({ airaApiKey: "aira_live_...", gatewayUrl: "https://..." });
```