# LangChain
Gate LangChain tool calls through Aira's policy engine. Denied tools never run; chain and LLM steps are audit-only because LangChain has no pre-chain abort hook.
Kind: gate (for tools) + audit (for chains and LLM calls) · Pre-execution gate: yes, on tools · Peer dep: `langchain-core`
## What this integration actually does
LangChain fires callbacks before and after each tool, chain, and LLM step. The `on_tool_start` hook is a genuine pre-execution boundary — if it throws, LangChain surfaces the error as a tool error and the tool body never runs.
- `on_tool_start` → `aira.authorize()`. If the policy engine denies, we raise `AiraToolDenied` and LangChain aborts the tool call. This is a real gate.
- `on_tool_end` → `aira.notarize(outcome="completed")`.
- `on_tool_error` → `aira.notarize(outcome="failed")`.
- `on_chain_end` / `on_llm_end` → audit-only `authorize` + `notarize` back-to-back. LangChain does not expose a pre-chain hook that can abort across all chain types, so these produce post-hoc receipts rather than gates.
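Why raising in `on_tool_start` constitutes a genuine gate can be shown with a toy model of the dispatch loop. This is a simplified stand-in for LangChain's actual callback machinery, not its real code, and the class and method names below are illustrative:

```python
class ToolDenied(Exception):
    pass

class GateHandler:
    """Toy handler: deny any tool whose name is on a blocklist."""
    def __init__(self, blocked):
        self.blocked = set(blocked)
        self.receipts = []

    def on_tool_start(self, name, tool_input):
        if name in self.blocked:
            raise ToolDenied(name)  # raising here means the tool body never runs

    def on_tool_end(self, name, output):
        self.receipts.append(("completed", name))

    def on_tool_error(self, name, error):
        self.receipts.append(("failed", name))

def run_tool(handler, name, fn, tool_input):
    """Simplified model of how a framework dispatches callbacks around a tool."""
    try:
        handler.on_tool_start(name, tool_input)
        out = fn(tool_input)
    except Exception as exc:
        handler.on_tool_error(name, exc)
        return f"tool error: {exc}"
    handler.on_tool_end(name, out)
    return out

h = GateHandler(blocked=["send_wire_transfer"])
result = run_tool(h, "send_wire_transfer", lambda x: "sent!", {"amount": 75000})
print(result)      # the error string; the lambda was never called
print(h.receipts)  # a single "failed" receipt for the denied call
```

The key property is that the exception fires before `fn` is invoked, so the side effect is prevented rather than merely recorded.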
For a full gate over every LangChain operation (not just tool calls), put `aira.authorize()` at the top of your tool bodies and check the status yourself. The callback handler is the convenient path; the inline pattern is the airtight path.
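The inline pattern can be sketched as follows. This is illustrative only: the real `Aira` client is replaced by a stub, and the decision object's `status` and `reason` fields are assumptions for the sketch, not the SDK's confirmed return shape.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    status: str   # assumed values: "approved", "denied", "pending_approval"
    reason: str = ""

class StubAira:
    """Stand-in for the real Aira client; denies transfers over a limit."""
    def authorize(self, action_type: str, details: str) -> Decision:
        if "75000" in details:
            return Decision("denied", "amount over policy limit")
        return Decision("approved")

aira = StubAira()

def send_wire_transfer(amount: float, to: str) -> str:
    # Inline gate: authorize first, and only run the side effect on approval.
    decision = aira.authorize(
        action_type="tool_call",
        details=f"wire {amount} to {to}",
    )
    if decision.status != "approved":
        return f"BLOCKED: {decision.reason}"
    return f"Wire sent: €{amount} to {to}"

print(send_wire_transfer(75000, "vendor-x"))  # blocked by the stub policy
print(send_wire_transfer(50, "vendor-y"))     # allowed
```

Because the check lives inside the tool body, it gates the branch regardless of which framework (or no framework at all) invokes the function.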
## Install
```bash
pip install "aira-sdk[langchain]"
```

This pulls in `langchain-core` as a peer dependency. If you already have LangChain installed, you're covered.
## Full example — gated tool calls
```python
from aira import Aira
from aira.extras.langchain import AiraCallbackHandler
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

aira = Aira(api_key="aira_live_...")

@tool
def send_wire_transfer(amount: float, to: str) -> str:
    """Send a wire transfer. This is the side-effect we want to gate."""
    # Your real transfer code — Stripe, banking API, whatever
    return f"Wire sent: €{amount} to {to}"

# The callback handler is what actually runs authorize() before the tool body
handler = AiraCallbackHandler(
    client=aira,
    agent_id="payments-agent",
    model_id="gpt-4o",
)

agent = create_react_agent(
    ChatOpenAI(model="gpt-4o"),
    tools=[send_wire_transfer],
)

# Pass the handler via config — LangChain will call it around every tool
result = agent.invoke(
    {"messages": [("user", "Send €75,000 to vendor-x")]},
    config={"callbacks": [handler]},
)
```

## What happens when a policy denies
- The LLM decides to call `send_wire_transfer`.
- LangChain fires `on_tool_start` with the serialized tool name and input.
- The handler calls `aira.authorize(action_type="tool_call", details=...)`.
- The policy engine runs (rules → AI → consensus → content scan).
- If denied, the handler raises `AiraToolDenied("send_wire_transfer", "POLICY_DENIED", "...")`.
- LangChain catches the exception, treats the tool call as errored, and typically surfaces it to the LLM so the agent can react.
- The real transfer function never runs. The signed `PolicyEvaluation` row is persisted regardless, so the denial is auditable.
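If you invoke a gated tool outside an agent loop, the denial surfaces as an exception you can catch directly. A hedged sketch: the document confirms `AiraToolDenied` takes the tool name, a code, and a message, but the attribute names and the `call_gated` helper below are illustrative stand-ins, not SDK API.

```python
class AiraToolDenied(Exception):
    """Sketch of the denial exception; attribute names are assumptions."""
    def __init__(self, tool_name, code, reason):
        super().__init__(f"{tool_name}: {code}: {reason}")
        self.tool_name = tool_name
        self.code = code
        self.reason = reason

def call_gated(tool_name, fn, *args):
    # Stand-in gate mirroring the handler's behaviour for a direct call.
    if tool_name == "send_wire_transfer":
        raise AiraToolDenied(tool_name, "POLICY_DENIED", "amount over limit")
    return fn(*args)

try:
    call_gated("send_wire_transfer", lambda a, t: "sent", 75000, "vendor-x")
except AiraToolDenied as exc:
    print(f"denied: {exc.code} for {exc.tool_name}")  # the transfer never ran
```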
## When to use this vs inline `aira.authorize()`
| Use the callback handler when... | Call `aira.authorize()` inline when... |
|---|---|
| You want every tool in your LangChain agent gated without touching tool code | You want to gate a specific branch inside a tool body |
| You're using `create_react_agent` or similar prebuilt agent loops | You need custom `details` strings per call |
| You're OK with chains and LLM calls being audit-only | You want a pre-execution gate on chains or LLM completions |
The two patterns compose fine — you can use the handler for tool-level gating and drop inline `authorize()` calls inside specific tools for extra business-logic gates.
## Known limits
- Chain and LLM hooks are audit-only. LangChain has no reliable pre-execution hook for chains or LLM calls that can abort across every chain type. The `_audit()` helper in `AiraCallbackHandler` runs `authorize` + `notarize` back-to-back for these so you still get a receipt, but the call has already happened by then.
- `run_uuid` must be unique per call. The handler keeps an in-memory `run_uuid → action_uuid` map so `on_tool_end` can notarize the correct action. LangChain guarantees this within a single invocation, but if you're doing something unusual with run ids, check the source.
- Non-blocking notarize. If the notarize call fails (network flake, 5xx), the handler logs a warning and lets the agent continue. Your receipt is missing but the agent is not wedged.
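The last two limits can be sketched together in a toy model of the handler's bookkeeping. This is an assumption-laden illustration of the behaviour described above, not the real `AiraCallbackHandler` internals:

```python
import logging
import uuid

log = logging.getLogger("aira")

class ReceiptBook:
    """Toy model of the run_uuid -> action_uuid map and non-blocking notarize."""
    def __init__(self):
        self._actions = {}  # run_uuid -> action_uuid

    def on_start(self, run_uuid):
        # In the real handler, the action id would come back from authorize().
        action_uuid = str(uuid.uuid4())
        self._actions[run_uuid] = action_uuid
        return action_uuid

    def on_end(self, run_uuid, notarize):
        # Pop the matching action so each run is notarized exactly once.
        action_uuid = self._actions.pop(run_uuid)
        try:
            notarize(action_uuid)
        except Exception as exc:
            # Non-blocking: log a warning instead of wedging the agent.
            log.warning("notarize failed for %s: %s", action_uuid, exc)

book = ReceiptBook()
action = book.on_start("run-1")
receipts = []
book.on_end("run-1", notarize=receipts.append)  # happy path: receipt recorded

book.on_start("run-2")
def flaky(_):
    raise ConnectionError("simulated 5xx")
book.on_end("run-2", notarize=flaky)            # failure swallowed, agent continues
```

If two concurrent calls shared a `run_uuid`, the second `on_start` would overwrite the first entry in the map, which is why uniqueness per call matters.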
## Proof it works
The integration is pinned by a regression test that imports the real `AiraCallbackHandler`, constructs it against a mocked `Aira` client, and asserts that `on_tool_start` raises on `POLICY_DENIED` and that `on_tool_end` calls `notarize` with the right action id:

- Python: `tests/test_extras_langchain.py` (196 lines, 18 tests)
- The SDK's `INTEGRATIONS` registry (`aira/extras/__init__.py`) declares this integration as `kind="gate"` with `pre_execution_gate=True`. Tests pin the registry so if the code ever stops being a real gate, CI fails.
## Related
- Policies — configuring what the policy engine checks
- Public verification — what a denied action's receipt looks like
- Human approval — what happens when `authorize()` returns `pending_approval`