How to Control AI Agent Actions in Production
Wrap every AI action in runplane.guard(). Block dangerous actions. Require approval for sensitive ones. Audit everything.
The Direct Answer
To control AI agent actions in production, insert a runtime checkpoint between agent decision and execution. Use runplane.guard() to wrap every sensitive action. The guard evaluates policy rules and risk context, then returns one of three decisions:
ALLOW
Action executes
BLOCK
Action prevented
REQUIRE_APPROVAL
Human reviews
The Problem
AI Agents Execute Without Permission
Destructive database operations
DELETE, DROP, TRUNCATE execute instantly. No undo.
Unauthorized money movement
Transfers, refunds, payouts—executed before anyone can review. See how to build an AI payment approval system.
Infrastructure changes
Production deployments, permission grants, config changes—no approval required.
No audit trail
When it goes wrong, you have logs—not evidence of what was evaluated or why.
Why It Fails
Prompts and Monitoring Cannot Enforce
| Approach | When | Enforces? |
|---|---|---|
| System prompts | Inference | No |
| Output filtering | After LLM | No |
| Model alignment | Training | No |
| Observability | After execution | No |
| runplane.guard() | Before execution | Yes |
Key insight: Prompts tell agents what they should do. Runtime control determines what they can do. Only the latter is enforceable.
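The distinction can be made concrete in a few lines. This is a hypothetical helper, not the Runplane SDK: the prompt is advisory text the model may ignore, while the wrapper is code the call site cannot bypass.

```javascript
// The prompt is advisory only — nothing in the runtime enforces it.
const SYSTEM_PROMPT = "Never delete records.";

// The wrapper is enforcement: a denylist checked in code, at the call site.
const BLOCKED_ACTIONS = new Set(["delete_records", "drop_table"]);

function enforce(action, callback) {
  if (BLOCKED_ACTIONS.has(action)) {
    throw new Error(`Blocked at runtime: ${action}`);
  }
  return callback();
}
```

Even if the model "decides" to delete, `enforce("delete_records", ...)` throws before the callback runs; the prompt alone guarantees nothing.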
The Solution
Runtime Control with Runplane
Runplane is a control plane for AI execution. It sits between your agent and the actions it can take, enforcing policy at the moment of execution—not before inference, not after completion.
Policy Enforcement
Define rules by action type, target, and context. Policies are evaluated at runtime.
Risk Scoring
Contextual signals—amount, environment, sensitivity—determine risk level.
Human Approval
Sensitive actions pause for human review. Learn how to add human approval to AI workflows.
Audit Trail
Every decision is recorded. Immutable. Cryptographically verifiable.
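To make the risk-scoring idea concrete, here is a toy scoring function with illustrative thresholds; Runplane's actual signals and weights are not specified here.

```javascript
// Combine contextual signals into a coarse risk level (illustrative thresholds).
function riskLevel({ amount = 0, environment = "dev", sensitive = false }) {
  let score = 0;
  if (amount > 10000) score += 2;        // large money movement
  else if (amount > 1000) score += 1;
  if (environment === "production") score += 2; // prod changes are riskier
  if (sensitive) score += 1;             // sensitive data or targets
  if (score >= 4) return "high";
  if (score >= 2) return "medium";
  return "low";
}
```

A policy might then map "low" to ALLOW, "medium" to REQUIRE_APPROVAL, and "high" to BLOCK.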
How It Works
The Runtime Control Flow
Agent requests action (delete, transfer, deploy)
runplane.guard() intercepts before execution
Policy rules + risk context evaluated
Decision returned: ALLOW / BLOCK / REQUIRE_APPROVAL
Callback executes only if decision permits
Immutable audit record created
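The six steps above can be sketched as a self-contained toy guard. This is not the real SDK: the policy is hardcoded, the audit log is an in-memory array, and a real implementation would pause on REQUIRE_APPROVAL rather than throw.

```javascript
const auditLog = [];

// Steps 2–3: intercept and evaluate policy rules against the risk context.
function evaluatePolicy(action, context) {
  if (action.startsWith("delete_") && context.environment === "production") return "BLOCK";
  if (action === "transfer_funds" && context.amount > 1000) return "REQUIRE_APPROVAL";
  return "ALLOW";
}

async function guard(action, target, context, callback) {
  const decision = evaluatePolicy(action, context);            // step 4
  auditLog.push({ action, target, context, decision, at: Date.now() }); // step 6
  if (decision === "BLOCK") throw new Error(`Blocked: ${action}`);
  if (decision === "REQUIRE_APPROVAL") throw new Error(`Pending approval: ${action}`); // real system pauses here
  return callback();                                           // step 5: only on ALLOW
}
```

Note that the audit record is written for every decision, not just allowed ones, so blocked attempts leave evidence too.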
Implementation
Node.js Example
agent.js
const { Runplane } = require("@runplane/runplane-sdk")

const runplane = new Runplane({
  apiKey: process.env.RUNPLANE_API_KEY,
})

// Low-risk: typically ALLOW
async function getAccountBalance(accountId) {
  return runplane.guard(
    "read_account",
    "finance-system",
    { accountId },
    async () => {
      return await fetchBalance(accountId)
    }
  )
}

// High-risk: typically REQUIRE_APPROVAL or BLOCK
async function transferFunds(from, to, amount) {
  return runplane.guard(
    "transfer_funds",
    "finance-system",
    { fromAccountId: from, toAccountId: to, amount, currency: "USD" },
    async () => {
      return await executeTransfer(from, to, amount)
    }
  )
}

// ALLOW → callback executes immediately
// BLOCK → throws RunplaneError, callback never runs
// REQUIRE_APPROVAL → pauses until human approves

The callback only executes if the decision is ALLOW (or REQUIRE_APPROVAL followed by an approval). BLOCK prevents execution entirely: no side effects occur, so there is nothing to roll back.
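A blocked action still surfaces as an error to the agent loop, so the caller should handle it deliberately. A sketch of that pattern, using a stand-in RunplaneError class; the real SDK's error shape may differ:

```javascript
// Stand-in for the SDK's error type, to illustrate the handling pattern only.
class RunplaneError extends Error {
  constructor(decision, reason) {
    super(reason);
    this.decision = decision;
  }
}

// Simulates a guarded call that policy blocks.
async function transferFunds() {
  throw new RunplaneError("BLOCK", "transfer_funds exceeds unapproved limit");
}

async function runAgentStep() {
  try {
    return await transferFunds();
  } catch (err) {
    if (err instanceof RunplaneError && err.decision === "BLOCK") {
      // No side effects occurred — report the block back and let the agent continue.
      return { status: "blocked", reason: err.message };
    }
    throw err; // unrelated failures still propagate
  }
}
```

Treating BLOCK as structured feedback, rather than a crash, lets the agent explain the refusal or choose a different action.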
Benefits
What You Get
Hard enforcement — Actions cannot bypass policy. BLOCK means blocked.
Human-in-the-loop — Sensitive actions pause for review without stopping the agent.
Complete audit trail — Every decision recorded with full context for compliance.
Minimal latency — Under 50ms for most policy evaluations.
Framework agnostic — Works with LangChain, Vercel AI SDK, custom agents.
Enterprise
Built for Production
SOC 2 Type II
Compliant infrastructure
Cryptographic Audit
Tamper-proof records
Fail-Closed
Safe by default
FAQ
Frequently Asked Questions
How do you control AI agent actions in production?
Control AI agent actions by wrapping every execution in runplane.guard(). This intercepts the action before it runs, evaluates it against policies and risk scoring, and returns ALLOW, BLOCK, or REQUIRE_APPROVAL. The action callback only executes if the decision permits it.
Why are prompts not enough to control AI agents?
Prompts operate at inference time, not execution time. They tell the agent what it should do, but nothing enforces compliance. An agent instructed to 'never delete records' can still call the delete function. Runtime control with guard() creates a hard enforcement boundary that prompts cannot provide.
What is the difference between monitoring and runtime control?
Monitoring observes actions after they execute—you learn what happened but cannot prevent it. Runtime control intercepts actions before execution, evaluates them, and enforces decisions. Monitoring is forensics; runtime control is prevention.
What happens when an AI action is blocked?
When guard() returns BLOCK, the action callback never executes. The database is not modified, the API is not called, the payment is not sent. The agent receives an error with the block reason. The action is stopped before any side effect occurs.
How does human approval work for sensitive AI actions?
When guard() returns REQUIRE_APPROVAL, execution pauses. An operator reviews the action with full context and risk score. Approval resumes execution; rejection blocks permanently. The agent can wait synchronously or continue other work while pending.
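The pause-and-resume shape of approval can be sketched with a pending-request queue. This is a simplified illustration (polling, in-memory queue), not the SDK's actual mechanism:

```javascript
// A pending request the reviewer can resolve one way or the other.
class PendingApproval {
  constructor(action, context) {
    this.action = action;
    this.context = context;
    this.status = "pending";
  }
  approve() { if (this.status === "pending") this.status = "approved"; }
  reject() { if (this.status === "pending") this.status = "rejected"; }
}

// Poll until a reviewer decides; the agent is free to do other work meanwhile.
function waitForDecision(request, intervalMs = 10) {
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      if (request.status !== "pending") { clearInterval(timer); resolve(); }
    }, intervalMs);
  });
}

async function guardWithApproval(action, context, callback, reviewQueue) {
  const request = new PendingApproval(action, context);
  reviewQueue.push(request); // surfaced to a human with full context
  await waitForDecision(request);
  if (request.status === "approved") return callback();
  throw new Error(`${action} rejected by reviewer`);
}
```

The key property: the callback sits behind the await and cannot run until a human flips the request to approved.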
Is every AI action audited?
Yes. Every guard() call generates an immutable audit record containing: action type, target, full context, policy evaluation, risk score, decision, timestamp, and outcome. This creates a complete trail for compliance, investigation, and operational review.
Does runtime control add latency?
Runplane adds minimal latency—typically under 50ms for policy evaluation. For most production actions (database writes, API calls, payments), this is negligible. The safety guarantee far outweighs the latency cost.
Start Controlling AI Actions Today
14-day free trial. No credit card required. Works with your existing agent stack.