Execution Control Layer for AI Agents
What is Runplane?
Runplane is the execution control layer for AI agents. It sits between your agent and the real world. When an agent attempts to execute a tool, the SDK wraps the execution with guard(), evaluates the action against your policies, and enforces a decision: ALLOW, BLOCK, or REQUIRE_APPROVAL.
The integration is SDK-first. You bring your existing agent tools from any framework; Runplane automatically maps them into canonical action types and applies baseline policies. You remain in full control and can customize policies for any action type. All tool execution is wrapped via runplane.guard(), and decisions are enforced at runtime.
How It Works: Tools to Actions to Policies
The Runplane model follows a clear flow: bring your tools, let Runplane auto-map them to actions, apply baseline policies, and wrap execution with guard(). First, you connect your existing agent tools from whatever framework you use: LangChain, Vercel AI SDK, CrewAI, or custom implementations. These tools are the input layer.
Runplane automatically maps your tools into canonical action types. A delete_user tool becomes a "delete_record" action. A send_email tool becomes a "send_email" action. This standardization allows policies to be written once and applied consistently across different tool implementations.
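The mapping described above can be sketched as a simple lookup. This is an illustration only: the real mapping logic is internal to the Runplane SDK, and the extra tool and action names beyond those in the text are assumptions.

```python
# Hypothetical tool-to-action mapping table. "delete_record" and
# "send_email" follow the examples in the text; "provision_resource"
# is an assumed canonical type for illustration.
CANONICAL_ACTIONS = {
    "delete_user": "delete_record",
    "send_email": "send_email",
    "provision_vm": "provision_resource",
}

def map_tool_to_action(tool_name: str) -> str:
    """Resolve a framework-specific tool name to a canonical action type."""
    return CANONICAL_ACTIONS.get(tool_name, "unknown_action")

print(map_tool_to_action("delete_user"))  # delete_record
```

Because policies are written against canonical action types, a LangChain `delete_user` tool and a custom `remove_account` tool can both resolve to `delete_record` and fall under the same policy.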
Baseline policies are applied automatically, providing governance out of the box. You remain in full control: customize any policy to match your specific requirements. Each policy specifies whether an action is allowed, blocked, or requires human approval.
Finally, all tool execution is wrapped with runplane.guard(). The guard function sends the action type and context to Runplane, receives a decision, and either executes the callback, blocks it, or waits for human approval.
SDK-First Integration
Runplane is SDK-first, not API-first. The primary integration is through the SDK's guard() method, which wraps your tool execution callbacks. This design ensures that policy enforcement happens automatically at the execution boundary, without requiring manual decision handling in your application code.
When you call runplane.guard(), you provide the action type, target system, context, and a callback function. If the policy allows the action, the callback executes immediately. If the policy blocks the action, the callback never runs. If the policy requires approval, guard() waits until a human approves or denies the request.
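A minimal sketch of the guard() semantics described above. This is a standalone simulation, not the actual Runplane SDK: the parameter names mirror the text, but the policy rules and error types here are assumptions for illustration.

```python
from enum import Enum
from typing import Any, Callable

class Decision(Enum):
    ALLOW = "ALLOW"
    BLOCK = "BLOCK"
    REQUIRE_APPROVAL = "REQUIRE_APPROVAL"

def evaluate_policy(action_type: str, context: dict) -> Decision:
    # Stand-in for Runplane's policy evaluation; rules are illustrative.
    if action_type == "delete_record" and context.get("bulk"):
        return Decision.BLOCK
    if action_type == "send_email" and context.get("recipients", 0) > 100:
        return Decision.REQUIRE_APPROVAL
    return Decision.ALLOW

def guard(action_type: str, target: str, context: dict,
          callback: Callable[[], Any]) -> Any:
    """Wrap a tool execution: evaluate policy, then enforce the decision."""
    decision = evaluate_policy(action_type, context)
    if decision is Decision.ALLOW:
        return callback()  # policy allows: execute immediately
    if decision is Decision.BLOCK:
        raise PermissionError(f"{action_type} blocked by policy")
    # REQUIRE_APPROVAL: the real SDK would hold here until a human decides.
    raise RuntimeError(f"{action_type} pending human approval")

result = guard("send_email", "smtp", {"recipients": 5}, lambda: "email sent")
print(result)  # email sent
```

The key property is that the callback is only reachable through guard(): an allowed action runs, a blocked action never executes, and an approval-gated action waits.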
Why Prompt Safety Is Insufficient
Prompt engineering and input validation operate at the wrong layer of the stack. They attempt to constrain AI behavior through instructions, but autonomous agents ultimately translate those instructions into concrete tool invocations. A well-crafted prompt cannot prevent a determined agent from calling a database deletion function or provisioning expensive cloud resources if the tooling allows it.
Runtime execution control operates at the tool boundary. When an agent attempts to execute a tool, Runplane intercepts the call, evaluates the action against your policies, and enforces the decision before execution. The agent cannot circumvent this, because the guard() wrapper sits directly in the execution path.
Execution Containment
Execution containment is the core mechanism of runtime governance. It means wrapping every tool execution at the boundary between the AI agent and external systems, then applying policy evaluation before allowing the action to proceed. Runplane implements execution containment through the SDK guard() method.
When an agent attempts to execute a tool that interacts with an external system (database, payment processor, email service, cloud provider), the guard() wrapper intercepts the call. The action is evaluated against configured policies. Based on the evaluation, Runplane returns one of three decisions: allow the callback to proceed, block the callback entirely, or hold execution pending human approval.
Blast Radius Control
Blast radius control limits the potential damage from any single AI action. Even when an action is permitted, runtime governance can constrain its scope. For database operations, this might mean limiting the number of records affected. For infrastructure provisioning, it might mean capping resource sizes or quantities. For email sends, it might mean limiting recipient counts.
Runplane implements blast radius control through policies that specify boundaries for permitted actions. An agent might be allowed to delete records, but only one at a time, never in bulk. An agent might be allowed to send emails, but require approval for sends exceeding 100 recipients. These constraints are enforced at runtime through the guard() wrapper.
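The constraints above can be sketched as a simple policy function. The specific limits (single-record deletes, a 100-recipient threshold) come from the text; the function shape and field names are assumptions, not the real policy schema.

```python
# Hypothetical blast-radius policy: permit single-record deletes but
# block bulk deletes, and gate large email sends behind approval.
def blast_radius_decision(action_type: str, context: dict) -> str:
    if action_type == "delete_record":
        # One record at a time, never in bulk.
        return "ALLOW" if context.get("record_count", 1) == 1 else "BLOCK"
    if action_type == "send_email":
        # Sends over 100 recipients escalate to a human.
        if context.get("recipients", 0) > 100:
            return "REQUIRE_APPROVAL"
        return "ALLOW"
    return "ALLOW"

print(blast_radius_decision("delete_record", {"record_count": 50}))  # BLOCK
```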
Financial Guardrails
Financial guardrails prevent autonomous AI systems from incurring unexpected costs or executing unauthorized financial transactions. AI agents with access to cloud infrastructure, payment APIs, or procurement systems can inadvertently or deliberately trigger significant expenses. Runtime governance intercepts these actions and applies financial policies before execution.
Runplane allows you to set spending thresholds that trigger different policy responses. Actions below a certain threshold might be automatically approved. Actions between thresholds might require human review. Actions exceeding maximum thresholds might be automatically blocked. This graduated approach allows agents to operate autonomously within defined financial boundaries while escalating high-impact decisions to humans.
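The graduated threshold model can be sketched as follows. The dollar amounts are illustrative assumptions, not Runplane defaults; the structure is what matters: below the lower bound, automatic approval; between bounds, human review; above the upper bound, automatic block.

```python
# Illustrative spending thresholds (assumed values, not defaults).
AUTO_APPROVE_LIMIT = 50.0    # below this: allow automatically
MAX_SPEND_LIMIT = 5000.0     # above this: block automatically

def spend_decision(amount: float) -> str:
    if amount < AUTO_APPROVE_LIMIT:
        return "ALLOW"
    if amount <= MAX_SPEND_LIMIT:
        return "REQUIRE_APPROVAL"
    return "BLOCK"

print(spend_decision(25.0))     # ALLOW
print(spend_decision(500.0))    # REQUIRE_APPROVAL
print(spend_decision(10000.0))  # BLOCK
```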
Human Approval Workflows
Human approval workflows provide a mechanism for escalating high-risk or high-impact actions to human reviewers before execution. When Runplane evaluates a tool execution and determines it requires approval, the guard() method holds execution, presents the relevant context to designated approvers, and proceeds only if a human explicitly approves the action.
Approval workflows are configurable based on action type, risk level, financial impact, and other criteria. A routine database query might execute automatically. A bulk email send might require single approval. A large financial transaction might require multiple approvers. The workflow integrates with notification systems to alert approvers promptly and provide them with the context needed to make informed decisions.
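The routing described above can be sketched as a rules table keyed on action type. The action names and the `approvers_required` field are hypothetical; the real Runplane policy schema may differ.

```python
# Hypothetical approval-routing rules mirroring the examples in the text:
# routine queries run automatically, bulk sends need one approver,
# large transactions need two.
APPROVAL_RULES = [
    {"action": "db_query",        "approvers_required": 0},
    {"action": "bulk_email_send", "approvers_required": 1},
    {"action": "large_payment",   "approvers_required": 2},
]

def approvers_required(action: str) -> int:
    for rule in APPROVAL_RULES:
        if rule["action"] == action:
            return rule["approvers_required"]
    return 1  # conservative default for unlisted actions

print(approvers_required("large_payment"))  # 2
```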
Audit and Compliance
Every tool execution wrapped with guard(), every policy evaluation, and every decision is logged immutably in the Runplane audit log. This creates a complete record of what autonomous AI systems attempted to do, what policies were evaluated, what decisions were made, and who approved actions that required human review.
Organizations deploying autonomous AI systems in regulated industries need demonstrable governance. The Runplane audit log provides the evidence trail required for compliance reporting, showing that AI actions were subject to policy controls, that high-risk actions received appropriate human oversight, and that all decisions are traceable to specific policies and approvers.