Runplane

Execution Control Layer for AI Agents

Control Every AI Action Before It Executes

Runplane sits between your AI and execution, intercepting every action. Each action is held until it receives a decision (ALLOW, BLOCK, or REQUIRE_APPROVAL) before it runs.

No action executes without passing through Runplane.

EU AI Act Ready: Logging • Oversight • Audit Trail. Supports logging, oversight, and audit requirements for high-risk AI systems under the EU AI Act.

Human Approval Flows • Tamper-Evident Audit Logs • Policy-Based Execution

<50ms latency • Works with any framework • Gateway-first, SDK optional

See Runplane Decide Before Execution

Every AI action is stopped until Runplane decides. No action executes without passing through Runplane.

Runplane decides what is allowed to execute: ALLOW, BLOCK, or REQUIRE_APPROVAL.

Prevent unintended API calls, block destructive actions, and require approval for high-risk operations.

Runplane sits in the execution path. If it doesn't approve, nothing runs.

Guard API (/api/v1/guard) — Live Demo

Select an action

Action Payload

{
  "action": "transfer_funds",
  "target": "finance-system",
  "amount": 12000,
  "environment": "production"
}

Decision Result

REQUIRE APPROVAL
Policy: transfer_funds → REQUIRE_APPROVAL
Risk Score: 58
Reason: High-risk financial action in production
guard() → validation → policy → risk → decision
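The demo above can be reproduced with a plain HTTP call. A minimal client sketch: the /api/v1/guard path and the payload and result fields mirror the demo, but the base URL, auth header, and exact JSON response shape are assumptions, not documented API details.

```typescript
// Hypothetical Guard API client. Field names mirror the demo above;
// the host, auth header, and response shape are assumptions.
interface GuardResult {
  decision: "ALLOW" | "BLOCK" | "REQUIRE_APPROVAL";
  riskScore: number;
  reason: string;
}

// Only an ALLOW decision lets the action run.
function canExecute(result: GuardResult): boolean {
  return result.decision === "ALLOW";
}

async function guardAction(
  payload: { action: string; target: string; amount?: number; environment?: string },
  apiKey: string,
  baseUrl = "https://api.runplane.example", // placeholder host
): Promise<GuardResult> {
  const res = await fetch(`${baseUrl}/api/v1/guard`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Guard API error: ${res.status}`);
  return (await res.json()) as GuardResult;
}
```

The caller checks `canExecute` on the result and only runs the action when the decision is ALLOW.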

This is not prompt filtering or post-hoc monitoring. Runplane is an Execution Control Layer that intercepts every AI action before execution and decides whether to allow it.

AI Agent
runplane.guard()

Every action is stopped and decided before execution

ALLOW • BLOCK • REQUIRE APPROVAL

Execution proceeds or is blocked

No action reaches your systems without passing through Runplane.

Real-Time Decisions

Every action is stopped before execution. A decision is enforced and logged.

Bulk Record Deletion

DELETE FROM users WHERE ...

BLOCKED

Cloud Deployment $48K

aws.ec2.provision: 120 instances

APPROVAL

Bulk Email 50K

sendgrid.send: 50,000 recipients

APPROVAL

Payment $150

stripe.charge: customer_abc

ALLOWED

Control via API. SDK optional.

The decision happens in the Guard API. Call it directly before execution, or use the optional SDK, which wraps the same API.

TypeScript
await runplane.guard(
  "delete_employee",
  "hr_system",
  { employeeId: "emp_7421" },
  async () => {
    await deleteEmployee("emp_7421")
  }
)
ALLOW

Execution proceeds

BLOCK

Execution is stopped

REQUIRE_APPROVAL

Execution is paused until approval

The SDK is a wrapper over the Guard API. The decision system runs in the Gateway.

Without Runplane vs With Runplane

Without Runplane

  • Tools execute immediately with no control layer
  • No runtime enforcement
  • Risk of unsafe or costly operations
  • No audit trail

With Runplane

  • Every action must pass through a control decision before execution
  • High-risk actions paused for approval
  • Built-in human-in-the-loop
  • Full decision trace

How Runplane Works

Actions are mapped automatically, and every execution attempt is controlled through policy before it runs.

1. Bring Your Tools

Connect existing agent tools from any framework

2. Actions Derived

Canonical actions derived from tool definitions

3. Baseline Policies

Assigned by sensitivity. Fully customizable.

4. Control at Runtime

Execution cannot proceed until a decision is returned.

Why teams choose Runplane

Runtime enforcement

Policies enforced at execution time, not just in prompts

Human approval built-in

High-risk actions pause for human review automatically

Action-aware control

Granular policies per action type, not just per agent

Full decision trace

Complete audit log of every action and decision

Start controlling what your AI is allowed to execute

Get an API key and start controlling AI execution in minutes.

No billing required • Under 2 minutes • Starter policies included

Start with a 14-day free trial. No setup required.

Pricing

Built for teams running AI in production

Starter

$129/month
  • 5 AI agents
  • 250,000 controlled actions/month
  • Policy engine + containment rules
  • Approval workflows
  • Full audit log explorer
Most Popular

Growth

$499/month
  • 20 AI agents
  • 1,000,000 controlled actions/month
  • Advanced containment controls
  • Approval workflows
  • Priority support

Scale

$1,990/month
  • 50 AI agents
  • 5,000,000 controlled actions/month
  • Advanced containment controls
  • Approval workflows
  • Enterprise support

Deploy AI Agents Without Losing Control

Bring your tools. Send actions through Runplane. Control execution at runtime.

No action executes without passing through Runplane.

Execution Control Layer for AI Agents

What is Runplane?

Runplane is the execution control layer for AI agents. It sits between your agent and the real world. When an agent attempts to execute a tool, the SDK wraps the execution with guard(), evaluates the action against your policies, and enforces a decision: ALLOW, BLOCK, or REQUIRE_APPROVAL.

Integration is straightforward: bring your existing agent tools from any framework, and Runplane automatically maps them into canonical action types and applies baseline policies. You remain in full control and can customize policies for any action type. Tool execution is wrapped via runplane.guard(), and decisions are enforced at runtime in the Gateway.

How It Works: Tools to Actions to Policies

The Runplane model follows a clear flow: bring your tools, actions are auto-mapped, baseline policies apply, wrap with guard(). First, you connect your existing agent tools from whatever framework you use—LangChain, Vercel AI SDK, CrewAI, or custom implementations. These tools are the input layer.

Runplane automatically maps your tools into canonical action types. A delete_user tool becomes a "delete_record" action. A send_email tool becomes a "send_email" action. This standardization allows policies to be written once and applied consistently across different tool implementations.
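The mapping step can be pictured as a simple lookup from tool names to canonical actions. This is an illustrative sketch only: Runplane's actual mapping rules are not published, so the table entries here are assumptions based on the delete_user and send_email examples above.

```typescript
// Illustrative tool-to-action mapping; the real Runplane mapping logic
// and canonical action vocabulary are assumptions here.
type CanonicalAction = "delete_record" | "send_email" | "unknown";

const ACTION_MAP: Record<string, CanonicalAction> = {
  delete_user: "delete_record",     // example from the text
  remove_account: "delete_record",  // hypothetical synonym tool
  send_email: "send_email",         // example from the text
  notify_customer: "send_email",    // hypothetical synonym tool
};

function mapToolToAction(toolName: string): CanonicalAction {
  // Unrecognized tools fall back to "unknown" rather than passing silently.
  return ACTION_MAP[toolName] ?? "unknown";
}
```

Because both delete_user and a hypothetical remove_account normalize to the same delete_record action, one policy covers both implementations.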

Baseline policies are applied automatically, providing ready-to-use governance out of the box. You remain in full control—customize any policy to match your specific requirements. Policies specify whether an action should be allowed, blocked, or require human approval.

Finally, all tool execution is wrapped with runplane.guard(). The guard function sends the action type and context to Runplane, receives a decision, and either executes the callback, blocks it, or waits for human approval.

SDK Integration

The simplest integration path is the SDK's guard() method, which wraps your tool execution callbacks and calls the Guard API for you. This design ensures that policy enforcement happens automatically at the execution boundary, without requiring manual decision handling in your application code.

When you call runplane.guard(), you provide the action type, target system, context, and a callback function. If the policy allows the action, the callback executes immediately. If the policy blocks the action, the callback never runs. If the policy requires approval, guard() waits until a human approves or denies the request.
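The control flow just described can be sketched as a small wrapper. This is an illustrative reimplementation, not the actual Runplane SDK: the decision source and the approval channel are injected as functions so all three paths are visible without any network access.

```typescript
type Decision = "ALLOW" | "BLOCK" | "REQUIRE_APPROVAL";

// Illustrative guard() control flow (not the real SDK). requestDecision
// stands in for the Guard API call; waitForApproval stands in for the
// human approval channel.
async function guard<T>(
  action: string,
  target: string,
  context: Record<string, unknown>,
  callback: () => Promise<T>,
  requestDecision: (action: string, target: string, ctx: Record<string, unknown>) => Promise<Decision>,
  waitForApproval: () => Promise<boolean>,
): Promise<T | undefined> {
  const decision = await requestDecision(action, target, context);
  if (decision === "BLOCK") return undefined;        // callback never runs
  if (decision === "REQUIRE_APPROVAL") {
    const approved = await waitForApproval();        // hold until a human decides
    if (!approved) return undefined;                 // denied: callback never runs
  }
  return callback();                                 // ALLOW, or approved
}
```

Note the structural guarantee: the callback is only reachable after a decision is returned, which is the "execution cannot proceed until a decision is returned" property described above.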

Why Prompt Safety Is Insufficient

Prompt engineering and input validation operate at the wrong layer of the stack. They attempt to constrain AI behavior through instructions, but autonomous agents ultimately translate those instructions into concrete tool invocations. A well-crafted prompt cannot prevent a determined agent from calling a database deletion function or provisioning expensive cloud resources if the tooling allows it.

Runtime execution control operates at the tool boundary. When an agent attempts to execute a tool, Runplane intercepts the call. The action is evaluated against your policies. The decision is enforced before execution. This cannot be circumvented by the agent—the guard() wrapper is in the execution path.

Execution Containment

Execution containment is the core mechanism of runtime governance. It means wrapping every tool execution at the boundary between the AI agent and external systems, then applying policy evaluation before allowing the action to proceed. Runplane implements execution containment through the SDK guard() method.

When an agent attempts to execute a tool that interacts with an external system (database, payment processor, email service, cloud provider), the guard() wrapper intercepts the call. The action is evaluated against configured policies. Based on the evaluation, Runplane returns one of three decisions: allow the callback to proceed, block the callback entirely, or hold execution pending human approval.

Blast Radius Control

Blast radius control limits the potential damage from any single AI action. Even when an action is permitted, runtime governance can constrain its scope. For database operations, this might mean limiting the number of records affected. For infrastructure provisioning, it might mean capping resource sizes or quantities. For email sends, it might mean limiting recipient counts.

Runplane implements blast radius control through policies that specify boundaries for permitted actions. An agent might be allowed to delete records, but only one at a time, never in bulk. An agent might be allowed to send emails, but require approval for sends exceeding 100 recipients. These constraints are enforced at runtime through the guard() wrapper.
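A blast radius policy of this kind can be sketched as a pure evaluation function. Runplane's real policy schema is not shown on this page, so the field names and thresholds below are illustrative assumptions drawn from the two examples just given.

```typescript
// Hypothetical blast radius policy shape; field names are illustrative,
// not Runplane's actual schema.
interface BlastRadiusPolicy {
  action: string;
  maxRecords?: number;    // hard cap on records affected (e.g. bulk delete)
  approvalAbove?: number; // recipient count above which approval is needed
}

type BlastDecision = "ALLOW" | "BLOCK" | "REQUIRE_APPROVAL";

function evaluateBlastRadius(
  policy: BlastRadiusPolicy,
  context: { records?: number; recipients?: number },
): BlastDecision {
  if (policy.maxRecords !== undefined && (context.records ?? 0) > policy.maxRecords)
    return "BLOCK";            // exceeds the hard scope cap: never runs
  if (policy.approvalAbove !== undefined && (context.recipients ?? 0) > policy.approvalAbove)
    return "REQUIRE_APPROVAL"; // in scope, but large enough to escalate
  return "ALLOW";
}
```

With maxRecords set to 1, a single-record delete passes while a bulk delete is blocked outright; with approvalAbove set to 100, a 50,000-recipient send pauses for review.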

Financial Guardrails

Financial guardrails prevent autonomous AI systems from incurring unexpected costs or executing unauthorized financial transactions. AI agents with access to cloud infrastructure, payment APIs, or procurement systems can inadvertently or deliberately trigger significant expenses. Runtime governance intercepts these actions and applies financial policies before execution.

Runplane allows you to set spending thresholds that trigger different policy responses. Actions below a certain threshold might be automatically approved. Actions between thresholds might require human review. Actions exceeding maximum thresholds might be automatically blocked. This graduated approach allows agents to operate autonomously within defined financial boundaries while escalating high-impact decisions to humans.
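The graduated thresholds described above can be sketched in a few lines. The specific dollar amounts here are invented for illustration; they are not Runplane defaults.

```typescript
// Illustrative graduated spending thresholds; the numbers are
// assumptions, not Runplane defaults.
function evaluateSpend(amountUsd: number): "ALLOW" | "REQUIRE_APPROVAL" | "BLOCK" {
  const AUTO_APPROVE_BELOW = 500;  // routine spend proceeds autonomously
  const HARD_BLOCK_ABOVE = 50_000; // never allowed without policy change
  if (amountUsd > HARD_BLOCK_ABOVE) return "BLOCK";
  if (amountUsd >= AUTO_APPROVE_BELOW) return "REQUIRE_APPROVAL";
  return "ALLOW";
}
```

Under these example thresholds, the $150 Stripe charge from the demo cards proceeds automatically, while the $12,000 transfer from the Guard API demo escalates to a human.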

Human Approval Workflows

Human approval workflows provide a mechanism for escalating high-risk or high-impact actions to human reviewers before execution. When Runplane evaluates a tool execution and determines it requires approval, the guard() method holds execution, the relevant context is presented to designated approvers, and execution proceeds only if a human explicitly approves the action.

Approval workflows are configurable based on action type, risk level, financial impact, and other criteria. A routine database query might execute automatically. A bulk email send might require single approval. A large financial transaction might require multiple approvers. The workflow integrates with notification systems to alert approvers promptly and provide them with the context needed to make informed decisions.
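The routing logic described above can be sketched as a rule lookup. The action names, dollar threshold, and approver counts here are hypothetical, chosen to match the three examples in the paragraph.

```typescript
// Illustrative approval routing; action names and thresholds are
// assumptions matching the examples in the text, not a real schema.
interface ApprovalRule {
  minApprovers: number; // 0 means the action executes automatically
}

function approvalRule(action: string, amountUsd = 0): ApprovalRule {
  if (action === "db_query") return { minApprovers: 0 };  // routine: auto-execute
  if (action === "bulk_email") return { minApprovers: 1 };// single approval
  if (action === "payment" && amountUsd > 10_000)
    return { minApprovers: 2 };                           // large transaction: two approvers
  return { minApprovers: 1 };                             // default: one human in the loop
}
```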

Audit and Compliance

Every tool execution wrapped with guard(), every policy evaluation, and every decision is logged immutably in the Runplane audit log. This creates a complete record of what autonomous AI systems attempted to do, what policies were evaluated, what decisions were made, and who approved actions that required human review.

Organizations deploying autonomous AI systems in regulated industries need demonstrable governance. The Runplane audit log provides the evidence trail required for compliance reporting, showing that AI actions were subject to policy controls, that high-risk actions received appropriate human oversight, and that all decisions are traceable to specific policies and approvers.