Execution Control Platform
How Runplane wraps tool execution and enforces policies at runtime.
AI agents execute real-world actions through tools:
- Database modifications
- External communications
- Infrastructure provisioning
- Financial operations
Runplane sits between your agent and these tools. Every tool execution is wrapped with guard(), which evaluates the action against your policies and enforces a decision: ALLOW, BLOCK, or REQUIRE_APPROVAL.
How Runplane Works
The integration model is SDK-first. You bring your existing tools, Runplane derives canonical actions automatically from your tool definitions, and the SDK wraps execution.
1. Bring Your Tools
Existing agent tools from any framework
2. Auto-Mapped Actions
Canonical types generated automatically
3. Baseline Policies
Ready-to-use, fully customizable
4. Wrap with guard()
Enforced at runtime
Runplane works with any agent framework. Your AI system continues to use LangChain, Vercel AI SDK, CrewAI, or custom implementations — Runplane wraps the tool execution layer.
SDK Integration
The primary integration is through the SDK. Wrap any tool execution with guard().
```typescript
await runplane.guard(
  "delete_employee_record",
  "hr_system",
  { employeeId: "emp_7421" },
  async () => {
    await deleteEmployeeRecord("emp_7421")
  }
)
```

How guard() works:
1. Sends action type + context to Runplane
2. Receives a decision: ALLOW, BLOCK, or REQUIRE_APPROVAL
3. If ALLOW: callback executes immediately
4. If BLOCK: callback never runs
5. If REQUIRE_APPROVAL: waits for human approval
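The decision flow above can be sketched as a minimal local model. This is not the real Runplane SDK: the actual policy check runs on Runplane's side, so the `evaluate` and `awaitApproval` parameters here stand in for that remote round-trip.

```typescript
// Hypothetical sketch of guard() semantics. In the real SDK the policy
// decision comes from Runplane; here `evaluate` and `awaitApproval` are
// local stand-ins used only to illustrate the control flow.
type Decision = "ALLOW" | "BLOCK" | "REQUIRE_APPROVAL";

async function guard<T>(
  actionType: string,
  target: string,
  context: Record<string, unknown>,
  callback: () => Promise<T>,
  evaluate: (actionType: string, target: string, context: Record<string, unknown>) => Decision,
  awaitApproval: () => Promise<boolean> = async () => false,
): Promise<T> {
  const decision = evaluate(actionType, target, context);
  if (decision === "ALLOW") return callback();              // executes immediately
  if (decision === "BLOCK") {
    throw new Error(`Blocked: ${actionType} on ${target}`); // callback never runs
  }
  // REQUIRE_APPROVAL: hold execution until an operator decides
  if (await awaitApproval()) return callback();
  throw new Error(`Denied by operator: ${actionType} on ${target}`);
}
```

Note that on BLOCK or denial the callback is never invoked at all; the guard fails closed rather than running the tool and reporting afterwards.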
Policy Engine
When you import tools, Runplane applies baseline policies that provide ready-to-use governance. You remain in full control — customize any policy to match your requirements.
Policy Configuration
Policies can target:
- Action type — delete, deploy, payment, send_email
- Target — production database, external API, payment gateway
- Context — amount thresholds, time windows, agent identity
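A policy combining all three targeting dimensions might look like the sketch below. The `Policy` shape and field names are illustrative assumptions, not the Runplane policy schema.

```typescript
// Hypothetical policy shape; field names are illustrative, not Runplane's schema.
type Decision = "ALLOW" | "BLOCK" | "REQUIRE_APPROVAL";

interface Policy {
  actionType: string;                                  // e.g. delete, deploy, payment
  target?: string;                                     // e.g. production database
  when?: (ctx: Record<string, unknown>) => boolean;    // context predicate
  decision: Decision;
}

const policies: Policy[] = [
  // Payments over $1,000 need a human in the loop
  {
    actionType: "payment",
    target: "payment_gateway",
    when: (ctx) => typeof ctx.amount === "number" && ctx.amount > 1000,
    decision: "REQUIRE_APPROVAL",
  },
  // Never let an agent delete from production
  { actionType: "delete", target: "production_database", decision: "BLOCK" },
];

// First matching policy wins; unmatched actions default to ALLOW here
function evaluate(actionType: string, target: string, ctx: Record<string, unknown>): Decision {
  const match = policies.find((p) =>
    p.actionType === actionType &&
    (!p.target || p.target === target) &&
    (!p.when || p.when(ctx)));
  return match ? match.decision : "ALLOW";
}
```

The default-ALLOW fallback is a choice made for this sketch; a stricter deployment could default to REQUIRE_APPROVAL for unmatched actions.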
Decision Outcomes
- ALLOW — Callback executes immediately. Low risk, policy permits.
- REQUIRE_APPROVAL — Paused. Human review required before execution.
- BLOCK — Callback never runs. Policy violation.
Human Approval Workflows
When an action requires human approval, guard() holds execution until an authorized operator reviews and approves the request.
Approval Process
Agent calls guard()
AI agent attempts to execute a tool wrapped with guard()
Runplane holds execution
Policy returns REQUIRE_APPROVAL, callback is paused
Operator reviews request
Dashboard shows action details, context, and risk assessment
Callback proceeds or is blocked
If approved, callback executes. If denied, callback is blocked.
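The "hold execution" step can be pictured as a wait loop over the approval status. Everything here is an assumption for illustration: the `check` callback stands in for however the SDK queries Runplane for the request's status, and the status strings are hypothetical.

```typescript
// Hypothetical approval wait; `check` stands in for the SDK's status query,
// and the status values are illustrative, not Runplane's actual API.
async function awaitApproval(
  requestId: string,
  check: (id: string) => Promise<"PENDING" | "APPROVED" | "DENIED">,
  intervalMs = 1000,
  maxAttempts = 100,
): Promise<boolean> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await check(requestId);
    if (status === "APPROVED") return true;   // operator approved: callback may run
    if (status === "DENIED") return false;    // operator denied: callback is blocked
    await new Promise((r) => setTimeout(r, intervalMs)); // still pending: keep waiting
  }
  return false; // timed out waiting for review: fail closed
}
```

Treating a timeout as a denial keeps the guard fail-closed, matching the principle that a blocked callback never runs.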
Audit Trail
Every action that passes through guard() creates an immutable audit event. This provides complete visibility into what your AI systems attempted to do and what decisions were made.
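As a sketch, an audit event carrying the recorded attributes could be typed like this. The field names follow the attributes Runplane records; the exact wire format, and the example values, are assumptions.

```typescript
// Illustrative shape of an audit event; field names follow the recorded
// attributes, but the wire format and example values are assumptions.
type Decision = "ALLOW" | "BLOCK" | "REQUIRE_APPROVAL";

interface AuditEvent {
  agentId: string;                  // which agent made the request
  actionType: string;               // canonical action type
  target: string;                   // target system
  decision: Decision;               // ALLOW / BLOCK / REQUIRE_APPROVAL
  timestamp: string;                // when the request was made (ISO 8601 here)
  context: Record<string, unknown>; // action context used for policy evaluation
}

// Hypothetical event for the delete_employee_record example above
const event: AuditEvent = {
  agentId: "agent_hr_assistant",
  actionType: "delete",
  target: "hr_system",
  decision: "REQUIRE_APPROVAL",
  timestamp: new Date().toISOString(),
  context: { employeeId: "emp_7421" },
};
```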
Recorded Attributes
- agentId — Which agent made the request
- actionType — Canonical action type
- target — Target system
- decision — ALLOW / BLOCK / REQUIRE_APPROVAL
- timestamp — When the request was made
- context — Action context for policy evaluation

Importing Tools
Use the dashboard to import your existing tools. Runplane automatically maps them to canonical action types and applies baseline policies, which you can then customize as needed.
Supported import sources:
- LangChain tool definitions
- Vercel AI SDK tools
- OpenAPI specifications
- Manual tool registration
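Runplane's actual mapping from imported tool definitions to canonical action types is automatic and not spelled out here, but the idea can be illustrated with a simple name-based heuristic. Everything in this sketch, including the verb list, is an assumption.

```typescript
// Heuristic sketch only: Runplane derives canonical types automatically from
// tool definitions, and its real mapping logic is not shown in this document.
const CANONICAL_VERBS = ["delete", "deploy", "payment", "send_email", "create", "update", "read"];

function canonicalActionType(toolName: string): string {
  const name = toolName.toLowerCase();
  // Communication tools map to send_email regardless of the exact verb
  if (name.includes("email") || name.includes("message")) return "send_email";
  // Otherwise match the first canonical verb appearing in the tool name
  // (a substring match like this can false-positive; a real mapper would
  // also inspect the tool's description and parameters)
  return CANONICAL_VERBS.find((verb) => name.includes(verb)) ?? "unknown";
}
```

For example, a tool named `delete_employee_record` would map to the canonical `delete` action, which baseline policies can then target.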