How to Add Human Approval to AI Workflows
Use runplane.guard() with REQUIRE_APPROVAL policies. Sensitive actions pause automatically. Humans review with full context. Approve to execute, reject to block.
The Direct Answer
To add human approval to AI workflows, wrap sensitive actions in runplane.guard() with policies that return REQUIRE_APPROVAL. When the policy triggers:
Execution pauses — The callback does not run
Operator reviews — Full context, risk score, action details
Approve → Execute — Callback runs after approval
Reject → Block — Callback never runs
The Problem
AI Agents Act Without Permission
AI agents make decisions in milliseconds. Without a runtime approval mechanism, sensitive actions execute before any human can intervene. By the time you see the log, the action is complete.
$50,000 transfer
Executed instantly. No approval required. Money gone. See how to build an AI payment approval system.
Customer data deletion
48,000 records purged. Irreversible. No checkpoint.
Production deployment
Config pushed to prod. Breaking change. No review.
Why It Fails
Manual Review Does Not Scale
| Approach | Problem |
|---|---|
| Review all actions | Defeats the purpose of AI automation |
| Slack alerts | Action already executed by the time you see it |
| Post-hoc audit | Forensics, not prevention |
| Prompt instructions | "Ask for permission" is not enforceable |
| Disable sensitive tools | Removes AI capability entirely |
The solution: Selective approval. Keep AI autonomous for safe actions, pause execution only for sensitive ones. Policy-driven, not all-or-nothing.
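Selective approval can be pictured as a pure policy function over the action and its context. The sketch below is a hypothetical local illustration of that idea, not the Runplane SDK (in Runplane the policy lives in the dashboard; the function name and decision strings here are assumptions): safe actions return ALLOW and run immediately, while sensitive ones return REQUIRE_APPROVAL and pause.

```javascript
// Hypothetical local sketch of a selective-approval policy -- not the
// Runplane SDK. Safe actions pass through; sensitive ones pause for review.
function evaluatePolicy(action, context) {
  const sensitive =
    action === "transfer_funds" &&
    context.amount > 1000 &&
    context.environment === "production";
  return sensitive ? "REQUIRE_APPROVAL" : "ALLOW";
}

// Routine action: executes immediately, no human involved.
console.log(evaluatePolicy("transfer_funds", { amount: 200, environment: "production" })); // ALLOW

// Sensitive action: pauses and enters the approval queue.
console.log(evaluatePolicy("transfer_funds", { amount: 15000, environment: "production" })); // REQUIRE_APPROVAL
```

The point of the shape: the policy is data-driven, so raising autonomy is a threshold change, not a code change.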
The Solution
Runtime Approval with Runplane
Runplane adds a runtime checkpoint that pauses execution when policies detect sensitive actions. The agent continues other work while waiting. No blocking, no timeout, no lost context. This is part of Runplane's broader approach to controlling AI agent actions in production.
Execution Pauses
Callback waits until decision is made. No execution occurs.
Full Context
Reviewer sees action, target, parameters, risk score.
Approve or Reject
One click. Approve resumes. Reject blocks permanently.
Audit Trail
Who approved, when, with what context. Immutable record.
How It Works
The Approval Flow
Agent requests sensitive action (transfer $15,000)
guard() evaluates policy → returns REQUIRE_APPROVAL
Execution pauses. Action enters approval queue.
Operator reviews: action, target, amount, risk score
Approve → callback executes | Reject → blocked permanently
Audit record created with full decision context
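The flow above can be modeled with a deferred promise: guard() parks the callback behind a pending decision, and resolving that decision either releases the callback or blocks it. This is a toy model of the mechanism for illustration only, not the actual SDK internals:

```javascript
// Toy model of the approval flow (illustration, not the Runplane SDK):
// guard() defers its callback until an operator decision resolves.
function createGuard() {
  const queue = [];
  async function guard(action, context, callback) {
    // Pause: the request enters the queue and the callback is parked.
    const decision = await new Promise((resolve) => {
      queue.push({ action, context, resolve });
    });
    if (decision !== "approved") {
      throw new Error(`Action "${action}" rejected`);
    }
    // Approve: the guarded callback finally executes.
    return callback();
  }
  return { guard, queue };
}

const { guard, queue } = createGuard();

// Agent requests a sensitive action; execution pauses inside guard().
const pending = guard("transfer_funds", { amount: 15000 }, async () => "transferred");

// Operator reviews the queued request and approves it.
queue[0].resolve("approved");

pending.then((result) => console.log(result)); // transferred
```

Because the pause is just an unresolved promise, nothing in the agent's event loop is blocked while the request waits.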
Implementation
Node.js Example
approval-example.js
```javascript
const { Runplane } = require("@runplane/runplane-sdk")

const runplane = new Runplane({
  apiKey: process.env.RUNPLANE_API_KEY,
})

// This action will require human approval for amounts > $1,000
async function transferFunds(from, to, amount) {
  return runplane.guard(
    "transfer_funds",
    "finance-system",
    {
      fromAccountId: from,
      toAccountId: to,
      amount,
      currency: "USD",
      environment: "production",
    },
    async () => {
      // This callback only runs after approval
      return await executeTransfer(from, to, amount)
    }
  )
}

// Policy configuration (in Runplane dashboard):
//   IF action = "transfer_funds" AND amount > 1000 AND environment = "production"
//   THEN decision = REQUIRE_APPROVAL

// Usage:
//   await transferFunds("acc_1", "acc_2", 15000)
//   → Pauses for approval
//   → Operator approves
//   → executeTransfer() runs
```

The callback executeTransfer() only runs after a human approves. If rejected, it never runs. The agent receives an error and can handle rejection gracefully.
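Since a rejection surfaces as an error, the agent can catch it and degrade gracefully instead of crashing. A sketch of that pattern, with a stub standing in for the real guard (the guard signature and error message here are assumptions for illustration):

```javascript
// Stub standing in for runplane.guard(): simulates an operator rejecting
// the request. The error message is an assumption for illustration.
const rejectingGuard = async (action, context, callback) => {
  throw new Error("rejected by operator");
};

async function guardedTransfer(guard, from, to, amount) {
  try {
    return await guard(
      "transfer_funds",
      { fromAccountId: from, toAccountId: to, amount },
      async () => "transfer complete"
    );
  } catch (err) {
    // Blocked: record the reason and let the agent move on.
    return { blocked: true, reason: err.message };
  }
}

guardedTransfer(rejectingGuard, "acc_1", "acc_2", 15000).then((outcome) => {
  console.log(outcome.blocked, outcome.reason); // true rejected by operator
});
```

Handling the rejection in-band lets the agent log the block, notify the requester, or try a smaller, policy-compliant action instead.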
Use Cases
When to Require Approval
Financial Operations
Transfers, refunds, payouts above threshold
Destructive Actions
DELETE, DROP, TRUNCATE, purge operations
Customer Impact
Account changes, communications, exports
Infrastructure
Production deploys, config changes, scaling
Access Control
Permission grants, role changes, key rotation
Compliance
PII access, audit record modifications, regulated operations
Benefits
What You Get
Selective control — Only sensitive actions pause. Safe actions execute immediately.
Non-blocking — Agent continues other work while approval is pending.
Full context — Reviewers see everything: action, parameters, risk score.
Audit trail — Every decision recorded. Who, when, why.
Policy-driven — Define thresholds, not manual triage.
Enterprise
Built for Production
SOC 2 Type II
Compliant infrastructure
Cryptographic Audit
Tamper-proof records
Role-Based Access
Approver permissions
FAQ
Frequently Asked Questions
How do you add human approval to AI workflows?
Add human approval by wrapping AI actions in runplane.guard() with policies that return REQUIRE_APPROVAL for sensitive operations. When triggered, execution pauses automatically. An operator reviews the action with full context and approves or rejects. Only after approval does execution resume.
When should AI actions require human approval?
AI actions should require human approval for: financial transactions above thresholds, destructive operations (deletes, drops), actions affecting customer data, production infrastructure changes, and compliance-sensitive operations. The threshold depends on your risk tolerance.
What is human-in-the-loop for AI agents?
Human-in-the-loop for AI agents means inserting a human checkpoint before certain actions execute. Unlike reviewing all outputs, runtime HITL only pauses execution when policies detect sensitive actions—keeping AI autonomous for safe operations while ensuring oversight for risky ones.
Does approval block the entire AI system?
No. With runplane.guard(), only the specific action requiring approval pauses—not the entire agent. The agent can continue processing other tasks while waiting. When approval is granted, execution resumes automatically.
Are approval decisions auditable?
Yes. Every approval request, decision, and outcome is recorded in an immutable audit log. This includes who approved, when, the full action context, and whether execution proceeded. This supports compliance and incident investigation.
What happens if an approver rejects an action?
If rejected, the action is permanently blocked. The callback never executes. The agent receives a rejection error with the reason. The decision is recorded in the audit trail for compliance.
Can approval policies be based on context?
Yes. Policies can require approval based on any contextual factor: amount thresholds, target system, environment (production vs staging), user role, time of day, or custom attributes. This allows fine-grained control over which actions pause for review.
Related
Learn More
Add Human Approval to Your AI Workflows
14-day free trial. Configure approval policies in minutes. No agent code changes required.