LangChain Integration

Stop LangChain Tools Before They Do Damage

Add runtime control to any LangChain agent. Block destructive tools, require human approval for high-risk actions, and maintain a full audit trail.

LangChain agents call tools automatically

There's no native layer to block or require approval before a tool runs. Once the agent decides to call a tool, it executes immediately. A single unchecked tool call can delete records, send emails, or modify production data.

Without Runplane
Agent decides → Tool executes → No control

Wrap any LangChain tool with runplane.guard()

One line of code gives you full control. Intercept tool calls, evaluate against your policies, and decide whether to allow, block, or require human approval.

main.py
import asyncio
import os

from runplane import Shield

runplane = Shield(api_key=os.environ["RUNPLANE_API_KEY"])

async def main():
    # Wrap any LangChain tool call: name the action and resource,
    # pass the parameters, and defer the actual call to a lambda
    result = await runplane.guard(
        "delete_record",
        "users-database",
        {"recordId": user_id},
        lambda: langchain_tool.run(input)
    )

    # Result contains the decision and execution outcome
    if result.decision == "BLOCK":
        print("Action was blocked by policy")
    elif result.decision == "ALLOW":
        print("Action executed:", result.output)

asyncio.run(main())

What You Get

Block destructive tools before they execute

Require human approval for high-risk actions

Full audit trail of every decision
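The audit-trail point above can be sketched as a structured event recorded for every decision. The field names here are assumptions for illustration, not Runplane's actual schema.

```python
import json
import time

def audit_event(action, resource, decision, params):
    # Illustrative audit record; field names are assumptions, not Runplane's schema
    return {
        "timestamp": time.time(),
        "action": action,
        "resource": resource,
        "decision": decision,
        "params": params,
    }

event = audit_event("delete_record", "users-database", "BLOCK", {"recordId": "42"})
print(json.dumps(event, indent=2))
```

Structured records like this make it easy to answer later questions such as "which tool calls were blocked last week, and with what parameters?"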

How Runplane Works

1. Intercept: guard() intercepts the action before execution

2. Decide: The policy engine evaluates the call and returns one of three decisions: ALLOW, REQUIRE_APPROVAL, or BLOCK

3. Execute or Halt: The action runs, is blocked, or waits for human approval
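The intercept → decide → execute flow can be sketched in a few lines. This is a toy policy engine for illustration; `GuardResult`, `guard`, and the rule format are assumptions, not Runplane's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GuardResult:
    decision: str               # "ALLOW", "REQUIRE_APPROVAL", or "BLOCK"
    output: Optional[object] = None

def guard(action: str, rules: dict, execute: Callable[[], object]) -> GuardResult:
    # 1. Intercept: the call reaches guard() before the tool runs
    decision = rules.get(action, "ALLOW")        # 2. Decide
    if decision == "ALLOW":
        return GuardResult("ALLOW", execute())   # 3. Execute
    return GuardResult(decision)                 # 3. Halt (or wait for approval)

rules = {"delete_record": "BLOCK", "send_email": "REQUIRE_APPROVAL"}
print(guard("delete_record", rules, lambda: "deleted").decision)  # BLOCK
print(guard("fetch_user", rules, lambda: {"id": 1}).output)       # the tool's result
```

Note that the tool body is passed as a callable, so a BLOCK or REQUIRE_APPROVAL decision means the side effect never happens at all.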

Works with Any LangChain Tool

SQL Tools

File Tools

API Tools

Custom Tools

Start free at runplane.ai

Add runtime control to your LangChain agents in minutes. No credit card required.