High Severity · AI Hallucination · January 30, 2024

LLM Invents Non-Existent Legal Precedents

A legal research AI cited fabricated court cases and legal precedents in research summaries, which were included in court filings before the fabrications were discovered.

System Type: Legal Research AI

What Happened

A law firm deployed an AI legal research tool to accelerate case preparation. An associate used the tool to research precedents for a motion and received a summary citing six relevant cases with case numbers, court names, and brief holdings. The associate included these citations in a court filing without independent verification. Opposing counsel could not locate three of the cited cases and raised the issue with the court. Investigation revealed these cases were complete fabrications generated by the AI.

Root Cause

The LLM was not connected to verified legal databases and generated plausible-sounding citations from its training data patterns. No citation verification step existed in the workflow. Users trusted the AI output without independent validation.

Impact

Court sanctions against the law firm. Significant reputational damage. State bar investigation initiated. Required disclosure to all clients whose cases used the AI tool. Complete withdrawal of the AI tool from legal research workflows.

Lessons Learned

  • AI-generated citations must be verified against authoritative sources
  • LLMs can generate entirely fabricated references with high confidence
  • Professional standards require human verification of AI research
  • Workflow design must include verification steps before court filings

Preventive Measures

  • Connect AI research tools to verified legal databases with citation checking
  • Require independent verification of all AI-generated citations
  • Add warnings to AI output indicating unverified content
  • Implement automated citation verification before output delivery
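The last measure above can be sketched as a gate in the delivery pipeline. This is a minimal, hypothetical illustration: the `Citation` structure, the `VERIFIED_DB` set, and the function names are assumptions for the sketch, and a real deployment would query an authoritative legal database rather than an in-memory set.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    case_name: str
    reporter_cite: str  # e.g. "410 U.S. 113"

# Stand-in for a verified legal database; a real system would query
# an authoritative source (court records, a licensed citator) instead.
VERIFIED_DB = {"410 U.S. 113", "347 U.S. 483"}

def verify_citations(citations):
    """Partition AI-generated citations into verified and unverified."""
    verified, unverified = [], []
    for c in citations:
        (verified if c.reporter_cite in VERIFIED_DB else unverified).append(c)
    return verified, unverified

def gate_output(citations):
    """Refuse to deliver output containing any unverified citation."""
    verified, unverified = verify_citations(citations)
    if unverified:
        raise ValueError(
            "Unverified citations; human review required: "
            + ", ".join(c.reporter_cite for c in unverified)
        )
    return verified
```

In this design the gate fails closed: a single fabricated citation blocks delivery entirely, forcing human review rather than silently passing partial results downstream.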

How Runplane Would Handle This

While Runplane focuses on actions rather than research output, the governance principle applies: AI outputs that will be used in high-stakes contexts should pass through verification checkpoints. A similar system could intercept AI-generated citations and verify them against legal databases before including them in documents.
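The interception idea described above can be sketched as a checkpoint function that sits between the research tool and the filing workflow. Everything here is illustrative: `verification_checkpoint`, the summary dictionary shape, and the `lookup` callable are assumed names standing in for a real legal-database query, not an actual Runplane API.

```python
def verification_checkpoint(ai_summary: dict, lookup) -> dict:
    """Intercept an AI research summary and annotate every citation
    with an explicit verification status before it moves downstream.

    `lookup` is a stand-in for a legal-database query; it should
    return True only when the citation resolves to a real case.
    """
    annotated = []
    for cite in ai_summary.get("citations", []):
        status = "VERIFIED" if lookup(cite) else "UNVERIFIED - DO NOT FILE"
        annotated.append({"citation": cite, "status": status})
    return {
        **ai_summary,
        "citations": annotated,
        # Flag the whole summary for human review if anything failed.
        "requires_human_review": any(
            c["status"].startswith("UNVERIFIED") for c in annotated
        ),
    }
```

The key property is that no summary leaves the checkpoint without an explicit per-citation status, so downstream steps cannot mistake unverified text for vetted research.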