A legal research AI cited fabricated court cases and legal precedents in research summaries, which were included in court filings before the fabrications were discovered.
A law firm deployed an AI legal research tool to accelerate case preparation. An associate used the tool to research precedents for a motion and received a summary citing six relevant cases, complete with case numbers, court names, and brief holdings. The associate included these citations in a court filing without independent verification. Opposing counsel could not locate three of the cited cases and raised the issue with the court. An investigation revealed that those three cases were complete fabrications generated by the AI.
The LLM was not connected to any verified legal database and generated plausible-sounding citations from patterns in its training data. No citation-verification step existed in the workflow, and users trusted the AI output without independent validation.
Court sanctions against the law firm. Significant reputational damage. A state bar investigation was initiated. Required disclosure to all clients whose cases had used the AI tool. Complete withdrawal of AI from legal research workflows.
While Runplane focuses on actions rather than research output, the governance principle applies: AI outputs that will be used in high-stakes contexts should pass through verification checkpoints. A similar system could intercept AI-generated citations and verify them against legal databases before including them in documents.
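As a minimal sketch of such a checkpoint, the snippet below extracts reporter-style citations from AI-generated text and checks each one against a verified source before the text can reach a filing. The names (`verify_citations`, `VERIFIED_DATABASE`) are hypothetical, and the in-memory set stands in for a real legal database lookup (e.g., a commercial research API); the regex covers only a few common reporter formats for illustration.

```python
import re
from dataclasses import dataclass

# Hypothetical stand-in for a verified legal database lookup.
# A production system would query a real legal research service instead.
VERIFIED_DATABASE = {
    "123 F.3d 456",
    "789 U.S. 101",
}

# Matches a few common reporter-style citations, e.g. "123 F.3d 456" or "789 U.S. 101".
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:F\.\s?(?:2d|3d|4th)|U\.S\.|S\.\s?Ct\.)\s+\d{1,4}\b"
)

@dataclass
class VerificationResult:
    verified: list
    unverified: list

    @property
    def passed(self) -> bool:
        # The checkpoint passes only if every extracted citation was verified.
        return not self.unverified

def verify_citations(ai_summary: str) -> VerificationResult:
    """Intercept AI-generated text and check every citation it contains
    against the verified source before the text reaches a document."""
    found = CITATION_PATTERN.findall(ai_summary)
    verified = [c for c in found if c in VERIFIED_DATABASE]
    unverified = [c for c in found if c not in VERIFIED_DATABASE]
    return VerificationResult(verified, unverified)

summary = "Per 123 F.3d 456 and 999 F.3d 111, the motion should be granted."
result = verify_citations(summary)
print(result.passed)      # False: "999 F.3d 111" is not in the database
print(result.unverified)  # ['999 F.3d 111']
```

A failing result would block the document from proceeding and surface the unverifiable citations to a human reviewer, which is the verification checkpoint this incident lacked.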