A code-completion AI model generated code snippets containing fabricated API keys that resembled real credentials, leading developers to unwittingly commit credential-like strings to their repositories.
An AI-powered code assistant was integrated into the development workflow to suggest code completions and generate boilerplate. When developers requested examples of API integration code, the model generated realistic-looking API keys and secrets in the sample code. These fabricated credentials were committed to repositories by developers who assumed the placeholders would be obvious. Security scanners later flagged the patterns as potential credential leaks, triggering a company-wide security audit.
The LLM was trained on code repositories that contained actual API keys (a common security antipattern). The model learned to generate realistic-looking credentials when writing integration examples. No post-generation filtering was applied to detect and redact credential-like patterns.
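The missing safeguard can be sketched as a simple post-generation redaction pass. This is an illustrative sketch only: the two patterns below (an AWS-style access key ID format and a generic `api_key=`/`secret=`/`token=` assignment) stand in for the much larger rule sets and entropy checks used by real scanners such as gitleaks or detect-secrets.

```python
import re

# Illustrative rules only; production scanners use far larger rule sets
# plus entropy heuristics. Each rule pairs a pattern with its replacement.
RULES = [
    # AWS access key IDs: "AKIA" followed by 16 uppercase alphanumerics.
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<REDACTED_CREDENTIAL>"),
    # Generic assignments like api_key="..." with a long opaque value;
    # keep the variable name and quotes, redact only the value.
    (re.compile(r"(?i)\b(api[_-]?key|secret|token)(\s*[:=]\s*)(['\"])[A-Za-z0-9+/_\-]{16,}\3"),
     r"\1\2\3<REDACTED_CREDENTIAL>\3"),
]

def redact_credentials(code: str) -> str:
    """Replace credential-like substrings in generated code with placeholders."""
    for pattern, replacement in RULES:
        code = pattern.sub(replacement, code)
    return code

print(redact_credentials('client = Client(api_key="sk_4f9aXy72bQ81mNop3RstUv")'))
# → client = Client(api_key="<REDACTED_CREDENTIAL>")
```

Running such a pass on model output before it is shown to developers would have turned realistic-looking secrets into unmistakable placeholders.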
A security audit was required across 47 repositories. Developer productivity was lost during the investigation, and false-positive alerts disrupted security-team operations. Development teams subsequently had to be retrained on AI-assisted coding practices.
While Runplane focuses on runtime actions rather than code generation, the principle applies: AI outputs that could lead to security issues should be validated before reaching production. For code assistants, a similar governance layer could intercept suggested code and apply security scanning before presenting it to developers.
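Such an interception layer could look like the sketch below. All names here (`governed_completion`, `scan_for_secrets`) are hypothetical, not a Runplane API, and the single AWS-style regex stands in for a full secret-scanning rule set.

```python
import re
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ScanResult:
    clean: bool
    findings: List[str] = field(default_factory=list)

def scan_for_secrets(code: str) -> ScanResult:
    # Minimal stand-in for a real secret scanner; one illustrative rule.
    findings = re.findall(r"AKIA[0-9A-Z]{16}", code)
    return ScanResult(clean=not findings, findings=findings)

def governed_completion(generate: Callable[[str], str], prompt: str) -> str:
    """Intercept a model's suggestion and scan it before the developer sees it."""
    suggestion = generate(prompt)
    result = scan_for_secrets(suggestion)
    if not result.clean:
        # Policy choice: redact and annotate rather than block outright.
        for secret in result.findings:
            suggestion = suggestion.replace(secret, "<PLACEHOLDER_KEY>")
        suggestion = "# NOTE: credential-like strings were redacted\n" + suggestion
    return suggestion
```

Whether to redact, block, or merely warn is a policy decision; the key point is that the scan runs between generation and presentation, so credential-like strings never reach a commit.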