A support AI agent inadvertently included personally identifiable information from other customers in response messages due to context window contamination.
A customer support AI was deployed with access to customer records to provide personalized assistance. The system used a shared context window that retained information across different customer sessions. When Customer A asked about their account, the AI's response included details from Customer B's previous inquiry, including name, email, and partial payment information. The issue affected over 1,200 customer interactions before it was discovered through a customer complaint.
The root cause was improper session isolation in the AI's context management: customer data persisted in the model's working memory across session boundaries, and no output filtering was applied to detect and redact PII from responses.
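The isolation failure described above can be sketched as follows. This is a minimal illustration, not the deployed system's code: the `SessionContextStore` class and its method names are hypothetical. The key property is that conversation history is keyed by session, so one customer's messages can never be assembled into another customer's prompt, and context is explicitly discarded at session boundaries.

```python
from collections import defaultdict

# Hypothetical sketch of per-session context isolation. In the incident,
# a single shared context buffer survived across customer sessions; here
# every session gets its own history, and nothing crosses the boundary.

class SessionContextStore:
    def __init__(self):
        # session_id -> list of messages for that session only
        self._contexts = defaultdict(list)

    def append(self, session_id: str, message: str) -> None:
        self._contexts[session_id].append(message)

    def get_history(self, session_id: str) -> list:
        # Only this session's messages are ever returned; there is no
        # shared buffer that retains data from other customers.
        return list(self._contexts[session_id])

    def end_session(self, session_id: str) -> None:
        # Explicitly discard working memory when the session ends.
        self._contexts.pop(session_id, None)

store = SessionContextStore()
store.append("cust-a", "What is my balance?")
store.append("cust-b", "Update my email address")
print(store.get_history("cust-a"))  # only Customer A's messages
```

A design like this makes cross-session leakage a structural impossibility rather than something that depends on the model "forgetting" prior customers.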
1,200+ customers affected. Regulatory notification required under GDPR and CCPA. Customer trust significantly damaged. Legal review of potential liability. Emergency system shutdown and redesign required.
Runplane could add a governance layer between the AI and the data retrieval system. Before the AI accesses any customer record, Runplane would verify that the requested data belongs to the customer currently being served. Any attempt to access or include data from other customers would be blocked. Additionally, output policies could scan responses for PII patterns before they reach customers.
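The two controls described above can be sketched as follows. This is an illustrative sketch, not Runplane's actual API: the function names and regex patterns are assumptions. The first check blocks retrieval of any record that does not belong to the customer currently being served; the second scans outbound responses for common PII patterns (emails, long digit runs resembling payment card numbers) and redacts them before delivery.

```python
import re

# Hypothetical sketch of a governance layer: an ownership check on data
# retrieval plus an outbound PII filter. Names and patterns are
# illustrative, not a real product interface.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # rough payment-card pattern

def authorize_record_access(session_customer_id: str, record_owner_id: str) -> bool:
    # Verify the requested record belongs to the customer being served;
    # any cross-customer access attempt is blocked.
    return session_customer_id == record_owner_id

def redact_pii(response: str) -> str:
    # Scan the response for PII patterns and redact before it reaches
    # the customer. Real deployments would use broader detectors.
    response = EMAIL_RE.sub("[REDACTED EMAIL]", response)
    response = CARD_RE.sub("[REDACTED NUMBER]", response)
    return response
```

Layering both controls means that even if the ownership check is somehow bypassed (as it effectively was in this incident), leaked PII is still caught at the output boundary.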