This vulnerability occurs when an application uses a generative AI model (like an LLM) but fails to validate the model's output before acting on it. Without this validation, the AI's responses may contain security flaws, harmful content, or data leaks that violate the application's intended policies.
Generative AI models are powerful but unpredictable. They can be tricked into producing malicious code, biased decisions, offensive content, or fragments of sensitive training data. If your application blindly trusts and acts on these outputs, the result can be injection attacks, compliance violations, or data breaches. You must apply robust validation checks, such as content filtering, code sanitization, and policy enforcement, to every AI response before it is processed further. Continuously monitoring for these validation failures across all your AI-integrated services is a complex challenge. An ASPM platform like Plexicus can automatically detect these flaws in your runtime environment, while its AI-powered remediation provides specific fixes to harden your validation logic, ensuring your AI features remain secure and reliable.
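As a minimal sketch of this idea, the snippet below treats an LLM response as untrusted input and runs it through basic policy checks before it reaches a downstream sink. The patterns, length limit, and helper names are illustrative assumptions, not a complete filter or a specific library API.

```python
import re

# Illustrative policy checks applied to model output before the application uses it.
BLOCKED_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),                   # embedded script tags
    re.compile(r";\s*(drop|delete)\s+table", re.IGNORECASE),   # crude SQL-injection marker
]

MAX_OUTPUT_LENGTH = 4000  # application-specific size policy (assumed value)


def validate_llm_output(text: str) -> str:
    """Return the text only if it passes basic policy checks; raise otherwise."""
    if len(text) > MAX_OUTPUT_LENGTH:
        raise ValueError("LLM output exceeds allowed length")
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            raise ValueError(f"LLM output rejected by policy: {pattern.pattern}")
    return text


def handle_model_response(raw_response: str) -> str:
    # Treat the model output like any untrusted input: validate, then escape for the sink.
    checked = validate_llm_output(raw_response)
    # Escape before rendering in HTML; a real application would sanitize per sink.
    return checked.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
```

In practice the checks would be tailored to each sink (HTML rendering, SQL, shell commands, code execution), but the structure stays the same: no model output flows onward until it has passed validation.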
Impact: Execute Unauthorized Code or Commands; Varies by Context
In an agent-oriented setting, unvalidated output can trigger unintended agent invocations, letting an attacker control or influence which agents are invoked and with what inputs. The impact depends on the access granted to those agents' tools, such as the ability to create databases or write files.
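One common mitigation in this agent setting is to gate every tool call suggested by model output behind an explicit allowlist and per-tool argument checks. The sketch below assumes a simple dispatch table; the tool names and handlers are hypothetical, not part of any specific agent framework.

```python
from typing import Callable, Dict


def search_docs(query: str) -> str:
    """Placeholder read-only tool; a real handler would query a document index."""
    return f"results for {query!r}"


# Only read-only tools are allowed; nothing here can create databases or write files.
ALLOWED_TOOLS: Dict[str, Callable[..., str]] = {
    "search_docs": search_docs,
}


def dispatch_tool_call(tool_name: str, arguments: dict) -> str:
    """Invoke a tool requested in model output only if it is explicitly allowlisted."""
    handler = ALLOWED_TOOLS.get(tool_name)
    if handler is None:
        raise PermissionError(f"Model output requested a disallowed tool: {tool_name}")
    if not isinstance(arguments, dict):
        raise ValueError("Tool arguments must be a JSON object")
    return handler(**arguments)


# Example: a parsed model response asking to run a tool
print(dispatch_tool_call("search_docs", {"query": "refund policy"}))
```

Keeping the allowlist minimal and validating arguments per tool limits what a manipulated model output can do, even when the agent itself is compromised by a prompt injection.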