This vulnerability occurs when a generative AI or ML model is deployed with inference parameters that are too permissive, causing it to frequently generate incorrect, nonsensical, or unpredictable outputs.
Generative AI models, such as those for text or image creation, use sampling settings like temperature, Top P, and Top K to control how much randomness enters their output. When these values are set incorrectly, typically too high, the model samples from low-probability choices, producing 'hallucinations,' incoherent content, or wildly unrealistic results. When these flawed outputs feed into decision-making processes, data pipelines, or user-facing features, they can corrupt data integrity and create serious reliability issues.

For developers, securing AI components means rigorously validating and constraining inference parameters in production, much as you would sanitize user input. Finding safe thresholds requires testing across diverse inputs, since acceptable ranges vary by model and task. Managing these configurations at scale across multiple models is challenging; an ASPM platform like Plexicus can help by automatically detecting insecure AI parameter settings and tracking these flaws alongside traditional vulnerabilities in your application stack.
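To make this concrete, here is a minimal sketch of treating inference parameters like untrusted input and clamping them to pre-validated ranges before they reach the model. The `SAFE_BOUNDS` values and the `clamp_inference_params` helper are illustrative assumptions, not thresholds from any particular model; safe ranges must be found empirically per model and task.

```python
# Hypothetical safe ranges; real thresholds should come from testing
# each model across diverse inputs, as described above.
SAFE_BOUNDS = {
    "temperature": (0.0, 1.0),
    "top_p": (0.1, 0.95),
    "top_k": (1, 100),
}

def clamp_inference_params(params: dict) -> dict:
    """Constrain caller-supplied inference parameters to validated ranges,
    treating them like any other untrusted input."""
    safe = dict(params)
    for name, (lo, hi) in SAFE_BOUNDS.items():
        if name in safe:
            value = safe[name]
            if not isinstance(value, (int, float)):
                raise ValueError(f"{name} must be numeric, got {type(value).__name__}")
            safe[name] = min(max(value, lo), hi)
    return safe

# Example: a request asking for temperature=2.5 is clamped to 1.0
print(clamp_inference_params({"temperature": 2.5, "top_p": 0.99}))
# {'temperature': 1.0, 'top_p': 0.95}
```

Clamping rather than rejecting keeps requests serviceable while still enforcing the validated envelope; rejecting out-of-range values outright is the stricter alternative.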
Impact: Varies by Context, Unexpected State
The product can generate inaccurate, misleading, or nonsensical information.
Impact: Alter Execution Logic, Unexpected State, Varies by Context
If outputs are used in critical decision-making processes, errors could be propagated to other systems or components.
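One way to limit this propagation, sketched below under the assumption that the model is expected to return structured JSON, is to validate every output against the downstream contract before any other system consumes it. The `REQUIRED_KEYS` set and `validate_model_output` helper are hypothetical names for illustration.

```python
import json

REQUIRED_KEYS = {"decision", "reason"}  # hypothetical downstream contract

def validate_model_output(raw: str) -> dict:
    """Reject malformed or incomplete model output instead of letting it
    propagate into downstream decision-making systems."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc
    missing = REQUIRED_KEYS - parsed.keys()
    if missing:
        raise ValueError(f"model output missing required fields: {sorted(missing)}")
    return parsed

# Example: an output missing the 'reason' field is rejected, not propagated
try:
    validate_model_output('{"decision": "approve"}')
except ValueError as err:
    print(err)  # model output missing required fields: ['reason']
```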