Insufficient Verification of Data Authenticity

Status: Draft
Abstraction: Class
Structure: Simple
Description

This vulnerability occurs when an application fails to properly check where data comes from or confirm its legitimacy, allowing untrusted or forged information to be processed as valid.

Extended Description

Insufficient verification of data authenticity is a common root cause for security flaws like spoofing, CSRF, and replay attacks. It happens when developers trust data based solely on its apparent format or origin, without enforcing strong cryptographic signatures, secure tokens, or proper chain-of-trust validation. Attackers exploit this by tampering with requests, forging headers, or replaying captured data to impersonate users, bypass authorization, or trigger unauthorized actions. To prevent this, always cryptographically verify the source and integrity of critical data—such as session tokens, API requests, and file uploads—using standards like digital signatures, anti-CSRF tokens, and secure challenge-response mechanisms. Managing these validation checks consistently across a complex application landscape is challenging. An ASPM platform like Plexicus can automatically detect missing authenticity checks across your codebase and runtime, using AI to generate precise remediation guidance, ensuring your verification logic is robust and uniformly applied.
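As a concrete illustration of the anti-CSRF token approach described above, the sketch below binds a token to a session identifier with an HMAC and verifies it in constant time. It is a minimal, self-contained example, not a production implementation: the key handling, the function names, and the session identifier are all hypothetical, and a real application would load the key from secure configuration and typically include an expiry timestamp in the signed data.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side key; in practice, load from secure configuration,
# never generate per-process like this (tokens would not survive a restart).
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Bind an anti-CSRF token to the session via a keyed HMAC tag."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    """Recompute the expected tag and compare in constant time."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, token)

token = issue_csrf_token("session-abc123")
assert verify_csrf_token("session-abc123", token)        # genuine token accepted
assert not verify_csrf_token("session-abc123", "forged") # forged token rejected
```

The essential property is that the tag cannot be computed without the server-side key, so an attacker cannot mint a token for a victim's session; `hmac.compare_digest` avoids leaking the correct tag through timing differences.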

Common Consequences
Scope: Integrity, Other

Impact: Varies by Context, Unexpected State

Detection Methods
Automated Static Analysis (Effectiveness: High)
Automated static analysis, commonly referred to as Static Application Security Testing (SAST), can find some instances of this weakness by analyzing source code (or binary/compiled code) without having to execute it. Typically, this is done by building a model of data flow and control flow, then searching for potentially-vulnerable patterns that connect "sources" (origins of input) with "sinks" (destinations where the data interacts with external components, a lower layer such as the OS, etc.)
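The source-to-sink pattern such tools search for can be shown with a minimal sketch. The handler below is deliberately vulnerable: an attacker-controlled header (the "source") flows into an authorization decision (the "sink") with no authenticity check. The request shape, header name, and function are all hypothetical, standing in for whatever web framework a real SAST tool would model.

```python
# Hypothetical request modeled as a plain dict; a real SAST tool would treat
# the framework's request object as the taint "source".
def handle_delete(request: dict, admin_users: set) -> str:
    # VULNERABLE: the X-User header is attacker-supplied and flows directly
    # into an authorization check (the "sink") with no proof of authenticity.
    user = request["headers"].get("X-User", "")
    if user in admin_users:
        return "deleted"   # privileged action reachable via a forged header
    return "denied"

# An attacker simply forges the header and is treated as an administrator.
forged = {"headers": {"X-User": "alice"}}
print(handle_delete(forged, {"alice"}))  # "deleted"
```

The fix is to derive identity from something the client cannot forge, such as a server-verified session or a signed token, rather than from a bare header value.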
Demonstrative Examples

ID: DX-153

In 2022, the OT:ICEFALL study examined products by 10 different Operational Technology (OT) vendors. The researchers reported 56 vulnerabilities and said that the products were "insecure by design" [REF-1283]. If exploited, these vulnerabilities often allowed adversaries to change how the products operated, ranging from denial of service to changing the code that the products executed. Since these products were often used in industries such as power, electrical, water, and others, there could even be safety implications.
Multiple vendors did not sign firmware images.
Observed Examples
CVE-2022-30260: Distributed Control System (DCS) does not sign firmware images and relies only on insecure checksums for integrity checks
CVE-2022-30267: Distributed Control System (DCS) does not sign firmware images and relies only on insecure checksums for integrity checks
CVE-2022-30272: Remote Terminal Unit (RTU) does not use signatures for firmware images and relies on insecure checksums
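The CVEs above all share the same flaw: a checksum detects accidental corruption but proves nothing about who produced the image. The sketch below contrasts the two, using an HMAC as a stand-in for authenticity verification; real firmware signing would use an asymmetric signature (e.g. Ed25519) verified against a vendor public key baked into the bootloader, so the device never holds a signing secret. The key and image contents here are hypothetical.

```python
import hashlib
import hmac
import zlib

firmware = b"\x7fELF...original firmware image"
tampered = b"\x7fELF...malicious firmware image"

# Insecure: CRC32 catches accidental corruption only. An attacker who modifies
# the image simply recomputes the checksum and attaches it.
def crc_ok(image: bytes, checksum: int) -> bool:
    return zlib.crc32(image) == checksum

attacker_checksum = zlib.crc32(tampered)
print(crc_ok(tampered, attacker_checksum))  # True: the forgery passes

# Better: a keyed MAC. The attacker cannot produce a valid tag without the key.
SIGNING_KEY = b"hypothetical-vendor-secret"

def mac_ok(image: bytes, tag: bytes) -> bool:
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

genuine_tag = hmac.new(SIGNING_KEY, firmware, hashlib.sha256).digest()
print(mac_ok(firmware, genuine_tag))   # True: genuine image accepted
print(mac_ok(tampered, genuine_tag))   # False: tampering detected
```

Note that a plain unkeyed hash (e.g. SHA-256 alone) has the same weakness as the CRC if the attacker can also replace the stored digest; authenticity requires a secret key or a verified public key.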
References
REF-44: Michael Howard, David LeBlanc, and John Viega. "24 Deadly Sins of Software Security". McGraw-Hill. 2010.
REF-1283: Forescout Vedere Labs. "OT:ICEFALL: The legacy of 'insecure by design' and its implications for certifications and risk management". 2022-06-20.
Applicable Platforms
Languages:
Not Language-Specific: Undetermined
Technologies:
ICS/OT: Undetermined
Modes of Introduction
Architecture and Design
Implementation
Related Weaknesses
Taxonomy Mapping
  • PLOVER
  • OWASP Top Ten 2004
  • WASC
Notes
Relationship: "origin validation" could fall under this.
Maintenance: The specific ways in which the origin is not properly identified should be laid out as separate weaknesses. In some sense, this is more like a category.