Evade ML Model
Type: technique
Description: Adversaries can craft adversarial data that prevents a machine learning model from correctly identifying its contents. This technique can be used to evade a downstream task that relies on machine learning. The adversary may evade machine-learning-based virus/malware detection or network scanning in pursuit of a traditional cyber attack.
Version: 0.1.0
Created At: 2025-03-04 10:27:40 -0500
Last Modified At: 2025-03-04 10:27:40 -0500
External References
Related Objects
- --> Initial Access (tactic): Bypassing or evading machine learning models used for security or detection to gain unauthorized access.
- --> Defense Evasion (tactic): Evading detection or mitigation measures implemented by machine learning models.
- --> Impact (tactic): Evading machine-learning-based detection can enable severe security breaches on the target system.
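The evasion described above can be sketched with a minimal adversarial-perturbation example. The snippet below is an illustrative assumption, not a real detector: it models a "malware detector" as a linear classifier and applies an FGSM-style step (perturbing the input against the gradient of the detection score) so the sample's score drops below its original value.

```python
import numpy as np

# Hypothetical linear "malware detector": P(malicious) = sigmoid(w . x + b).
# The weights, bias, and feature vector are illustrative assumptions.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # detector weights (assumed known to the adversary)
b = -0.5                  # detector bias
x = rng.normal(size=16)   # feature vector of the original malicious sample

def detect(sample):
    """Return the detector's probability that the sample is malicious."""
    return 1.0 / (1.0 + np.exp(-(w @ sample + b)))

# FGSM-style evasion: step the input against the gradient of the score.
# For a linear model, the gradient with respect to x is just w, so the
# evasion direction is -sign(w).
eps = 0.5
x_adv = x - eps * np.sign(w)

# The perturbed sample scores strictly lower, i.e. it is more likely to
# slip past the detector while remaining close to the original input.
```

This white-box sketch assumes the adversary knows the model's gradient; in practice, black-box variants estimate it through repeated queries to the deployed detector.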