Evade AI Model

Type: technique

Description: Adversaries can craft adversarial data that prevents an AI model from correctly identifying its contents. This technique can be used to evade a downstream task where AI is utilized. For example, an adversary may evade AI-based virus/malware detection or network-traffic scanning in furtherance of a traditional cyber attack.
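The evasion described above can be sketched with a toy example. The code below uses a hypothetical linear "malware score" detector and an FGSM-style perturbation (nudging each feature against the sign of its weight) to push a flagged sample below the detection threshold. All names, weights, and features here are invented for illustration; real detectors and attacks are far more complex.

```python
# Hypothetical linear detector: positive score => flagged as malicious.
def score(weights, bias, features):
    return sum(w * x for w, x in zip(weights, features)) + bias

# FGSM-style step for a linear model: move each feature against
# the sign of its weight to lower the detector's score.
def evade(weights, features, eps):
    return [x - eps * (1 if w > 0 else -1 if w < 0 else 0)
            for w, x in zip(weights, features)]

weights = [0.9, -0.4, 0.7]   # invented "learned" weights
bias = -0.1
sample = [0.8, 0.2, 0.6]     # feature vector the detector flags

before = score(weights, bias, sample)        # 0.96 -> detected
adv = evade(weights, sample, eps=0.5)        # perturbed sample
after = score(weights, bias, adv)            # -0.04 -> evades detection
print(before > 0, after < 0)
```

On a linear model this single step is exact; against real (non-linear) models, attacks iterate this idea using gradients or query feedback, and the perturbation budget `eps` trades off evasion success against how much the input must change.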

Version: 0.1.0

Created At: 2025-10-01 13:13:22 -0400

Last Modified At: 2025-10-01 13:13:22 -0400


Related Tactics

  • --> Initial Access (tactic): Bypassing or evading machine learning models used for security or detection to gain unauthorized access.
  • --> Defense Evasion (tactic): Evading detection or mitigation measures implemented by machine learning models.
  • --> Impact (tactic): Manipulating machine learning models to evade detection can lead to severe security breaches.