AI Attack Staging

Type: tactic

Description: The adversary is leveraging their knowledge of and access to the target system to tailor the attack.

AI Attack Staging consists of techniques adversaries use to prepare their attack on the target AI model. Techniques can include training proxy models, poisoning the target model, and crafting adversarial data to feed the target model. Some of these techniques can be performed in an offline manner and are thus difficult to mitigate. These techniques are often used to achieve the adversary's end goal.
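To make the "crafting adversarial data" portion of this tactic concrete, the following is a minimal, hedged sketch of generating adversarial inputs with the Fast Gradient Sign Method against a locally held (proxy) model. The model, data, and epsilon value are illustrative assumptions and not part of this tactic definition.

```python
# Illustrative sketch only: craft adversarial inputs offline against a proxy model.
import torch
import torch.nn as nn

def fgsm_example(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x using the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximizes the loss, then clamp to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Stand-in proxy model and data, purely for demonstration.
    proxy_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    images = torch.rand(4, 1, 28, 28)
    labels = torch.randint(0, 10, (4,))
    adversarial_images = fgsm_example(proxy_model, images, labels)
    print(adversarial_images.shape)
```

Because the gradient computation runs entirely against the adversary's own copy of a model, this kind of staging can happen offline, which is why it is hard to detect or mitigate before the crafted inputs reach the target system.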

Version: 0.1.0

Created At: 2025-10-01 13:13:22 -0400

Last Modified At: 2025-10-01 13:13:22 -0400

Tactic Order: 13


Related Techniques

  • <-- Verify Attack (technique): Testing and validating the effectiveness of a machine learning attack before deployment.
  • <-- Manipulate AI Model (technique): Embedding backdoors in machine learning models to prepare for future exploitation or malicious activities.
  • <-- Craft Adversarial Data (technique): Creating adversarial data designed to mislead, manipulate, or disrupt machine learning models.
  • <-- Create Proxy AI Model (technique): Building a proxy machine learning model to simulate a victim's model for further analysis or attack preparation (see the sketch after this list).
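As a rough illustration of the Create Proxy AI Model technique, the sketch below queries a victim model (represented here by a placeholder function) and fits a local surrogate on the returned labels. The names `victim_predict` and `Surrogate`, the query budget, and the training loop are all assumptions made for this example, not details drawn from the tactic entry.

```python
# Hedged sketch: fit a local proxy model to mimic a victim model's observed outputs.
import torch
import torch.nn as nn

def victim_predict(x):
    # Placeholder for a remote inference API; returns hard labels.
    return (x.sum(dim=1) > 0).long()

class Surrogate(nn.Module):
    def __init__(self, in_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x):
        return self.net(x)

queries = torch.randn(256, 8)             # inputs sent to the victim
labels = victim_predict(queries)           # observed victim outputs
proxy = Surrogate()
opt = torch.optim.Adam(proxy.parameters(), lr=1e-2)
for _ in range(200):                       # train the proxy to reproduce the victim's behavior
    opt.zero_grad()
    loss = nn.functional.cross_entropy(proxy(queries), labels)
    loss.backward()
    opt.step()
```

Once trained, such a proxy gives the adversary an offline stand-in for the victim model, which can then be used to verify attacks or craft adversarial data before engaging the real target.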