Manipulate AI Model

Type: technique

Description: Adversaries may directly manipulate an AI model to change its behavior or to introduce malicious code. Manipulating a model gives the adversary a persistent change in the system. This can include poisoning the model by changing its weights, modifying the model architecture to alter its behavior, or embedding malware that may be executed when the model is loaded.
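
As a minimal illustration of the weight-poisoning variant, the sketch below assumes a PyTorch classifier checkpoint that the adversary can overwrite; the file name classifier.pt, the state-dict key fc.bias, and the target class index are all hypothetical:

    import torch

    # Hypothetical names; assumes the adversary has write access to the
    # victim's checkpoint file.
    CHECKPOINT = "classifier.pt"
    TARGET_CLASS = 7  # class the adversary wants predictions steered toward

    # Load only the saved weights (the architecture lives in code elsewhere).
    state = torch.load(CHECKPOINT, map_location="cpu")

    # Inflate the output bias for the target class so predictions are
    # systematically skewed toward it; a modest shift can evade casual
    # inspection while still changing behavior.
    state["fc.bias"][TARGET_CLASS] += 5.0

    # Overwrite the original checkpoint, making the change persist across
    # every future load of the model.
    torch.save(state, CHECKPOINT)

Because the poisoned weights live in the checkpoint itself, the change survives restarts and redeployments until the file is restored from a trusted source.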

Version: 0.1.0

Created At: 2025-07-23 10:23:39 -0400

Last Modified At: 2025-07-23 10:23:39 -0400


Relationships

  • --> Persistence (tactic): Embedding backdoors in machine learning models to allow unauthorized influence or control over model predictions.
  • --> ML Attack Staging (tactic): Embedding backdoors in machine learning models to prepare for future exploitation or malicious activities.
  • <-- Embed Malware (technique): Sub-technique of this technique; see the sketch after this list.
  • <-- Modify AI Model Architecture (technique): Sub-technique of this technique.
  • <-- Poison AI Model (technique): Sub-technique of this technique.
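
As a minimal illustration of the malware-embedding variant, the sketch below relies on the standard behavior of Python's pickle format, which many model-serialization paths use by default; the class name is hypothetical and the payload only prints a message:

    import pickle

    class EmbeddedPayload:
        # pickle records the callable and arguments returned by __reduce__
        # and invokes them at load time. A real payload could execute
        # arbitrary code; this one merely prints a message.
        def __reduce__(self):
            return (print, ("code ran while the model file was loaded",))

    # Write the object out as if it were an ordinary model checkpoint.
    with open("model.pkl", "wb") as f:
        pickle.dump(EmbeddedPayload(), f)

    # Anyone who loads the file triggers the embedded callable immediately,
    # before any model object is ever used.
    with open("model.pkl", "rb") as f:
        pickle.load(f)

This load-time execution is why tensor-only formats such as safetensors are often preferred over pickle-based checkpoints when loading models from untrusted sources.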