Backdoor ML Model
Type: technique
Description: Adversaries may introduce a backdoor into an ML model. A backdoored model performs as expected under typical conditions, but produces the adversary's desired output when a trigger is introduced into the input data. A backdoored model provides the adversary with a persistent artifact on the victim system. The embedded vulnerability is typically activated at a later time by data samples containing the backdoor trigger (see the illustrative sketch below).
Version: 0.1.0
Created At: 2025-03-04 10:27:40 -0500
Last Modified At: 2025-03-04 10:27:40 -0500
External References
Related Objects
- --> Persistence (tactic): Embedding backdoors in machine learning models to allow unauthorized influence or control over model predictions.
- --> ML Attack Staging (tactic): Embedding backdoors in machine learning models to prepare for future exploitation or malicious activities.
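One common way such a backdoor is embedded is by poisoning a small fraction of the training data so the model learns to associate a fixed trigger pattern with the adversary's target class. The following is a minimal sketch under assumed conditions (a grayscale image classifier, a corner-patch trigger, and a hypothetical target label); the patch location, `TARGET_LABEL`, and poison rate are illustrative choices, not prescribed by the technique.

```python
import numpy as np

# Assumed, illustrative values -- not part of the technique definition.
TRIGGER_VALUE = 1.0  # a bright square stamped in the image corner
TARGET_LABEL = 7     # the adversary's desired output class

def stamp_trigger(image: np.ndarray, patch_size: int = 3) -> np.ndarray:
    """Stamp a fixed patch in the bottom-right corner as the backdoor trigger."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = TRIGGER_VALUE
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   poison_rate: float = 0.05, rng=None):
    """Stamp the trigger on a small fraction of samples and relabel them.

    A classifier trained on the mixed data tends to behave normally on
    clean inputs but predict TARGET_LABEL whenever the trigger is present,
    which is the backdoored behavior described above.
    """
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels

# Usage: poison the training set before fitting any image classifier.
clean_images = np.random.rand(100, 28, 28).astype(np.float32)
clean_labels = np.random.randint(0, 10, size=100)
poisoned_images, poisoned_labels = poison_dataset(clean_images, clean_labels)
```

Because the trigger is only present in inputs the adversary crafts, the poisoned model's accuracy on clean data is largely unaffected, which is what makes the backdoor a persistent, hard-to-notice artifact on the victim system. Note that training-data poisoning is only one embedding path; a backdoor can also be introduced by directly modifying a model's weights or by supplying a tampered pre-trained model through the supply chain.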