Poison Training Data

Type: technique

Description: Adversaries may attempt to poison the datasets used to train an ML model by modifying the underlying data or its labels. This allows the adversary to embed vulnerabilities in models trained on the data that are difficult to detect. Data poisoning attacks may or may not require modifying the labels: dirty-label attacks relabel the poisoned samples, while clean-label attacks perturb only the data itself. The embedded vulnerability is activated at a later time by data samples that contain a backdoor trigger, as sketched below.
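To make the mechanics concrete, here is a minimal sketch of a dirty-label backdoor poison in Python, assuming an image classification dataset held as NumPy arrays. The function name poison_with_trigger, the 3x3 corner patch, and the 5% poison rate are illustrative assumptions, not part of this entry.

```python
import numpy as np

def poison_with_trigger(images, labels, target_class, poison_fraction=0.05, seed=0):
    """Dirty-label backdoor sketch: stamp a small trigger patch onto a
    random fraction of training images and relabel them as the target
    class. (Illustrative; parameters are assumptions, not from the entry.)"""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 bright patch in the bottom-right corner of each image.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_class
    return images, labels, idx

# Example: poison 5% of a toy 28x28 grayscale dataset toward class 7.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned, poisoned_idx = poison_with_trigger(X, y, target_class=7)
print(f"poisoned {len(poisoned_idx)} of {len(X)} samples")
```

A model trained on such a set typically behaves normally on clean inputs; at inference time, stamping the same patch on any input steers the prediction toward the target class.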

Poisoned data can be introduced via an ML supply chain compromise, or the adversary may poison the data directly after gaining initial access to the system that stores it, as in the sketch below.
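As a hedged illustration of the second vector, this sketch simulates post-compromise poisoning by rewriting a fraction of a dataset's stored labels in place. The two-column filename,label CSV format and the flip_labels_in_place helper are hypothetical assumptions for the example.

```python
import csv
import random

def flip_labels_in_place(annotations_path, source_class, target_class,
                         flip_fraction=0.1, seed=0):
    """Rewrite a fraction of a dataset's stored labels from source_class
    to target_class, leaving the underlying data files untouched.
    Assumes a headerless two-column CSV of filename,label rows."""
    random.seed(seed)
    with open(annotations_path, newline="") as f:
        rows = list(csv.reader(f))
    candidates = [i for i, (_, label) in enumerate(rows) if label == source_class]
    for i in random.sample(candidates, int(len(candidates) * flip_fraction)):
        rows[i][1] = target_class
    with open(annotations_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

# Example (assuming an annotations.csv in the format above exists):
# flip_labels_in_place("annotations.csv", "stop_sign", "speed_limit")
```

Because only the label file changes, the raw data passes casual inspection, which is part of why such poisoning is hard to detect downstream.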

Version: 0.1.0

Created At: 2025-06-19 08:13:23 -0400

Last Modified At: 2025-06-19 08:13:23 -0400

