Poison AI Model
Type: technique
Description: Adversaries may manipulate an AI model's weights to change its behavior or performance, resulting in a poisoned model. Adversaries may poison a model by directly manipulating its weights, training the model on poisoned data, further fine-tuning the model, or otherwise interfering with its training process.
The change in behavior of a poisoned model may be limited to targeted categories in predictive AI models or to targeted topics, concepts, or facts in generative AI models, or it may aim for general performance degradation.
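To make the weight-level vectors concrete, below is a minimal PyTorch sketch of two of them: direct weight manipulation and fine-tuning on poisoned (mislabeled) data. The victim model, target class index, and poisoned batch are all hypothetical stand-ins, not part of this entry; a real attack would operate on a deployed model's checkpoint or its training pipeline.

    import torch
    import torch.nn as nn

    # Hypothetical victim model: a tiny classifier standing in for any
    # predictive AI model whose weights an adversary can write to.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    # Vector 1 -- direct weight manipulation: perturb the final layer so
    # outputs are biased toward an attacker-chosen class (index 2 here,
    # chosen arbitrarily for illustration).
    with torch.no_grad():
        model[-1].bias[2] += 5.0  # inflate the target class's logit

    # Vector 2 -- fine-tuning on poisoned data: relabel a batch of inputs
    # to the target class and take a few gradient steps, degrading the
    # model only for the attacker-chosen category.
    poisoned_x = torch.randn(64, 16)
    poisoned_y = torch.full((64,), 2)  # every sample mislabeled as class 2
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(poisoned_x), poisoned_y)
        loss.backward()
        optimizer.step()

Either vector alone suffices to poison the model; the targeted relabeling illustrates how the behavior change can be confined to specific categories rather than causing visible overall degradation.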
Version: 0.1.0
Created At: 2025-07-23 10:23:39 -0400
Last Modified At: 2025-07-23 10:23:39 -0400
Related Objects
- Sub-technique of: Manipulate AI Model (technique)