Impact
Type: tactic
Description: The adversary is trying to manipulate, interrupt, erode confidence in, or destroy your LLM systems and data.
Version: 0.1.0
Created At: 2025-03-04 10:27:40 -0500
Last Modified At: 2025-03-04 10:27:40 -0500
Tactic Order: 16
Related Objects
- <-- Mutative Tool Invocation (technique): An adversary can achieve their goals by invoking tools on behalf of a compromised user.
- <-- Evade ML Model (technique): Crafting inputs that a machine learning model fails to detect or misclassifies, allowing malicious activity to proceed unnoticed.
- <-- Cost Harvesting (technique): Issuing queries crafted to inflate the victim's operational costs of running the ML system (see the usage-monitor sketch after this list).
- <-- Denial of ML Service (technique): Disrupting or disabling machine learning services to degrade their availability to legitimate users (the same usage-monitor sketch applies).
- <-- Spamming ML System with Chaff Data (technique): Overwhelming machine learning systems with irrelevant or misleading data to degrade their performance or effectiveness (see the chaff-detection sketch after this list).
- <-- External Harms (technique): Using machine learning systems to cause external harm, such as spreading misinformation or inflicting economic damage.
- <-- Erode ML Model Integrity (technique): Compromising the integrity of machine learning models to produce incorrect or unreliable results.
- <-- Erode Dataset Integrity (technique): Poisoning datasets used for training or validation to degrade the performance and reliability of ML models.
- <-- LLM Trusted Output Components Manipulation (technique): An adversary can manipulate the user into taking action by abusing trusted output components of the AI system.
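Two of the techniques above, Cost Harvesting and Denial of ML Service, both manifest as abnormal consumption of inference capacity. The following minimal sketch shows how a defender might track per-client token use over a rolling window and flag clients that exceed a budget; `UsageMonitor`, its thresholds, and the client identifier are illustrative assumptions, not part of any referenced system.

```python
from collections import defaultdict
from dataclasses import dataclass, field
import time


@dataclass
class UsageMonitor:
    """Rolling-window token accounting per client (hypothetical thresholds)."""
    window_seconds: float = 60.0
    token_budget: int = 50_000
    _events: dict = field(default_factory=lambda: defaultdict(list))

    def record(self, client_id: str, tokens: int, now: float | None = None) -> bool:
        """Record one inference request; return True when the client's
        token use inside the rolling window exceeds its budget."""
        now = time.monotonic() if now is None else now
        events = self._events[client_id]
        events.append((now, tokens))
        # Keep only events that are still inside the window.
        cutoff = now - self.window_seconds
        events = [e for e in events if e[0] >= cutoff]
        self._events[client_id] = events
        return sum(t for _, t in events) > self.token_budget


if __name__ == "__main__":
    monitor = UsageMonitor()
    # A sustained burst of large requests from one client trips the flag.
    for i in range(30):
        if monitor.record("client-42", tokens=2_000, now=float(i)):
            print(f"request {i}: client-42 exceeded its token budget")
            break
```

A budget check like this only surfaces the symptom; whether a flagged client is harvesting costs or mounting a denial of service still requires investigation of the requests themselves.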
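Spamming ML System with Chaff Data can sometimes be pre-filtered before inputs reach the model by flagging floods of near-identical requests. The chaff-detection sketch below hashes normalized inputs and reports clusters that repeat suspiciously often; the normalization rule and the `min_repeats` threshold are illustrative assumptions.

```python
import hashlib
import re
from collections import Counter


def _normalize(text: str) -> str:
    # Collapse case and whitespace so trivially varied chaff collides.
    return re.sub(r"\s+", " ", text.strip().lower())


def chaff_digests(batch: list[str], min_repeats: int = 5) -> set[str]:
    """Return digests of inputs repeated at least `min_repeats` times,
    a crude signal that the batch carries chaff rather than real traffic."""
    counts = Counter(
        hashlib.sha256(_normalize(text).encode("utf-8")).hexdigest()
        for text in batch
    )
    return {digest for digest, n in counts.items() if n >= min_repeats}


if __name__ == "__main__":
    flood = ["Tell me about   X"] * 8 + ["What is Y?", "Summarize Z."]
    print(f"{len(chaff_digests(flood))} chaff cluster(s) detected")
```

Exact-hash clustering only catches trivially varied chaff; adversaries who paraphrase each input would require fuzzier similarity measures than this sketch provides.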