Impact

Type: tactic

Description: The adversary is trying to manipulate, interrupt, erode confidence in, or destroy your LLM systems and data.

Version: 0.1.0

Created At: 2025-12-22 07:58:23 -0500

Last Modified At: 2025-12-22 07:58:23 -0500

Tactic Order: 16


Related Techniques

  • <-- Evade AI Model (technique): Crafting adversarial inputs that prevent a machine learning model from correctly identifying their contents, allowing malicious activity to slip past AI-based detection.
  • <-- Spamming AI System with Chaff Data (technique): Overwhelming machine learning systems with irrelevant or misleading data to degrade their performance or effectiveness (see the rate-and-cost budget sketched after this list).
  • <-- Erode AI Model Integrity (technique): Compromising the integrity of machine learning models to produce incorrect or unreliable results.
  • <-- Data Destruction via AI Agent Tool Invocation (technique): An adversary can destroy or corrupt data by causing a compromised AI agent to invoke destructive tools on a user's behalf (see the tool-call guard sketched after this list).
  • <-- Erode Dataset Integrity (technique): Poisoning datasets used for training or validation to degrade the performance and reliability of ML models (see the manifest check sketched after this list).
  • <-- Cost Harvesting (technique): Exploiting machine learning systems in a way that increases operational costs for the victim.
  • <-- Denial of AI Service (technique): Disrupting or disabling AI services to impact operations or availability.
  • <-- External Harms (technique): Using machine learning systems to cause external harm, such as misinformation or economic damage.
  • <-- AI-Targeted Cloaking (technique): Serving AI crawlers and agents different content than human visitors receive, manipulating what downstream AI systems ingest, summarize, or recommend.
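
Several of the techniques above (Spamming AI System with Chaff Data, Denial of AI Service, Cost Harvesting) share a common mitigation shape: per-client request and spend ceilings. The sketch below is a minimal illustration of that idea; the limits, the ClientBudget class, and the cost figures are hypothetical assumptions for illustration, not values or APIs from any particular framework.

```python
"""Minimal sketch: per-client rate and cost budgeting to blunt chaff-data
spamming, denial of AI service, and cost harvesting. All limits here are
assumed for illustration, not measured or recommended values."""

import time
from collections import defaultdict

MAX_REQUESTS_PER_MINUTE = 60   # assumed per-client request ceiling
MAX_COST_PER_HOUR = 5.00       # assumed per-client spend ceiling, in USD


class ClientBudget:
    def __init__(self) -> None:
        self.request_times: list[float] = []
        self.hourly_spend: float = 0.0
        self.hour_start: float = time.monotonic()

    def allow(self, estimated_cost: float) -> bool:
        """Admit the request only if both ceilings still have headroom."""
        now = time.monotonic()
        # Reset the spend window every hour.
        if now - self.hour_start > 3600:
            self.hourly_spend = 0.0
            self.hour_start = now
        # Drop request timestamps older than one minute.
        self.request_times = [t for t in self.request_times if now - t < 60]
        if len(self.request_times) >= MAX_REQUESTS_PER_MINUTE:
            return False  # rate ceiling hit: likely chaff or DoS traffic
        if self.hourly_spend + estimated_cost > MAX_COST_PER_HOUR:
            return False  # spend ceiling hit: possible cost harvesting
        self.request_times.append(now)
        self.hourly_spend += estimated_cost
        return True


budgets: defaultdict[str, ClientBudget] = defaultdict(ClientBudget)

if __name__ == "__main__":
    # A burst of cheap requests from one client trips the rate ceiling.
    allowed = [budgets["client-a"].allow(0.001) for _ in range(100)]
    print(allowed.count(True), "of 100 requests admitted")
```

A production system would persist budgets across processes and derive cost estimates from actual token usage; the sketch only shows where the two ceilings sit.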
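For Data Destruction via AI Agent Tool Invocation, the core defensive idea is that an agent should not execute destructive tools purely on the strength of model output. The sketch below illustrates a deny-by-default confirmation gate; DESTRUCTIVE_TOOLS, ToolCall, and guard_tool_call are hypothetical names, not part of any agent framework's API.

```python
"""Minimal sketch: a deny-by-default guard on agent tool invocation,
assuming a hypothetical set of tool names considered destructive."""

from dataclasses import dataclass, field

DESTRUCTIVE_TOOLS = {"delete_file", "drop_table", "purge_bucket"}


@dataclass
class ToolCall:
    tool: str
    args: dict = field(default_factory=dict)


def guard_tool_call(call: ToolCall, user_confirmed: bool = False) -> bool:
    """Return True if the call may proceed.

    Destructive tools require explicit, out-of-band user confirmation,
    so a prompt-injected agent cannot destroy data on the user's behalf.
    """
    if call.tool in DESTRUCTIVE_TOOLS and not user_confirmed:
        return False
    return True


if __name__ == "__main__":
    injected = ToolCall(tool="delete_file", args={"path": "/data/reports"})
    print(guard_tool_call(injected))                       # False: blocked
    print(guard_tool_call(injected, user_confirmed=True))  # True: allowed
```

The design choice worth noting is that confirmation arrives as a parameter from outside the model loop, so the model cannot talk its way past the gate.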
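For Erode Dataset Integrity (and, downstream, Erode AI Model Integrity), a common control is verifying dataset files against a pinned hash manifest before training, so silent poisoning of files at rest is caught. The sketch below assumes a hypothetical layout of one JSON manifest mapping file names to SHA-256 digests.

```python
"""Minimal sketch: refuse to train on dataset files whose SHA-256 digests
no longer match a pinned manifest. The manifest path and file layout are
illustrative assumptions."""

import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 to avoid loading it whole."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_dataset(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return the names of files whose hashes diverge from the manifest."""
    manifest: dict[str, str] = json.loads(manifest_path.read_text())
    return [
        name
        for name, expected in manifest.items()
        if sha256_of(data_dir / name) != expected
    ]


if __name__ == "__main__":
    tampered = verify_dataset(Path("data"), Path("data/manifest.json"))
    if tampered:
        raise SystemExit(f"refusing to train; tampered files: {tampered}")
    print("dataset integrity verified")
```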