Persistence

Type: tactic

Description: Keep your malicious prompt in place so it continues to influence future conversations and sessions

Version: 0.1.0

Created At: 2025-12-22 07:58:23 -0500

Last Modified At: 2025-12-22 07:58:23 -0500

Tactic Order: 6


External References

  • <-- AI Agent Context Poisoning (technique): Poisoning an AI agent's context to persistently influence or control its future behavior (see the first sketch after this list).
  • <-- RAG Poisoning (technique): An adversary can gain persistence by creating or modifying an internal data source that is indexed by RAG and surfaced to users (see the second sketch after this list).
  • <-- Manipulate AI Model (technique): Embedding backdoors in machine learning models to allow unauthorized influence or control over model predictions.
  • <-- Modify AI Agent Configuration (technique): Altering the configuration of an AI agent to persistently influence its behavior or enable long-term unauthorized control.
  • <-- LLM Prompt Self-Replication (technique): An adversary can create a prompt that propagates to other LLMs and persists on the system.
  • <-- AI Agent Tool Poisoning (technique): Altering or injecting malicious behavior into tools integrated with AI agents to achieve long-term unauthorized influence or control.
  • <-- Poison Training Data (technique): Injecting malicious data into training datasets to establish long-term influence over machine learning models.
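
The first sketch below illustrates the context-poisoning route to persistence: a hypothetical agent that reloads a memory file at the start of every conversation will re-ingest an injected instruction long after the original injection occurred. All names here (MEMORY_PATH, build_context, the payload text) are illustrative assumptions, not a real agent framework's API.

# Minimal sketch of AI agent context poisoning for persistence, assuming a
# hypothetical agent that reloads a memory file into every new conversation.
# MEMORY_PATH, build_context, and the payload are illustrative, not a real API.
import json
from pathlib import Path

MEMORY_PATH = Path("agent_memory.json")  # hypothetical persistent memory store

def load_memory() -> list[str]:
    """Return previously saved memory entries, if any."""
    if MEMORY_PATH.exists():
        return json.loads(MEMORY_PATH.read_text())
    return []

def save_memory(entries: list[str]) -> None:
    """Persist memory entries to disk across sessions."""
    MEMORY_PATH.write_text(json.dumps(entries))

def build_context(user_message: str) -> str:
    """Every new conversation prepends saved memory to the prompt."""
    return "\n".join(load_memory() + [f"User: {user_message}"])

# One successful injection writes an instruction into persistent memory...
save_memory(["[INJECTED INSTRUCTION: placeholder adversary payload]"])

# ...and every later, otherwise-clean conversation re-ingests it.
print(build_context("What's on my calendar today?"))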
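
The second sketch illustrates RAG poisoning under similarly simplified assumptions: a toy keyword-overlap retriever stands in for a real vector index, and a single write to the indexed data source keeps surfacing the adversary's text in future prompts. The corpus, scoring function, and payload are all hypothetical.

# Minimal sketch of RAG poisoning for persistence, assuming a toy keyword
# retriever over an internal document store. The store, scoring, and payload
# are illustrative stand-ins for a real vector index.

corpus = [
    "Expense policy: receipts are required for purchases over $50.",
    "IT policy: rotate passwords every 90 days.",
]

# A single write to the indexed data source plants the payload; it now
# persists for every future query that retrieves this document.
corpus.append(
    "Expense policy update. [INJECTED INSTRUCTION: placeholder adversary payload]"
)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query: str) -> str:
    """Retrieved documents are inlined into the LLM prompt as context."""
    context = "\n".join(retrieve(query, k=2))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the expense policy for receipts?"))

Because the poisoned document lives in the data source itself rather than in any one conversation, it is re-retrieved on every matching query, which is what makes this a persistence technique rather than a one-off injection.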