Persistence
Type: tactic
Description: Keep your malicious prompt there for future conversations
Version: 0.1.0
Created At: 2025-03-04 10:27:40 -0500
Last Modified At: 2025-03-04 10:27:40 -0500
Tactic Order: 6
External References
Related Objects
- <-- RAG Poisoning (technique): An adversary can gain persistence by creating or modifying an internal data source indexed by RAG that users interact with.
- <-- Thread Infection (technique): An adversary can infect future interactions on the same thread by injecting a malicious content into the thread history.
- <-- Resource Poisoning (technique): An adversary can infect future threads by injecting a malicious document into data indexed by a RAG system.
- <-- LLM Prompt Self-Replication (technique): An adversary can create a prompt that propagates to other LLMs and persists on the system.
- <-- Backdoor ML Model (technique): Embedding backdoors in machine learning models to allow unauthorized influence or control over model predictions.
- <-- Memory Infection (technique): An adversary that successfully infected one thread can infect others threads.
- <-- Poison Training Data (technique): Injecting malicious data into training datasets to establish long-term influence over machine learning models.