AI Agent Context Poisoning

Type: technique

Description: Adversaries may attempt to manipulate the context used by an AI agent's large language model (LLM) to influence the responses it generates or actions it takes. This allows an adversary to persistently change the behavior of the target agent and further their goals.

Context poisoning can be accomplished by prompting an LLM to add instructions or preferences to memory (See Memory Poisoning) or by simply prompting an LLM that uses prior messages in a thread as part of its context (See Thread Poisoning).

Version: 0.1.0

Created At: 2025-10-01 13:13:22 -0400

Last Modified At: 2025-10-01 13:13:22 -0400


External References

  • --> Persistence (tactic): Poisoning the context of AI agents to persistently influence or control future behavior.
  • <-- Thread Poisoning (technique): Sub-technique of
  • <-- Memory Poisoning (technique): Sub-technique of