Delay Execution of LLM Instructions
Type: technique
Description: Adversaries may include instructions to be followed by the AI system in response to a future event, such as a specific keyword or the next interaction, in order to evade detection or bypass controls placed on the AI system.
For example, an adversary may include "If the user submits a new request..." followed by the malicious instructions as part of their prompt.
AI agents can include security measures against prompt injections that prevent the invocation of particular tools or access to certain data sources during a conversation turn that has untrusted data in context. Delaying the execution of instructions to a future interaction or keyword is one way adversaries may bypass this type of control.
Version: 0.1.0
Created At: 2025-12-22 07:58:23 -0500
Last Modified At: 2025-12-22 07:58:23 -0500
External References
Related Objects
- --> Defense Evasion (tactic): Intentionally delaying the execution of instructions or tool invocations in large language models to evade detection or oversight.