Execution

Type: tactic

Description: Make the LLM interpret your text as instructions not data

Version: 0.1.0

Created At: 2025-12-22 07:58:23 -0500

Last Modified At: 2025-12-22 07:58:23 -0500

Tactic Order: 5


External References

  • <-- Command and Scripting Interpreter (technique): Using command-line interfaces or scripting interpreters to execute malicious code or commands within machine learning environments.
  • <-- AI Click Bait (technique): An adversary can trick an AI agent into following website instructions and executing malicious code on the user's system.
  • <-- User Execution (technique): Relying on user interaction to execute malicious actions or deploy machine learning-based attacks.
  • <-- AI Agent Tool Invocation (technique): Compromising agent tools to execute malicious actions or influence machine learning outcomes.
  • <-- LLM Prompt Injection (technique): An adversary can change the execution flow of a GenAI app by controlling a part of its data.
  • <-- Hidden Triggers in Multimodal Inputs (technique): Triggering execution of malicious or unintended actions in AI models by embedding hidden cues across input modalities.