Execution
Type: tactic
Description: Make the LLM interpret your text as instructions not data
Version: 0.1.0
Created At: 2025-06-19 08:13:23 -0400
Last Modified At: 2025-06-19 08:13:23 -0400
Tactic Order: 5
External References
Related Objects
- <-- User Execution (technique): Relying on user interaction to execute malicious actions or deploy machine learning-based attacks.
- <-- LLM Plugin Compromise (technique): Compromising large language model (LLM) plugins to execute malicious actions or influence machine learning outcomes.
- <-- LLM Prompt Injection (technique): An adversary can change the execution flow of a GenAI app by controlling a part of its data.
- <-- AI Click Bait (technique): An adversary can trick an AI agent into following website instructions and executing malicious code on the user's system.
- <-- Command and Scripting Interpreter (technique): Using command-line interfaces or scripting interpreters to execute malicious code or commands within machine learning environments.