Defense Evasion
Type: tactic
Description: The adversary is trying to avoid being detected by security software and native controls.
Version: 0.1.0
Created At: 2025-03-04 10:27:40 -0500
Last Modified At: 2025-03-04 10:27:40 -0500
Tactic Order: 8
External References
Related Objects
- <-- Blank Image (technique): An adversary can avoid raising suspicion by carrying exfiltrated data in an image that is never visibly rendered.
- <-- Instructions Silencing (technique): An adversary can avoid raising suspicion by hiding malicious instructions and their implications from the user.
- <-- Distraction (technique): An adversary can avoid detection by combining benign instructions with their malicious ones.
- <-- Evade ML Model (technique): An adversary can evade detection or mitigation measures implemented by machine learning models.
- <-- False RAG Entry Injection (technique): An adversary can inject false RAG entries that are treated by the AI system as authentic.
- <-- LLM Prompt Obfuscation (technique): An adversary can avoid detection by hiding or obfuscating the prompt injection text.
- <-- ASCII Smuggling (technique): An adversary can avoid raising user suspicion by encoding instructions or data in invisible Unicode characters.
- <-- Conditional Execution (technique): An adversary can limit their attack to specified targets, reducing their detection surface.
- <-- LLM Jailbreak (technique): An adversary can bypass detections by jailbreaking the LLM.
- <-- Delayed Execution (technique): An adversary can bypass controls and evade detection by delaying the execution of their malicious instructions.
- <-- Indirect Data Access (technique): An adversary can extract full documents through the RAG, circumventing data security controls.
- <-- URL Familiarizing (technique): An adversary can bypass security mechanisms to allow future data exfiltration through a URL on an attacker-controlled domain.
- <-- LLM Trusted Output Components Manipulation (technique): An adversary can evade detection by modifying trusted components of the AI system.
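Several of the techniques above rely on text that is machine-readable but invisible to a human reviewer. As a minimal sketch of that idea, the following hypothetical helper (not part of any referenced tooling) maps printable ASCII onto the invisible Unicode tag block (U+E0000-U+E007F), the encoding commonly associated with ASCII Smuggling:

```python
def encode_tags(text: str) -> str:
    """Shift each printable ASCII character into the invisible
    Unicode tag block (U+E0000-U+E007F), yielding text that most
    UIs render as nothing at all."""
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def decode_tags(smuggled: str) -> str:
    """Recover the original ASCII by shifting tag-block characters
    back down; non-tag characters are ignored."""
    return "".join(
        chr(ord(c) - 0xE0000)
        for c in smuggled
        if 0xE0000 <= ord(c) <= 0xE007F
    )

hidden = encode_tags("follow these hidden instructions")
assert decode_tags(hidden) == "follow these hidden instructions"
assert not hidden.isprintable()  # tag characters are format (Cf) chars
```

The round trip illustrates why such payloads evade human review: the encoded string survives copy-paste and tokenization, yet consists entirely of non-printing format characters.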