Exfiltration
Type: tactic
Description: The adversary is trying to steal data or other information from your GenAI system.
Version: 0.1.0
Created At: 2025-03-04 10:27:40 -0500
Last Modified At: 2025-03-04 10:27:40 -0500
Tactic Order: 15
External References
Related Objects
- <-- Exfiltration via ML Inference API (technique): Exfiltrating data by exploiting machine learning inference APIs to extract sensitive information.
- <-- Exfiltration via Cyber Means (technique): Using cyber means, such as data transfers or network-based methods, to exfiltrate machine learning artifacts or sensitive data.
- <-- Web Request Triggering (technique): An adversary can exfiltrate data by embedding it in a URI and triggering the AI system to query it via its browsing capabilities.
- <-- Write Tool Invocation (technique): An adversary can exfiltrate data by encoding it into the input of an invocable tool capable of performing a write operation.
- <-- Image Rendering (technique): An adversary can exfiltrate data by embedding it in the query parameters of an image URL and getting the AI system to render it.
- <-- LLM Data Leakage (technique): Exploiting data leakage vulnerabilities in large language models to retrieve sensitive or unintended information.
- <-- Granular Web Request Triggering (technique): An adversary can exfiltrate data by asking questions about it and using the answers to choose which URL will be visited.
- <-- Clickable Link Rendering (technique): An adversary can exfiltrate data by embedding it in the parameters of a URL and getting the AI system to render it to the user as a clickable link, which the user then clicks.
- <-- LLM Meta Prompt Extraction (technique): Extracting AI system instructions and exfiltrating them outside of the organization.
- <-- Granular Clickable Link Rendering (technique): An adversary can exfiltrate data by asking questions about it and using the answers to choose which URL will be rendered to the user.
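Several of the techniques above (Web Request Triggering, Image Rendering, Clickable Link Rendering) share one channel: data is encoded into the query string of a URL that the AI system is induced to fetch or render. A minimal detection sketch, assuming a hypothetical host allowlist (`ALLOWED_HOSTS` and the regex are illustrative, not part of any listed technique's specification):

```python
import re
from urllib.parse import urlparse, parse_qs

# Hypothetical allowlist; a real deployment would load this from configuration.
ALLOWED_HOSTS = {"docs.example.com", "cdn.example.com"}

# Matches markdown images ![alt](url) and links [text](url).
MD_LINK_OR_IMAGE = re.compile(r"!?\[[^\]]*\]\(([^)\s]+)\)")

def find_suspicious_urls(model_output: str) -> list[str]:
    """Flag markdown image/link URLs that point outside the allowlist and
    carry query parameters, the channel used by the rendering-based
    exfiltration techniques listed above."""
    flagged = []
    for url in MD_LINK_OR_IMAGE.findall(model_output):
        parsed = urlparse(url)
        if (parsed.hostname
                and parsed.hostname not in ALLOWED_HOSTS
                and parse_qs(parsed.query)):
            flagged.append(url)
    return flagged
```

Scanning model output before it is rendered lets a defender strip or block the image/link before the browser issues the attacker-controlled request.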