AI Agents Attack Matrix

1. Attacks Matrix
  1.1. Introduction
  1.2. How to Contribute
  1.3. Q&A
2. Tactics
  2.1. Reconnaissance
    2.1.1. Search Application Repositories
    2.1.2. Active Scanning
    2.1.3. Search Open Technical Databases
    2.1.4. Search for Victim's Publicly Available Code Repositories
    2.1.5. Search Open AI Vulnerability Analysis
    2.1.6. Search Victim-Owned Websites
    2.1.7. Gather RAG-Indexed Targets
  2.2. Resource Development
    2.2.1. Publish Poisoned Datasets
    2.2.2. Publish Hallucinated Entities
    2.2.3. Develop Capabilities
    2.2.4. Commercial License Abuse
    2.2.5. Obtain Capabilities
    2.2.6. Stage Capabilities
    2.2.7. Establish Accounts
    2.2.8. Acquire Public AI Artifacts
    2.2.9. Retrieval Content Crafting
    2.2.10. Publish Poisoned Models
    2.2.11. Acquire Infrastructure
    2.2.12. LLM Prompt Crafting
    2.2.13. Poison Training Data
    2.2.14. Obtain Generative AI Capabilities
  2.3. Initial Access
    2.3.1. Drive-By Compromise
    2.3.2. Retrieval Tool Poisoning
    2.3.3. Evade AI Model
    2.3.4. RAG Poisoning
    2.3.5. User Manipulation
    2.3.6. Exploit Public-Facing Application
    2.3.7. Valid Accounts
    2.3.8. Compromised User
    2.3.9. Web Poisoning
    2.3.10. Phishing
    2.3.11. AI Supply Chain Compromise
    2.3.12. Guest User Abuse
  2.4. AI Model Access
    2.4.1. AI-Enabled Product or Service
    2.4.2. AI Model Inference API Access
    2.4.3. Full AI Model Access
    2.4.4. Physical Environment Access
  2.5. Execution
    2.5.1. Command and Scripting Interpreter
    2.5.2. AI Click Bait
    2.5.3. User Execution
    2.5.4. AI Agent Tool Invocation
    2.5.5. LLM Prompt Injection
    2.5.6. System Instruction Keywords
    2.5.7. Triggered Prompt Injection
    2.5.8. Indirect Prompt Injection
    2.5.9. Direct Prompt Injection
    2.5.10. Off-Target Language
  2.6. Persistence
    2.6.1. AI Agent Context Poisoning
    2.6.2. RAG Poisoning
    2.6.3. Manipulate AI Model
    2.6.4. Modify AI Agent Configuration
    2.6.5. LLM Prompt Self-Replication
    2.6.6. Poison Training Data
    2.6.7. Thread Poisoning
    2.6.8. Embed Malware
    2.6.9. Modify AI Model Architecture
    2.6.10. Memory Poisoning
    2.6.11. Poison AI Model
  2.7. Privilege Escalation
    2.7.1. AI Agent Tool Invocation
    2.7.2. LLM Jailbreak
    2.7.3. System Instruction Keywords
    2.7.4. Crescendo
    2.7.5. Off-Target Language
  2.8. Defense Evasion
    2.8.1. Evade AI Model
    2.8.2. Corrupt AI Model
    2.8.3. False RAG Entry Injection
    2.8.4. Blank Image
    2.8.5. LLM Prompt Obfuscation
    2.8.6. Masquerading
    2.8.7. Distraction
    2.8.8. Instructions Silencing
    2.8.9. Impersonation
    2.8.10. URL Familiarizing
    2.8.11. Indirect Data Access
    2.8.12. Abuse Trusted Sites
    2.8.13. LLM Trusted Output Components Manipulation
    2.8.14. Conditional Execution
    2.8.15. ASCII Smuggling
    2.8.16. LLM Jailbreak
    2.8.17. Citation Silencing
    2.8.18. Citation Manipulation
    2.8.19. System Instruction Keywords
    2.8.20. Crescendo
    2.8.21. Off-Target Language
  2.9. Credential Access
    2.9.1. Credentials from AI Agent Configuration
    2.9.2. Unsecured Credentials
    2.9.3. Retrieval Tool Credential Harvesting
    2.9.4. RAG Credential Harvesting
  2.10. Discovery
    2.10.1. Discover AI Agent Configuration
    2.10.2. Discover AI Model Ontology
    2.10.3. Discover LLM System Information
    2.10.4. Failure Mode Mapping
    2.10.5. Discover LLM Hallucinations
    2.10.6. Discover AI Artifacts
    2.10.7. Discover AI Model Family
    2.10.8. Whoami
    2.10.9. Cloud Service Discovery
    2.10.10. Discover AI Model Outputs
    2.10.11. Discover Embedded Knowledge
    2.10.12. Discover System Prompt
    2.10.13. Discover Tool Definitions
    2.10.14. Discover Activation Triggers
    2.10.15. Discover Special Character Sets
    2.10.16. Discover System Instruction Keywords
  2.11. Lateral Movement
    2.11.1. Message Poisoning
    2.11.2. Shared Resource Poisoning
  2.12. Collection
    2.12.1. Data from AI Services
    2.12.2. Data from Information Repositories
    2.12.3. Thread History Harvesting
    2.12.4. Memory Data Hording
    2.12.5. User Message Harvesting
    2.12.6. AI Artifact Collection
    2.12.7. Data from Local System
    2.12.8. Retrieval Tool Data Harvesting
    2.12.9. RAG Data Harvesting
  2.13. AI Attack Staging
    2.13.1. Verify Attack
    2.13.2. Manipulate AI Model
    2.13.3. Craft Adversarial Data
    2.13.4. Create Proxy AI Model
    2.13.5. Embed Malware
    2.13.6. Modify AI Model Architecture
    2.13.7. Poison AI Model
  2.14. Command and Control
    2.14.1. Public Web C2
    2.14.2. Search Index C2
    2.14.3. Reverse Shell
  2.15. Exfiltration
    2.15.1. Web Request Triggering
    2.15.2. Exfiltration via AI Inference API
    2.15.3. Image Rendering
    2.15.4. Exfiltration via Cyber Means
    2.15.5. Extract LLM System Prompt
    2.15.6. LLM Data Leakage
    2.15.7. Clickable Link Rendering
    2.15.8. Abuse Trusted Sites
    2.15.9. Exfiltration via AI Agent Tool Invocation
  2.16. Impact
    2.16.1. Evade AI Model
    2.16.2. Spamming AI System with Chaff Data
    2.16.3. Erode AI Model Integrity
    2.16.4. Erode Dataset Integrity
    2.16.5. Mutative Tool Invocation
    2.16.6. Cost Harvesting
    2.16.7. Denial of AI Service
    2.16.8. External Harms
3. Procedures
  3.1. Microsoft Copilot Purview Audit Log Evasion and DLP Bypass
  3.2. X Bot Exposing Itself After Training on a Poisoned GitHub Repository
  3.3. ChatGPT and Gemini Jailbreak Using the Crescendo Technique
  3.4. Copilot M365 Lures Victims Into a Phishing Site
  3.5. EchoLeak: Zero-Click Data Exfiltration Using M365 Copilot
  3.6. Google Gemini: Planting Instructions for Delayed Automatic Tool Invocation
  3.7. GitHub Copilot Chat: From Prompt Injection to Data Exfiltration
  3.8. AI ClickFix: Hijacking Computer-Use Agents Using ClickFix
  3.9. spAIware
  3.10. Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
  3.11. Financial Transaction Hijacking With M365 Copilot as an Insider
  3.12. Exfiltration of Personal Information from ChatGPT via Prompt Injection
  3.13. Data Exfiltration from Slack AI via Indirect Prompt Injection
4. Platforms
  4.1. SlackAI
  4.2. Microsoft Copilot
  4.3. Claude
  4.4. Microsoft Copilot for M365
  4.5. Gemini
  4.6. ChatGPT
  4.7. GitHub Copilot
5. Mitigations
  5.1. Content Security Policy
  5.2. URL Anchoring
  5.3. LLM Activations
  5.4. Information Flow Control
  5.5. Index-Based Browsing
  5.6. Spotlighting
6. Entities
  6.1. Simon Willison
  6.2. PromptArmor
  6.3. Dmitry Lozovoy
  6.4. Gal Malka
  6.5. Gregory Schwartzman
  6.6. Pliny
  6.7. Ronen Eldan
  6.8. Lana Salameh
  6.9. Mark Russinovich
  6.10. Ahmed Salem
  6.11. Riley Goodside
  6.12. Jonathan Cefalu
  6.13. Ayush RoyChowdhury
  6.14. Tamir Ishay Sharbat
  6.15. Michael Bargury
  6.16. Aim Security
  6.17. Johann Rehberger