Reconnaissance | Resource Development | Initial Access | ML Model Access | Execution | Persistence | Privilege Escalation | Defense Evasion | Credential Access | Discovery | Lateral Movement | Collection | ML Attack Staging | Command and Control | Exfiltration | Impact |
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
Gather RAG-Indexed Targets | Commercial License Abuse | RAG Poisoning | Full ML Model Access | LLM Plugin Compromise | RAG Poisoning | LLM Plugin Compromise | Blank Image | Unsecured Credentials | Discover ML Model Family | Shared Resource Poisoning | User Message Harvesting | Verify Attack | Public Web C2 | Exfiltration via ML Inference API | Mutative Tool Invocation |
Search Victim-Owned Websites | Obtain Capabilities | ML Supply Chain Compromise | AI Model Inference API Access | LLM Prompt Injection | Thread Infection | LLM Jailbreak | Instructions Silencing | RAG Credential Harvesting | Discover LLM Hallucinations | Message Poisoning | Memory Data Hording | Create Proxy ML Model | Search Index C2 | Exfiltration via Cyber Means | Evade ML Model |
Search for Victim's Publicly Available Research Materials | LLM Prompt Crafting | Evade ML Model | ML-Enabled Product or Service | Command and Scripting Interpreter | Resource Poisoning | Off-Target Language | Distraction | Retrieval Tool Credential Harvesting | Whoami | | Data from Information Repositories | Backdoor ML Model | | Web Request Triggering | Cost Harvesting |
Search for Victim's Publicly Available Code Repositories | Publish Poisoned Datasets | Retrieval Content Crafting | Physical Environment Access | User Execution | LLM Prompt Self-Replication | System Instruction Keywords | Evade ML Model | | Discover LLM System Information | | ML Artifact Collection | Craft Adversarial Data | | Write Tool Invocation | Denial of ML Service |
Search Application Repositories | Publish Hallucinated Entities | User Manipulation | | Off-Target Language | Backdoor ML Model | Crescendo | False RAG Entry Injection | | Failure Mode Mapping | | Thread History Harvesting | | | Image Rendering | Spamming ML System with Chaff Data |
Active Scanning | Establish Accounts | Retrieval Tool Poisoning | | System Instruction Keywords | Memory Infection | | LLM Prompt Obfuscation | | Discover ML Model Ontology | | RAG Data Harvesting | | | LLM Data Leakage | External Harms |
| Acquire Infrastructure | Phishing | | | Poison Training Data | | ASCII Smuggling | | Discover ML Artifacts | | Retrieval Tool Data Harvesting | | | Granular Web Request Triggering | Erode ML Model Integrity |
| Acquire Public ML Artifacts | Compromised User | | | | | Conditional Execution | | Discover AI Model Outputs | | Data from Local System | | | Clickable Link Rendering | Erode Dataset Integrity |
| Develop Capabilities | Guest User Abuse | | | | | LLM Jailbreak | | Embedded Knowledge Exposure | | | | | LLM Meta Prompt Extraction | LLM Trusted Output Components Manipulation |
| Publish Poisoned Models | Valid Accounts | | | | | Delayed Execution | | Tool Definition Discovery | | | | | Granular Clickable Link Rendering | Citation Manipulation |
| Poison Training Data | Web Poisoning | | | | | Indirect Data Access | | Discover System Prompt | | | | | | Citation Silencing |
| | Exploit Public-Facing Application | | | | | URL Familiarizing | | Discover Special Character Sets | | | | | | |
| | | | | | | LLM Trusted Output Components Manipulation | | Discover System Instruction Keywords | | | | | | |
| | | | | | | Off-Target Language | | | | | | | | |
| | | | | | | Citation Manipulation | | | | | | | | |
| | | | | | | Citation Silencing | | | | | | | | |
| | | | | | | System Instruction Keywords | | | | | | | | |
| | | | | | | Crescendo | | | | | | | | |
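Several techniques appear under more than one tactic column; for example, RAG Poisoning is listed under both Initial Access and Persistence, and Off-Target Language under Execution, Privilege Escalation, and Defense Evasion. The sketch below shows one way to hold a slice of the matrix as a tactic-to-technique mapping so those overlaps can be queried. It is a minimal illustration, not part of the matrix itself: the tactic labels are assumed from the standard ATT&CK/ATLAS-style column order used in the header above, the `MATRIX` and `tactics_for` names are hypothetical, and only three columns are included for brevity.

```python
# Minimal sketch: a slice of the matrix as tactic -> technique lists.
# Tactic labels are assumed from the column order above; technique names
# are copied verbatim from the matrix.
MATRIX: dict[str, list[str]] = {
    "Initial Access": [
        "RAG Poisoning",
        "ML Supply Chain Compromise",
        "Evade ML Model",
        "Retrieval Content Crafting",
        "User Manipulation",
        "Retrieval Tool Poisoning",
        "Phishing",
        "Compromised User",
        "Guest User Abuse",
        "Valid Accounts",
        "Web Poisoning",
        "Exploit Public-Facing Application",
    ],
    "Persistence": [
        "RAG Poisoning",
        "Thread Infection",
        "Resource Poisoning",
        "LLM Prompt Self-Replication",
        "Backdoor ML Model",
        "Memory Infection",
        "Poison Training Data",
    ],
    "Privilege Escalation": [
        "LLM Plugin Compromise",
        "LLM Jailbreak",
        "Off-Target Language",
        "System Instruction Keywords",
        "Crescendo",
    ],
}


def tactics_for(technique: str) -> list[str]:
    """Return every tactic column in which a technique appears."""
    return [tactic for tactic, techniques in MATRIX.items() if technique in techniques]


if __name__ == "__main__":
    # "RAG Poisoning" is listed under both Initial Access and Persistence.
    print(tactics_for("RAG Poisoning"))
```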