| Reconnaissance | Resource Development | Initial Access | AI Model Access | Execution | Persistence | Privilege Escalation | Defense Evasion | Credential Access | Discovery | Lateral Movement | Collection | AI Attack Staging | Command and Control | Exfiltration | Impact |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Gather RAG-Indexed Targets | Stage Capabilities | Compromised User | Full ML Model Access | Command and Scripting Interpreter | Thread Infection | LLM Jailbreak | False RAG Entry Injection | Unsecured Credentials | Discover LLM Hallucinations | Shared Resource Poisoning | Data from Local System | Craft Adversarial Data | Reverse Shell | Web Request Triggering | Denial of ML Service |
| Active Scanning | Establish Accounts | Retrieval Tool Poisoning | ML-Enabled Product or Service | User Execution | Memory Infection | LLM Plugin Compromise | Instructions Silencing | RAG Credential Harvesting | Tool Definition Discovery | Message Poisoning | Memory Data Hording | Manipulate AI Model | Search Index C2 | Abuse Trusted Sites | Spamming ML System with Chaff Data |
| Search for Victim's Publicly Available Code Repositories | Acquire Public ML Artifacts | Guest User Abuse | AI Model Inference API Access | LLM Prompt Injection | Manipulate AI Model | System Instruction Keywords | Corrupt AI Model | Retrieval Tool Credential Harvesting | Failure Mode Mapping | | ML Artifact Collection | Create Proxy ML Model | Public Web C2 | LLM Data Leakage | External Harms |
| Search Open Technical Databases | Publish Poisoned Models | AI Supply Chain Compromise | Physical Environment Access | AI Click Bait | RAG Poisoning | Crescendo | LLM Jailbreak | | Discover AI Model Outputs | | Thread History Harvesting | Verify Attack | | Extract LLM System Prompt | Cost Harvesting |
| Search Application Repositories | Obtain Capabilities | RAG Poisoning | | LLM Plugin Compromise | LLM Prompt Self-Replication | Off-Target Language | Abuse Trusted Sites | | Discover ML Model Family | | Retrieval Tool Data Harvesting | Embed Malware | | Exfiltration via ML Inference API | Evade ML Model |
| Search Victim-Owned Websites | Publish Hallucinated Entities | Evade ML Model | | System Instruction Keywords | Poison Training Data | | Delayed Execution | | Whoami | | RAG Data Harvesting | Modify AI Model Architecture | | Clickable Link Rendering | Erode Dataset Integrity |
| Search Open AI Vulnerability Analysis | Publish Poisoned Datasets | User Manipulation | | Off-Target Language | Embed Malware | | Conditional Execution | | Discover ML Artifacts | | Data from Information Repositories | Poison AI Model | | Image Rendering | Mutative Tool Invocation |
| | Acquire Infrastructure | Phishing | | | Modify AI Model Architecture | | URL Familiarizing | | Cloud Service Discovery | | User Message Harvesting | | | Exfiltration via Cyber Means | Erode ML Model Integrity |
| | LLM Prompt Crafting | Web Poisoning | | | Poison AI Model | | Impersonation | | Discover LLM System Information | | | | | Write Tool Invocation | |
| | Poison Training Data | Valid Accounts | | | | | ASCII Smuggling | | Discover ML Model Ontology | | | | | | |
| | Retrieval Content Crafting | Drive-By Compromise | | | | | Evade ML Model | | Embedded Knowledge Exposure | | | | | | |
| | Develop Capabilities | Exploit Public-Facing Application | | | | | Distraction | | Discover Special Character Sets | | | | | | |
| | Commercial License Abuse | | | | | | Indirect Data Access | | Discover System Prompt | | | | | | |
| | Obtain Generative AI Capabilities | | | | | | Blank Image | | Discover System Instruction Keywords | | | | | | |
| | | | | | | | LLM Trusted Output Components Manipulation | | | | | | | | |
| | | | | | | | LLM Prompt Obfuscation | | | | | | | | |
| | | | | | | | Masquerading | | | | | | | | |
| | | | | | | | System Instruction Keywords | | | | | | | | |
| | | | | | | | Citation Silencing | | | | | | | | |
| | | | | | | | Crescendo | | | | | | | | |
| | | | | | | | Off-Target Language | | | | | | | | |
| | | | | | | | Citation Manipulation | | | | | | | | |
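A matrix like the one above can be loaded programmatically to answer questions such as "which techniques sit under a given tactic". The sketch below is a minimal, hedged example: the `TACTICS` column names follow standard MITRE ATLAS tactic ordering (an assumption inferred from the column contents, not stated by the table itself), and `parse_matrix` is a hypothetical helper, not part of any ATLAS tooling.

```python
# Minimal sketch: map each tactic (column) of a pipe-delimited matrix
# to the non-empty technique names in that column.
# TACTICS follows MITRE ATLAS column order -- an assumption for this example.

TACTICS = [
    "Reconnaissance", "Resource Development", "Initial Access", "AI Model Access",
    "Execution", "Persistence", "Privilege Escalation", "Defense Evasion",
    "Credential Access", "Discovery", "Lateral Movement", "Collection",
    "AI Attack Staging", "Command and Control", "Exfiltration", "Impact",
]

def parse_matrix(rows):
    """Collect per-tactic technique lists from pipe-delimited matrix rows."""
    matrix = {t: [] for t in TACTICS}
    for row in rows:
        # Drop the outer pipes, then split the 16 cells.
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        for tactic, cell in zip(TACTICS, cells):
            if cell:  # skip empty cells -- not every tactic has a technique per row
                matrix[tactic].append(cell)
    return matrix

# One sample row (cell values taken from the matrix above, one per column).
rows = [
    "| Active Scanning | Establish Accounts | Compromised User | Full ML Model Access"
    " | User Execution | Thread Infection | LLM Jailbreak | Evade ML Model"
    " | Unsecured Credentials | Whoami | Message Poisoning | Data from Local System"
    " | Craft Adversarial Data | Reverse Shell | LLM Data Leakage | Denial of ML Service |",
]
matrix = parse_matrix(rows)
print(matrix["Reconnaissance"])  # ['Active Scanning']
```

Feeding every body row of the matrix through `parse_matrix` reproduces the full tactic-to-technique mapping; techniques that appear in several columns (e.g. "Evade ML Model", "Crescendo") show up once per tactic, matching the matrix layout.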