Search for Victim's Publicly Available Research Materials | Commercial License Abuse | Retrieval Tool Poisoning | Physical Environment Access | User Execution | RAG Poisoning | LLM Plugin Compromise | URL Familiarizing | Unsecured Credentials | Discover ML Artifacts | Message Poisoning | Data from Information Repositories | Craft Adversarial Data | Search Index C2 | Write Tool Invocation | External Harms |
Search Victim-Owned Websites | LLM Prompt Crafting | RAG Poisoning | Full ML Model Access | LLM Plugin Compromise | Thread Infection | LLM Jailbreak | LLM Trusted Output Components Manipulation | RAG Credential Harvesting | Discover LLM Hallucinations | Shared Resource Poisoning | Data from Local System | Backdoor ML Model | Public Web C2 | Granular Web Request Triggering | Cost Harvesting |
Search for Victim's Publicly Available Code Repositories | Acquire Infrastructure | Compromised User | AI Model Inference API Access | LLM Prompt Injection | LLM Prompt Self-Replication | Off-Target Language | Abuse Trusted Sites | Retrieval Tool Credential Harvesting | Discover AI Model Outputs | | Thread History Harvesting | Create Proxy ML Model | | Abuse Trusted Sites | Evade ML Model |
Search Application Repositories | Develop Capabilities | Web Poisoning | ML-Enabled Product or Service | AI Click Bait | Poison Training Data | Crescendo | Blank Image | | Discover ML Model Family | | RAG Data Harvesting | Verify Attack | | Granular Clickable Link Rendering | Erode ML Model Integrity |
Gather RAG-Indexed Targets | Poison Training Data | User Manipulation | | Command and Scripting Interpreter | Backdoor ML Model | System Instruction Keywords | Evade ML Model | | Failure Mode Mapping | | Retrieval Tool Data Harvesting | | | Web Request Triggering | Spamming ML System with Chaff Data |
Active Scanning | Publish Hallucinated Entities | Guest User Abuse | | Off-Target Language | Memory Infection | | ASCII Smuggling | | Discover ML Model Ontology | | User Message Harvesting | | | Exfiltration via ML Inference API | Mutative Tool Invocation |
| Retrieval Content Crafting | ML Supply Chain Compromise | | System Instruction Keywords | | | LLM Prompt Obfuscation | | Discover LLM System Information | | ML Artifact Collection | | | Extract LLM System Prompt | Erode Dataset Integrity |
| Establish Accounts | Evade ML Model | | | | | Delayed Execution | | Whoami | | Memory Data Hording | | | Image Rendering | Denial of ML Service |
| Publish Poisoned Datasets | Valid Accounts | | | | | Instructions Silencing | | Tool Definition Discovery | | | | | LLM Data Leakage | |
| Obtain Capabilities | Exploit Public-Facing Application | | | | | False RAG Entry Injection | | Embedded Knowledge Exposure | | | | | Clickable Link Rendering | |
| Publish Poisoned Models | Phishing | | | | | Distraction | | Discover System Prompt | | | | | Exfiltration via Cyber Means | |
| Acquire Public ML Artifacts | | | | | | Conditional Execution | | Discover System Instruction Keywords | | | | | | |
| Obtain Generative AI Capabilities | | | | | | LLM Jailbreak | | Discover Special Character Sets | | | | | | |
| | | | | | | Indirect Data Access | | | | | | | | |
| | | | | | | Citation Manipulation | | | | | | | | |
| | | | | | | Off-Target Language | | | | | | | | |
| | | | | | | Crescendo | | | | | | | | |
| | | | | | | System Instruction Keywords | | | | | | | | |
| | | | | | | Citation Silencing | | | | | | | | |