| Reconnaissance | Resource Development | Initial Access | AI Model Access | Execution | Persistence | Privilege Escalation | Defense Evasion | Credential Access | Discovery | Lateral Movement | Collection | AI Attack Staging | Command and Control | Exfiltration | Impact |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Search Application Repositories | Publish Poisoned Datasets | Drive-By Compromise | AI-Enabled Product or Service | Command and Scripting Interpreter | AI Agent Context Poisoning | AI Agent Tool Invocation | Evade AI Model | Credentials from AI Agent Configuration | Discover AI Agent Configuration | Message Poisoning | Data from AI Services | Verify Attack | Public Web C2 | Web Request Triggering | Evade AI Model |
| Active Scanning | Publish Hallucinated Entities | Retrieval Tool Poisoning | AI Model Inference API Access | AI Click Bait | RAG Poisoning | LLM Jailbreak | Corrupt AI Model | Unsecured Credentials | Discover AI Model Ontology | Shared Resource Poisoning | Data from Information Repositories | Manipulate AI Model | Search Index C2 | Exfiltration via AI Inference API | Spamming AI System with Chaff Data |
| Search Open Technical Databases | Develop Capabilities | Evade AI Model | Full AI Model Access | User Execution | Manipulate AI Model | System Instruction Keywords | False RAG Entry Injection | Retrieval Tool Credential Harvesting | Discover LLM System Information | | Thread History Harvesting | Craft Adversarial Data | Reverse Shell | Image Rendering | Erode AI Model Integrity |
| Search for Victim's Publicly Available Code Repositories | Commercial License Abuse | RAG Poisoning | Physical Environment Access | AI Agent Tool Invocation | Modify AI Agent Configuration | Crescendo | Blank Image | RAG Credential Harvesting | Failure Mode Mapping | | Memory Data Hording | Create Proxy AI Model | | Exfiltration via Cyber Means | Erode Dataset Integrity |
| Search Open AI Vulnerability Analysis | Obtain Capabilities | User Manipulation | | LLM Prompt Injection | LLM Prompt Self-Replication | Off-Target Language | LLM Prompt Obfuscation | | Discover LLM Hallucinations | | User Message Harvesting | Embed Malware | | Extract LLM System Prompt | Mutative Tool Invocation |
| Search Victim-Owned Websites | Stage Capabilities | Exploit Public-Facing Application | | System Instruction Keywords | Poison Training Data | | Masquerading | | Discover AI Artifacts | | AI Artifact Collection | Modify AI Model Architecture | | LLM Data Leakage | Cost Harvesting |
| Gather RAG-Indexed Targets | Establish Accounts | Valid Accounts | | Triggered Prompt Injection | Thread Poisoning | | Distraction | | Discover AI Model Family | | Data from Local System | Poison AI Model | | Clickable Link Rendering | Denial of AI Service |
| | Acquire Public AI Artifacts | Compromised User | | Indirect Prompt Injection | Embed Malware | | Instructions Silencing | | Whoami | | Retrieval Tool Data Harvesting | | | Abuse Trusted Sites | External Harms |
| | Retrieval Content Crafting | Web Poisoning | | Direct Prompt Injection | Modify AI Model Architecture | | Impersonation | | Cloud Service Discovery | | RAG Data Harvesting | | | Exfiltration via AI Agent Tool Invocation | |
| | Publish Poisoned Models | Phishing | | Off-Target Language | Memory Poisoning | | URL Familiarizing | | Discover AI Model Outputs | | | | | | |
| | Acquire Infrastructure | AI Supply Chain Compromise | | | Poison AI Model | | Indirect Data Access | | Discover Embedded Knowledge | | | | | | |
| | LLM Prompt Crafting | Guest User Abuse | | | | | Abuse Trusted Sites | | Discover System Prompt | | | | | | |
| | Poison Training Data | | | | | | LLM Trusted Output Components Manipulation | | Discover Tool Definitions | | | | | | |
| | Obtain Generative AI Capabilities | | | | | | Conditional Execution | | Discover Activation Triggers | | | | | | |
| | | | | | | | ASCII Smuggling | | Discover Special Character Sets | | | | | | |
| | | | | | | | LLM Jailbreak | | Discover System Instruction Keywords | | | | | | |
| | | | | | | | Citation Silencing | | | | | | | | |
| | | | | | | | Citation Manipulation | | | | | | | | |
| | | | | | | | System Instruction Keywords | | | | | | | | |
| | | | | | | | Crescendo | | | | | | | | |
| | | | | | | | Off-Target Language | | | | | | | | |