AI Agents Attack Matrix

ReconnaissanceResource DevelopmentInitial AccessML Model AccessExecutionPersistencePrivilege EscalationDefense EvasionCredential AccessDiscoveryLateral MovementCollectionML Attack StagingCommand And ControlExfiltrationImpact
Gather RAG-Indexed TargetsStage CapabilitiesCompromised UserFull ML Model AccessCommand and Scripting InterpreterThread InfectionLLM JailbreakFalse RAG Entry InjectionUnsecured CredentialsDiscover LLM HallucinationsShared Resource PoisoningData from Local SystemCraft Adversarial DataReverse ShellWeb Request TriggeringDenial of ML Service
Active ScanningEstablish AccountsRetrieval Tool PoisoningML-Enabled Product or ServiceUser ExecutionMemory InfectionLLM Plugin CompromiseInstructions SilencingRAG Credential HarvestingTool Definition DiscoveryMessage PoisoningMemory Data HordingManipulate AI ModelSearch Index C2Abuse Trusted SitesSpamming ML System with Chaff Data
Search for Victim's Publicly Available Code RepositoriesAcquire Public ML ArtifactsGuest User AbuseAI Model Inference API AccessLLM Prompt InjectionManipulate AI ModelSystem Instruction KeywordsCorrupt AI ModelRetrieval Tool Credential HarvestingFailure Mode MappingML Artifact CollectionCreate Proxy ML ModelPublic Web C2LLM Data LeakageExternal Harms
Search Open Technical DatabasesPublish Poisoned ModelsAI Supply Chain CompromisePhysical Environment AccessAI Click BaitRAG PoisoningCrescendoLLM JailbreakDiscover AI Model OutputsThread History HarvestingVerify AttackExtract LLM System PromptCost Harvesting
Search Application RepositoriesObtain CapabilitiesRAG PoisoningLLM Plugin CompromiseLLM Prompt Self-ReplicationOff-Target LanguageAbuse Trusted SitesDiscover ML Model FamilyRetrieval Tool Data HarvestingEmbed MalwareExfiltration via ML Inference APIEvade ML Model
Search Victim-Owned WebsitesPublish Hallucinated EntitiesEvade ML ModelSystem Instruction KeywordsPoison Training DataDelayed ExecutionWhoamiRAG Data HarvestingModify AI Model ArchitectureClickable Link RenderingErode Dataset Integrity
Search Open AI Vulnerability AnalysisPublish Poisoned DatasetsUser ManipulationOff-Target LanguageEmbed MalwareConditional ExecutionDiscover ML ArtifactsData from Information RepositoriesPoison AI ModelImage RenderingMutative Tool Invocation
Acquire InfrastructurePhishingModify AI Model ArchitectureURL FamiliarizingCloud Service DiscoveryUser Message HarvestingExfiltration via Cyber MeansErode ML Model Integrity
LLM Prompt CraftingWeb PoisoningPoison AI ModelImpersonationDiscover LLM System InformationWrite Tool Invocation
Poison Training DataValid AccountsASCII SmugglingDiscover ML Model Ontology
Retrieval Content CraftingDrive-By CompromiseEvade ML ModelEmbedded Knowledge Exposure
Develop CapabilitiesExploit Public-Facing ApplicationDistractionDiscover Special Character Sets
Commercial License AbuseIndirect Data AccessDiscover System Prompt
Obtain Generative AI CapabilitiesBlank ImageDiscover System Instruction Keywords
LLM Trusted Output Components Manipulation
LLM Prompt Obfuscation
Masquerading
System Instruction Keywords
Citation Silencing
Crescendo
Off-Target Language
Citation Manipulation