AI Agents Attack Matrix

ReconnaissanceResource DevelopmentInitial AccessAI Model AccessExecutionPersistencePrivilege EscalationDefense EvasionCredential AccessDiscoveryLateral MovementCollectionAI Attack StagingCommand And ControlExfiltrationImpact
Search Application RepositoriesPublish Poisoned DatasetsDrive-By CompromiseAI-Enabled Product or ServiceCommand and Scripting InterpreterAI Agent Context PoisoningAI Agent Tool InvocationEvade AI ModelCredentials from AI Agent ConfigurationDiscover AI Agent ConfigurationMessage PoisoningData from AI ServicesVerify AttackPublic Web C2Web Request TriggeringEvade AI Model
Active ScanningPublish Hallucinated EntitiesEvade AI ModelAI Model Inference API AccessAI Click BaitRAG PoisoningLLM JailbreakCorrupt AI ModelUnsecured CredentialsDiscover AI Model OntologyShared Resource PoisoningData from Information RepositoriesManipulate AI ModelAI Service APIExfiltration via AI Inference APISpamming AI System with Chaff Data
Search Open Technical DatabasesDevelop CapabilitiesRAG PoisoningFull AI Model AccessUser ExecutionManipulate AI ModelSystem Instruction KeywordsFalse RAG Entry InjectionAI Agent Tool Credential HarvestingDiscover LLM System InformationThread History HarvestingCraft Adversarial DataSearch Index C2Image RenderingErode AI Model Integrity
Search for Victim's Publicly Available Code RepositoriesCommercial License AbuseUser ManipulationPhysical Environment AccessAI Agent Tool InvocationModify AI Agent ConfigurationCrescendoBlank ImageRAG Credential HarvestingFailure Mode MappingMemory Data HordingCreate Proxy AI ModelReverse ShellExfiltration via Cyber MeansData Destruction via AI Agent Tool Invocation
Search Open AI Vulnerability AnalysisObtain CapabilitiesExploit Public-Facing ApplicationLLM Prompt InjectionLLM Prompt Self-ReplicationOff-Target LanguageLLM Prompt ObfuscationDiscover LLM HallucinationsUser Message HarvestingEmbed MalwareExtract LLM System PromptErode Dataset Integrity
Search Victim-Owned WebsitesStage CapabilitiesAI Agent Tool Data PoisoningHidden Triggers in Multimodal InputsAI Agent Tool PoisoningMasqueradingDiscover AI ArtifactsAI Artifact CollectionModify AI Model ArchitectureLLM Data LeakageCost Harvesting
Gather RAG-Indexed TargetsEstablish AccountsValid AccountsSystem Instruction KeywordsPoison Training DataDistractionDiscover AI Model FamilyData from Local SystemPoison AI ModelClickable Link RenderingDenial of AI Service
Acquire Public AI ArtifactsAI Agent Tool PoisoningTriggered Prompt InjectionThread PoisoningInstructions SilencingWhoamiRAG Data HarvestingAbuse Trusted SitesExternal Harms
Retrieval Content CraftingCompromised UserIndirect Prompt InjectionEmbed MalwareImpersonationCloud Service DiscoveryAI Agent Tool Data HarvestingExfiltration via AI Agent Tool InvocationAI-Targeted Cloaking
Publish Poisoned ModelsWeb PoisoningDirect Prompt InjectionModify AI Model ArchitectureHidden Triggers in Multimodal InputsDiscover AI Model Outputs
Acquire InfrastructurePhishingOff-Target LanguageMemory PoisoningURL FamiliarizingDiscover Embedded Knowledge
LLM Prompt CraftingAI Supply Chain CompromisePoison AI ModelIndirect Data AccessDiscover System Prompt
Poison Training DataGuest User AbuseAbuse Trusted SitesDiscover Tool Definitions
Obtain Generative AI CapabilitiesLLM Trusted Output Components ManipulationDiscover Activation Triggers
Conditional ExecutionDiscover Special Character Sets
Delay Execution of LLM InstructionsDiscover System Instruction Keywords
ASCII Smuggling
LLM Jailbreak
Citation Silencing
Citation Manipulation
System Instruction Keywords
Crescendo
Off-Target Language