AI Agents Attack Matrix

Tactics and their techniques:

Reconnaissance
- Search Application Repositories
- Active Scanning
- Search Open Technical Databases
- Search for Victim's Publicly Available Code Repositories
- Search Open AI Vulnerability Analysis
- Search Victim-Owned Websites
- Gather RAG-Indexed Targets

Resource Development
- Publish Poisoned Datasets
- Publish Hallucinated Entities
- Develop Capabilities
- Commercial License Abuse
- Obtain Capabilities
- Stage Capabilities
- Establish Accounts
- Acquire Public AI Artifacts
- Retrieval Content Crafting
- Publish Poisoned Models
- Acquire Infrastructure
- LLM Prompt Crafting
- Poison Training Data
- Obtain Generative AI Capabilities

Initial Access
- Drive-By Compromise
- Retrieval Tool Poisoning
- Evade AI Model
- RAG Poisoning
- User Manipulation
- Exploit Public-Facing Application
- Valid Accounts
- Compromised User
- Web Poisoning
- Phishing
- AI Supply Chain Compromise
- Guest User Abuse

AI Model Access
- AI-Enabled Product or Service
- AI Model Inference API Access
- Full AI Model Access
- Physical Environment Access

Execution
- Command and Scripting Interpreter
- AI Click Bait
- User Execution
- AI Agent Tool Invocation
- LLM Prompt Injection
- System Instruction Keywords
- Triggered Prompt Injection
- Indirect Prompt Injection
- Direct Prompt Injection
- Off-Target Language

Persistence
- AI Agent Context Poisoning
- RAG Poisoning
- Manipulate AI Model
- Modify AI Agent Configuration
- LLM Prompt Self-Replication
- Poison Training Data
- Thread Poisoning
- Embed Malware
- Modify AI Model Architecture
- Memory Poisoning
- Poison AI Model

Privilege Escalation
- AI Agent Tool Invocation
- LLM Jailbreak
- System Instruction Keywords
- Crescendo
- Off-Target Language

Defense Evasion
- Evade AI Model
- Corrupt AI Model
- False RAG Entry Injection
- Blank Image
- LLM Prompt Obfuscation
- Masquerading
- Distraction
- Instructions Silencing
- Impersonation
- URL Familiarizing
- Indirect Data Access
- Abuse Trusted Sites
- LLM Trusted Output Components Manipulation
- Conditional Execution
- ASCII Smuggling
- LLM Jailbreak
- Citation Silencing
- Citation Manipulation
- System Instruction Keywords
- Crescendo
- Off-Target Language

Credential Access
- Credentials from AI Agent Configuration
- Unsecured Credentials
- Retrieval Tool Credential Harvesting
- RAG Credential Harvesting

Discovery
- Discover AI Agent Configuration
- Discover AI Model Ontology
- Discover LLM System Information
- Failure Mode Mapping
- Discover LLM Hallucinations
- Discover AI Artifacts
- Discover AI Model Family
- Whoami
- Cloud Service Discovery
- Discover AI Model Outputs
- Discover Embedded Knowledge
- Discover System Prompt
- Discover Tool Definitions
- Discover Activation Triggers
- Discover Special Character Sets
- Discover System Instruction Keywords

Lateral Movement
- Message Poisoning
- Shared Resource Poisoning

Collection
- Data from AI Services
- Data from Information Repositories
- Thread History Harvesting
- Memory Data Hording
- User Message Harvesting
- AI Artifact Collection
- Data from Local System
- Retrieval Tool Data Harvesting
- RAG Data Harvesting

AI Attack Staging
- Verify Attack
- Manipulate AI Model
- Craft Adversarial Data
- Create Proxy AI Model
- Embed Malware
- Modify AI Model Architecture
- Poison AI Model

Command And Control
- Public Web C2
- Search Index C2
- Reverse Shell

Exfiltration
- Web Request Triggering
- Exfiltration via AI Inference API
- Image Rendering
- Exfiltration via Cyber Means
- Extract LLM System Prompt
- LLM Data Leakage
- Clickable Link Rendering
- Abuse Trusted Sites
- Exfiltration via AI Agent Tool Invocation

Impact
- Evade AI Model
- Spamming AI System with Chaff Data
- Erode AI Model Integrity
- Erode Dataset Integrity
- Mutative Tool Invocation
- Cost Harvesting
- Denial of AI Service
- External Harms
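Because several techniques (e.g. Evade AI Model, RAG Poisoning, AI Agent Tool Invocation) recur under more than one tactic, the matrix is naturally modeled as a tactic-to-techniques mapping rather than a flat list. A minimal Python sketch, with only four of the sixteen tactics filled in for brevity (names taken verbatim from the matrix):

```python
# Illustrative sketch only: the matrix as a tactic -> techniques mapping.
# Only four of the sixteen tactics are populated here.
MATRIX: dict[str, list[str]] = {
    "AI Model Access": [
        "AI-Enabled Product or Service",
        "AI Model Inference API Access",
        "Full AI Model Access",
        "Physical Environment Access",
    ],
    "Lateral Movement": [
        "Message Poisoning",
        "Shared Resource Poisoning",
    ],
    "Command And Control": [
        "Public Web C2",
        "Search Index C2",
        "Reverse Shell",
    ],
    "Impact": [
        "Evade AI Model",
        "Spamming AI System with Chaff Data",
        "Erode AI Model Integrity",
        "Erode Dataset Integrity",
        "Mutative Tool Invocation",
        "Cost Harvesting",
        "Denial of AI Service",
        "External Harms",
    ],
}


def tactics_for(technique: str) -> list[str]:
    """List every tactic (of those mapped above) that includes the technique."""
    return [tactic for tactic, techniques in MATRIX.items() if technique in techniques]
```

Note that a lookup only reflects the tactics actually mapped: here `tactics_for("Evade AI Model")` returns just `["Impact"]`, whereas in the full matrix the same technique also appears under Initial Access and Defense Evasion.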