Discovery
Type: tactic
Description: Understand the capabilities and mitigations in place.
Version: 0.1.0
Created At: 2025-07-23 10:23:39 -0400
Last Modified At: 2025-07-23 10:23:39 -0400
Tactic Order: 10
External References
Related Objects
- <-- Discover LLM Hallucinations (technique): An Adversary can discover entities hallucinated by the LLM to use during later stages of the attack.
- <-- Tool Definition Discovery (technique): Understand the capabilities and mitigations in place.
- <-- Failure Mode Mapping (technique): An adversary can discover information about how the AI system is protected to guide bypass development.
- <-- Discover AI Model Outputs (technique): Examining outputs generated by AI models to infer internal structures, behaviors, or data usage.
- <-- Discover ML Model Family (technique): Determining the type or family of machine learning models in use to understand their architecture or potential vulnerabilities.
- <-- Whoami (technique): An adversary can discover information about the identity that the AI system is running on behalf of.
- <-- Discover ML Artifacts (technique): Searching for machine learning artifacts such as datasets, models, or configurations to gather insights into an organization's ML processes.
- <-- Cloud Service Discovery (technique): Discovering AI services provides adversaries intelligence about the target's AI infrastructure, including model types, access endpoints, container registries, and security configurations. This reconnaissance enables adversaries to map the AI attack surface, identify high-value targets like LLM APIs
- <-- Discover LLM System Information (technique): Extracting internal LLM system information to understand the system's capabilities and aid in crafting prompts.
- <-- Discover ML Model Ontology (technique): Identifying the structure, components, or taxonomy of machine learning models to understand their organization and usage.
- <-- Embedded Knowledge Exposure (technique): An adversary can discover information that's been embedded in the AI system under the misconception that it would only be used for training and wouldn't be directly accessible to the AI system's users.