Discovery
Type: tactic
Description: Understand the capabilities and mitigations in place.
Version: 0.1.0
Created At: 2025-03-04 10:27:40 -0500
Last Modified At: 2025-03-04 10:27:40 -0500
Tactic Order: 10
External References
Related Objects
- <-- Discover ML Model Family (technique): Determining the type or family of machine learning models in use to understand their architecture or potential vulnerabilities.
- <-- Discover LLM Hallucinations (technique): An Adversary can discover entities hallucinated by the LLM to use during later stages of the attack.
- <-- Whoami (technique): An adversary can discover information about the identity that the AI system is running on behalf of.
- <-- Discover LLM System Information (technique): Extracting internal LLM system information to understand the system's capabilities and aid in crafting prompts.
- <-- Failure Mode Mapping (technique): An adversary can discover information about how the AI system is protected to guide bypass development.
- <-- Discover ML Model Ontology (technique): Identifying the structure, components, or taxonomy of machine learning models to understand their organization and usage.
- <-- Discover ML Artifacts (technique): Searching for machine learning artifacts such as datasets, models, or configurations to gather insights into an organization's ML processes.
- <-- Discover AI Model Outputs (technique): Examining outputs generated by AI models to infer internal structures, behaviors, or data usage.
- <-- Embedded Knowledge Exposure (technique): An adversary can discover information that's been embedded in the AI system under the misconception that it would only be used for training and wouldn't be directly accessible to the AI system's users.
- <-- Tool Definition Discovery (technique): Understand the capabilities and mitigations in place.