Discover LLM Hallucinations
Type: technique
Description: Adversaries may prompt large language models and identify hallucinated entities in the responses. They may request software packages, commands, URLs, organization names, or email addresses, and flag suggested entities that have no corresponding real-world source.
Version: 0.1.0
Created At: 2024-12-31 14:18:56 -0500
Last Modified At: 2024-12-31 14:18:56 -0500
External References
- Diving Deeper into AI Package Hallucinations, Lasso Security
- Can you trust ChatGPT’s package recommendations?, Vulcan
Related Objects
- --> Discovery (tactic): An adversary can discover entities hallucinated by the LLM for use during later stages of the attack.
- --> Publish Hallucinated Entities (technique): An adversary may take advantage of hallucinated entities to point victims to rogue entities created by the adversary.
- <-- Publish Hallucinated Entities (technique): The adversary needs to discover entities commonly hallucinated by the LLM in order to create the corresponding entities.
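A minimal sketch of one way this discovery could be carried out for software packages: repeatedly ask the model a coding question, extract the package names it recommends, and check each against the public PyPI index, where a missing entry suggests a hallucination. The prompt text, the `ask_llm()` stub, and the repeat count are illustrative assumptions; the model call must be replaced with the target LLM's actual API.

```python
"""Sketch: surface package names an LLM suggests that do not exist on PyPI."""
import re
import urllib.error
import urllib.request


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to the target LLM's API."""
    raise NotImplementedError("Replace with a real call to the target model.")


def extract_package_names(text: str) -> set[str]:
    """Pull pip-style package names out of the model's free-text answer."""
    return set(re.findall(r"\bpip install ([A-Za-z0-9._-]+)", text))


def exists_on_pypi(name: str) -> bool:
    """Query the public PyPI JSON API; a 404 means the package is unregistered."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


def discover_hallucinated_packages(question: str, rounds: int = 5) -> set[str]:
    """Ask the same question several times and collect suggested packages PyPI does not know."""
    hallucinated: set[str] = set()
    for _ in range(rounds):
        answer = ask_llm(question)
        for pkg in extract_package_names(answer):
            if not exists_on_pypi(pkg):
                hallucinated.add(pkg)
    return hallucinated


if __name__ == "__main__":
    # Illustrative question; any developer-style query that elicits package suggestions works.
    q = "How do I parse XYZ files in Python? Give me the pip install command."
    print(discover_hallucinated_packages(q))
```

Packages that come back as nonexistent across repeated rounds are candidates for the follow-on Publish Hallucinated Entities technique.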