Initial Access
Type: tactic
Description: Get your text into the LLM prompt
Version: 0.1.0
Created At: 2024-10-11 16:54:32 +0300
Last Modified At: 2024-10-11 16:54:32 +0300
Tactic Order: 3
External References
Related Objects
- <-- RAG Poisoning (technique): An adversary can indirectly inject malicious content into a thread by contaminating RAG data.
- <-- Retrieval Tool Poisoning (technique): An adversary can indirectly inject malicious content into a thread by contaminating data accessible to the AI system via an invocable retrival tool.
- <-- Targeted RAG Poisoning (technique): An adversary can target a specific user prompt by crafting content that would be surfaced by a RAG system to respond to that query.
- <-- Compromised User (technique): An adversary can directly access the AI system by using a compromised user account.
- <-- Web Poisoning (technique): An adversary can indirectly inject malicious content into a thread by hiding it in a public website that the AI system might search for and read.
- <-- Guest User Abuse (technique): An adversary could leverage a guest user account as a foothold into the target environment.
- <-- User Manipulation (technique): An adversary can indirectly inject malicious content into a thread by manipulating a user to do it unwittingly.