Initial Access

Type: tactic

Description: Get your text into the LLM prompt

Version: 0.1.0

Created At: 2024-10-03 22:24:49 +0300

Last Modified At: 2024-10-03 22:24:49 +0300

Tactic Order: 3


External References

  • <-- Compromised User (technique): An adversary can directly access the AI system by using a compromised user account.
  • <-- Targeted RAG Poisoning (technique): An adversary can target a specific user prompt by crafting content that would be surfaced by a RAG system to respond to that query.
  • <-- RAG Poisoning (technique): An adversary can indirectly inject malicious content into a thread by contaminating RAG data.
  • <-- User Manipulation (technique): An adversary can indirectly inject malicious content into a thread by manipulating a user to do it unwittingly.
  • <-- Retrieval Tool Poisoning (technique): An adversary can indirectly inject malicious content into a thread by contaminating data accessible to the AI system via an invocable retrival tool.
  • <-- Guest User Abuse (technique): An adversary could leverage a guest user account as a foothold into the target environment.
  • <-- Web Poisoning (technique): An adversary can indirectly inject malicious content into a thread by hiding it in a public website that the AI system might search for and read.