Initial Access

Type: tactic

Description: The adversary is trying to gain access to the AI system.

The target system could be a network, mobile device, or an edge device such as a sensor platform. The AI capabilities used by the system could be local with onboard or cloud-enabled AI capabilities.

Initial Access consists of techniques that use various entry vectors to gain their initial foothold within the system.

Version: 0.1.0

Created At: 2025-07-23 10:23:39 -0400

Last Modified At: 2025-07-23 10:23:39 -0400

Tactic Order: 3


External References

  • <-- Compromised User (technique): An adversary can directly access the AI system by using a compromised user account.
  • <-- Retrieval Tool Poisoning (technique): An adversary can indirectly inject malicious content into a thread by contaminating data accessible to the AI system via an invocable retrival tool.
  • <-- Guest User Abuse (technique): An adversary could leverage a guest user account as a foothold into the target environment.
  • <-- AI Supply Chain Compromise (technique): Compromising machine learning supply chains to gain unauthorized access or introduce malicious components.
  • <-- RAG Poisoning (technique): An adversary can gain initial access by injecting malicious content into a publicly available data source indexed by RAG.
  • <-- Evade ML Model (technique): Bypassing or evading machine learning models used for security or detection to gain unauthorized access.
  • <-- User Manipulation (technique): An adversary can indirectly inject malicious content into a thread by manipulating a user to do it unwittingly.
  • <-- Phishing (technique): Using phishing techniques to gain unauthorized access to systems or machine learning environments by tricking individuals into revealing sensitive information.
  • <-- Web Poisoning (technique): An adversary can indirectly inject malicious content into a thread by hiding it in a public website that the AI system might search for and read.
  • <-- Valid Accounts (technique): Using valid accounts to gain initial access to systems or machine learning environments.
  • <-- Drive-By Compromise (technique): An adversary can access the AI system by taking advantage of a user visiting a compromised website.
  • <-- Exploit Public-Facing Application (technique): Exploiting vulnerabilities in public-facing applications to gain unauthorized access to systems or data.