AI-Targeted Cloaking

Type: technique

Description: Adversaries may manipulate AI systems by selectively presenting different content to AI-based agents than to human users. The technique targets AI crawlers, assistants, or inference models, serving them crafted content designed to mislead the model or alter its downstream behavior. Cloaking decisions may be keyed on request headers, user-agent strings, or other indicators that distinguish AI access from regular traffic. By controlling the content seen only by the AI system, attackers can influence model outputs, decisions, or beliefs without alerting human reviewers.
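
The user-agent branch at the core of the technique can be shown in a few lines. The Python sketch below serves one page to ordinary browsers and a different one to requests whose User-Agent matches a small list of AI-crawler markers; the marker list and page bodies are illustrative assumptions, not a complete fingerprinting method.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative user-agent substrings of published AI crawlers; real
# adversaries may also fingerprint other headers, IP ranges, or timing.
AI_UA_MARKERS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended")

# Placeholder page bodies; in a real attack the cloaked page would carry
# content crafted to steer the model's output.
HUMAN_PAGE = b"<html><body><p>Ordinary page shown to people.</p></body></html>"
CLOAKED_PAGE = b"<html><body><p>Alternate page shown only to AI agents.</p></body></html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # Branch on the user-agent string: AI traffic gets the crafted page.
        body = CLOAKED_PAGE if any(m in ua for m in AI_UA_MARKERS) else HUMAN_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CloakingHandler).serve_forever()
```

Because the divergence is observable, the same logic suggests a detection approach: fetching a URL twice, once with a browser-like user agent and once with an AI-crawler user agent, and diffing the responses surfaces this form of cloaking, though cloaking keyed on IP ranges or other signals requires broader vantage points.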

Version: 0.1.0

Created At: 2025-12-22 07:58:23 -0500

Last Modified At: 2025-12-22 07:58:23 -0500


Related Tactics

  • Impact: Causing harm by influencing the behavior, outputs, or decisions of AI systems in ways that serve adversarial goals.