AI Agent Tool Poisoning
Type: technique
Description: Adversaries may achieve persistence by poisoning tools used by AI agents, using built-in tools or tools available to the agent via Model-Context-Protocol (MCP) connections. This can involve introducing malicious tools at the outset or compromising benign tools already integrated into the agent's environment.
By altering tool behavior such as modifying parameters or description, injecting hidden logic, or redirecting outputs, attackers can maintain long-term influence over the agent’s actions, decisions, or external interactions. Poisoned tools may silently exfiltrate data, execute unauthorized commands, or manipulate downstream processes without raising suspicion.
Version: 0.1.0
Created At: 2025-12-22 07:58:23 -0500
Last Modified At: 2025-12-22 07:58:23 -0500
External References
- MCP Security Notification: Tool Poisoning Attacks, Invariant Labs
- First Malicious MCP in the Wild: The Postmark Backdoor That's Stealing Your Emails, Koi
Related Objects
- --> Persistence (tactic): Altering or injecting malicious behavior into tools integrated with AI agents in order to achieve long-term unauthorized influence or control.
- --> Initial Access (tactic): Gaining initial access by tricking users into installing a malicious MCP server that includes poisoned tools.