LLM Plugin Compromise
Type: technique
Description: Adversaries may use their access to an LLM that is part of a larger system to compromise connected plugins. LLMs are often connected to other services or resources via plugins to increase their capabilities. Plugins may include integrations with other applications, access to public or private data sources, and the ability to execute code.
This may allow adversaries to issue API calls to integrated applications or plugins, granting the adversary increased privileges on the system. Adversaries may take advantage of connected data sources to retrieve sensitive information. They may also use an LLM integrated with a command or script interpreter to execute arbitrary instructions.
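The risk described above can be sketched with a minimal, hypothetical plugin dispatcher: an agent that routes LLM output to registered plugins verbatim, so an adversary who can steer the model's output also chooses which plugin runs and with what argument. All names (`search_private_docs`, `run_shell`, `dispatch`) are illustrative, not a real framework.

```python
import subprocess

def search_private_docs(query: str) -> str:
    # Stand-in for a plugin with access to a private data source.
    return f"[private results for {query!r}]"

def run_shell(command: str) -> str:
    # Code-interpreter plugin: runs arbitrary commands with the agent's
    # privileges -- the dangerous capability described above.
    return subprocess.run(command, shell=True, capture_output=True,
                          text=True).stdout

PLUGINS = {"search": search_private_docs, "shell": run_shell}

def dispatch(llm_output: str) -> str:
    # The agent trusts the model's tool call without authorization
    # checks. Adversary-controlled LLM output (e.g. via prompt
    # injection) selects the plugin and its argument, inheriting the
    # plugin's privileges.
    name, _, arg = llm_output.partition(":")
    return PLUGINS[name](arg)

# Attacker-influenced model output selects the shell plugin:
print(dispatch("shell:echo compromised"))
print(dispatch("search:quarterly report"))
```

Mitigations typically insert an authorization layer between `dispatch` and the plugin: allow-listing plugins per session, validating arguments, and running interpreters in a sandbox rather than with the agent's own privileges.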
Version: 0.1.0
Created At: 2025-03-04 10:27:40 -0500
Last Modified At: 2025-03-04 10:27:40 -0500
External References
Related Objects
- --> Execution (tactic): Compromising large language model (LLM) plugins to execute malicious actions or influence machine learning outcomes.
- --> Privilege Escalation (tactic): Compromising large language model (LLM) plugins to gain additional privileges.