User Manipulation
Type: technique
Description: The adversary manipulates a user to copy-and-paste data into the AI system, which contains malicious content.
Version: 0.1.0
Created At: 2024-10-11 16:54:32 +0300
Last Modified At: 2024-10-11 16:54:32 +0300
External References
Related Objects
- --> Initial Access (tactic): An adversary can indirectly inject malicious content into a thread by manipulating a user to do it unwittingly.
- <-- Exfiltration of personal information from ChatGPT via prompt injection (procedure): The attacker can get the user to paste a large piece of text (e.g., a piece of code) with an embedded instruction which is easy to overlook.
- <-- GitHub Copilot Chat: From Prompt Injection to Data Exfiltration (procedure): Malicious instructions are added as a comment in a local source code file, loaded by the user into VSCode.