LLM Trusted Output Components Manipulation

Type: technique

Description: Adversaries may use prompts to a large language model (LLM) that manipulate components of its response in order to make the output appear trustworthy to the user. This helps the adversary continue operating in the victim's environment and evade detection by the users the model interacts with. The LLM may be instructed to tailor its language so that it appears more trustworthy, or to attempt to manipulate the user into taking certain actions. Other response components that can be manipulated include links, recommended follow-up actions, retrieved document metadata, and citations.
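
As a minimal defensive sketch related to this technique (not part of the original entry), the Python snippet below checks one of the manipulable components named above: links. It compares URLs appearing in an LLM response against the hosts of the documents actually retrieved for that response and flags any link that does not originate from a retrieved source. All function names, variable names, and URLs are hypothetical assumptions for illustration; a real deployment would also validate citations, retrieved document metadata, and recommended follow-up actions.

```python
import re
from urllib.parse import urlparse

# Hypothetical helper: find URLs embedded in the model's response text.
URL_PATTERN = re.compile(r"https?://[^\s)\]\"']+")


def extract_urls(text: str) -> set[str]:
    """Return the set of URLs that appear in the given text."""
    return set(URL_PATTERN.findall(text))


def flag_untrusted_links(response_text: str, retrieved_doc_urls: set[str]) -> list[str]:
    """Return links in the response whose host does not match any retrieved source.

    This is a sketch of one check only: links. Other output components listed in
    the description (citations, metadata, follow-up actions) would need their own
    validation.
    """
    trusted_hosts = {urlparse(u).netloc.lower() for u in retrieved_doc_urls}
    suspicious = []
    for url in extract_urls(response_text):
        if urlparse(url).netloc.lower() not in trusted_hosts:
            suspicious.append(url)
    return suspicious


if __name__ == "__main__":
    # Example: the model was grounded on one internal document, but its answer
    # cites an attacker-controlled link that was never retrieved.
    retrieved = {"https://docs.example.com/policy/password-reset"}
    response = (
        "Per company policy, reset your password here: "
        "https://example-logins.attacker.test/reset (source: docs.example.com)"
    )
    print(flag_untrusted_links(response, retrieved))
    # -> ['https://example-logins.attacker.test/reset']
```

A check like this treats the retrieved sources as the trust boundary for the response, which is one plausible way to surface manipulated links; it does not address language-level manipulation, which requires separate review.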

Version: 0.1.0

Created At: 2025-03-04 10:27:40 -0500

Last Modified At: 2025-03-04 10:27:40 -0500
