Off-Target Language
Type: technique
Description: While system instructions are typically written in English, underlying foundational models may understand other languages as well. Using prompt injection techniques in a language other than the other used by the system instructions can effectively bypass their control. This method is also effective bypassing a model's inherent trained controls.
Version: 0.1.0
Created At: 2024-10-11 16:54:32 +0300
Last Modified At: 2024-10-11 16:54:32 +0300
External References
Related Objects
- --> Prompt Injection (technique): Sub-technique of
- --> Jailbreaking (technique): Sub-technique of
- --> Dmitry Lozovoy (entity): Demonstrated by