Corrupt AI Model

Type: technique

Description: An adversary may deliberately corrupt a malicious AI model file so that it cannot be fully deserialized, evading detection by model scanners that depend on successful deserialization. Because serialization formats commonly used for models (such as Python pickle) execute embedded code as the byte stream is read, the corrupted model may still execute its malicious payload before deserialization fails.

Version: 0.1.0

Created At: 2025-07-23 10:23:39 -0400

Last Modified At: 2025-07-23 10:23:39 -0400


Related Objects

  • --> Defense Evasion (tactic): An adversary can corrupt a malicious AI model file so that it cannot be successfully deserialized, in order to evade detection by a model scanner.