Modify AI Model Architecture

Type: technique

Description: Adversaries may directly modify an AI model's architecture to re-define its behavior. This can include adding or removing layers, as well as adding pre- or post-processing operations.
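
The following is a minimal sketch of what such a direct modification might look like, assuming the adversary can load and re-save a victim's PyTorch model artifact; the model, file paths, and the SuppressClass operation are hypothetical illustrations, not part of any specific incident.

```python
# Sketch: appending an adversarial post-processing operation to a victim model.
import torch
import torch.nn as nn

# Stand-in for a victim classifier; in practice this might be loaded from disk,
# e.g. model = torch.load("victim_model.pt")  (hypothetical path).
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),  # 10-class output
)

class SuppressClass(nn.Module):
    """Post-processing op that removes the model's ability to predict one class."""
    def __init__(self, class_idx: int):
        super().__init__()
        self.class_idx = class_idx

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        logits = logits.clone()
        logits[:, self.class_idx] = float("-inf")  # class can never win the argmax
        return logits

# Append the adversarial layer to the architecture and overwrite the artifact.
modified = nn.Sequential(model, SuppressClass(class_idx=3))
# torch.save(modified, "victim_model.pt")  # hypothetical re-save step

x = torch.randn(4, 32)
assert (modified(x).argmax(dim=1) != 3).all()  # class 3 is never predicted
```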

The effects could include removing the ability to predict certain classes, adding erroneous operations to increase computation costs, or degrading performance. Additionally, a separate adversary-defined network could be injected into the computation graph; this network can alter the model's behavior depending on the input, effectively creating a backdoor.
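
As an illustration of the injected-network case, the sketch below wires a small adversary-defined trigger detector into the forward pass of a victim model, assuming PyTorch; the trigger pattern, class indices, and network sizes are hypothetical choices made for the example.

```python
# Sketch: injecting an adversary-defined subnetwork as an input-conditioned backdoor.
import torch
import torch.nn as nn

class BackdooredModel(nn.Module):
    def __init__(self, victim: nn.Module, target_class: int):
        super().__init__()
        self.victim = victim                 # original, unmodified network
        self.trigger_net = nn.Linear(32, 1)  # adversary-defined trigger detector
        self.target_class = target_class
        # Weights chosen so the detector fires only on inputs carrying a
        # specific pattern (here: a very large value in the first feature).
        with torch.no_grad():
            self.trigger_net.weight.zero_()
            self.trigger_net.weight[0, 0] = 1.0
            self.trigger_net.bias.fill_(-20.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.victim(x)
        gate = torch.sigmoid(self.trigger_net(x))  # ~0 on clean input, ~1 on trigger
        boost = torch.zeros_like(logits)
        boost[:, self.target_class] = 1.0
        return logits + 1e3 * gate * boost          # trigger forces the target class

victim = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
backdoored = BackdooredModel(victim, target_class=7)

clean = torch.randn(1, 32)
triggered = clean.clone()
triggered[0, 0] = 50.0  # embed the trigger pattern
print(backdoored(clean).argmax(dim=1), backdoored(triggered).argmax(dim=1))
```

On clean inputs the gate stays near zero and the victim's predictions pass through unchanged, while the trigger pattern drives the gate to one and forces the target class, which is why this kind of injected subnetwork behaves as a backdoor.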

Version: 0.1.0

Created At: 2025-07-23 10:23:39 -0400

Last Modified At: 2025-07-23 10:23:39 -0400
