AI Supply Chain Compromise

Type: technique

Description: Adversaries may gain initial access to a system by compromising the unique portions of the ML supply chain. This could include hardware, data and its annotations, parts of the ML software stack, or the model itself. In some instances, the attacker may need secondary access to fully carry out an attack using the compromised components of the supply chain.
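
When the compromised component is the model itself, secondary access is often unnecessary: serialization formats such as Python's pickle, still widely used for model checkpoints, execute attacker-chosen code the moment the artifact is deserialized. Below is a minimal, benign sketch of this mechanism; the class name and payload are illustrative, not drawn from any real incident.

```python
import pickle


class MaliciousArtifact:
    """Stand-in for a trojaned model file; the name is illustrative."""

    def __reduce__(self):
        # pickle calls __reduce__ to learn how to rebuild the object;
        # returning (callable, args) makes the callable run at load time.
        return (print, ("payload executed during model deserialization",))


# "Attacker" side: serialize the booby-trapped object, as one would a model.
blob = pickle.dumps(MaliciousArtifact())

# "Victim" side: merely loading the untrusted artifact runs the payload,
# before any model weights are ever used.
pickle.loads(blob)
```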

Examples include:

  • Compromising hardware such as GPUs, TPUs, or embedded devices used by AI systems.
  • Targeting AI software frameworks, container registries, and open source implementations of algorithms.
  • Poisoning data sources, including large open source datasets, or compromising private datasets during the labeling phase.
  • Compromising models by introducing malware or adversarial techniques into open source models used for fine tuning (a digest-pinning sketch follows this list).
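
Because each of these vectors ultimately delivers a tampered artifact (a package, dataset, or checkpoint) to the consumer, a common detection step is pinning a known-good digest before anything is loaded. Below is a minimal sketch, assuming the publisher distributes digests through a channel independent of the download itself; the path and digest are hypothetical placeholders.

```python
import hashlib


def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Hypothetical values: in practice the pinned digest would come from the
# publisher out of band (signed release notes, a registry signature, etc.).
ARTIFACT_PATH = "models/backbone.safetensors"
EXPECTED_DIGEST = "0000000000000000000000000000000000000000000000000000000000000000"

if sha256_of(ARTIFACT_PATH) != EXPECTED_DIGEST:
    raise RuntimeError("artifact failed integrity check; refusing to load it")
```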

Version: 0.1.0

Created At: 2025-07-23 10:23:39 -0400

Last Modified At: 2025-07-23 10:23:39 -0400


Related Tactics

  • --> Initial Access (tactic): Compromising machine learning supply chains to gain unauthorized access or introduce malicious components.