Exfiltration via ML Inference API

Type: technique

Description: Adversaries may exfiltrate private information via the ML model inference API. ML models have been shown to leak private information about their training data. The model itself may also be extracted for the purposes of ML Intellectual Property Theft.

Exfiltration of information relating to private training data raises privacy concerns. Private training data may include personally identifiable information or other protected data.
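
The sketch below illustrates the training-data leakage path: a confidence-threshold membership-inference probe against a generic REST-style inference endpoint. The endpoint URL, request/response shapes, threshold value, and example record are illustrative assumptions, not the API of any specific product.

```python
# Minimal sketch of a confidence-threshold membership-inference probe against a
# hypothetical inference API. Endpoint, payload shape, and threshold are assumptions.
import requests

API_URL = "https://victim.example.com/v1/predict"  # hypothetical endpoint
CONFIDENCE_THRESHOLD = 0.95  # in practice tuned by the adversary, e.g. on shadow models


def query_confidence(record: dict) -> float:
    """Send one record to the inference API and return the top-class probability."""
    response = requests.post(API_URL, json={"instances": [record]}, timeout=10)
    response.raise_for_status()
    probabilities = response.json()["predictions"][0]  # assumed response shape
    return max(probabilities)


def likely_training_member(record: dict) -> bool:
    """Flag records the model is unusually confident about as probable training members."""
    return query_confidence(record) >= CONFIDENCE_THRESHOLD


if __name__ == "__main__":
    candidate = {"age": 42, "zip": "02139", "diagnosis_code": "E11"}  # illustrative record
    print("probable training member:", likely_training_member(candidate))
```

Unusually high prediction confidence on a candidate record is one common membership signal; more sophisticated attacks calibrate the threshold per class or per record using shadow models trained on similar data.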
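The second sketch covers the ML Intellectual Property Theft path: query-based model extraction, where attacker-chosen inputs are labeled by the victim's inference API and a local surrogate is fit to the harvested pairs. Again, the endpoint, query budget, and input dimensionality are hypothetical placeholders.

```python
# Minimal sketch of query-based model extraction: label synthetic inputs with a
# hypothetical inference API, then train a local surrogate on the harvested pairs.
import numpy as np
import requests
from sklearn.tree import DecisionTreeClassifier

API_URL = "https://victim.example.com/v1/predict"  # hypothetical endpoint
N_QUERIES = 1000   # query budget; real attacks balance fidelity against rate limits
N_FEATURES = 20    # assumed input dimensionality


def label_with_victim(inputs: np.ndarray) -> np.ndarray:
    """Ask the victim inference API to label each synthetic input."""
    labels = []
    for row in inputs:
        response = requests.post(API_URL, json={"instances": [row.tolist()]}, timeout=10)
        response.raise_for_status()
        probabilities = response.json()["predictions"][0]  # assumed response shape
        labels.append(int(np.argmax(probabilities)))
    return np.array(labels)


def extract_surrogate() -> DecisionTreeClassifier:
    """Train a local stand-in for the victim model from the stolen query/label pairs."""
    synthetic_inputs = np.random.uniform(0.0, 1.0, size=(N_QUERIES, N_FEATURES))
    stolen_labels = label_with_victim(synthetic_inputs)
    surrogate = DecisionTreeClassifier(max_depth=8)
    surrogate.fit(synthetic_inputs, stolen_labels)
    return surrogate
```

The surrogate approximates the victim model's decision behavior without access to its weights or training data, which is why inference APIs are treated as an exfiltration channel for the model itself.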

Version: 0.1.0

Created At: 2025-03-04 10:27:40 -0500

Last Modified At: 2025-03-04 10:27:40 -0500


External References

  • --> Exfiltration (tactic): Exfiltrating data by exploiting machine learning inference APIs to extract sensitive information.