Malicious AI Models on Hugging Face Threaten Users' Machines

By Admin | 2024-03-02 | Cyber Attack

JFrog's security team uncovered over 100 malicious AI models on the Hugging Face platform that can execute code on users' machines and plant persistent backdoors. Despite Hugging Face's security measures, including malware and pickle scanning, these models pose serious risks of data breaches and espionage.

The models, found in both PyTorch and TensorFlow Keras formats, were identified through JFrog's scanning of the platform. One PyTorch model, uploaded by a user named "baller423", could establish a reverse shell to a specified host, evading detection by hiding the malicious payload inside Python's pickle serialization, the mechanism PyTorch's default checkpoint format relies on.
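The attack hinges on a long-known property of pickle: deserializing a pickled object can run arbitrary code via the object's `__reduce__` hook, so simply loading a model file is enough to trigger the payload. The sketch below is a deliberately harmless illustration of that mechanism, with `print` standing in for the shell command an attacker would use; it is not the payload JFrog found, only a minimal demonstration of why pickle-based model files are dangerous.

```python
import pickle

class CodeOnLoad:
    """Pickle asks __reduce__ how to rebuild this object; whatever
    callable it returns is executed the instant the bytes are
    deserialized -- no method of the object ever needs to be called."""
    def __reduce__(self):
        # A malicious model would return something like
        # (os.system, ("<reverse-shell command>",)). This stand-in
        # is harmless and only proves that execution happens.
        return (print, ("arbitrary code executed during unpickling",))

payload = pickle.dumps(CodeOnLoad())
pickle.loads(payload)  # prints the message: code runs on load, not on use
```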

While some of these uploads may be part of security research, the danger lies in the public availability of the harmful models themselves. JFrog urges heightened vigilance and proactive measures to protect against such threats, emphasizing the need for greater awareness and discussion of the security risks posed by AI/ML models.
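One concrete precaution users can take today (an illustrative suggestion, not a recommendation from JFrog's report): PyTorch's `torch.load` accepts a `weights_only=True` flag that restricts the unpickler to plain tensor data and rejects arbitrary callables, so a `__reduce__`-based payload raises an error instead of executing. The file path below is a placeholder.

```python
import torch

# weights_only=True (available since PyTorch 1.13) limits unpickling to
# tensors and primitive containers, blocking code-execution payloads.
state_dict = torch.load("model.pt", weights_only=True)

# Alternatively, the safetensors format stores raw tensor bytes with no
# code-execution path at all:
# from safetensors.torch import load_file
# state_dict = load_file("model.safetensors")
```

Treating downloaded model files as untrusted input, the same way one would treat an unknown executable, is the broader habit these mitigations encourage.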