Malicious Hugging Face model masquerading as OpenAI release hits 244K downloads
Source: CSO Online
Executive Summary
A malicious Hugging Face repository posing as an OpenAI release delivered infostealer malware to Windows systems and logged 244,000 downloads before being removed, raising fresh concerns about how enterprises source and validate AI models from public repositories. The repository, named Open-OSS/privacy-filter, impersonated OpenAI’s legitimate Privacy Filter release, copied its model card almost word-for-word, and included a malicious loader.py file that fetched and executed credential-stealing malware on Windows hosts.
Analysis
A malicious Hugging Face repository posing as an OpenAI release delivered infostealer malware to Windows systems and logged 244,000 downloads before being removed, raising fresh concerns about how enterprises source and validate AI models from public repositories. The repository, named Open-OSS/privacy-filter, impersonated OpenAI’s legitimate Privacy Filter release, copied its model card almost word-for-word, and included a malicious loader.py file that fetched and executed credential-stealing malware on Windows hosts, AI security firm HiddenLayer said in a research advisory.

“The repository reached the #1 trending position on Hugging Face with approximately 244K downloads and 667 likes in under 18 hours, numbers that were almost certainly artificially inflated to make the repository appear legitimate,” the advisory added.

The incident highlights growing concerns that public AI model registries are emerging as a new software supply-chain risk for enterprises, particularly as developers and data scientists increasingly clone open-source models directly into corporate environments with access to source code, cloud credentials, and internal systems.

The README accompanying the fake model diverged from the legitimate project in one key area, instructing users to run start.bat on Windows or execute python loader.py on Linux and macOS.

Researchers have previously found malicious code hidden inside Pickle-serialised model files on Hugging Face that bypassed the platform’s scanners. They have also warned that the AI supply chain is lagging behind traditional software in oversight and tooling.

Malicious loader disguised as a legitimate model setup

According to HiddenLayer, the loader.py script first executes decoy code that resembles a legitimate AI model loader before launching a concealed infection chain.
The script disabled SSL verification, decoded a base64-encoded URL pointing to the public JSON hosting service jsonkeeper.com, retrieved remote payload instructions, and passed commands to PowerShell. “Using jsonkeeper[.]com as the C2 channel lets the attacker rotate the payload without modifying the repository,” the researchers wrote.

The resulting PowerShell command downloaded an additional batch file from an attacker-controlled domain and established persistence by creating a scheduled task designed to mimic a legitimate Microsoft Edge update process.

The infection chain ultimately deployed a Rust-based infostealer targeting Chromium- and Firefox-derived browsers, Discord local storage, cryptocurrency wallets, FileZilla configurations, and host system information, the advisory said. The malware also attempted to disable the Windows Antimalware Scan Interface (AMSI) and Event Tracing for Windows (ETW) while checking for sandbox and virtual machine environments to evade analysis.

Part of a broader AI supply chain operation

HiddenLayer, in its advisory, said that it identified six additional Hugging Face repositories uploaded under a separate account that used nearly identical loader logic and shared infrastructure with the campaign. The researchers also linked elements of the operation to earlier software supply-chain attacks involving npm typosquatting campaigns and fake AI packages distributed through PyPI. The shared infrastructure “suggests these campaigns are possibly linked and likely part of a broader supply chain operation targeting open-source ecosystems,” HiddenLayer wrote.

The incident follows earlier warnings from researchers about malicious code embedded inside Pickle-serialized AI model files on Hugging Face, as well as separate campaigns involving poisoned AI SDKs and fake OpenClaw installers.
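The loader behaviors described above (disabled SSL verification, a base64-decoded C2 URL, and a handoff to PowerShell) are the kind of patterns a review pipeline can flag before anyone runs a repository script. A minimal heuristic scanner sketch follows; the pattern list is illustrative, drawn only from the behaviors named in the advisory, and is not HiddenLayer's detection logic:

```python
import base64
import re

# Illustrative heuristics based on the behaviors the advisory describes:
# disabled SSL verification, base64-decoded URLs, and PowerShell handoff.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"_create_unverified_context|CERT_NONE|verify\s*=\s*False"),
     "SSL verification disabled"),
    (re.compile(r"base64\.b64decode"), "base64 decoding (possible hidden URL)"),
    (re.compile(r"powershell(\.exe)?", re.IGNORECASE), "PowerShell invocation"),
    (re.compile(r"subprocess|os\.system|os\.popen"), "shell command execution"),
]

def scan_loader_source(source: str) -> list[str]:
    """Return human-readable findings for a repo script such as loader.py."""
    findings = []
    for pattern, description in SUSPICIOUS_PATTERNS:
        if pattern.search(source):
            findings.append(description)
    # Flag any quoted base64 literal that decodes to a URL, as in this campaign.
    for blob in re.findall(r"['\"]([A-Za-z0-9+/=]{16,})['\"]", source):
        try:
            decoded = base64.b64decode(blob).decode("utf-8", errors="ignore")
        except Exception:
            continue
        if decoded.startswith(("http://", "https://")):
            findings.append(f"base64-encoded URL: {decoded}")
    return findings
```

A static check like this is a triage aid, not a guarantee: it would surface this campaign's loader, but a determined attacker can obfuscate past simple regexes, which is why the analysts quoted below argue for controls at the registry layer as well.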
Traditional security controls are falling short

The incident also exposes limitations in existing software composition analysis and application security tooling when applied to AI artifacts, analysts said.

“Traditional SCA was designed to inspect dependency manifests, libraries, and container images, not the increasingly complex behaviors associated with AI development workflows,” said Sakshi Grover, senior research manager for cybersecurity services at IDC. “It is far less effective at identifying malicious loader logic concealed within seemingly legitimate AI repositories.”

Jaishiv Prakash, director analyst at Gartner, said enterprises now need dedicated governance controls at the AI registry layer itself. “Enterprises must establish dedicated controls for model sources, approved versions, access, and runtime validation at the registry layer,” Prakash said, adding that model repositories distribute executable artifacts and embedded logic that often fall outside the effective scope of traditional SCA tools.

IDC’s November 2025 FutureScape report predicts that by 2027, 60% of enterprises deploying agentic AI systems will require an AI bill of materials to support continuous vulnerability scanning and compliance assurance, Grover said.

What should enterprises do now

HiddenLayer urged affected users to treat impacted systems as fully compromised and prioritize reimaging over cleanup efforts. “If you cloned Open-OSS/privacy-filter and executed start.bat, python loader.py, or any file from the repository on a Windows host, treat the system as fully compromised,” the advisory said.

Browser sessions should also be considered compromised even where passwords were not stored locally, the researchers added, because stolen session cookies can bypass multifactor authentication protections.
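The registry-layer control Prakash describes, pinning approved model sources and versions before they reach developers, can be sketched as a simple allowlist gate. The repo IDs and revision hashes below are illustrative placeholders, not real pinned releases:

```python
from dataclasses import dataclass

# Hypothetical allowlist: only models from approved publishers, at pinned
# revisions, may be pulled into the corporate environment.
APPROVED = {
    "openai/privacy-filter": "a1b2c3d",          # illustrative repo id/revision
    "meta-llama/Llama-3.1-8B": "f00dbee",        # illustrative
}

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str

def check_model_source(repo_id: str, revision: str) -> PolicyDecision:
    """Gate a model download against the allowlist before it is fetched."""
    pinned = APPROVED.get(repo_id)
    if pinned is None:
        return PolicyDecision(False, f"{repo_id} is not an approved model source")
    if revision != pinned:
        return PolicyDecision(
            False,
            f"{repo_id}@{revision} does not match pinned revision {pinned}",
        )
    return PolicyDecision(True, "approved")
```

Note that a lookalike such as Open-OSS/privacy-filter fails this check on the publisher name alone, even though its model card was copied almost word-for-word from the legitimate project.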
The company also recommended blocking listed indicators of compromise, rotating credentials, invalidating active sessions, and conducting historical network hunts for connections tied to the campaign. Hugging Face confirmed to HiddenLayer that the repository violated its terms of service and removed it from the platform, according to the advisory.
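The historical network hunt HiddenLayer recommends can be as simple as matching outbound destinations against the campaign's indicators. A sketch follows, assuming space-delimited log lines ending in a destination hostname; jsonkeeper.com is named in the advisory, while the second entry is a placeholder for the attacker-controlled batch-file host, which the article does not name:

```python
# Indicators of compromise to hunt for. jsonkeeper.com is cited in the
# advisory; the second domain is an illustrative placeholder.
IOC_DOMAINS = {
    "jsonkeeper.com",
    "attacker-controlled.example",
}

def hunt_log(lines):
    """Return log lines whose final field (destination host) matches an IoC."""
    hits = []
    for line in lines:
        fields = line.split()
        host = fields[-1].lower() if fields else ""
        if any(host == d or host.endswith("." + d) for d in IOC_DOMAINS):
            hits.append(line)
    return hits
```

The subdomain check (`endswith("." + d)`) matters because a C2 channel hosted on a public service may appear under service subdomains rather than the bare domain.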