Promptfoo Open-Sources ModelAudit After Finding Critical CVSS 10.0 Bypass in Hugging Face Scanner
Key Takeaways
- Promptfoo discovered a CVSS 10.0 universal bypass vulnerability in Hugging Face's model scanning infrastructure and filed 7 security advisories against existing scanners
- ModelAudit is now open-sourced under MIT license, scanning 42+ ML model formats for security vulnerabilities without executing models or requiring ML framework dependencies
- The tool detected malicious models bypassing all major scanners (VirusTotal, JFrog, ClamAV, picklescan), including a TFLite file with operators for arbitrary file access and Python execution
Summary
Promptfoo has released ModelAudit, an open-source security scanner for machine learning model files, after discovering critical vulnerabilities in existing scanning infrastructure including a CVSS 10.0 universal bypass in Hugging Face's model scanning pipeline. The company filed seven GitHub Security Advisories (GHSAs) against current scanners and validated ModelAudit against thousands of real models with zero false positives.
ModelAudit is a static scanner that detects unsafe loading behaviors, deserialization remote code execution (RCE) vulnerabilities, archive exploits, and known CVEs across more than 42 model formats without executing the model or importing ML frameworks. The tool addresses a critical security gap: while development teams routinely scan pip packages with dependency scanners, most perform no equivalent security checks when downloading models from public registries like Hugging Face before calling torch.load().
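The deserialization risk described above is easy to demonstrate. The sketch below is a benign stand-in for a booby-trapped checkpoint: Python's pickle protocol lets any object supply a `__reduce__` method, and the unpickler will invoke the callable it returns. An attacker ships `os.system` or similar; here the payload is a harmless `eval` so the effect is visible. The class name is illustrative, not from ModelAudit.

```python
import pickle

class MaliciousCheckpoint:
    """Benign stand-in for a booby-trapped model file: __reduce__ lets a
    pickle invoke any callable at load time (real attacks use os.system,
    exec, etc.)."""
    def __reduce__(self):
        # The unpickler calls eval("7 * 6"): attacker-chosen code runs
        # merely by loading the file, before the "model" is ever used.
        return (eval, ("7 * 6",))

blob = pickle.dumps(MaliciousCheckpoint())
result = pickle.loads(blob)   # loading alone executes the payload
print(result)                 # 42
```

This is why torch.load() on an untrusted checkpoint is equivalent to running untrusted code, and why scanning must happen before the load, not after.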
During development, the Promptfoo team discovered models that bypass every scanner in Hugging Face's pipeline, including a TFLite file containing four malicious custom operators capable of arbitrary file read/write and Python execution. The scanning engine, released under an MIT license, runs entirely offline and supports formats including PyTorch, pickle, Keras, ONNX, TensorFlow, GGUF, and 34+ others. It outputs results in text, JSON, or SARIF format for CI/CD integration and includes SBOM generation, license detection, and secret scanning capabilities.
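Static scanning of this kind can be sketched with the standard library alone. The snippet below illustrates the general opcode-inspection technique used by pickle scanners (it is not Promptfoo's actual implementation, and the opcode list is an assumption): walk the pickle opcode stream without ever unpickling, and flag opcodes that import names (GLOBAL/STACK_GLOBAL) or invoke callables (REDUCE).

```python
import io
import pickle
import pickletools

# Opcodes that can pull in and call arbitrary callables during unpickling.
# This list is illustrative; production scanners maintain richer policies.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def scan_pickle(data: bytes) -> list[str]:
    """Statically list risky opcodes in a pickle without executing it."""
    findings = []
    for opcode, arg, pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in SUSPICIOUS:
            findings.append(f"{opcode.name} at offset {pos} (arg={arg!r})")
    return findings

benign = pickle.dumps({"weights": [0.1, 0.2]})
print(scan_pickle(benign))        # [] -- plain containers import nothing

class Payload:
    def __reduce__(self):
        return (eval, ("1",))

print(scan_pickle(pickle.dumps(Payload())))  # flags the import + REDUCE
```

Because the scan never calls pickle.loads, it is safe to run on hostile files, which is the property that lets a tool like ModelAudit operate offline in CI.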
The release targets platform and application security teams that gate model artifacts in CI/CD pipelines, as well as any organization pulling models from public registries or running third-party checkpoints. Model files can execute code at load time through mechanisms like pickle's __reduce__ method, creating significant security risks that have been largely unaddressed in standard ML workflows.
- Model files can execute arbitrary code at load time through deserialization mechanisms, creating a security gap comparable to unscanned pip packages in ML workflows
- ModelAudit runs entirely offline and integrates with CI/CD pipelines via SARIF output, supporting remote pulls from S3, GCS, Hugging Face Hub, MLflow, and other registries
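A CI gate over SARIF output can be sketched as follows. SARIF is a standard JSON format whose top level contains a list of runs, each with a list of results; the report shape below follows that standard, but the rule ID, message text, and the idea of failing the build on any finding are assumptions for illustration, not ModelAudit's documented interface.

```python
import json

def count_findings(sarif: dict) -> int:
    """Total results across all runs in a SARIF 2.1.0 report."""
    return sum(len(run.get("results", [])) for run in sarif.get("runs", []))

# Hypothetical report a scanner might emit; in CI you would instead do
# sarif = json.load(open("modelaudit.sarif")) and exit nonzero if n > 0.
report = {
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "modelaudit"}},
        "results": [{"ruleId": "pickle-reduce",
                     "message": {"text": "REDUCE opcode found"}}],
    }],
}

n = count_findings(report)
print(f"{n} finding(s)")
```

Keeping the gate as a dumb count over a standard format means the same job works unchanged if the scanner behind it is swapped out.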
Editorial Opinion
The discovery of a CVSS 10.0 bypass in Hugging Face's scanner reveals a dangerous blind spot in ML security practices. While the industry has matured considerably around software supply chain security for traditional code dependencies, the ML ecosystem still treats model files as inert data despite their inherent ability to execute code at load time. Promptfoo's decision to open-source ModelAudit rather than commercialize these critical findings demonstrates a commendable commitment to ecosystem security and should accelerate adoption of model scanning as a standard practice in ML operations.