Workshop Labs Launches Silo: Private Multi-GPU Post-Training and Inference Platform for Frontier Models
Key Takeaways
- Silo enables private AI model post-training and inference by isolating customer data in hardware-protected TEEs, making it inaccessible to Workshop Labs, cloud providers, or other third parties
- The platform achieves less than 10% performance overhead while supporting trillion-parameter MoE models across multiple GPUs, a significant advancement over existing single-GPU TEE offerings
- Workshop Labs is candid about the platform's security limitations and publicly fingerprints every build on Sigstore for independent verification, though a formal audit is still pending
Summary
Workshop Labs, in collaboration with Tinfoil, has announced Silo, a production-ready private post-training and inference stack designed to process frontier open-weight models while keeping customer data completely private. The platform runs models inside hardware-protected trusted execution environments (TEEs) on multi-GPU setups, ensuring that neither Workshop Labs nor cloud providers can access customer data. The system uses cryptographic attestation to prove what code is running, encrypts all data at rest with customer-managed keys, and publicly fingerprints every build on Sigstore for transparent verification.
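The build-fingerprinting step can be sketched minimally. The `fingerprint` and `verify` helpers below are illustrative only, not Silo's actual tooling; real Sigstore verification additionally checks signatures and transparency-log inclusion proofs, while this covers just the digest comparison:

```python
import hashlib
import hmac

def fingerprint(artifact: bytes) -> str:
    """Compute a SHA-256 digest of a build artifact -- the kind of
    fingerprint a transparency log such as Sigstore's records."""
    return hashlib.sha256(artifact).hexdigest()

def verify(artifact: bytes, published_digest: str) -> bool:
    """Check a downloaded artifact against its published fingerprint,
    using a constant-time comparison."""
    return hmac.compare_digest(fingerprint(artifact), published_digest)
```

A client that independently recomputes the digest of the code it is about to trust, and matches it against the publicly logged value, does not have to take the operator's word for what is running.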
The technical implementation includes custom kernel patches and GPU mesh attestation to enable trillion-parameter MoE models across multiple GPUs—extending beyond the single-GPU TEE limitations currently offered by major cloud providers like GCP and Azure. Workshop Labs reports that the performance overhead for both post-training and inference is under 10%, with their proprietary Trellis post-training stack adding only 11 minutes to a 2-hour training run. The company has been transparent about security limitations, including acknowledged gaps around side-channel attacks, DRAM encryption vulnerabilities, and dependencies on GitHub CI, with a formal security audit still pending.
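The reported Trellis figure is consistent with the sub-10% claim, as a quick check shows (the numbers below are taken from the announcement; the percentage is derived arithmetic, not a reported benchmark):

```python
# 11 extra minutes on a 2-hour (120-minute) post-training run
baseline_min = 120
added_min = 11
overhead = added_min / baseline_min
print(f"{overhead:.1%}")  # prints "9.2%", under the 10% threshold
```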
Editorial Opinion
Workshop Labs' Silo addresses a critical gap in AI infrastructure by making privacy a default feature rather than a premium add-on, which is increasingly important as AI capabilities expand. The achievement of sub-10% performance overhead with multi-GPU support demonstrates that strong privacy guarantees don't require sacrificing efficiency, a major milestone for enterprise adoption. Just as notable, the candid disclosure of remaining security challenges, from side-channel vulnerabilities to pending audits, sets a healthier precedent for the industry than overstated claims of absolute security.