BotBeat

RESEARCH · Covenant · 2026-03-11

Covenant-72B: First Large-Scale LLM Trained Through Trustless Distributed Network With Open Participation

Key Takeaways

  • Covenant-72B represents the largest globally distributed pre-training run with open participation, in terms of both compute and model scale
  • Blockchain-based protocols enabled trustless, permissionless participation from contributors worldwide, removing traditional whitelisting barriers
  • The SparseLoCo optimizer proved effective at handling dynamic participation, allowing peers to join and leave freely without disrupting training
Source: Hacker News (https://arxiv.org/abs/2603.08163)

Summary

Researchers have unveiled Covenant-72B, a 72-billion parameter large language model trained through the largest collaborative globally distributed pre-training run to date, featuring open, permissionless participation enabled by blockchain technology. The model was pre-trained on approximately 1.1 trillion tokens using SparseLoCo, a communication-efficient optimizer that supports dynamic participation with peers freely joining and leaving the network. Unlike previous distributed training efforts limited to whitelisted participants, Covenant-72B demonstrates that truly democratized, non-whitelisted participation in foundation model development is feasible at scale. The model achieves competitive performance compared to fully centralized models trained with similar or higher compute budgets, marking a significant milestone in decentralized AI development.
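The summary describes SparseLoCo only at a high level: communication-efficient local optimization with sparse synchronization, tolerant of peers joining and leaving between rounds. As a rough illustration of that general idea (not the paper's actual algorithm), the toy sketch below has each peer run several local SGD steps, then contribute only the top-k components of its weight change, averaged over whichever peers happen to be present that round. All names, constants, and the quadratic toy loss are hypothetical.

```python
# Toy sketch of local-update training with top-k sparse communication and
# dynamic peer membership. Illustrative only; the real SparseLoCo optimizer
# described in the paper is far more involved.

DIM = 8   # toy model size
H = 4     # local steps per communication round
K = 2     # top-k components each peer communicates per round
LR = 0.1  # local learning rate

def local_steps(weights, data):
    """Run H plain SGD steps on a toy per-peer quadratic loss (w - data)^2."""
    w = list(weights)
    for _ in range(H):
        for i in range(DIM):
            grad = 2.0 * (w[i] - data[i])
            w[i] -= LR * grad
    return w

def sparsify(delta, k):
    """Keep only the k largest-magnitude components (top-k compression)."""
    top = sorted(range(len(delta)), key=lambda i: abs(delta[i]), reverse=True)[:k]
    return {i: delta[i] for i in top}

def round_step(global_w, peers):
    """One communication round over whichever peers are currently present.

    Each peer sends a sparse pseudo-gradient (its local weight change,
    top-k compressed); the coordinator averages per component over the
    peers that actually reported that component. Because the average is
    taken over the present peers only, churn does not stall training.
    """
    sums = [0.0] * DIM
    counts = [0] * DIM
    for data in peers:
        local_w = local_steps(global_w, data)
        delta = [lw - gw for lw, gw in zip(local_w, global_w)]
        for i, d in sparsify(delta, K).items():
            sums[i] += d
            counts[i] += 1
    return [gw + (sums[i] / counts[i] if counts[i] else 0.0)
            for i, gw in enumerate(global_w)]
```

Even in this toy, the key property the article highlights is visible: a round simply averages over whichever peers showed up, so a peer dropping out between rounds changes the averaging set rather than breaking the protocol.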

  • Competitive performance relative to centralized models trained with similar or higher compute budgets shows that democratized distributed training can reach state-of-the-art results at this scale

Editorial Opinion

Covenant-72B represents a watershed moment for democratizing AI development, proving that large-scale foundation models can be trained collaboratively across untrusted networks without sacrificing performance. By combining blockchain-based governance with sophisticated optimization techniques, this work challenges the centralized AI paradigm and opens new possibilities for global participation in building AI infrastructure. However, questions remain about practical accessibility, the computational requirements for participation, and how well the approach scales as additional untrusted peers join the network.

Large Language Models (LLMs) · Generative AI · MLOps & Infrastructure · Open Source


© 2026 BotBeat