BotBeat

Independent Research
RESEARCH · 2026-03-10

Covenant-72B: Researchers Demonstrate Trustless Distributed Pre-Training of 72B Parameter LLM Over the Internet

Key Takeaways

  • Covenant-72B successfully demonstrates trustless, peer-to-peer pre-training of a 72B parameter model over the internet without centralized coordination
  • The approach uses cryptographic and decentralized mechanisms to maintain security and integrity across distributed training nodes
  • This research could democratize large-scale LLM training by enabling collaborative efforts across independent parties without requiring trust relationships
Source: Hacker News (https://twitter.com/tplr_ai/status/2031388295972929720)

Summary

Researchers have pre-trained Covenant-72B, a 72-billion-parameter large language model, using a trustless peer-to-peer approach that distributes training over the internet without centralized infrastructure or trust between participants. The result demonstrates that large-scale LLM pre-training can be conducted collaboratively across geographically dispersed nodes, potentially democratizing access to a capability that has traditionally required enormous computational investment from well-funded organizations. The approach employs cryptographic mechanisms and decentralized coordination to preserve the integrity of model updates and to prevent malicious participants from corrupting the training run. In doing so, it challenges the current paradigm in which LLM pre-training is concentrated among a small number of well-resourced AI companies.
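The announcement does not describe the verification protocol, so the sketch below is purely illustrative: a commit-then-reveal check is one common pattern for trustless gradient aggregation, in which each peer publishes a hash of its update before revealing the update itself, so other nodes can reject anything swapped in after the fact. All names here (commit, verify_and_aggregate, the peer IDs) are hypothetical and not taken from Covenant-72B.

```python
# Illustrative sketch only: commit-then-reveal verification of peer
# gradient updates in a trustless training round. This is NOT the
# Covenant-72B protocol; all names are hypothetical.
import hashlib
import numpy as np

def commit(update: np.ndarray) -> str:
    """Digest a peer publishes before revealing its gradient update."""
    return hashlib.sha256(update.tobytes()).hexdigest()

def verify_and_aggregate(reveals: dict[str, tuple[np.ndarray, str]]) -> np.ndarray:
    """Average only the updates whose revealed bytes match the prior commitment."""
    honest = [u for u, c in reveals.values() if commit(u) == c]
    if not honest:
        raise ValueError("no verifiable updates this round")
    return np.mean(honest, axis=0)

# Toy round: two honest peers and one peer whose reveal does not match
# its earlier commitment (e.g., a poisoned update substituted late).
rng = np.random.default_rng(0)
g1, g2, g3 = (rng.normal(size=4).astype(np.float32) for _ in range(3))
reveals = {
    "peer-a": (g1, commit(g1)),
    "peer-b": (g2, commit(g2)),
    "peer-c": (g3 * 100.0, commit(g3)),  # tampered reveal -> rejected
}
print(verify_and_aggregate(reveals))  # averages peer-a and peer-b only
```

A commitment check like this only catches substitution; a deployable protocol would also need defenses against peers that commit to a malicious update in the first place, such as redundant computation, stake-based penalties, or outlier-robust aggregation.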

Editorial Opinion

Covenant-72B represents a significant conceptual advance in making large-scale AI development more accessible and decentralized. If this trustless approach proves practically viable and scalable, it could fundamentally reshape the AI landscape by enabling smaller organizations and researchers to participate in frontier-level model development. However, the real-world performance and efficiency of such distributed training methods compared to centralized approaches will need careful evaluation before declaring this a true alternative to current practices.
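
To make that efficiency caveat concrete, a back-of-envelope calculation (mine, not from the source) shows why naive gradient synchronization is prohibitive over consumer links, and why internet-scale training schemes typically lean on gradient compression, infrequent synchronization, or sparse updates:

```python
# Back-of-envelope estimate (not from the source): one naive full-gradient
# sync for a 72B-parameter model over a consumer internet link.
params = 72e9                # 72 billion parameters
bytes_per_param = 2          # fp16/bf16 gradients
payload_gb = params * bytes_per_param / 1e9   # 144 GB per sync
link_gbps = 1.0              # optimistic 1 Gbit/s home connection
seconds = payload_gb * 8 / link_gbps          # ~1,152 s
print(f"{payload_gb:.0f} GB per sync, ~{seconds / 60:.0f} min at {link_gbps} Gbit/s")
# -> 144 GB per sync, ~19 min at 1.0 Gbit/s; datacenter interconnects move
#    the same payload in seconds.
```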

Large Language Models (LLMs) · Machine Learning · Deep Learning · MLOps & Infrastructure · Open Source

More from Independent Research

Independent Research · RESEARCH · 2026-04-05
New Research Proposes Infrastructure-Level Safety Framework for Advanced AI Systems

Independent Research · RESEARCH · 2026-04-04
DeepFocus-BP: Novel Adaptive Backpropagation Algorithm Achieves 66% FLOP Reduction with Improved NLP Accuracy

Independent Research · RESEARCH · 2026-04-03
Research Reveals How Large Language Models Process and Represent Emotions

Suggested

Google / Alphabet · RESEARCH · 2026-04-05
Deep Dive: Optimizing Sharded Matrix Multiplication on TPU with Pallas

GitHub · PRODUCT LAUNCH · 2026-04-05
GitHub Launches Squad: Open Source Multi-Agent AI Framework to Simplify Complex Workflows

NVIDIA · RESEARCH · 2026-04-05
Nvidia Pivots to Optical Interconnects as Copper Hits Physical Limits, Plans 1,000+ GPU Systems by 2028
© 2026 BotBeat