BotBeat

Lumai
PRODUCT LAUNCH · 2026-04-29

Lumai Productizes Lens-Based Optical Computer for AI Inference

Key Takeaways

  • Lumai's optical computing system successfully runs billion-parameter AI models, the first commercial demonstration at scale
  • Iris Nova offers 50× GPU performance with 90% power reduction, directly addressing data center power constraints
  • Roadmap to cluster-scale deployment (Iris Tetra) by 2029 with 100 TOPS/W efficiency and 1 exaOPS capacity
Source: Hacker News, https://www.eetimes.com/lumai-productizes-lens-based-optical-computer/

Summary

British startup Lumai has productized its lens-based optical computer, marking the first successful demonstration of optical computing at scale for billion-parameter AI models. The system uses 1,024 laser light sources and an electronic display to perform matrix multiplication optically, claiming 50× the performance of today's GPUs with a 90% reduction in power consumption.

The Iris Nova inference server, containing a single first-generation optical engine, will be offered to hyperscale customers for evaluation. The system offloads 90% of the Llama workload to the optical domain, while a CPU handles non-linear operations and accuracy-sensitive parts of the algorithm. Lumai's approach uses industry-standard, customized components rather than requiring new materials, addressing the scalability concerns that have limited previous photonic computing solutions.
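The hybrid split described above can be sketched in a few lines. This is an illustrative stand-in, not Lumai's API: `optical_matmul` represents the analog matrix multiply performed by the optical engine, and the non-linearity stays in the digital domain on the CPU.

```python
import numpy as np

def optical_matmul(x, w):
    """Stand-in for the optical engine: the matrix multiply that would be
    performed in the analog optical domain (laser sources + display + lens)."""
    return x @ w

def cpu_nonlinearity(x):
    """Accuracy-sensitive non-linear op kept on the CPU (here, ReLU)."""
    return np.maximum(x, 0.0)

def hybrid_layer(x, w):
    """One transformer-style layer step: optical matmul, then CPU activation."""
    return cpu_nonlinearity(optical_matmul(x, w))

x = np.array([[1.0, -2.0, 3.0]])
w = np.eye(3)
y = hybrid_layer(x, w)  # negative entries clipped by the CPU-side ReLU
```

In a real deployment the expensive O(n²) matrix multiplies dominate, which is why routing them to the optical engine can cover roughly 90% of the workload while the cheap non-linearities remain digital.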

The company has announced an ambitious roadmap: Iris Nova deployments in test clusters by the end of 2026, followed by Iris Aura (multi-engine rack systems) and Iris Tetra (cluster-scale deployment planned for 2029), the latter capable of delivering 100 TOPS/W and 1 exaOPS within a 10 kW power budget. This iteration speed reflects Lumai's commitment to getting systems into customer hands for real-world evaluation.
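The two headline roadmap figures are internally consistent: 1 exaOPS delivered within a 10 kW budget works out to exactly the claimed 100 TOPS/W. A quick sanity check:

```python
# Roadmap figures for Iris Tetra (2029 target).
exa_ops = 1e18      # 1 exaOPS = 10^18 operations per second
power_w = 10_000    # 10 kW power budget

# Efficiency in TOPS/W (1 TOPS = 10^12 ops/s).
tops_per_watt = exa_ops / power_w / 1e12
print(tops_per_watt)  # → 100.0
```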

  • Hybrid architecture with CPU + optical engine provides flexibility for accuracy-sensitive operations while maintaining 90% optical workload for Llama

Editorial Opinion

Lumai's achievement represents a significant milestone in optical computing's journey from theoretical promise to practical commercial deployment. By running industry-standard models like Llama at scale with demonstrated power-efficiency gains, the company has moved beyond proof-of-concept into a critical validation phase. The aggressive roadmap to 1 exaOPS within 10 kW by 2029 is bold, but achieving it would fundamentally reshape data center economics for AI. It remains to be seen whether real-world deployments will match the lab demonstrations, and whether power efficiency alone will drive adoption across the industry.

Large Language Models (LLMs) · AI Hardware · Energy & Climate · Product Launch

© 2026 BotBeat