OpenAI's Stargate Project: $500 Billion AI Infrastructure Initiative Spans 7 US Sites with 9+ Gigawatts of Planned Capacity
Key Takeaways
- Stargate's seven US sites will collectively deliver 9+ GW of AI compute capacity—comparable to NYC's peak power demand and equivalent to 20 million H100 GPUs
- The Abilene, Texas facility is already operational at 0.6 GW with four of eight buildings functional; OpenAI recently scaled back expansion plans for this site in favor of distributing capacity across other locations
- Developers are using on-site natural gas microgrids and closed-loop cooling systems to overcome grid connection bottlenecks and address water usage concerns, balancing speed against cost
Summary
OpenAI, Oracle, and SoftBank are jointly executing Stargate, a $500 billion AI data center initiative spanning seven locations across the United States. The project aims to deliver over 9 gigawatts of computing capacity by 2029—equivalent to the peak power demand of New York City and sufficient to power approximately 20 million Nvidia H100 GPUs. The most advanced facility in Abilene, Texas, is already operational at 0.6 gigawatts, with six additional sites under active construction in Texas, New Mexico, Wisconsin, Michigan, and Ohio.
To address infrastructure challenges at gigawatt scale, Stargate developers are employing innovative solutions, including on-site natural gas plants at three facilities to bypass grid connection delays and closed-loop liquid cooling systems at six sites to mitigate water consumption concerns. These design choices reflect the practical trade-offs between accelerated timelines and increased capital costs. The project represents an unprecedented commitment to AI infrastructure in the US: SoftBank will own the hardware at the Milam County, Texas, and Ohio sites, while Oracle owns the hardware at the remaining locations, with all facilities serving OpenAI's computational workloads.
Editorial Opinion
Stargate represents a watershed moment in AI infrastructure deployment, signaling both the massive capital requirements and technical complexity of next-generation AI systems. The project's pragmatic approach, leveraging on-site power generation and advanced cooling to circumvent regulatory and grid constraints, demonstrates how companies are willing to absorb significant additional costs to accelerate timelines. However, this distributed model of seven sites across five states raises important questions about resource allocation, regional environmental impact, and whether such concentration of AI compute capacity serves broader US technological interests or primarily benefits a small group of companies.