BotBeat
PRODUCT LAUNCH · Rackspace · 2026-05-04

Rackspace Launches GPU-as-a-Service with Spot Instance Pricing in San Jose Expansion

Key Takeaways

  • NVIDIA H100 and A30 GPUs available via auction-based spot pricing, with cost savings of up to 90% compared to reserved instances
  • New San Jose data center brings Rackspace's global infrastructure to seven sites, providing lower latency for West Coast AI/ML workloads
  • OpenStack- and Kubernetes-based architecture enables true multi-cloud portability with consistent APIs across Rackspace's public cloud and private deployments
Source: Hacker News · https://siliconangle.com/2024/11/04/rackspace-offers-gpus-cloud-service-spot-instance-pricing/

Summary

Rackspace Technology announced the launch of a new GPU-as-a-Service offering that provides enterprises access to NVIDIA's flagship H100 and A30 GPUs through an auction-based spot instance pricing model. The announcement includes expansion to a new data center in San Jose, California, bringing Rackspace's global footprint to seven locations and reducing latency for West Coast customers. The service targets organizations seeking cost-effective GPU compute capacity for AI and machine learning workloads without the upfront capital investment required to purchase GPUs directly, which can exceed $25,000 per unit.

The GPU service operates within Rackspace's existing spot instance framework, which uses open-market auction mechanics to determine pricing. Spot instances offer significant cost savings, up to 90% below reserved instance pricing, though customers accept the risk of interruption when Rackspace reclaims capacity for higher bidders. The service is built on OpenStack, an open-source cloud computing platform, and uses Kubernetes for container orchestration, emphasizing multi-cloud compatibility and workload portability.
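The article does not detail Rackspace's exact auction rules, but the general spot-market mechanic it describes can be sketched as a uniform-price auction: rank bids, fill available GPU capacity from the top, and charge all winners the lowest winning bid. The function and bid values below are illustrative assumptions, not Rackspace's actual mechanism:

```python
def clear_spot_auction(bids, capacity):
    """Illustrative uniform-price spot auction (not Rackspace's actual rules).

    bids: list of (customer, max_hourly_bid_usd); capacity: GPUs available.
    Returns (clearing_price, winners). The highest bids win, and the
    clearing price is the lowest winning bid, so all winners pay one rate.
    """
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    winners = ranked[:capacity]
    if not winners:
        return 0.0, []
    clearing_price = winners[-1][1]
    return clearing_price, [customer for customer, _ in winners]

# Hypothetical bids (USD/hour) competing for 3 available GPUs:
bids = [("acme", 2.10), ("beta", 1.75), ("core", 3.00), ("dyna", 1.20)]
price, winners = clear_spot_auction(bids, capacity=3)
# "core", "acme", and "beta" win and all pay the clearing price of 1.75/hr.
# "dyna" is outbid: in a real spot market its instance would be reclaimed
# (interrupted) until demand falls back within its maximum bid.
```

This is the interruption risk the summary mentions: a running workload survives only as long as its bid stays at or above the market-clearing price.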

Rackspace positions the offering as an open alternative to hyperscalers, emphasizing consistent APIs across private and public cloud environments. The approach provides customers with flexibility to choose their own GPU software solutions or use NVIDIA's GPU operator for native functionality, avoiding vendor lock-in and enabling true workload portability between Rackspace's public cloud and private OpenStack deployments.
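On the Kubernetes side, a workload typically consumes GPUs exposed by NVIDIA's GPU operator through the standard `nvidia.com/gpu` extended resource. The manifest below is a generic illustrative sketch, not a Rackspace-specific configuration; the pod name and container image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: training-job            # hypothetical workload name
spec:
  restartPolicy: OnFailure      # spot capacity can be reclaimed; let the
                                # workload restart when a GPU is available again
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.04-py3   # example NGC image
      resources:
        limits:
          nvidia.com/gpu: 1     # schedules onto a node with a free GPU
```

Because the same resource request works on any cluster running the GPU operator, the manifest is portable between Rackspace's public cloud and a private OpenStack deployment, which is the portability argument the announcement makes.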

  • Service enables cost-effective GPU access for variable workloads that can tolerate brief interruptions, reducing total cost of ownership for AI/ML initiatives

Editorial Opinion

Rackspace's GPU-as-a-Service addresses a critical cost barrier for enterprises pursuing AI and machine learning initiatives. By combining NVIDIA's leading GPU hardware with auction-based spot pricing and multi-cloud flexibility, Rackspace offers a compelling alternative to hyperscaler lock-in—particularly valuable for organizations with variable workload patterns that can tolerate brief interruptions. The move signals renewed competitive pressure on public cloud giants in the GPU infrastructure space.

MLOps & Infrastructure · AI Hardware · Market Trends · Product Launch

© 2026 BotBeat