Anthropic Signs SpaceX Compute Deal: 220,000 GPUs, Doubled Rate Limits


Anthropic just signed a deal with SpaceX to take over all compute capacity at the Colossus 1 data center – more than 300 megawatts and 220,000 NVIDIA GPUs coming online within the next month. For Claude users, this isn't just a headline: it translates into immediate, concrete changes effective May 6, 2026.

What Changed for Claude Users – Right Now

Three changes took effect immediately when the deal was announced:

  • Claude Code rate limits doubled – The 5-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans are now 2x what they were yesterday.
  • Peak hours restriction removed – Pro and Max accounts no longer face reduced limits during busy periods. You get the same capacity at 2 PM as at 2 AM.
  • Opus API rate limits raised – The limits on Claude Opus models via the API have been raised considerably for developers.

The rate limit doubling matters more than it sounds. Claude Code users on Pro plans were hitting ceilings regularly during intensive coding sessions. Removing the peak-hours throttle is the kind of quality-of-life change that power users will feel immediately.

The SpaceX Angle – More Unusual Than It Looks

SpaceX’s Colossus 1 data center is the same facility that houses Elon Musk’s xAI Grok training infrastructure. Anthropic is now using all available capacity at that site – which puts it in a notable position of running on the same physical hardware that powers a competing AI system.


The deal also includes something forward-looking: Anthropic has expressed interest in partnering with SpaceX to develop “multiple gigawatts of orbital AI compute capacity.” That’s not a commitment or a timeline – just an expressed interest – but the idea of running AI inference from orbit is a meaningful signal about where Anthropic thinks compute needs are heading.

Anthropic’s Compute Arms Race: By the Numbers

The SpaceX deal is one piece of a much larger compute buildout Anthropic has been executing over the past several months:


| Partner | Capacity | Timeline |
| --- | --- | --- |
| SpaceX (Colossus 1) | 300+ MW / 220,000+ NVIDIA GPUs | Within the month |
| Amazon Web Services | Up to 5 GW (nearly 1 GW by end of 2026) | Coming online 2026 |
| Google + Broadcom | 5 GW | Coming online 2027 |
| Microsoft + NVIDIA | $30 billion in Azure capacity | Strategic partnership |
| Fluidstack | $50 billion in US AI infrastructure | Multi-year investment |

Anthropic runs Claude on AWS Trainium, Google TPUs, and NVIDIA GPUs – a deliberate multi-vendor approach. The SpaceX deal layers on pure NVIDIA capacity, and does it fast.

Why This Is Happening Now

Anthropic has been aggressive about capacity this year for a straightforward reason: demand has been outrunning supply. The company’s Claude Code product – its AI coding assistant – has driven explosive usage growth, particularly among developers on Pro and Max plans who use it for extended coding sessions.

The compute crunch isn’t unique to Anthropic. OpenAI, Google DeepMind, and Meta are all competing for the same GPU supply. What’s notable is how quickly Anthropic has gone from a company that mainly relied on AWS to one with a diversified infrastructure portfolio spanning five major partnerships worth tens of billions of dollars.

Anthropic also announced it will be expanding compute internationally – specifically, inference capacity in Asia and Europe through the Amazon deal. This is a response to enterprise customers in regulated industries (finance, healthcare, government) who need data to stay in specific regions.

The Honest Caveats

The expanded limits are real, but some context is worth keeping:

  • Doubled doesn’t mean unlimited. Doubling a rate limit still leaves a ceiling. Heavy Claude Code users who were hitting limits frequently will see relief, but the limit itself still exists.
  • The SpaceX deal’s capacity comes “within the month” – not instantly. Today’s user-facing improvements may be served from existing headroom rather than new SpaceX capacity.
  • Orbital compute is very speculative. “Expressed interest” in gigawatts of orbital AI capacity is not a product announcement. It’s a conversation about a future that doesn’t exist yet.
  • Multi-vendor dependency is a risk too. Running infrastructure across five major partners provides resilience, but also complexity. When something goes wrong, debugging becomes harder.

What to Do About It

If you’re a Claude Pro or Max subscriber who has been frustrated by rate limit warnings during intensive Claude Code sessions: today is a good day to revisit those workflows. The 5-hour limits are now twice as generous, and the peak-hours penalty is gone.

If you’re a developer using the Claude API with Opus models: check the updated rate limits on the Anthropic platform documentation page. Higher limits may change how you batch requests or manage concurrency.
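Even with higher limits, API clients should still handle the occasional rate-limit response gracefully. The sketch below is a minimal, generic retry-with-backoff helper – not Anthropic's SDK or official guidance; `RateLimitError` and the `call` argument are placeholders standing in for whatever client call and rate-limit exception your stack actually uses:

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for the exception your API client raises on a 429."""

def with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Run call(), retrying with exponential backoff plus jitter on rate limits."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # Exponential backoff: 1s, 2s, 4s, ... plus up to 1s of random jitter
            sleep(base_delay * (2 ** attempt) + random.random())
```

Wrapping your request function in `with_backoff` means a transient 429 becomes a short pause instead of a failure; the jitter keeps many clients from retrying in lockstep.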

If you’re evaluating Anthropic as an enterprise vendor: the infrastructure buildout signals that supply constraints should ease through 2026 and into 2027, which matters for SLA commitments on high-volume deployments.

BetOnAI Verdict

This is one of the more unusual compute announcements in recent AI history – Anthropic and SpaceX are not obvious partners, and the Colossus 1 connection to xAI makes it genuinely strange. But the underlying story is simple: Anthropic is spending heavily to stay ahead of demand, and users are seeing immediate, tangible benefits in the form of doubled rate limits and no more peak-hour throttling.

The compute arms race in AI is real and accelerating. 10 GW of capacity commitments across two deals alone (Amazon + Google) gives a sense of the scale. The bet Anthropic is making is that demand for frontier AI will be large enough to justify infrastructure costs that would have seemed implausible just 18 months ago.

For most Claude users, the verdict on today’s announcement is straightforward: you get more for the same price. That’s a clear win, even if the orbital compute dreams remain in science fiction territory for now.


Written by BetOnAI Editorial

BetOnAI Editorial covers AI tools, business strategies, and technology trends. We test and review AI products hands-on, providing real revenue data and honest assessments. Follow us on X @BetOnAI_net for daily AI insights.
